The social networking site has introduced new artificial intelligence systems that can spot and delete sexual and violent images – sparing human moderators the job of viewing them.
Under the compelling headline “The Laborers Who Keep Dick Pics and Beheadings Out of Your Facebook Feed”, journalist Adrian Chen delved last year into the little-known world of social media’s content moderators. These thousands of workers, most of them based in Asia, trawl through social networking sites to delete or flag offensive content. In the process, they are exposed to the very worst the internet has to offer – beheadings, violent pornography, images of abuse – all for wages as low as $300 a month.
But this month, Twitter has taken a first step towards automating this process, sparing a huge unseen workforce its daily bombardment of horrors. Almost exactly a year ago, Twitter bought the start-up Madbits, which offered, in the words of its co-founders, “visual intelligence technology that automatically understands, organises and extracts relevant information from raw media”.
Robot moderators
At the time, tech websites speculated that Madbits’ technology would be used to develop facial recognition or photo tagging on Twitter. But in fact, the start-up’s first task was very different: Alex Roetter, Twitter’s head of engineering, instructed it to build a system that could find and filter out offensive images, defined by the company as “not safe for work”.
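To give a flavour of what such a filter does in principle, here is a minimal toy sketch. It is purely illustrative: the skin-tone heuristic, the threshold and every function name are assumptions made for the example, not Twitter’s or Madbits’ actual technology, which relies on trained deep neural networks.

```python
# Toy image filter: score each image, hold anything above a threshold
# for review instead of publishing it. NOT Twitter's real system.
from PIL import Image


def nsfw_score(path: str) -> float:
    """Return the fraction of pixels in a crude RGB skin-tone range.

    A naive stand-in for a real classifier, kept simple so the
    example is self-contained and runnable.
    """
    img = Image.open(path).convert("RGB").resize((64, 64))
    skin = sum(
        1
        for r, g, b in img.getdata()
        # Rough rule-of-thumb skin-tone test on raw RGB values.
        if r > 95 and g > 40 and b > 20 and r > g and r > b and abs(r - g) > 15
    )
    return skin / (64 * 64)


def moderate(paths: list[str], threshold: float = 0.5) -> tuple[list[str], list[str]]:
    """Split images into those safe to show and those flagged for review."""
    shown, flagged = [], []
    for p in paths:
        (flagged if nsfw_score(p) >= threshold else shown).append(p)
    return shown, flagged
```

In a production system the heuristic would be replaced by a neural network trained on labelled images, but the overall shape – score every incoming image, then filter on a threshold – is the same.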