In its latest Community Standards Enforcement Report, released today, Facebook detailed the updates it has made to its AI systems for detecting hate speech and disinformation. The tech giant says 88.8% of all the hate speech it removed this quarter was detected by AI, up from 80.2% in the previous quarter. The AI can remove content automatically when the system has high confidence that it is hate speech, but most flagged posts are still reviewed by a human first.
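To make that gating concrete, here is a minimal sketch of a confidence-threshold pipeline; the thresholds and action names are illustrative placeholders, not Facebook's actual settings:

```python
# Hypothetical confidence-gated moderation: a very high classifier score
# triggers automatic removal, a lower-but-still-suspicious score routes
# the post to a human reviewer, and everything else is left up.
def route(score, auto_remove_at=0.98, review_at=0.80):
    if score >= auto_remove_at:
        return "remove_automatically"
    if score >= review_at:
        return "send_to_human_reviewer"
    return "leave_up"

print(route(0.99))  # -> remove_automatically
print(route(0.85))  # -> send_to_human_reviewer
print(route(0.10))  # -> leave_up
```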
Behind the scenes: The improvement is largely driven by two updates to Facebook’s AI systems. First, the company is now using massive natural-language models that can better decipher the nuance and meaning of a post. These models build on advances in AI research over the last two years that allow neural networks to be trained on language without any human supervision, removing the bottleneck of manual data curation.
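A toy sketch shows why this kind of training needs no human labels: the model hides random words in raw text and learns to predict them back, so the "answers" come from the text itself. The vocabulary, model sizes, and random tokens below are placeholders, not Facebook's systems:

```python
# Minimal masked-language-model training step: mask ~15% of tokens and
# train the network to recover them. No human annotation is involved --
# the raw text supplies both the inputs and the targets.
import torch
import torch.nn as nn

VOCAB, DIM, MASK_ID = 1000, 64, 0  # toy sizes; id 0 reserved for [MASK]

embed = nn.Embedding(VOCAB, DIM)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=DIM, nhead=4, batch_first=True),
    num_layers=2,
)
to_vocab = nn.Linear(DIM, VOCAB)
params = (list(embed.parameters()) + list(encoder.parameters())
          + list(to_vocab.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

tokens = torch.randint(1, VOCAB, (8, 16))   # stand-in for real token ids
mask = torch.rand(tokens.shape) < 0.15      # hide ~15% of positions
inputs = tokens.masked_fill(mask, MASK_ID)

logits = to_vocab(encoder(embed(inputs)))   # predict a word at every slot
loss = nn.functional.cross_entropy(
    logits[mask], tokens[mask])             # scored only where we masked
loss.backward()
opt.step()
```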
The second update is that Facebook’s systems can now analyze content that combines images and text, such as hateful memes. AI is still limited in its ability to interpret such mixed-media content, so Facebook has also released a new data set of hateful memes and launched a competition to crowdsource better algorithms for detecting them.
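One common way to combine the two modalities is to embed the image and the overlaid text separately and fuse the vectors before classifying, as in the sketch below. This illustrates the general fusion idea only; the encoder dimensions and classifier are assumptions, not Facebook's production architecture:

```python
# Sketch of multimodal classification: project image features and text
# features into a shared space, concatenate them, and score the pair.
# A meme can be benign as an image and benign as text but hateful in
# combination, which is why the model must see both together.
import torch
import torch.nn as nn

class MemeClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, hidden=128):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)  # e.g. CNN backbone output
        self.txt_proj = nn.Linear(txt_dim, hidden)  # e.g. text encoder output
        self.head = nn.Sequential(
            nn.ReLU(),
            nn.Linear(2 * hidden, 1),               # hateful vs. benign score
        )

    def forward(self, img_feat, txt_feat):
        fused = torch.cat([self.img_proj(img_feat),
                           self.txt_proj(txt_feat)], dim=-1)
        return self.head(fused).squeeze(-1)

model = MemeClassifier()
img_feat = torch.randn(4, 512)  # stand-ins for real encoder features
txt_feat = torch.randn(4, 256)
prob_hateful = torch.sigmoid(model(img_feat, txt_feat))
```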
Covid lies: Despite these updates, AI hasn’t played as big a role in handling the surge of coronavirus misinformation, such as conspiracy theories about the virus’s origin and fake news about cures. Facebook has instead relied primarily on human reviewers at over 60 partner fact-checking organizations. Only once a person has flagged something, such as an image with a misleading headline, do AI systems take over to search for identical or similar items and automatically add warning labels or take them down. The team hasn’t yet been able to train a machine-learning model that can find new instances of disinformation on its own. “Building a novel classifier for something that understands content it’s never seen before takes time and a lot of data,” Mike Schroepfer, Facebook’s CTO, said on a press call.
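The "take over after a human flags it" step can be sketched as an embedding similarity search: given one item a fact-checker has labeled false, find near-duplicates that should inherit the label. The vectors and the 0.9 threshold below are stand-ins, since Facebook's actual matching systems are not public:

```python
# Near-duplicate search by cosine similarity over content embeddings.
# Anything scoring above the threshold is treated as a copy or close
# variant of the human-flagged item and gets the same label applied.
import numpy as np

def cosine_sim(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def find_matches(flagged_vec, candidate_vecs, threshold=0.9):
    """Return indices of candidates similar enough to inherit the label."""
    return [i for i, v in enumerate(candidate_vecs)
            if cosine_sim(flagged_vec, v) >= threshold]

rng = np.random.default_rng(0)
flagged = rng.normal(size=128)  # embedding of the fact-checked post
candidates = [
    flagged + rng.normal(scale=0.05, size=128),  # slightly edited copy
    rng.normal(size=128),                        # unrelated post
]
print(find_matches(flagged, candidates))  # -> [0]
```

Note that this only propagates known labels; it cannot flag a brand-new false claim it has never seen, which is exactly the gap Schroepfer describes.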
Why it matters: The challenge reveals the limitations of AI-based content moderation. Such systems can detect content similar to what they’ve seen before, but they founder when new kinds of misinformation appear. In recent years, Facebook has invested heavily in developing AI systems that can adapt more quickly, but the problem is not just the company’s: it remains one of the biggest research challenges in the field.