Facebook Looks to Update Strategy on Combating Hate Speech
Posted on November 24, 2020
Category: Social Media
Facebook continually faces scrutiny over its role in facilitating and amplifying hate speech and misinformation. These tough questions have surrounded COVID-19, Black Lives Matter, and now the election. We have seen misinformation campaigns lead to violence and threaten to undermine our democratic process. To what extent is Facebook complicit in the spread of misinformation and hate speech? Who is designated to review content against the community standards, and how are they selected?
Facebook has outlined a measure of the prevalence of hate speech across the platform as part of its Community Standards Enforcement Report:
"Prevalence estimates the percentage of times people see violating content on our platform. We calculate hate speech prevalence by selecting a sample of content seen on Facebook and then labeling how much of it violates our hate speech policies. Based on this methodology, we estimated the prevalence of hate speech from July 2020 to September 2020 was 0.10% to 0.11%. In other words, out of every 10,000 views of content on Facebook, 10 to 11 of them included hate speech."
Ten to eleven views out of every 10,000 might not seem like much, but in the context of how large Facebook’s audience is, it’s a significant number. Facebook has grown to more than 2.7 billion users. At a prevalence of 0.10%, if each of those users saw just one piece of content, roughly 2.7 million of those views would still include hate speech. Facebook has implemented AI to automatically detect and delete hate speech before it gains any viewership.
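The arithmetic behind these figures is straightforward. A short sketch, using only the numbers quoted above (the per-user view assumption is an illustration, not Facebook's methodology):

```python
# Prevalence figures quoted from Facebook's Community Standards
# Enforcement Report: 0.10% to 0.11% of content views include hate speech.
prevalence_low, prevalence_high = 0.0010, 0.0011
users = 2.7e9  # reported user base

# Per 10,000 content views, how many include hate speech?
print(round(prevalence_low * 10_000), "to", round(prevalence_high * 10_000))

# Illustrative only: if every user saw exactly one piece of content,
# the estimated number of hate-speech views would be:
print(f"{prevalence_low * users:,.0f}")
```

This is why a rate that sounds tiny in percentage terms still translates into millions of exposures at Facebook's scale.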
"When we first began reporting our metrics for hate speech, in Q4 of 2017, our proactive detection rate was 23.6%. This means that of the hate speech we removed, 23.6% of it was found before a user reported it to us. Today we proactively detect about 95% of hate speech content we remove.
We’ve taken steps to combat white nationalism and white separatism; introduced new rules on content calling for violence against migrants; banned holocaust denial; and updated our policies to account for certain kinds of implicit hate speech, such as content depicting blackface, or stereotypes about Jewish people controlling the world."
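The proactive detection rate Facebook cites has a simple definition: of all the hate speech removed, the fraction found by automated systems before any user reported it. A minimal sketch of that ratio (the counts below are hypothetical, chosen to mirror the reported percentages):

```python
def proactive_detection_rate(found_proactively: int, total_removed: int) -> float:
    """Share of removed hate speech that was found before any user report."""
    if total_removed == 0:
        return 0.0
    return found_proactively / total_removed

# Hypothetical removal counts matching the reported rates.
q4_2017 = proactive_detection_rate(236, 1000)  # 23.6% in Q4 2017
today = proactive_detection_rate(950, 1000)    # about 95% today
print(f"{q4_2017:.1%} -> {today:.0%}")
```

Note that this metric measures how removals were triggered, not how much hate speech was removed overall, so it can improve even while total prevalence stays flat.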
Although Facebook has been actively improving its enforcement policy, it still lags behind its ever-growing number of daily users. If Facebook responds too slowly to a dangerous post, the damage may be impossible to reverse: articles and posts can reach hundreds of millions of people overnight.
Facebook has implemented an Oversight Board, a collection of independent, external experts who come together to review appeals of content decisions made by Facebook's moderators. “If your content is removed from Facebook or Instagram and you have exhausted your appeals with Facebook, you’ll be able to appeal your case to the Oversight Board, a global body of experts separate from Facebook that will make independent and binding decisions on the cases they choose to hear."
Even with these review processes in place, human moderation alone cannot keep pace with the spread of misinformation.
"We’re now introducing new AI systems to automatically detect new variations of content that independent fact-checkers have already debunked. Once AI has spotted these new variations, we can flag them in turn to our fact-checking partners for their review.
We’re also taking steps now to make sure we’re prepared to deal with another type of misinformation: deepfakes. These videos, which use AI to show people doing and saying things they didn’t actually do or say, can be difficult for even a trained reviewer to spot. In addition to tasking our AI Red team to think ahead and anticipate potential problems, we’ve deployed a state-of-the-art deepfake detection model with eight deep neural networks, each using the EfficientNet backbone. It was trained on videos from a unique data set commissioned by Facebook for the Deepfake Detection Challenge (DFDC)."
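Facebook has not published the ensemble's code, but combining eight networks typically means averaging their per-video scores and thresholding the result. A framework-free sketch of that idea, where the "models" are stand-in scoring functions rather than actual EfficientNet networks:

```python
from statistics import mean
from typing import Callable, Sequence

# Stand-in for a trained detector: maps raw video bytes to a fake-probability.
Model = Callable[[bytes], float]

def ensemble_score(models: Sequence[Model], video: bytes) -> float:
    """Average the fake-probability predicted by each model in the ensemble."""
    return mean(m(video) for m in models)

def is_deepfake(models: Sequence[Model], video: bytes,
                threshold: float = 0.5) -> bool:
    """Flag the video if the averaged ensemble score crosses the threshold."""
    return ensemble_score(models, video) >= threshold

# Toy demo: eight dummy "models" that each return a fixed score.
dummy_models = [lambda v, s=s: s
                for s in (0.9, 0.8, 0.85, 0.7, 0.95, 0.9, 0.88, 0.92)]
print(is_deepfake(dummy_models, b"video-bytes"))  # True for these scores
```

Averaging is only one way to combine an ensemble; the point is that several independently trained detectors voting together are harder to fool than any single network.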
These steps by Facebook have given some people a glimmer of hope, but even Facebook’s moderators have said they are still “years away from achieving the necessary level of sophistication to moderate content automatically.”
Facebook has shown itself to be a pioneer of new technology, but only time will tell whether it can catch up to the rising spread of misinformation and hate speech.