Facebook’s 'reliance on AI' leaves minority groups vulnerable to hate speech
Tech isn’t stopping violent rhetoric on Facebook, and Zuckerberg refuses to correct false political ads, claims Avaaz
Global nonprofit advocacy organisation Avaaz has published a report claiming that Facebook is once again failing to prevent the spread of anti-Muslim hate speech on its platform in the Assam region of northeast India.
The group accused Facebook of relying too heavily on underdeveloped artificial intelligence (AI) technology to detect hate speech and using its understaffed team of human content moderators to review pre-flagged content rather than employing them as the first line of defence.
Since India's Hindu nationalist government excluded nearly 1.9 million Muslims (and other minorities) in the National Register of Citizens (NRC), the Muslim population in the country's northeastern region has come under threat of statelessness.
In July, the United Nations (UN) expressed concern over the NRC process while at the same time warning of the role of social media in the rise of hate speech in Assam.
"This process may exacerbate the xenophobic climate while fueling religious intolerance and discrimination in the country," it said in a statement that harkened back to Facebook's crisis just over a year ago, when the UN criticised it for playing a "determining role" in the violence against the Rohingya people in Myanmar.
In Avaaz's report, the group combed 800 Facebook posts relating to Assam and the NRC for keywords in Assamese, comparing them to the three tiers of prohibited hate speech defined in Facebook's Community Standards.
At least 26.5% of the posts constituted hate speech targeting religious and ethnic minorities; together they were shared at least 99,650 times and viewed at least 5.4 million times.
The comments especially targeted Bengali Muslims, calling them "criminals," "rapists," "terrorists," "pigs," and demanding that people "poison" daughters and legalise female foeticide.
Avaaz accused Facebook of relying too heavily on AI to flag hate speech that has not been reported by human users. Its limited staff of human content moderators, Avaaz said, is only used to review AI-detected content, rather than to actively uncover it.
"Facebook is being used as a megaphone for hate, pointed directly at vulnerable minorities in Assam," said senior Avaaz campaigner Alaphia Zoyab, "Despite the clear and present danger faced by these people, Facebook is refusing to dedicate the resources required to keep them safe."
A spokesperson for Facebook told TechCrunch: "We have invested in dedicated content reviewers, who have local language expertise and an understanding of India's longstanding historical and social tensions. We've also made significant progress in proactively detecting hate speech on our services, which helps us get to potentially harmful content faster. But these tools aren't perfect yet."
Just over a year ago, Facebook CEO Mark Zuckerberg optimistically projected that, once the technology had matured enough to be reliable, AI would take over the hate speech detection process. At the time, Zuckerberg said this would take five to ten years. "Today we're just not there on that," he admitted. Recent failures to properly police hate speech suggest the social media company may have jumped the gun in relying on AI.
Avaaz has challenged Facebook to beef up its protections for minorities in Assam, suggesting the company implement a "human-led 'zero tolerance' policy" against hate speech and recruit more human moderators with expertise in local languages.
The group further called on Facebook to correct disinformation in the platform's ads, a topic that has drawn renewed attention since the recent release of a letter in which Facebook employees pleaded with their executives to do just that.
Facebook's current policy on political ads allows politicians to post any claim they want, regardless of factuality. Zuckerberg backed this stance as a defender of free expression in his address at Georgetown University in Washington, D.C.
Roughly 250 employees, however, argued that refraining from fact-checking political ads "doesn't protect voices, but instead allows politicians to weaponize [the] platform by targeting people who believe that content posted by political figures is trustworthy".
Whether in detecting and removing hate speech or in correcting false claims in advertising, Facebook has arguably shown it still has a long way to go before its platform is properly moderated.