Facebook failed to curb divisive content in India

NEW DELHI (AP) – Facebook in India has been selective in curbing hate speech, misinformation and inflammatory posts, particularly anti-Muslim content, according to leaked documents obtained by The Associated Press, even as its own employees cast doubt over the company's motivations and interests.

From research as recent as March of this year to company memos dating back to 2019, the internal documents on India highlight Facebook's constant struggles to quash abusive content on its platforms in the world's biggest democracy and the company's largest growth market. Communal and religious tensions in India have a history of boiling over on social media and stoking violence.

The files show that Facebook has been aware of these problems for years, raising questions over whether it has done enough to address them. Many critics and digital experts say it has failed to do so, especially in cases where members of Prime Minister Narendra Modi's ruling Bharatiya Janata Party (BJP) are involved.

Across the world, Facebook has become increasingly important in politics, and India is no different.

Modi has been credited with leveraging the platform to his party's advantage during elections, and reporting from The Wall Street Journal last year cast doubt over whether Facebook was selectively enforcing its policies on hate speech to avoid blowback from the BJP. Modi and Facebook CEO Mark Zuckerberg have exuded warmth, memorialized by a 2015 image of the two hugging at Facebook headquarters.

The leaked documents include a trove of internal company reports on hate speech and misinformation in India, some of which was amplified by the platform's own "recommended" feature and algorithms. They also include company employees' concerns over how these issues were handled and their discontent over the viral "malcontent" on the platform.

According to the documents, Facebook designated India as one of the most "at-risk countries" in the world and identified both Hindi and Bengali as priorities for "automation on violating hostile speech." Yet Facebook did not have enough local-language moderators or content-flagging in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it has "invested significantly in technology to find hate speech in various languages, including Hindi and Bengali," which has "reduced the amount of hate speech that people see" in 2021.

"Hate speech against marginalized groups is on the rise globally," a company spokesperson said, adding that Facebook is improving enforcement and is committed to updating its policies as hate speech evolves online.

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. A consortium of news agencies, including the AP, obtained the redacted versions.

In February 2019, ahead of a general election when misinformation concerns were running high, a Facebook employee wanted to understand what a new user in India saw on their news feed if all they did was follow pages and groups recommended by the platform itself.

The employee created a test account and kept it live for three weeks, a period during which an extraordinary event shook India: a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country to the brink of war with Pakistan.

In a note titled "An Indian Test User's Descent into a Sea of Polarizing, Nationalistic Messages," the employee, whose name is redacted, said they were "shocked" by the content flooding their news feed, which had "become a near constant barrage of polarizing nationalist content, misinformation, and violence and gore."

Seemingly benign and innocuous groups recommended by Facebook quickly morphed into something else altogether, where hate speech and unverified rumors ran rampant.

The recommended groups were inundated with fake news and anti-Pakistan rhetoric. Much of the content was extremely graphic.

One featured a man with his bloodied head covered in a Pakistani flag and an Indian flag. The platform's "Popular Across Facebook" feature showed a slew of unverified content related to the retaliatory Indian strikes into Pakistan after the bombing, including an image of a napalm bomb from a video game clip debunked by one of Facebook's fact-check partners.

"Following the test user's News Feed, I've seen more images of dead people in the last three weeks than in my entire lifetime," the researcher wrote.

The results raised deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.

"Should companies have additional responsibility to prevent integrity harms from recommended content?" the researcher asked in their conclusion.

The memo, circulated among employees, did not answer that question. But it did expose how the platform's own algorithms and default settings played a part in spurring such content. The employee noted that there were clear "blind spots," particularly in "local language content," and said they hoped the findings would spark conversations on how to avoid such "integrity harms," especially for people who are "significantly different" from the average U.S. user.

Even though the research was conducted during three weeks that were not an average representation, the employee acknowledged that it showed how such "unmoderated" and problematic content could "totally take over" during a "major crisis event."

A spokesperson for Facebook said the test study "inspired deeper, more thorough analysis" of its recommendation systems and "contributed to product improvements to them."

“We continue to work on curbing hate speech and have strengthened our hate classifiers to include four Indian languages,” said the spokesperson.

Copyright © 2021 The Washington Times, LLC.

