In India, Facebook Struggles to Combat Misinformation and Hate Speech - The New York Times
-
On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India. For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.
-
The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month. “Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.
-
The report was one of dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India. They provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on issues once they occur.
-
Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.
-
The documents include reports on how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections
-
They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.
-
Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts,
-
Eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users
-
That lopsided focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook had installed measures to demote misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.
-
In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content
-
In India, “there is definitely a question about resourcing” for Facebook, but the answer is not “just throwing more money at the problem,” said Katie Harbath, who spent 10 years at Facebook as a director of public policy, and worked directly on securing India’s national elections. Facebook, she said, needs to find a solution that can be applied to countries around the world.
-
Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.
-
After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.
-
Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined. After the researcher shared her case study with co-workers, her colleagues commented on the posted report that they were concerned about misinformation about the upcoming elections in India.
-
According to a memo written after the trip, one of the key requests from users in India was that Facebook “take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension.”
-
The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed.
-
The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.
-
Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.
-
A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.
-
Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist paramilitary group. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.
-
Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.
-
Of India’s 22 officially recognized languages, Facebook said it has trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.