
History Readings: Group items tagged misinformation


Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • ...15 more annotations...
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

In India, Facebook Struggles to Combat Misinformation and Hate Speech - The New York Times - 0 views

  • On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India. For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.
  • The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month. “Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.
  • The report was one of dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India. They provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on issues once they occur.
  • ...19 more annotations...
  • Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.
  • The documents include reports on how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections
  • They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.
  • Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts,
  • Eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users
  • That lopsided focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook installed measures to demote misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.
  • In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content
  • In India, “there is definitely a question about resourcing” for Facebook, but the answer is not “just throwing more money at the problem,” said Katie Harbath, who spent 10 years at Facebook as a director of public policy, and worked directly on securing India’s national elections. Facebook, she said, needs to find a solution that can be applied to countries around the world.
  • Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.
  • After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.
  • Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined. After the researcher shared her case study with co-workers, her colleagues commented on the posted report that they were concerned about misinformation about the upcoming elections in India
  • According to a memo written after the trip, one of the key requests from users in India was that Facebook “take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension.”
  • The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed.
  • The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.
  • Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.
  • A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.
  • Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist paramilitary group. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.
  • Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.
  • Of India’s 22 officially recognized languages, Facebook said it has trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.
anonymous

Defying rules, anti-vaccine accounts thrive on social media - 0 views

  • For years, the same platforms have allowed anti-vaccination propaganda to flourish, making it difficult to stamp out such sentiments now. And their efforts to weed out other types of COVID-19 misinformation — often with fact-checks, informational labels and other restrained measures — have been woefully slow.
  • But since April 2020, it has removed a grand total of 8,400 tweets spreading COVID-related misinformation — a tiny fraction of the avalanche of pandemic-related falsehoods tweeted out daily by popular users with millions of followers, critics say.
  • “While they fail to take action, lives are being lost,” said Imran Ahmed, CEO of the Center for Countering Digital Hate, a watchdog group.
  • ...12 more annotations...
  • “It’s a hard situation because we have let this go for so long,” said Jeanine Guidry, an assistant professor at Virginia Commonwealth University who studies social media and health information. “People using social media have really been able to share what they want for nearly a decade.”
  • One such page, The Truth About Cancer, has more than a million Facebook followers after years of posting baseless suggestions that vaccines could cause autism or damage children’s brains. The page was identified in November as a “COVID-19 vaccine misinformation super spreader” by NewsGuard.
  • Facebook said it is taking “aggressive steps to fight misinformation across our apps by removing millions of pieces of COVID-19 and vaccine content on Facebook and Instagram during the pandemic.”
  • As U.S. vaccine supplies continue to increase, immunization efforts will soon shift from targeting a limited supply to the most vulnerable populations to getting as many shots into as many arms as possible.
  • Facebook also banned ads that discourage vaccines and said it has added warning labels to more than 167 million pieces of additional COVID-19 content thanks to its network of fact-checking partners.
  • Prior to the pandemic, however, social media platforms had done little to stamp out misinformation, said Andy Pattison, manager of digital solutions for the World Health Organization.
  • “It’s a very fine line between freedom of speech and eroding science,” Pattison said. Purveyors of misinformation, he said, “learn the rules, and they dance right on the edge, all the time.”
  • But blatantly false COVID-19 information continues to pop up. Earlier this month, several articles circulating online claimed that more elderly Israelis who took the Pfizer vaccine were “killed” by the shot than those who died from COVID-19 itself. One such article from an anti-vaccination website was shared nearly 12,000 times on Facebook, leading earlier this month to a spike of nearly 40,000 mentions of “vaccine deaths” across social platforms and the internet, according to an analysis by media intelligence firm Zignal Labs.
  • YouTube, which has generally avoided the same type of scrutiny as its social media peers despite being a source of misinformation, said it has removed more than 30,000 videos since October, when it started banning false claims about COVID-19 vaccinations.
  • “Vaccine hesitancy and misinformation could be a big barrier to getting enough of the population vaccinated to end the crisis,” said Lisa Fazio, a professor of psychology at Vanderbilt University.
  • “If someone truly believes that the COVID vaccine is harmful and they feel a responsibility to share that with friends and family ... they will find a way,” Guidry said.
  • When the Center for Countering Digital Hate recently studied the crossover between different types of disinformation and hate speech, it found that Instagram tended to cross-pollinate misinformation via its algorithm.
yehbru

Far-Right Misinformation Drives Facebook Engagement : NPR - 0 views

  • After the events of Jan. 6, researcher Laura Edelson expected to see a spike in Facebook users engaging with the day's news, similar to Election Day.
  • "The thing was, most of that spike was concentrated among the partisan extremes and misinformation providers," Edelson told NPR's All Things Considered. "And when I really sit back and think about that, I think the idea that on a day like that, which was so scary and so uncertain, that the most extreme and least reputable sources were the ones Facebook users were engaging with, is pretty troubling."
  • A new study from Cybersecurity For Democracy found that far-right accounts known for spreading misinformation are not only thriving on Facebook, they're actually more successful than other kinds of accounts at getting likes, shares and other forms of user engagement.
  • ...5 more annotations...
  • "It's almost twice as much engagement per follower among the sources that have a reputation for spreading misinformation," Edelson said. "So, clearly, that portion of the news ecosystem is behaving very differently."
  • In response, Edelson called on Facebook to be transparent with how it tracks impressions and promotes content: "They can't say their data leads to a different conclusion but then not make that data public."
  • The researchers called this phenomenon the "misinformation penalty."
  • In all other partisan categories, though, "the sources that have a reputation for spreading misinformation just don't engage as well," Edelson said. "There could be a variety of reasons for that, but certainly the simplest explanation would be that users don't find them as credible and don't want to engage with them."
  • “I think any system that attempts to promote the most engaging content, from what we can tell, will wind up promoting misinformation.”
blythewallick

YouTube ads of 100 top brands fund climate misinformation - study | Technology | The Gu... - 0 views

  • Some of the biggest companies in the world are funding climate misinformation by advertising on YouTube, according to a study from activist group Avaaz.
  • “This is not about free speech, this is about the free advertising YouTube is giving to factually inaccurate videos that risk confusing people about one of the biggest crises of our time,” said Julie Deruy, a senior campaigner at the group. “YouTube should not feature, suggest, promote, advertise or lead users to misinformation.”
  • “YouTube has previously taken welcome steps to protect its users from anti-vaccine and conspiracy theories,” Avaaz argued
  • ...5 more annotations...
  • Include climate misinformation in its “borderline content” policy, which limits the algorithmic distribution of videos that do not reach the bar required to fully remove them from the site.
  • Demonetise misinformation, “ensuring such content does not include advertising and is not financially incentivised. YouTube should start immediately with the option for advertisers to exclude their ads from videos with climate misinformation.”
  • Work with independent fact-checkers to inform users who have seen or interacted with verifiably false or misleading information.
  • Provide transparency to researchers by releasing data showing how many views are driven to misinformation by its own recommendation algorithms.
  • “In 2019 alone, the consumption on authoritative news publishers’ channels grew by 60%. As our systems appear to have done in the majority of cases in this report, we prioritise authoritative voices for millions of news and information queries, and surface information panels on topics prone to misinformation – including climate change – to provide users with context alongside their content. We continue to expand these efforts to more topics and countries.”
anonymous

Report: Instagram's algorithm pushes certain users to COVID-19 misinformation - UPI.com - 0 views

  • Instagram's algorithm recommended more COVID-19 misinformation to new users who had followed such content amid the pandemic, a report said Tuesday.
  • Instagram's algorithm has been "pushing radicalizing, extremist misinformation to users,"
  • as the pandemic swept the world, Instagram launched a new feature encouraging users to view conspiracy theories and lies about COVID and vaccines," Ahmed told The Guardian. "This feature was created in the name of profit, to keep people scrolling so more adverts could be served to them."
  • ...1 more annotation...
  • "We share the goal of reducing the spread of misinformation, but this research is five months out of date," a spokesperson for Instagram's parent company Facebook said in a statement, The Guardian reported. "It also uses a sample size of just 104 posts, compared to the 12m pieces of harmful misinformation about vaccines and COVID-19 that we've removed from Facebook and Instagram since the start of the pandemic."
Javier E

Facebook Papers: 'History Will Not Judge Us Kindly' - The Atlantic - 0 views

  • Facebook’s hypocrisies, and its hunger for power and market domination, are not secret. Nor is the company’s conflation of free speech and algorithmic amplification
  • But the events of January 6 proved for many people—including many in Facebook’s workforce—to be a breaking point.
  • these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.
  • ...59 more annotations...
  • Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more.
  • And again and again, staffers say, Facebook’s leaders ignore them.
  • Facebook has dismissed the concerns of its employees in manifold ways.
  • One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me. This stance allows Facebook to claim transparency while ignoring the substance of the complaints, and the implication of the complaints: that many of Facebook’s employees believe their company operates without a moral compass.
  • When you stitch together the stories that spanned the period between Joe Biden’s election and his inauguration, it’s easy to see Facebook as instrumental to the attack on January 6. (A spokesperson told me that the notion that Facebook played an instrumental role in the insurrection is “absurd.”)
  • what emerges from a close reading of Facebook documents, and observation of the manner in which the company connects large groups of people quickly, is that Facebook isn’t a passive tool but a catalyst. Had the organizers tried to plan the rally using other technologies of earlier eras, such as telephones, they would have had to identify and reach out individually to each prospective participant, then persuade them to travel to Washington. Facebook made people’s efforts at coordination highly visible on a global scale.
  • The platform not only helped them recruit participants but offered people a sense of strength in numbers. Facebook proved to be the perfect hype machine for the coup-inclined.
  • In November 2019, Facebook staffers noticed they had a serious problem. Facebook offers a collection of one-tap emoji reactions. Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that the posts dominated by “angry” reactions were substantially more likely to go against community standards, including prohibitions on various types of misinformation, according to internal documents.
  • In July 2020, researchers presented the findings of a series of experiments. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
  • By dialing anger’s weight back to zero in the algorithm, the researchers found, they could keep posts to which people reacted angrily from being viewed by as many users. That, in turn, translated to a significant (up to 5 percent) reduction in the hate speech, civic misinformation, bullying, and violent posts—all of which are correlated with offline violence—to which users were exposed.
  • Facebook rolled out the change in early September 2020, documents show; a Facebook spokesperson confirmed that the change has remained in effect. It was a real victory for employees of the Integrity team.
  • But it doesn’t normally work out that way. In April 2020, according to Frances Haugen’s filings with the SEC, Facebook employees had recommended tweaking the algorithm so that the News Feed would deprioritize the surfacing of content for people based on their Facebook friends’ behavior. The idea was that a person’s News Feed should be shaped more by people and groups that a person had chosen to follow. Up until that point, if your Facebook friend saw a conspiracy theory and reacted to it, Facebook’s algorithm might show it to you, too. The algorithm treated any engagement in your network as a signal that something was worth sharing. But now Facebook workers wanted to build circuit breakers to slow this form of sharing.
  • Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg “rejected this intervention that could have reduced the risk of violence in the 2020 election,” Haugen’s SEC filing says. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.
  • This episode, like Facebook’s response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform
  • Facebook’s megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.
  • “I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. Facebook is not neutral, and working here isn’t either.”
  • It is quite a thing to see, the sheer number of Facebook employees—people who presumably understand their company as well as or better than outside observers—who believe their employer to be morally bankrupt.
  • I spoke with several former Facebook employees who described the company’s metrics-driven culture as extreme, even by Silicon Valley standards
  • Facebook workers are under tremendous pressure to quantitatively demonstrate their individual contributions to the company’s growth goals, they told me. New products and features aren’t approved unless the staffers pitching them demonstrate how they will drive engagement.
  • These worries have been exacerbated lately by fears about a decline in new posts on Facebook, two former employees who left the company in recent years told me. People are posting new material less frequently to Facebook, and its users are on average older than those of other social platforms.
  • One of Facebook’s Integrity staffers wrote at length about this dynamic in a goodbye note to colleagues in August 2020, describing how risks to Facebook users “fester” because of the “asymmetrical” burden placed on employees to “demonstrate legitimacy and user value” before launching any harm-mitigation tactics—a burden not shared by those developing new features or algorithm changes with growth and engagement in mind
  • The note said: We were willing to act only after things had spiraled into a dire state … Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB. It has been painful to observe.
  • Current and former Facebook employees describe the same fundamentally broken culture—one in which effective tactics for making Facebook safer are rolled back by leadership or never approved in the first place.
  • That broken culture has produced a broken platform: an algorithmic ecosystem in which users are pushed toward ever more extreme content, and where Facebook knowingly exposes its users to conspiracy theories, disinformation, and incitement to violence.
  • One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences. Under the program, internal documents show, millions of high-profile users—including politicians—are left alone by Facebook even when they incite violence
  • whitelisting influential users with massive followings on Facebook isn’t just a secret and uneven application of Facebook’s rules; it amounts to “protecting content that is especially likely to deceive, and hence to harm, people on our platforms.”
  • Facebook workers tried and failed to end the program. Only when its existence was reported in September by The Wall Street Journal did Facebook’s Oversight Board ask leadership for more information about the practice. Last week, the board publicly rebuked Facebook for not being “fully forthcoming” about the program.
  • As a result, Facebook has stoked an algorithm arms race within its ranks, pitting core product-and-engineering teams, such as the News Feed team, against their colleagues on Integrity teams, who are tasked with mitigating harm on the platform. These teams establish goals that are often in direct conflict with each other.
  • “We can’t pretend we don’t see information consumption patterns, and how deeply problematic they are for the longevity of democratic discourse,” a user-experience researcher wrote in an internal comment thread in 2019, in response to a now-infamous memo from Andrew “Boz” Bosworth, a longtime Facebook executive. “There is no neutral position at this stage, it would be powerfully immoral to commit to amorality.”
  • Zuckerberg has defined Facebook’s mission as making “social infrastructure to give people the power to build a global community that works for all of us,” but in internal research documents his employees point out that communities aren’t always good for society:
  • When part of a community, individuals typically act in a prosocial manner. They conform, they forge alliances, they cooperate, they organize, they display loyalty, they expect obedience, they share information, they influence others, and so on. Being in a group changes their behavior, their abilities, and, importantly, their capability to harm themselves or others
  • Thus, when people come together and form communities around harmful topics or identities, the potential for harm can be greater.
  • The infrastructure choices that Facebook is making to keep its platform relevant are driving down the quality of the site, and exposing its users to more dangers
  • Those dangers are also unevenly distributed, because of the manner in which certain subpopulations are algorithmically ushered toward like-minded groups
  • And the subpopulations of Facebook users who are most exposed to dangerous content are also most likely to be in groups where it won’t get reported.
  • And it knows that 3 percent of Facebook users in the United States are super-consumers of conspiracy theories, accounting for 37 percent of known consumption of misinformation on the platform.
  • Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.
  • Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice. Facebook has a sophisticated understanding of measures it could take to make its platform safer without resorting to broad or ideologically driven censorship tactics.
  • Facebook knows that no two people see the same version of the platform, and that certain subpopulations experience far more dangerous versions than others do
  • Facebook knows that people who are isolated—recently widowed or divorced, say, or geographically distant from loved ones—are disproportionately at risk of being exposed to harmful content on the platform.
  • It knows that repeat offenders are disproportionately responsible for spreading misinformation.
  • All of this makes the platform rely more heavily on ways it can manipulate what its users see in order to reach its goals. This explains why Facebook is so dependent on the infrastructure of groups, as well as making reshares highly visible, to keep people hooked.
  • It could consistently enforce its policies regardless of a user’s political power.
  • Facebook could ban reshares.
  • It could choose to optimize its platform for safety and quality rather than for growth.
  • It could tweak its algorithm to prevent widespread distribution of harmful content.
  • Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time.
  • It could make public its rules for how frequently groups can post and how quickly they can grow.
  • It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.
  • Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right
  • You must be vigilant about the informational streams you swim in, deliberate about how you spend your precious attention, unforgiving of those who weaponize your emotions and cognition for their own profit, and deeply untrusting of any scenario in which you’re surrounded by a mob of people who agree with everything you’re saying.
  • It could do all of these things. But it doesn’t.
  • Lately, people have been debating just how nefarious Facebook really is. One argument goes something like this: Facebook’s algorithms aren’t magic, its ad targeting isn’t even that good, and most people aren’t that stupid.
  • All of this may be true, but that shouldn’t be reassuring. An algorithm may just be a big dumb means to an end, a clunky way of maneuvering a massive, dynamic network toward a desired outcome. But Facebook’s enormous size gives it tremendous, unstable power.
  • Facebook takes whole populations of people, pushes them toward radicalism, and then steers the radicalized toward one another.
  • When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.
  • The lesson for individuals is this:
  • Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content
  • Without seeing how Facebook works at a finer resolution, in real time, we won’t be able to understand how to make the social web compatible with democracy.
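
The Facebook Papers annotations above describe one concrete ranking mechanism: reactions other than “like” were weighted more heavily in the News Feed algorithm, and researchers found that dialing the “angry” weight back to zero reduced users’ exposure to hate speech and misinformation. The Python sketch below is a minimal, hypothetical illustration of that idea only; the posts, reaction counts, and weights are invented for this example and do not reflect Facebook’s actual code or values.

```python
# Hypothetical sketch of reaction-weighted feed ranking. The weights, posts and
# reaction counts are invented for illustration; this is not Facebook's code.
from typing import Dict, List


def rank_posts(posts: List[dict], weights: Dict[str, float]) -> List[dict]:
    """Sort posts by a weighted sum of their one-tap reactions, highest first."""
    def score(post: dict) -> float:
        return sum(weights.get(reaction, 0.0) * count
                   for reaction, count in post["reactions"].items())
    return sorted(posts, key=score, reverse=True)


posts = [
    {"id": "outrage-bait", "reactions": {"angry": 900, "like": 100}},
    {"id": "family-photo", "reactions": {"love": 400, "like": 500}},
]

# Reactions other than "like" weighted more heavily, as reported ...
original_weights = {"like": 1.0, "love": 5.0, "angry": 5.0}
# ... versus dialing anger's weight back to zero.
adjusted_weights = {"like": 1.0, "love": 5.0, "angry": 0.0}

print([p["id"] for p in rank_posts(posts, original_weights)])  # ['outrage-bait', 'family-photo']
print([p["id"] for p in rank_posts(posts, adjusted_weights)])  # ['family-photo', 'outrage-bait']
```

In this toy setup, zeroing the “angry” weight alone is enough to flip which post surfaces first, which mirrors the direction of the effect the researchers reported.
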
Javier E

Is Anything Still True? On the Internet, No One Knows Anymore - WSJ - 0 views

  • Creating and disseminating convincing propaganda used to require the resources of a state. Now all it takes is a smartphone.
  • Generative artificial intelligence is now capable of creating fake pictures, clones of our voices, and even videos depicting and distorting world events. The result: From our personal circles to the political circuses, everyone must now question whether what they see and hear is true.
  • exposure to AI-generated fakes can make us question the authenticity of everything we see. Real images and real recordings can be dismissed as fake. 
  • ...20 more annotations...
  • “When you show people deepfakes and generative AI, a lot of times they come out of the experiment saying, ‘I just don’t trust anything anymore,’” says David Rand, a professor at MIT Sloan who studies the creation, spread and impact of misinformation.
  • The signs that an image is AI-generated are easy to miss for a user simply scrolling past, who has an instant to decide whether to like or boost a post on social media. And as generative AI continues to improve, it’s likely that such signs will be harder to spot in the future.
  • The combination of easily-generated fake content and the suspicion that anything might be fake allows people to choose what they want to believe, adds DiResta, leading to what she calls “bespoke realities.”
  • Examples of misleading content created by generative AI are not hard to come by, especially on social media
  • This problem, which has grown more acute in the age of generative AI, is known as the “liar’s dividend.”
  • “What our work suggests is that most regular people do not want to share false things—the problem is they are not paying attention,”
  • People’s attention is already limited, and the way social media works—encouraging us to gorge on content, while quickly deciding whether or not to share it—leaves us precious little capacity to determine whether or not something is true
  • are now using its existence as a pretext to dismiss accurate information
  • in the course of a lawsuit over the death of a man using Tesla’s “full self-driving” system, Elon Musk’s lawyers responded to video evidence of Musk making claims about this software by suggesting that the proliferation of “deepfakes” of Musk was grounds to dismiss such evidence. They advanced that argument even though the clip of Musk was verifiably real
  • If the crisis of authenticity were limited to social media, we might be able to take solace in communication with those closest to us. But even those interactions are now potentially rife with AI-generated fakes.
  • what sounds like a call from a grandchild requesting bail money may be scammers who have scraped recordings of the grandchild’s voice from social media to dupe a grandparent into sending money.
  • companies like Alphabet, the parent company of Google, are trying to spin the altering of personal images as a good thing. 
  • With its latest Pixel phone, the company unveiled a suite of new and upgraded tools that can automatically replace a person’s face in one image with their face from another, or quickly remove someone from a photo entirely.
  • Joseph Stalin, who was fond of erasing people he didn’t like from official photos, would have loved this technology.
  • In Google’s defense, it is adding a record of whether an image was altered to data attached to it. But such metadata is only accessible in the original photo and some copies, and is easy enough to strip out.
  • The rapid adoption of many different AI tools means that we are now forced to question everything that we are exposed to in any medium, from our immediate communities to the geopolitical, said Hany Farid, a professor at the University of California, Berkeley.
  • To put our current moment in historical context, he notes that the PC revolution made it easy to store and replicate information, the internet made it easy to publish it, the mobile revolution made it easier than ever to access and spread, and the rise of AI has made creating misinformation a cinch. And each revolution arrived faster than the one before it.
  • Not everyone agrees that arming the public with easy access to AI will exacerbate our current difficulties with misinformation. The primary argument of such experts is that there is already vastly more misinformation on the internet than a person can consume, so throwing more into the mix won’t make things worse.
  • it’s not exactly reassuring, especially given that trust in institutions is already at one of the lowest points in the past 70 years, according to the nonpartisan Pew Research Center, and polarization—a measure of how much we distrust one another—is at a high point.
  • “What happens when we have eroded trust in media, government, and experts?” says Farid. “If you don’t trust me and I don’t trust you, how do we respond to pandemics, or climate change, or have fair and open elections? This is how authoritarianism arises—when you erode trust in institutions.”
hannahcarter11

Black and Hispanic Communities Grapple With Vaccine Misinformation - The New York Times - 0 views

  • Black and Hispanic communities, which were hit harder by the pandemic and whose vaccination rates are lagging that for white people, are confronting vaccine conspiracy theories, rumors and misleading news reports on social media outlets like Facebook, Instagram, YouTube and Twitter and in private online messaging, health authorities and misinformation researchers said.
  • The misinformation varies, like claims that vaccines can alter DNA — which is not true — and that the vaccines don’t work, or that people of color are being used as guinea pigs.
  • Foreign news outlets and anti-vaccine activists have also aggressively tried to cast doubt on the safety and efficacy of vaccines made in the United States and Europe.
  • ...11 more annotations...
  • Misinformation has complicated efforts by some states to reach out to Black and Hispanic residents, particularly when health officials have provided special registration codes for vaccine appointments. Instead of a benefit, in some cases the codes have become the basis for new false narratives.
  • Anti-vaccine activists have drawn on historical examples, including Nazi doctors who ran experiments in concentration camps, and the Baltimore hospital where, 70 years ago, cancer cells were collected from Henrietta Lacks, a Black mother of five, without her consent.
  • The state figures vary widely. In Texas, where people who identify as Hispanic make up 42 percent of the population, only 20 percent of the vaccinations had gone to that group. In Mississippi, where Black people make up 38 percent of the population, they received 22 percent of the vaccinations
  • According to an analysis by The New York Times, the vaccination rate for Black Americans is half that of white people, and the gap for Hispanic people is even larger
  • Research conducted by the nonprofit Kaiser Family Foundation in mid-February showed a striking disparity between racial groups receiving the vaccine in 34 states that reported the data.
  • An experiment begun in 1932 on nearly 400 Black men in Tuskegee, Ala., is one of the most researched examples of medical mistreatment of the Black community. Over four decades, scientists observed the men, who they knew were infected with syphilis, but didn’t offer treatments so that they could study the disease’s progression. When the experiment came to light in the 1970s, it was condemned by the medical community as a major violation of ethical standards.
  • While Tuskegee averaged several hundred mentions a week on Facebook and Twitter, there were several noticeable spikes that coincided with the introduction of Covid-19 vaccines, according to Zignal Labs, a media insights company.
  • Last month, a poll by the NORC Center for Public Affairs Research found that 23 percent of Republicans said they would “definitely” not get vaccinated, while 21 percent said they “probably” would not get a coronavirus vaccine.
  • Native American groups have been battling vaccine fears in their communities, and doctors have reported that some of their Chinese-American patients have been bringing in articles in Chinese-language media outlets questioning vaccines made in the United States.
  • Many Black and Hispanic people were already struggling to make appointments and reach vaccination sites that are often in whiter, wealthier neighborhoods
  • Misinformation about who is allowed to receive the vaccine, when it is available and how it was safety tested has added even more difficulty, Ms. Mitchell said.
aleija

How to Deal With a Crisis of Misinformation - The New York Times - 0 views

  • False news is on the rise. We can fight the spread with a simple exercise: Slow down and be skeptical.
  • There’s a disease that has been spreading for years now. Like any resilient virus, it evolves to find new ways to attack us. It’s not in our bodies, but on the web.
  • But misinformation has now crept into much darker, sinister corners and taken on forms like the internet meme, which is often a screenshot overlaid with sensational text or manipulated with doctored images.
  • ...4 more annotations...
  • “The meme is probably the most dangerous,” Mr. Duke said. “In seven or 20 words, somebody can say something that’s not true, and people will believe it and share it. It takes two minutes to create.”
  • Before the pandemic, the group would present a few examples of misinformation every few days. Now each student is reporting multiple examples a day.
  • The rise of false news is bad news for all of us. Misinformation can be a detriment to our well-being in a time when people are desperately seeking information such as health guidelines to share with their loved ones about the coronavirus. It can also stoke anger and cause us to commit violence. Also important: It could mislead us about voting in a pandemic that has turned our world upside down.
  • We have to employ more sophisticated methods of consuming information, like doing our own fact-checking and choosing reliable news sources.
aidenborst

What to Expect From Facebook, Twitter and YouTube on Election Day - The New York Times - 1 views

  • Facebook, YouTube and Twitter were misused by Russians to inflame American voters with divisive messages before the 2016 presidential election. The companies have spent the past four years trying to ensure that this November isn’t a repeat.
  • Since 2016, Facebook has poured billions of dollars into beefing up its security operations to fight misinformation and other harmful content. It now has more than 35,000 people working on this area, the company said.
  • Facebook has made changes up till the last minute. Last week, it said it had turned off political and social group recommendations and temporarily removed a feature in Instagram’s hashtag pages to slow the spread of misinformation.
  • ...11 more annotations...
  • Facebook’s app will also look different on Tuesday. To prevent candidates from prematurely and inaccurately declaring victory, the company plans to add a notification at the top of News Feeds letting people know that no winner has been chosen until election results are verified by news outlets like Reuters and The Associated Press
  • After the polls close, Facebook plans to suspend all political ads from circulating on the social network and its photo-sharing site
  • Twitter has also worked to combat misinformation since 2016, in some cases going far further than Facebook. Last year, for instance, it banned political advertising entirely, saying the reach of political messages “should be earned, not bought.”
  • In October, Twitter began experimenting with additional techniques to slow the spread of misinformation.
  • On Tuesday, Mr. Mohan plans to check in regularly with his teams to keep an eye on anything unusual, he said. There will be no “war room,” and he expects that most decisions to keep or remove videos will be clear and that the usual processes for making those decisions will be sufficient.
  • Twitter plans to add labels to tweets from candidates who claim victory before the election is called by authoritative sources.
  • Twitter will eventually allow people to retweet again without prompting them to add their own context. But many of the changes for the election — like the ban on political ads and the fact-checking labels — are permanent
  • For Google’s YouTube, it wasn’t the 2016 election that sounded a wake-up call about the toxic content spreading across its website. That moment came in 2017 when a group of men drove a van into pedestrians on London Bridge after being inspired by YouTube videos of inflammatory sermons from an Islamic cleric.
  • It has overhauled its policies to target misinformation, while tweaking its algorithms to slow the spread of what it deems borderline content — videos that do not blatantly violate its rules but butt up against them.
  • In September, Twitter added an Election Hub that users can use to look for curated information about polling, voting and candidates.
  • Starting on Tuesday and continuing as needed, YouTube will display a fact-check information panel above election-related search results and below videos discussing the results, the company said.
cartergramiak

Conservative News Sites Fuel Voter Fraud Misinformation - The New York Times - 0 views

  • Harvard researchers described a “propaganda feedback loop” in right-wing media. The authors of the study, published this month through the school’s Berkman Klein Center for Internet and Society, reported that popular news outlets, rather than social media platforms, were the main drivers of a disinformation campaign meant to sow doubts about the integrity of the election
  • So far in October, Breitbart has published nearly 30 articles with the tag “voter fraud.”
  • As the country faces a third wave of Covid-19 cases, tens of millions of Americans plan to mail their ballots, and more than 25 states have expanded access to universal mail voting. The voting system, stressed by greater demand, has struggled in places with ballots sent to incorrect addresses or improperly filled out
  • ...17 more annotations...
  • Election experts have calculated that, in a 20-year period, fraud involving mailed ballots has affected 0.00006 percent of individual votes, or one case per state every six or seven years.
  • Among the billions of votes cast from 2000 to 2012, there were 491 cases of absentee-ballot fraud, according to an investigation conducted at Arizona State University’s journalism school.
  • intentional voter fraud is extremely uncommon and rarely organized, according to decades of research.
  • In June, The Washington Post and the nonprofit Electronic Registration information Center analyzed data from three vote-by-mail states and found 372 possible cases of double voting or voting on behalf of dead people in 2016 and 2018, or 0.0025 percent of the 14.6 million mailed ballots.
  • Mr. Trump’s effort to discredit mail-in voting follows decades of disinformation about voter impersonation, voting by noncitizens and double voting, often promoted by Republican leaders.
  • Voting by mail under normal circumstances does not appear to give either major party an advantage, according to a study this spring by Stanford University’s Institute for Economic Policy Research.
  • But many conservative outlets have promoted the idea that fraud involving mailed ballots could tip the scales in favor of Democrats.
  • Stephen J. Stedman, a senior fellow at the Freeman Spogli Institute for International Studies at Stanford, said he thought “about disinformation in this country as almost an information ecology — it’s not an organic thing from the bottom up.”
  • In a similar cycle, the Fox News host Sean Hannity and conservative publications magnified the reach of a deceptive video released last month by Project Veritas, a group run by the conservative activist James O’Keefe. The video claimed without named sources or verifiable evidence that the campaign for Representative Ilhan Omar, a Minnesota Democrat, was collecting ballots illegally.
  • Mr. Stedman said right-leaning outlets sometimes conflated fraud with the statistically insignificant administrative mishaps that occur in every American election
  • Breitbart, The Washington Examiner and others amplify false claims of rampant cheating in what a new Harvard study calls a “propaganda feedback loop.”
  • The Washington Examiner, Breitbart News, The Gateway Pundit and The Washington Times are among the sites that have posted articles with headlines giving weight to the conspiracy theory that voter fraud is rampant and could swing the election to the left, a theory that has been repeatedly debunked by data.
  • “EXCLUSIVE: California Man Finds THOUSANDS of What Appear to be Unopened Ballots in Garbage Dumpster — Workers Quickly Try to Cover Them Up — We are Working to Verify.” The envelopes turned out to be empty and discarded legally in 2018. Gateway Pundit later updated the headline, but not before its original speculation had gone viral.
  • “DESTROYED: Tons of Trump mail-in ballot applications SHREDDED in back of tractor-trailer headed for Pennsylvania.” The material was actually printing waste from a direct mail company.
  • “FEDS: Military Ballots Discarded in ‘Troubling’ Discovery. All Opened Ballots were Cast for Trump.” Headlines on the same issue in The Washington Times were similar: “Feds investigating discarded mail-in ballots cast for Trump in Pennsylvania” and “FBI downplays election fraud as suspected ballot issues found in Pennsylvania, Texas.” A Washington Times opinion piece on the matter had the headline “Trump ballots in trash, oh my.”
  • Pennsylvania’s elections chief said that the discarded ballots were a “bad error” by a seasonal contractor, not “intentional fraud.” Mr. Trump cited the discarded Pennsylvania ballots several times as an example of fraud, including in last month’s presidential debate.
  • RIGGED ELECTION!” He linked to a Breitbart article that included a transcript of Attorney General William P. Barr’s telling the Fox News host Maria Bartiromo that voting by mail “absolutely opens the floodgates to fraud.”
Javier E

Extreme weather, pandemic have exposed flaws in science communication - The Washington ... - 0 views

  • just how much of the population is vulnerable to misinformation. Meanwhile, climate misinformation has persisted for decades and continues to proliferate on the Internet and social media, even as the influence of climate change is now plainly seen in more frequent and intense extreme weather events.
  • To win the war against misinformation in the long run, though, we must educate the next generation of information consumers.
  • “Online misinformation might seem like an incurable virus, but social media companies, policymakers and nonprofits are beginning to address the problem more directly,”
  • ...2 more annotations...
  • “What still needs more attention, however, is more and earlier education.”
  • Many schools have incorporated media literacy into their curriculum, but hearing directly from a practitioner connects those lessons to real life. Scientists and communicators who have young children or are otherwise connected with teachers or schools should volunteer to visit classrooms to talk about misinformation, what it is, how to spot it, and why it’s so dangerous.
criscimagnael

Jan. 6 Committee Subpoenas Twitter, Meta, Alphabet and Reddit - The New York Times - 0 views

  • The House committee investigating the Jan. 6 attack on the Capitol issued subpoenas on Thursday to four major social media companies — Alphabet, Meta, Reddit and Twitter — criticizing them for allowing extremism to spread on their platforms and saying they have failed to cooperate adequately with the inquiry.
  • In letters accompanying the subpoenas, the panel named Facebook, a unit of Meta, and YouTube, which is owned by Alphabet’s Google subsidiary, as among the worst offenders that contributed to the spread of misinformation and violent extremism.
  • The committee sent letters in August to 15 social media companies — including sites where misinformation about election fraud spread, such as the pro-Trump website TheDonald.win — seeking documents pertaining to efforts to overturn the election and any domestic violent extremists associated with the Jan. 6 rally and attack.
  • ...16 more annotations...
  • “It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions,”
  • On Twitter, many of Mr. Trump’s followers used the site to amplify and spread false allegations of election fraud, while connecting with other Trump supporters and conspiracy theorists using the site. And on YouTube, some users broadcast the events of Jan. 6 using the platform’s video streaming technology.
  • In the year since the events of Jan. 6, social media companies have been heavily scrutinized for whether their sites played an instrumental role in organizing the attack.
  • In the months surrounding the 2020 election, employees inside Meta raised warning signs that Facebook posts and comments containing “combustible election misinformation” were spreading quickly across the social network, according to a cache of documents and photos reviewed by The New York Times.
  • Frances Haugen, a former Facebook employee turned whistle-blower, said the company relaxed its safeguards too quickly after the election, which then led it to be used in the storming of the Capitol.
  • In the days after the attack, Reddit banned a discussion forum dedicated to former President Donald J. Trump, where tens of thousands of Mr. Trump’s supporters regularly convened to express solidarity with him.
  • After months of discussions with the companies, only the four large corporations were issued subpoenas on Thursday, because the committee said the firms were “unwilling to commit to voluntarily and expeditiously” cooperating with its work.
  • The committee said letters to the four firms accompanied the subpoenas. The panel said YouTube served as a platform for “significant communications by its users that were relevant to the planning and execution of Jan. 6 attack on the United States Capitol,” including livestreams of the attack as it was taking place.
  • The panel said Facebook and other Meta platforms were used to share messages of “hate, violence and incitement; to spread misinformation, disinformation and conspiracy theories around the election; and to coordinate or attempt to coordinate the Stop the Steal movement.”
  • “Meta has declined to commit to a deadline for producing or even identifying these materials,” Mr. Thompson wrote to Mark Zuckerberg, Meta’s chief executive.
  • The panel said it was focused on Reddit because the platform hosted the r/The_Donald subreddit community that grew significantly before migrating in 2020 to the website TheDonald.win, which ultimately hosted significant discussion and planning related to the Jan. 6 attack.
  • “Unfortunately, the select committee believes Twitter has failed to disclose critical information,” the panel stated.
  • In recent years, Big Tech and Washington have had a history of butting heads. Some Republicans have accused sites including Facebook, Instagram and Twitter of silencing conservative voices.
  • The Federal Trade Commission is investigating whether a number of tech companies have grown too big, and in the process abused their market power to stifle competition. And a bipartisan group of senators and representatives continues to say sites like Facebook and YouTube are not doing enough to curb the spread of misinformation and conspiracy theories.
  • Meta said that it had “produced documents to the committee on a schedule committee staff requested — and we will continue to do so.”
  • The panel has interviewed more than 340 witnesses and issued dozens of subpoenas, including for bank and phone records.
lmunch

How Voting by Mail Tops Election Misinformation - The New York Times - 0 views

  • Of all the election misinformation this year, false and misleading information about voting by mail has been the most rampant, according to Zignal Labs, a media insights company.
  • Of the 13.4 million mentions of voting by mail on social media; news on television, print and online; blogs and online forums between January and September, nearly a fourth — or 3.1 million mentions — have most likely been misinformation, Zignal Labs said.
  • The misleading information about voting by mail was not uniform. It broke down into six main categories:
  • mentions of absentee voting or ballots
  • mentions of voter fraud, such as mentions of misleading stories about criminal conduct involving mail-in ballots
  • mentions of voter IDs, such as the baseless idea that in states with strict voter ID laws, mail-in ballots have been dumped out
  • mentions of foreign interference
  • mentions of ballot “harvesting,”
  • mentions of a “rigged election”
  • Facebook, YouTube and Twitter have made combating false information about voting a priority, including highlighting accurate information on how to vote and how to register to vote.
katherineharron

What Matters: Here's what connects Covid denial and election denial - CNNPolitics - 0 views

  • There are two core strains of denialism apparent in mainstream America today: that the election was a fraud and that Covid doesn't exist.
  • What ties these lies together: President Donald Trump won't admit defeat in the election or missteps on Covid, creating a bedrock of inaccuracy; the democratization of information on the internet enables everyone to publish their thoughts, even if they're totally made up; and as the country gets more tribal in its politics, people find satisfaction in blaming villains, regardless of facts.
  • Either Trump is spinning an alternate reality for followers who agree with him or he is just channeling and amplifying what he hears from them. Regardless, in his four years in office, he has totally normalized bad information.
  • Climate change, the Russia investigation, his own impeachment, the election he won four years ago, President Barack Obama's birth certificate -- Trump's said so many things are hoaxes or fakes that he may personally not know what is real and what is imagined anymore.
  • Certainly the news Wednesday that President-elect Joe Biden's son Hunter is under investigation by US attorneys in Delaware over his business dealings with Chinese nationals will fuel renewed efforts to smear the President-elect through his son. Misinformation needs a kernel of truth to flourish. Here's what we actually know about the investigation into Hunter Biden.
  • Dr. Anthony Fauci complained Tuesday about trying to reach people in communities where hospitals are nearly overrun, but denialists stubbornly reject masks and social distancing.
  • The Supreme Court, which is controlled by conservatives, shut the door on Trump's election fraud fantasy, and his efforts to get state legislators to bypass the voters have so far failed.
  • "The fact that the justices issued a one-sentence order with no separate opinions is a powerful sign that the court intends to stay out of election-related disputes, and that it's going to leave things to the electoral process going forward," CNN legal analyst Steve Vladeck said after the ruling.
  • The Texas lawsuit is concerned only with the ones in key states where Biden won, which has been described as hypocrisy, though that seems too weak a word here.
  • If the Supreme Court's Pennsylvania ruling is any indication, this Texas suit is just the latest in a series of increasingly desperate last gasps as Trump hops from one dead-end lawsuit to the next.
  • The cliché descriptor for the internet is that the world's information is at our fingertips. Which is true. But it also means the world's misinformation is at our fingertips. If you want to make a lie seem legit, it's easy to find a handful of pieces of misinformation or out-of-context articles and videos to bolster pretty much any false narrative.
  • People have all different motivations for peddling misinformation. Sometimes it's political, sometimes financial, sometimes a mix of both -- and of course some people just share it and want to believe it because it confirms their biases. With Trump, for instance, his reasons for pushing misinformation are both political and financial -- he doesn't want to admit he lost and he is fundraising off the back of the lies.
  • a lot of Americans are dreaming of the post-Trump era where he fizzles out of their daily lives. I don't think that is going to happen on social media. Trump has too big a footprint.
  • He drives so much of the right-wing ecosystem and I still think he and his proxies, like his sons, are going to hold a lot of influence.
  • The covert nature of these operations means it's always hard to tell, but certainly the experts we have spoken to this year believe Russian trolls and their ilk have been amplifying existing divisive narratives in the US rather than creating their own
  • I think the problem is going to get worse before it gets better. It's depressing, but I think a lot of people do not want facts
Javier E

How Facebook Failed the World - The Atlantic - 0 views

  • In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced false information about conspiracy theories and vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.
  • these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources.
  • Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.
  • According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”
  • As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.
  • last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users.
  • Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech.
  • The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents.
  • “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats.
  • In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebooklike internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.”
  • To hear this employee tell it, the problem was structural: Employees who are primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”
  • And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. and Europe—a choice, the documents show, that means problematic content is seldom detected.
  • Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.
  • A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult.
  • “I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”
  • Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”
  • Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence
  • It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible.
  • Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of whack-a-mole, one with no winners and a lot of sore arms.
  • Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.
  • A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016. “​​Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”
  • Zhang's memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.”
  • Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.
  • “I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”
  • What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform. It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.
  • solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.
criscimagnael

TikTok Ukraine War Videos Raise Questions About Spread of Misinformation - The New York... - 0 views

  • “What I see on TikTok is more real, more authentic than other social media,” said Ms. Hernandez, a student in Los Angeles. “I feel like I see what people there are seeing.”
  • But what Ms. Hernandez was actually viewing and hearing in the TikTok videos was footage of Ukrainian tanks taken from video games, as well as a soundtrack that was first uploaded to the app more than a year ago.
  • TikTok, the Chinese-owned video app known for viral dance and lip-syncing videos, has emerged as one of the most popular platforms for sharing videos and photos of the Russia-Ukraine war. Over the past week, hundreds of thousands of videos about the conflict have been uploaded to the app from across the world, according to a review by The Times. The New Yorker has called the invasion the world’s “first TikTok war.”
  • Many popular TikTok videos of the invasion — including of Ukrainians livestreaming from their bunkers — offer real accounts of the action, according to researchers who study the platform. But other videos have been impossible to authenticate and substantiate. Some simply appear to be exploiting the interest in the invasion for views, the researchers said.
  • The clip was then used in many TikTok videos, some of which included a note stating that all 13 soldiers had died. Ukrainian officials later said in a Facebook post that the men were alive and had been taken prisoner, but the TikTok videos have not been corrected.
  • “People trust it. The result is that a lot of people are seeing false information about Ukraine and believing it.”
  • TikTok and other social media platforms are also under pressure from U.S. lawmakers and Ukrainian officials to curb Russian misinformation about the war, especially from state-backed media outlets such as Russia Today and Sputnik.
  • For years, TikTok largely escaped sustained scrutiny about its content. Unlike Facebook, which has been around since 2004, and YouTube, which was founded in 2005, TikTok only became widely used in the past five years.
  • The app has navigated some controversies in the past. It has faced questions over harmful fads that appeared to originate on its platform, as well as whether it allows underage users and adequately protects their privacy.
  • That includes TikTok’s algorithm for its “For You” page, which suggests videos based on what people have previously seen, liked or shared. Viewing one video with misinformation likely leads to more videos with misinformation being shown, Ms. Richards said.
  • But audio can be misused and taken out of context, Ms. Richards said.
  • “Video is the hardest format to moderate for all platforms,” said Alex Stamos, the director of the Stanford Internet Observatory and a former head of security at Facebook. “When combined with the fact that TikTok’s algorithm is the primary factor for what content a user sees, as opposed to friendships or follows on the big U.S. platforms, this makes TikTok a uniquely potent platform for viral propaganda.”
  • “I feel like lately, the videos I’m seeing are designed to get me riled up, or to emotionally manipulate me,” she said. “I get worried so now, sometimes, I find myself Googling something or checking the comments to see if it is real before I trust it.”
  • “I guess I don’t really know what war looks like,” she said. “But we go to TikTok to learn about everything, so it makes sense we would trust it about this too.”
anonymous

Election Lawsuits Are A New Tactic To Fight Disinformation : NPR - 0 views

  • The victims of some of the most pernicious conspiracy theories of 2020 are fighting back in court. Voting equipment companies have filed a series of massive defamation lawsuits against allies of former President Trump in an effort to exert accountability over falsehoods about the companies' role in the election and repair damage to their brands.
  • On Friday, Fox News became the latest target and was served with a $1.6 billion defamation lawsuit by Denver-based Dominion Voting Systems after several of the network's hosts entertained, on air, conspiracy theories pushed by former President Trump that the company had rigged the results of the November election against him in key states.
  • Dominion has also sued Trump associates Rudy Giuliani, Sidney Powell and Mike Lindell for billions in damages. The company is one of the top providers of voting equipment to states and counties around the country and typically relies on procurement decisions made by elected officials from both political parties.
  • Earlier this month, Republican commissioners in one Ohio county sought to block the county election board's purchase of new Dominion equipment. A Dominion employee who was forced into hiding due to death threats has sued Giuliani, Powell and the Trump campaign. Another voting systems company, Smartmatic, has also filed a defamation lawsuit against Fox News.
  • Some see these legal fights as another way to take on viral misinformation, one that's already starting to show some results although some journalists are uneasy that a news organization could be targeted.
  • Skarnulis hopes that in addition to helping Coomer clear his name and return to a normal life, the suits will also serve as a warning.
  • The number of defamation lawsuits and the large damage claims associated with them is novel, said journalism and public policy professor Bill Adair, head of the journalism program at Duke University.
  • He does worry that using defamation suits to combat untruths spread by media outlets could become a weapon against journalists just doing their jobs. "As a journalist, I'm a little bit nervous. The idea of using defamation lawsuits makes us a little bit concerned." But even with that discomfort, Adair has come to believe the lawsuits do have a role to play.
  • The defamation suits already do appear to be having an effect. An anchor for Newsmax walked out on a live interview with My Pillow CEO Lindell when he started making unsubstantiated claims about Dominion voting machines. Fox News, the Fox Business Network and Newsmax also aired segments that contradicted the disinformation their own hosts had amplified.
  • Last month, Fox Business also cancelled a show hosted by Trump ally Lou Dobbs, who had amplified the conspiracy theories and interviewed Powell and Giuliani about them.
  • One challenge for the plaintiffs is that defamation lawsuits are difficult to win. They need to show the person they're suing knew a statement was false when she made it, or had serious doubts about its truthfulness.
  • Media organizations have a First Amendment right to report the news, and that includes repeating what important people say, even if those statements are false, said George Freeman, the former in-house counsel for The New York Times, who now heads the Media Law Resource Center.
  • Pro-Trump outlets are likely to claim that constitutional protection for their defense, but Freeman believes they may have crossed a legal line in their presentation of election fraud claims and, in some instances, in applauding obvious falsehoods.
  • Still Freeman said he thinks the strongest defamation cases aren't against the media companies, but against one of the people they gave a lot of airtime to, Rudy Giuliani.
  • In a January call announcing the lawsuit against Giuliani, Dominion's attorney, Tom Clare, said that the court can consider circumstantial evidence too. The complaint includes a detailed timeline that shows Giuliani continued to make his claims in the face of public assurances from election security experts, hand recounts, and numerous court rulings rejecting fraud cases.
  • While the current lawsuits could have an impact in this instance, experts on misinformation say there are several reasons why defamation cases aren't a central tool in the fight against falsehoods.
  • Many conspiracy theories don't target a specific person or company, so there's no one to file a lawsuit against. Legal action is also expensive. Coomer's legal team expects his bills will exceed $2 million. And when a victim does sue, a case can take years.
  • The parents of children killed in the Sandy Hook shooting have filed multiple defamation lawsuits against Alex Jones of the conspiracy site, InfoWars. But after numerous challenges and delays, the cases are all still in the pre-trial phase. With Dominion and Smartmatic vowing not to settle before they get their day in court, this approach to fighting election misinformation may still be grinding forward even as the country enters the next presidential election. But for Adair and others, any effort to discourage future misinformation campaigns is worth pursuing.
mimiterranova

J&J Vaccine Pause Creates 'Perfect Storm' For Misinformation : NPR - 0 views

  • The most popular link on Facebook about the Johnson & Johnson news was shared by a conspiracy theorist and self-described "news analyst & hip-hop artist" named An0maly who thinks the pandemic is a cover for government control. It's a stark example of what experts warn could be a coming deluge of false or misleading information related to the one-shot vaccine.
  • In the case of the post by An0maly, a Facebook representative said the company has taken action against previous posts of his that have broken the social media platform's rules. It broadly removed more than 16 million pieces of content over the past year related to COVID-19 misinformation, but because this specific post did not contain any factually incorrect information, it would stay up.
  • But that story shifted on Tuesday after federal health officials recommended a temporary halt in the use of the Johnson & Johnson vaccine after a handful of reports about blood clots surfaced among the millions who have received the shot.
  • Millions of Americans were already skeptical of the vaccines before the Johnson & Johnson news, and a vast online network exists to feed that skepticism with bad information and conspiracy theories.
  • "The social media companies have taken a hard line against disinformation; they have not taken a similarly hard line against fallacies."
  • Now, Roberts said, whenever the CDC comes out with guidance about the Johnson & Johnson vaccine, health officials will be fighting ingrained doubts.
  • The Johnson & Johnson pause is also fertile ground for conspiracies because it is a developing topic with a number of unanswered questions.
  • Because health officials are still investigating the clotting issue, and determining guidance about the vaccine, there isn't much trustworthy information the government or credible outlets can provide to fill that void.
  • Many anti-vaccine activists have adopted this tactic as a way of getting around social media networks' policies designed to halt the spread of false information
  • "Every time there's going to be a new bit of negative [vaccine] information or circumstances that sow doubt, it's like we're caught on the back foot and we have to come together again and push forward,"