
History Readings: Group items tagged Facebook


Javier E

Facebook Papers: 'History Will Not Judge Us Kindly' - The Atlantic - 0 views

  • Facebook’s hypocrisies, and its hunger for power and market domination, are not secret. Nor is the company’s conflation of free speech and algorithmic amplification
  • But the events of January 6 proved for many people—including many in Facebook’s workforce—to be a breaking point.
  • these documents leave little room for doubt about Facebook’s crucial role in advancing the cause of authoritarianism in America and around the world. Authoritarianism predates the rise of Facebook, of course. But Facebook makes it much easier for authoritarians to win.
  • ...59 more annotations...
  • Again and again, the Facebook Papers show staffers sounding alarms about the dangers posed by the platform—how Facebook amplifies extremism and misinformation, how it incites violence, how it encourages radicalization and political polarization. Again and again, staffers reckon with the ways in which Facebook’s decisions stoke these harms, and they plead with leadership to do more.
  • And again and again, staffers say, Facebook’s leaders ignore them.
  • Facebook has dismissed the concerns of its employees in manifold ways.
  • One of its cleverer tactics is to argue that staffers who have raised the alarm about the damage done by their employer are simply enjoying Facebook’s “very open culture,” in which people are encouraged to share their opinions, a spokesperson told me. This stance allows Facebook to claim transparency while ignoring the substance of the complaints, and the implication of the complaints: that many of Facebook’s employees believe their company operates without a moral compass.
  • When you stitch together the stories that spanned the period between Joe Biden’s election and his inauguration, it’s easy to see Facebook as instrumental to the attack on January 6. (A spokesperson told me that the notion that Facebook played an instrumental role in the insurrection is “absurd.”)
  • what emerges from a close reading of Facebook documents, and observation of the manner in which the company connects large groups of people quickly, is that Facebook isn’t a passive tool but a catalyst. Had the organizers tried to plan the rally using other technologies of earlier eras, such as telephones, they would have had to identify and reach out individually to each prospective participant, then persuade them to travel to Washington. Facebook made people’s efforts at coordination highly visible on a global scale.
  • The platform not only helped them recruit participants but offered people a sense of strength in numbers. Facebook proved to be the perfect hype machine for the coup-inclined.
  • In November 2019, Facebook staffers noticed they had a serious problem. Facebook offers a collection of one-tap emoji reactions. Today, they include “like,” “love,” “care,” “haha,” “wow,” “sad,” and “angry.” Company researchers had found that the posts dominated by “angry” reactions were substantially more likely to go against community standards, including prohibitions on various types of misinformation, according to internal documents.
  • In July 2020, researchers presented the findings of a series of experiments. At the time, Facebook was already weighting the reactions other than “like” more heavily in its algorithm—meaning posts that got an “angry” reaction were more likely to show up in users’ News Feeds than posts that simply got a “like.” Anger-inducing content didn’t spread just because people were more likely to share things that made them angry; the algorithm gave anger-inducing content an edge. Facebook’s Integrity workers—employees tasked with tackling problems such as misinformation and espionage on the platform—concluded that they had good reason to believe targeting posts that induced anger would help stop the spread of harmful content.
  • By dialing anger’s weight back to zero in the algorithm, the researchers found, they could keep posts to which people reacted angrily from being viewed by as many users. That, in turn, translated to a significant (up to 5 percent) reduction in the hate speech, civic misinformation, bullying, and violent posts—all of which are correlated with offline violence—to which users were exposed.
  • Facebook rolled out the change in early September 2020, documents show; a Facebook spokesperson confirmed that the change has remained in effect. It was a real victory for employees of the Integrity team.
  • But it doesn’t normally work out that way. In April 2020, according to Frances Haugen’s filings with the SEC, Facebook employees had recommended tweaking the algorithm so that the News Feed would deprioritize the surfacing of content for people based on their Facebook friends’ behavior. The idea was that a person’s News Feed should be shaped more by people and groups that a person had chosen to follow. Up until that point, if your Facebook friend saw a conspiracy theory and reacted to it, Facebook’s algorithm might show it to you, too. The algorithm treated any engagement in your network as a signal that something was worth sharing. But now Facebook workers wanted to build circuit breakers to slow this form of sharing.
  • Experiments showed that this change would impede the distribution of hateful, polarizing, and violence-inciting content in people’s News Feeds. But Zuckerberg “rejected this intervention that could have reduced the risk of violence in the 2020 election,” Haugen’s SEC filing says. An internal message characterizing Zuckerberg’s reasoning says he wanted to avoid new features that would get in the way of “meaningful social interactions.” But according to Facebook’s definition, its employees say, engagement is considered “meaningful” even when it entails bullying, hate speech, and reshares of harmful content.
  • This episode, like Facebook’s response to the incitement that proliferated between the election and January 6, reflects a fundamental problem with the platform
  • Facebook’s megascale allows the company to influence the speech and thought patterns of billions of people. What the world is seeing now, through the window provided by reams of internal documents, is that Facebook catalogs and studies the harm it inflicts on people. And then it keeps harming people anyway.
  • “I am worried that Mark’s continuing pattern of answering a different question than the question that was asked is a symptom of some larger problem,” wrote one Facebook employee in an internal post in June 2020, referring to Zuckerberg. “I sincerely hope that I am wrong, and I’m still hopeful for progress. But I also fully understand my colleagues who have given up on this company, and I can’t blame them for leaving. Facebook is not neutral, and working here isn’t either.”
  • It is quite a thing to see, the sheer number of Facebook employees—people who presumably understand their company as well as or better than outside observers—who believe their employer to be morally bankrupt.
  • I spoke with several former Facebook employees who described the company’s metrics-driven culture as extreme, even by Silicon Valley standards
  • Facebook workers are under tremendous pressure to quantitatively demonstrate their individual contributions to the company’s growth goals, they told me. New products and features aren’t approved unless the staffers pitching them demonstrate how they will drive engagement.
  • These worries have been exacerbated lately by fears about a decline in new posts on Facebook, two former employees who left the company in recent years told me. People are posting new material less frequently to Facebook, and its users are on average older than those of other social platforms.
  • One of Facebook’s Integrity staffers wrote at length about this dynamic in a goodbye note to colleagues in August 2020, describing how risks to Facebook users “fester” because of the “asymmetrical” burden placed on employees to “demonstrate legitimacy and user value” before launching any harm-mitigation tactics—a burden not shared by those developing new features or algorithm changes with growth and engagement in mind
  • The note said: We were willing to act only after things had spiraled into a dire state … Personally, during the time that we hesitated, I’ve seen folks from my hometown go further and further down the rabbithole of QAnon and Covid anti-mask/anti-vax conspiracy on FB. It has been painful to observe.
  • Current and former Facebook employees describe the same fundamentally broken culture—one in which effective tactics for making Facebook safer are rolled back by leadership or never approved in the first place.
  • That broken culture has produced a broken platform: an algorithmic ecosystem in which users are pushed toward ever more extreme content, and where Facebook knowingly exposes its users to conspiracy theories, disinformation, and incitement to violence.
  • One example is a program that amounts to a whitelist for VIPs on Facebook, allowing some of the users most likely to spread misinformation to break Facebook’s rules without facing consequences. Under the program, internal documents show, millions of high-profile users—including politicians—are left alone by Facebook even when they incite violence
  • whitelisting influential users with massive followings on Facebook isn’t just a secret and uneven application of Facebook’s rules; it amounts to “protecting content that is especially likely to deceive, and hence to harm, people on our platforms.”
  • Facebook workers tried and failed to end the program. Only when its existence was reported in September by The Wall Street Journal did Facebook’s Oversight Board ask leadership for more information about the practice. Last week, the board publicly rebuked Facebook for not being “fully forthcoming” about the program.
  • As a result, Facebook has stoked an algorithm arms race within its ranks, pitting core product-and-engineering teams, such as the News Feed team, against their colleagues on Integrity teams, who are tasked with mitigating harm on the platform. These teams establish goals that are often in direct conflict with each other.
  • “We can’t pretend we don’t see information consumption patterns, and how deeply problematic they are for the longevity of democratic discourse,” a user-experience researcher wrote in an internal comment thread in 2019, in response to a now-infamous memo from Andrew “Boz” Bosworth, a longtime Facebook executive. “There is no neutral position at this stage, it would be powerfully immoral to commit to amorality.”
  • Zuckerberg has defined Facebook’s mission as making “social infrastructure to give people the power to build a global community that works for all of us,” but in internal research documents his employees point out that communities aren’t always good for society:
  • When part of a community, individuals typically act in a prosocial manner. They conform, they forge alliances, they cooperate, they organize, they display loyalty, they expect obedience, they share information, they influence others, and so on. Being in a group changes their behavior, their abilities, and, importantly, their capability to harm themselves or others
  • Thus, when people come together and form communities around harmful topics or identities, the potential for harm can be greater.
  • The infrastructure choices that Facebook is making to keep its platform relevant are driving down the quality of the site, and exposing its users to more dangers
  • Those dangers are also unevenly distributed, because of the manner in which certain subpopulations are algorithmically ushered toward like-minded groups
  • And the subpopulations of Facebook users who are most exposed to dangerous content are also most likely to be in groups where it won’t get reported.
  • And it knows that 3 percent of Facebook users in the United States are super-consumers of conspiracy theories, accounting for 37 percent of known consumption of misinformation on the platform.
  • Zuckerberg’s positioning of Facebook’s role in the insurrection is odd. He lumps his company in with traditional media organizations—something he’s ordinarily loath to do, lest the platform be expected to take more responsibility for the quality of the content that appears on it—and suggests that Facebook did more, and did better, than journalism outlets in its response to January 6. What he fails to say is that journalism outlets would never be in the position to help investigators this way, because insurrectionists don’t typically use newspapers and magazines to recruit people for coups.
  • Facebook wants people to believe that the public must choose between Facebook as it is, on the one hand, and free speech, on the other. This is a false choice. Facebook has a sophisticated understanding of measures it could take to make its platform safer without resorting to broad or ideologically driven censorship tactics.
  • Facebook knows that no two people see the same version of the platform, and that certain subpopulations experience far more dangerous versions than others do
  • Facebook knows that people who are isolated—recently widowed or divorced, say, or geographically distant from loved ones—are disproportionately at risk of being exposed to harmful content on the platform.
  • It knows that repeat offenders are disproportionately responsible for spreading misinformation.
  • All of this makes the platform rely more heavily on ways it can manipulate what its users see in order to reach its goals. This explains why Facebook is so dependent on the infrastructure of groups, as well as making reshares highly visible, to keep people hooked.
  • It could consistently enforce its policies regardless of a user’s political power.
  • Facebook could ban reshares.
  • It could choose to optimize its platform for safety and quality rather than for growth.
  • It could tweak its algorithm to prevent widespread distribution of harmful content.
  • Facebook could create a transparent dashboard so that all of its users can see what’s going viral in real time.
  • It could make public its rules for how frequently groups can post and how quickly they can grow.
  • It could also automatically throttle groups when they’re growing too fast, and cap the rate of virality for content that’s spreading too quickly.
  • Facebook could shift the burden of proof toward people and communities to demonstrate that they’re good actors—and treat reach as a privilege, not a right
  • You must be vigilant about the informational streams you swim in, deliberate about how you spend your precious attention, unforgiving of those who weaponize your emotions and cognition for their own profit, and deeply untrusting of any scenario in which you’re surrounded by a mob of people who agree with everything you’re saying.
  • It could do all of these things. But it doesn’t.
  • Lately, people have been debating just how nefarious Facebook really is. One argument goes something like this: Facebook’s algorithms aren’t magic, its ad targeting isn’t even that good, and most people aren’t that stupid.
  • All of this may be true, but that shouldn’t be reassuring. An algorithm may just be a big dumb means to an end, a clunky way of maneuvering a massive, dynamic network toward a desired outcome. But Facebook’s enormous size gives it tremendous, unstable power.
  • Facebook takes whole populations of people, pushes them toward radicalism, and then steers the radicalized toward one another.
  • When the most powerful company in the world possesses an instrument for manipulating billions of people—an instrument that only it can control, and that its own employees say is badly broken and dangerous—we should take notice.
  • The lesson for individuals is this:
  • Facebook could say that its platform is not for everyone. It could sound an alarm for those who wander into the most dangerous corners of Facebook, and those who encounter disproportionately high levels of harmful content
  • Without seeing how Facebook works at a finer resolution, in real time, we won’t be able to understand how to make the social web compatible with democracy.
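Several of the annotations above describe one mechanism in plain terms: each one-tap reaction carries a weight in the News Feed ranking, “angry” was weighted more heavily than “like,” and the September 2020 fix consisted of dialing that single weight back to zero. The following is a minimal sketch of that idea only; the weights, post data, and scoring function are invented for illustration and are not Facebook’s actual ranking code.

```python
# Toy model of reaction-weighted ranking. The reaction names come from the
# article; every number and data shape here is hypothetical.
REACTION_WEIGHTS = {
    "like": 1.0,
    # Reactions other than "like" were weighted more heavily at the time.
    "love": 1.5, "care": 1.5, "haha": 1.5, "wow": 1.5, "sad": 1.5,
    "angry": 1.5,
}

def rank_score(post, weights):
    """Sum of weighted reaction counts (a stand-in for a far more complex signal)."""
    return sum(weights.get(r, 0.0) * n for r, n in post["reactions"].items())

# The 2020 intervention amounts to setting one weight back to zero.
ADJUSTED_WEIGHTS = dict(REACTION_WEIGHTS, angry=0.0)

posts = [
    {"id": "a", "reactions": {"like": 900, "angry": 50}},
    {"id": "b", "reactions": {"like": 200, "angry": 600}},  # anger-dominated post
]

for weights in (REACTION_WEIGHTS, ADJUSTED_WEIGHTS):
    ordered = sorted(posts, key=lambda p: rank_score(p, weights), reverse=True)
    print([p["id"] for p in ordered])
# Prints ['b', 'a'] with the original weights and ['a', 'b'] with "angry" zeroed:
# the anger-dominated post no longer outranks the ordinary one.
```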
Javier E

Facebook's problem isn't Trump - it's the algorithm - Popular Information - 0 views

  • Facebook is in the business of making money. And it's very good at it. In the first three months of 2021, Facebook raked in over $11 billion in profits, almost entirely from displaying targeted advertising to its billions of users. 
  • In order to keep the money flowing, Facebook also needs to moderate content. When people use Facebook to livestream a murder, incite a genocide, or plan a white supremacist rally, it is not a good look.
  • But content moderation is a tricky business. This is especially true on Facebook where billions of pieces of content are posted every day. In a lot of cases, it is difficult to determine what content is truly harmful. No matter what you do, someone is unhappy. And it's a distraction from Facebook's core business of selling ads.
  • ...17 more annotations...
  • In 2019, Facebook came up with a solution to offload the most difficult content moderation decisions. The company created the "Oversight Board," a quasi-judicial body that Facebook claims is independent. The Board, stocked with impressive thinkers from around the world, would issue "rulings" about whether certain Facebook content moderation decisions were correct.
  • the decision, which is nearly 12,000 words long, illustrates that whether Trump is ultimately allowed to return to Facebook is of limited significance. The more important questions are about the nature of the algorithm that gives people with views like Trump such a powerful voice on Facebook. 
  • The Oversight Board was Facebook's idea. It spent years constructing the organization, selected its chairs, and funded its endowment. But now that the Oversight Board is finally up and running and taking on high-profile cases, Facebook is choosing to ignore questions that the Oversight Board believes are essential to doing its job.
  • The Daily Wire produces no original reporting. But, on Facebook in April, The Daily Wire received more than double the distribution of the Washington Post and the New York Times combined.
  • A critical issue, as the Oversight Board suggests, is not simply Trump's posts but how those kinds of posts are amplified by Facebook's algorithms. Equally important is how Facebook's algorithms amplify false, paranoid, violent, right-wing content from people other than Trump — including those that follow Trump on Facebook.
  • The jurisdiction of the Oversight Board excludes both the algorithm and Facebook's business practices.
  • This is a key passage (emphasis added): Facebook stated to the Board that it considered Mr. Trump’s “repeated use of Facebook and other platforms to undermine confidence in the integrity of the election (necessitating repeated application by Facebook of authoritative labels correcting the misinformation) represented an extraordinary abuse of the platform.” The Board sought clarification from Facebook about the extent to which the platform’s design decisions, including algorithms, policies, procedures and technical features, amplified Mr. Trump’s posts after the election and whether Facebook had conducted any internal analysis of whether such design decisions may have contributed to the events of January 6. Facebook declined to answer these questions. This makes it difficult for the Board to assess whether less severe measures, taken earlier, may have been sufficient to protect the rights of others.
  • Donald Trump's Facebook page is a symptom, not the cause, of the problem. Its algorithm favors low-quality, far-right content. Trump is just one of many beneficiaries.
  • NewsWhip is a social media analytics service which tracks which websites get the most engagement on Facebook. It just released its analysis for April and it shows low-quality right-wing aggregation sites dominate major news organizations.
  • The Oversight Board has no power to compel Facebook to answer. It's an important reminder that, for all the pomp and circumstance, the Oversight Board is not a court. The scope of its authority is limited by Facebook executives' willingness to play along. 
  • This actually understates how much better The Daily Wire's content performs on Facebook than the Washington Post and the New York Times. The Daily Wire published just 1,385 pieces of content in April compared to over 6,000 by the Washington Post and the New York Times. Each piece of content The Daily Wire published in April received 54,084 engagements on Facebook, compared to 2,943 for the New York Times and 1,973 for the Washington Post. 
  • It's important to note here that Facebook's algorithm is not reflecting reality — it's creating a reality that doesn't exist anywhere else. In the rest of the world, Western Journal is not more popular than the New York Times, NBC News, the BBC, and the Washington Post. That's only true on Facebook.
  • Facebook has made a conscious decision to surface low-quality content and recognizes its dangers.
  • Shortly after the November election, Facebook temporarily tweaked its algorithm to emphasize "'news ecosystem quality' scores, or N.E.Q., a secret internal ranking it assigns to news publishers based on signals about the quality of their journalism." The purpose was to attempt to cut down on election misinformation being spread on the platform by Trump and his allies. The result was "a spike in visibility for big, mainstream publishers like CNN, The New York Times and NPR, while posts from highly engaged hyperpartisan pages, such as Breitbart and Occupy Democrats, became less visible." 
  • BuzzFeed reported that some Facebook staff members wanted to make the change permanent. But that suggestion was opposed by Joel Kaplan, a top Facebook executive and Republican operative who frequently intervenes on behalf of right-wing publishers. The algorithm change was quickly rolled back.
  • Other proposed changes to the Facebook algorithm over the years have been rejected or altered because of their potential negative impact on right-wing sites like The Daily Wire. 
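The N.E.Q. episode described above is, mechanically, another re-weighting: an internal publisher-quality score is blended into the ranking signal so that outlets with strong quality scores gain visibility relative to high-engagement hyperpartisan pages. Below is a rough sketch of that idea under stated assumptions; the scores, blend factor, publisher labels, and function are invented for illustration and do not reflect Facebook’s actual formula.

```python
# Hypothetical blend of an engagement signal with an N.E.Q.-style quality score.
# All numbers here are invented.
articles = [
    {"publisher": "hyperpartisan-aggregator", "engagement": 10_000, "neq": 0.2},
    {"publisher": "mainstream-newsroom",      "engagement": 6_000,  "neq": 0.9},
]

def blended_score(article, quality_weight):
    """Scale engagement by a quality multiplier; quality_weight=0 ignores N.E.Q. entirely."""
    multiplier = (1 - quality_weight) + quality_weight * article["neq"]
    return article["engagement"] * multiplier

for qw in (0.0, 0.95):  # 0.0 ~ ordinary ranking, 0.95 ~ the temporary post-election emphasis
    ranked = sorted(articles, key=lambda a: blended_score(a, qw), reverse=True)
    print(qw, [a["publisher"] for a in ranked])
# Prints the aggregator first at qw=0.0 and the newsroom first at qw=0.95:
# turning the quality dial up (or rolling it back, as Facebook did) reorders the feed.
```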
Javier E

Here's a Look Inside Facebook's Data Wars - The New York Times - 0 views

  • On one side were executives, including Mr. Silverman and Brian Boland, a Facebook vice president in charge of partnerships strategy, who argued that Facebook should publicly share as much information as possible about what happens on its platform — good, bad or ugly.
  • On the other side were executives, including the company’s chief marketing officer and vice president of analytics, Alex Schultz, who worried that Facebook was already giving away too much.
  • One day in April, the people behind CrowdTangle, a data analytics tool owned by Facebook, learned that transparency had limits.
  • ...27 more annotations...
  • They argued that journalists and researchers were using CrowdTangle, a kind of turbocharged search engine that allows users to analyze Facebook trends and measure post performance, to dig up information they considered unhelpful — showing, for example, that right-wing commentators like Ben Shapiro and Dan Bongino were getting much more engagement on their Facebook pages than mainstream news outlets.
  • These executives argued that Facebook should selectively disclose its own data in the form of carefully curated reports, rather than handing outsiders the tools to discover it themselves.Team Selective Disclosure won, and CrowdTangle and its supporters lost.
  • the CrowdTangle story is important, because it illustrates the way that Facebook’s obsession with managing its reputation often gets in the way of its attempts to clean up its platform
  • The company, blamed for everything from election interference to vaccine hesitancy, badly wants to rebuild trust with a skeptical public. But the more it shares about what happens on its platform, the more it risks exposing uncomfortable truths that could further damage its image.
  • Facebook’s executives were more worried about fixing the perception that Facebook was amplifying harmful content than figuring out whether it actually was amplifying harmful content. Transparency, they said, ultimately took a back seat to image management.
  • the executives who pushed hardest for transparency appear to have been sidelined. Mr. Silverman, CrowdTangle’s co-founder and chief executive, has been taking time off and no longer has a clearly defined role at the company, several people with knowledge of the situation said. (Mr. Silverman declined to comment about his status.) And Mr. Boland, who spent 11 years at Facebook, left the company in November.
  • “One of the main reasons that I left Facebook is that the most senior leadership in the company does not want to invest in understanding the impact of its core products,” Mr. Boland said, in his first interview since departing. “And it doesn’t want to make the data available for others to do the hard work and hold them accountable.”
  • Mr. Boland, who oversaw CrowdTangle as well as other Facebook transparency efforts, said the tool fell out of favor with influential Facebook executives around the time of last year’s presidential election, when journalists and researchers used it to show that pro-Trump commentators were spreading misinformation and hyperpartisan commentary with stunning success.
  • “People were enthusiastic about the transparency CrowdTangle provided until it became a problem and created press cycles Facebook didn’t like,” he said. “Then, the tone at the executive level changed.”
  • Facebook was happy that I and other journalists were finding its tool useful. With only about 25,000 users, CrowdTangle is one of Facebook’s smallest products, but it has become a valuable resource for power users including global health organizations, election officials and digital marketers, and it has made Facebook look transparent compared with rival platforms like YouTube and TikTok, which don’t release nearly as much data.
  • But the mood shifted last year when I started a Twitter account called @FacebooksTop10, on which I posted a daily leaderboard showing the sources of the most-engaged link posts by U.S. pages, based on CrowdTangle data.
  • Last fall, the leaderboard was full of posts by Mr. Trump and pro-Trump media personalities. Since Mr. Trump was barred from Facebook in January, it has been dominated by a handful of right-wing polemicists like Mr. Shapiro, Mr. Bongino and Sean Hannity, with the occasional mainstream news article, cute animal story or K-pop fan blog sprinkled in.
  • The account went semi-viral, racking up more than 35,000 followers. Thousands of people retweeted the lists, including conservatives who were happy to see pro-Trump pundits beating the mainstream media and liberals who shared them with jokes like “Look at all this conservative censorship!” (If you’ve been under a rock for the past two years, conservatives in the United States frequently complain that Facebook is censoring them.)
  • Inside Facebook, the account drove executives crazy. Some believed that the data was being misconstrued and worried that it was painting Facebook as a far-right echo chamber. Others worried that the lists might spook investors by suggesting that Facebook’s U.S. user base was getting older and more conservative. Every time a tweet went viral, I got grumpy calls from Facebook executives who were embarrassed by the disparity between what they thought Facebook was — a clean, well-lit public square where civility and tolerance reign — and the image they saw reflected in the Twitter lists.
  • Mr. Boland, the former Facebook vice president, said that was a convenient deflection. He said that in internal discussions, Facebook executives were less concerned about the accuracy of the data than about the image of Facebook it presented.“It told a story they didn’t like,” he said of the Twitter account, “and frankly didn’t want to admit was true.”
  • Several executives proposed making reach data public on CrowdTangle, in hopes that reporters would cite that data instead of the engagement data they thought made Facebook look bad. But Mr. Silverman, CrowdTangle’s chief executive, replied in an email that the CrowdTangle team had already tested a feature to do that and found problems with it. One issue was that false and misleading news stories also rose to the top of those lists. “Reach leaderboard isn’t a total win from a comms point of view,” Mr. Silverman wrote.
  • executives argued that my Top 10 lists were misleading. They said CrowdTangle measured only “engagement,” while the true measure of Facebook popularity would be based on “reach,” or the number of people who actually see a given post. (With the exception of video views, reach data isn’t public, and only Facebook employees and page owners have access to it.)
  • Mr. Schultz, Facebook’s chief marketing officer, had the dimmest view of CrowdTangle. He wrote that he thought “the only way to avoid stories like this” would be for Facebook to publish its own reports about the most popular content on its platform, rather than releasing data through CrowdTangle. “If we go down the route of just offering more self-service data you will get different, exciting, negative stories in my opinion,” he wrote.
  • there’s a problem with reach data: Most of it is inaccessible and can’t be vetted or fact-checked by outsiders. We simply have to trust that Facebook’s own, private data tells a story that’s very different from the data it shares with the public.
  • Mr. Zuckerberg is right about one thing: Facebook is not a giant right-wing echo chamber. But it does contain a giant right-wing echo chamber — a kind of AM talk radio built into the heart of Facebook’s news ecosystem, with a hyper-engaged audience of loyal partisans who love liking, sharing and clicking on posts from right-wing pages, many of which have gotten good at serving up Facebook-optimized outrage bait at a consistent clip.
  • CrowdTangle’s data made this echo chamber easier for outsiders to see and quantify. But it didn’t create it, or give it the tools it needed to grow — Facebook did — and blaming a data tool for these revelations makes no more sense than blaming a thermometer for bad weather.
  • It’s worth noting that these transparency efforts are voluntary, and could disappear at any time. There are no regulations that require Facebook or any other social media companies to reveal what content performs well on their platforms, and American politicians appear to be more interested in fighting over claims of censorship than getting access to better data.
  • It’s also worth noting that Facebook can turn down the outrage dials and show its users calmer, less divisive news any time it wants. (In fact, it briefly did so after the 2020 election, when it worried that election-related misinformation could spiral into mass violence.) And there is some evidence that it is at least considering more permanent changes.
  • This year, Mr. Hegeman, the executive in charge of Facebook’s news feed, asked a team to figure out how tweaking certain variables in the core news feed ranking algorithm would change the resulting Top 10 lists, according to two people with knowledge of the project.
  • The project, which some employees refer to as the “Top 10” project, is still underway, the people said, and it’s unclear whether its findings have been put in place. Mr. Osborne, the Facebook spokesman, said that the team looks at a variety of ranking changes, and that the experiment wasn’t driven by a desire to change the Top 10 lists.
  • As for CrowdTangle, the tool is still available, and Facebook is not expected to cut off access to journalists and researchers in the short term, according to two people with knowledge of the company’s plans.
  • Mr. Boland, however, said he wouldn’t be surprised if Facebook executives decided to kill off CrowdTangle entirely or starve it of resources, rather than dealing with the headaches its data creates.
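The dispute running through the annotations above turns on two different metrics: “engagement” (public interaction counts of the kind CrowdTangle exposes) and “reach” (how many people actually saw a post, which only Facebook can measure). The sketch below illustrates why leaderboards built on the two metrics can diverge; the pages, numbers, and field names are hypothetical, not CrowdTangle data.

```python
# Hypothetical posts: "engagement" is publicly countable, "reach" is internal-only.
posts = [
    {"page": "partisan-pundit", "engagement": 80_000, "reach": 1_000_000},
    {"page": "wire-service",    "engagement": 5_000,  "reach": 4_000_000},
]

def leaderboard(posts, metric):
    """Page names ordered by the chosen metric, highest first."""
    return [p["page"] for p in sorted(posts, key=lambda p: p[metric], reverse=True)]

print(leaderboard(posts, "engagement"))  # ['partisan-pundit', 'wire-service']
print(leaderboard(posts, "reach"))       # ['wire-service', 'partisan-pundit']
# The same pages can rank in opposite order depending on the metric, which is
# why the choice between engagement and reach was itself contested, and why
# reach data that outsiders cannot verify cannot settle the argument.
```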
Javier E

Facebook Is a Doomsday Machine - The Atlantic - 0 views

  • megadeath is not the only thing that makes the Doomsday Machine petrifying. The real terror is in its autonomy, this idea that it would be programmed to detect a series of environmental inputs, then to act, without human interference. “There is no chance of human intervention, control, and final decision,” wrote the military strategist Herman Kahn in his 1960 book, On Thermonuclear War, which laid out the hypothetical for a Doomsday Machine. The concept was to render nuclear war unwinnable, and therefore unthinkable.
  • No machine should be that powerful by itself—but no one person should be either.
  • so far, somewhat miraculously, we have figured out how to live with the bomb. Now we need to learn how to survive the social web.
  • ...41 more annotations...
  • There’s a notion that the social web was once useful, or at least that it could have been good, if only we had pulled a few levers: some moderation and fact-checking here, a bit of regulation there, perhaps a federal antitrust lawsuit. But that’s far too sunny and shortsighted a view.
  • Today’s social networks, Facebook chief among them, were built to encourage the things that make them so harmful. It is in their very architecture.
  • I realized only recently that I’ve been thinking far too narrowly about the problem.
  • Megascale is nearly the existential threat that megadeath is. No single machine should be able to control the fate of the world’s population—and that’s what both the Doomsday Machine and Facebook are built to do.
  • Facebook does not exist to seek truth and report it, or to improve civic health, or to hold the powerful to account, or to represent the interests of its users, though these phenomena may be occasional by-products of its existence.
  • The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning.
  • Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences.
  • Facebook decided that it needed not just a very large user base, but a tremendous one, unprecedented in size. That decision set Facebook on a path to escape velocity, to a tipping point where it can harm society just by existing.
  • No one, not even Mark Zuckerberg, can control the product he made. I’ve come to realize that Facebook is not a media company. It’s a Doomsday Machine.
  • Scale and engagement are valuable to Facebook because they’re valuable to advertisers. These incentives lead to design choices such as reaction buttons that encourage users to engage easily and often, which in turn encourage users to share ideas that will provoke a strong response.
  • Every time you click a reaction button on Facebook, an algorithm records it, and sharpens its portrait of who you are.
  • The hyper-targeting of users, made possible by reams of their personal data, creates the perfect environment for manipulation—by advertisers, by political campaigns, by emissaries of disinformation, and of course by Facebook itself, which ultimately controls what you see and what you don’t see on the site.
  • there aren’t enough moderators speaking enough languages, working enough hours, to stop the biblical flood of shit that Facebook unleashes on the world, because 10 times out of 10, the algorithm is faster and more powerful than a person.
  • At megascale, this algorithmically warped personalized informational environment is extraordinarily difficult to moderate in a meaningful way, and extraordinarily dangerous as a result.
  • These dangers are not theoretical, and they’re exacerbated by megascale, which makes the platform a tantalizing place to experiment on people
  • Even after U.S. intelligence agencies identified Facebook as a main battleground for information warfare and foreign interference in the 2016 election, the company has failed to stop the spread of extremism, hate speech, propaganda, disinformation, and conspiracy theories on its site.
  • It wasn’t until October of this year, for instance, that Facebook announced it would remove groups, pages, and Instagram accounts devoted to QAnon, as well as any posts denying the Holocaust.
  • In the days after the 2020 presidential election, Zuckerberg authorized a tweak to the Facebook algorithm so that high-accuracy news sources such as NPR would receive preferential visibility in people’s feeds, and hyper-partisan pages such as Breitbart News’s and Occupy Democrats’ would be buried, according to The New York Times, offering proof that Facebook could, if it wanted to, turn a dial to reduce disinformation—and offering a reminder that Facebook has the power to flip a switch and change what billions of people see online.
  • reducing the prevalence of content that Facebook calls “bad for the world” also reduces people’s engagement with the site. In its experiments with human intervention, the Times reported, Facebook calibrated the dial so that just enough harmful content stayed in users’ news feeds to keep them coming back for more.
  • Facebook’s stated mission—to make the world more open and connected—has always seemed, to me, phony at best, and imperialist at worst.
  • Facebook is a borderless nation-state, with a population of users nearly as big as China and India combined, and it is governed largely by secret algorithms
  • How much real-world violence would never have happened if Facebook didn’t exist? One of the people I’ve asked is Joshua Geltzer, a former White House counterterrorism official who is now teaching at Georgetown Law. In counterterrorism circles, he told me, people are fond of pointing out how good the United States has been at keeping terrorists out since 9/11. That’s wrong, he said. In fact, “terrorists are entering every single day, every single hour, every single minute” through Facebook.
  • Evidence of real-world violence can be easily traced back to both Facebook and 8kun. But 8kun doesn’t manipulate its users or the informational environment they’re in. Both sites are harmful. But Facebook might actually be worse for humanity.
  • In previous eras, U.S. officials could at least study, say, Nazi propaganda during World War II, and fully grasp what the Nazis wanted people to believe. Today, “it’s not a filter bubble; it’s a filter shroud,” Geltzer said. “I don’t even know what others with personalized experiences are seeing.”
  • Mary McCord, the legal director at the Institute for Constitutional Advocacy and Protection at Georgetown Law, told me that she thinks 8kun may be more blatant in terms of promoting violence but that Facebook is “in some ways way worse” because of its reach. “There’s no barrier to entry with Facebook,” she said. “In every situation of extremist violence we’ve looked into, we’ve found Facebook postings. And that reaches tons of people. The broad reach is what brings people into the fold and normalizes extremism and makes it mainstream.” In other words, it’s the megascale that makes Facebook so dangerous.
  • Facebook’s megascale gives Zuckerberg an unprecedented degree of influence over the global population. If he isn’t the most powerful person on the planet, he’s very near the top.
  • “The thing he oversees has such an effect on cognition and people’s beliefs, which can change what they do with their nuclear weapons or their dollars.”
  • Facebook’s new oversight board, formed in response to backlash against the platform and tasked with making decisions concerning moderation and free expression, is an extension of that power. “The first 10 decisions they make will have more effect on speech in the country and the world than the next 10 decisions rendered by the U.S. Supreme Court,” Geltzer said. “That’s power. That’s real power.”
  • Facebook is also a business, and a place where people spend time with one another. Put it this way: If you owned a store and someone walked in and started shouting Nazi propaganda or recruiting terrorists near the cash register, would you, as the shop owner, tell all of the other customers you couldn’t possibly intervene?
  • In 2004, Zuckerberg said Facebook ran advertisements only to cover server costs. But over the next two years Facebook completely upended and redefined the entire advertising industry. The pre-social web destroyed classified ads, but the one-two punch of Facebook and Google decimated local news and most of the magazine industry—publications fought in earnest for digital pennies, which had replaced print dollars, and social giants scooped them all up anyway.
  • In other words, if the Dunbar number for running a company or maintaining a cohesive social life is 150 people, the magic number for a functional social platform is maybe 20,000 people. Facebook now has 2.7 billion monthly users.
  • in 2007, Zuckerberg said something in an interview with the Los Angeles Times that now takes on a much darker meaning: “The things that are most powerful aren’t the things that people would have done otherwise if they didn’t do them on Facebook. Instead, it’s the things that would never have happened otherwise.”
  • We’re still in the infancy of this century’s triple digital revolution of the internet, smartphones, and the social web, and we find ourselves in a dangerous and unstable informational environment, powerless to resist forces of manipulation and exploitation that we know are exerted on us but remain mostly invisible
  • The Doomsday Machine offers a lesson: We should not accept this current arrangement. No single machine should be able to control so many people.
  • we need a new philosophical and moral framework for living with the social web—a new Enlightenment for the information age, and one that will carry us back to shared reality and empiricism.
  • Facebook’s localized approach is part of what made megascale possible. Early constraints around membership—the requirement at first that users attended Harvard, and then that they attended any Ivy League school, and then that they had an email address ending in .edu—offered a sense of cohesiveness and community. It made people feel more comfortable sharing more of themselves. And more sharing among clearly defined demographics was good for business.
  • we need to adopt a broader view of what it will take to fix the brokenness of the social web. That will require challenging the logic of today’s platforms—and first and foremost challenging the very concept of megascale as a way that humans gather.
  • The web’s existing logic tells us that social platforms are free in exchange for a feast of user data; that major networks are necessarily global and centralized; that moderators make the rules. None of that need be the case.
  • We need people who dismantle these notions by building alternatives. And we need enough people to care about these other alternatives to break the spell of venture capital and mass attention that fuels megascale and creates fatalism about the web as it is now.
  • We must also find ways to repair the aspects of our society and culture that the social web has badly damaged. This will require intellectual independence, respectful debate, and the same rebellious streak that helped establish Enlightenment values centuries ago.
  • Right now, too many people are allowing algorithms and tech giants to manipulate them, and reality is slipping from our grasp as a result. This century’s Doomsday Machine is here, and humming along.
Javier E

As Facebook Raised a Privacy Wall, It Carved an Opening for Tech Giants - The New York ... - 0 views

  • For years, Facebook gave some of the world’s largest technology companies more intrusive access to users’ personal data than it has disclosed, effectively exempting those business partners from its usual privacy rules, according to internal records and interviews.
  • The special arrangements are detailed in hundreds of pages of Facebook documents obtained by The New York Times. The records, generated in 2017 by the company’s internal system for tracking partnerships, provide the most complete picture yet of the social network’s data-sharing practices. They also underscore how personal data has become the most prized commodity of the digital age, traded on a vast scale by some of the most powerful companies in Silicon Valley and beyond.
  • Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.
  • ...27 more annotations...
  • Facebook also assumed extraordinary power over the personal information of its 2.2 billion users — control it has wielded with little transparency or outside oversight.
  • The partnerships were so important that decisions about forming them were vetted at high levels, sometimes by Mr. Zuckerberg and Sheryl Sandberg, the chief operating officer, Facebook officials said. While many of the partnerships were announced publicly, the details of the sharing arrangements typically were confidential
  • Zuckerberg, the chief executive, assured lawmakers in April that people “have complete control” over everything they share on Facebook.
  • the documents, as well as interviews with about 50 former employees of Facebook and its corporate partners, reveal that Facebook allowed certain companies access to data despite those protections
  • Data privacy experts disputed Facebook’s assertion that most partnerships were exempted from the regulatory requirements
  • “This is just giving third parties permission to harvest data without you being informed of it or giving consent to it,” said David Vladeck, who formerly ran the F.T.C.’s consumer protection bureau. “I don’t understand how this unconsented-to data harvesting can at all be justified under the consent decree.”
  • “I don’t believe it is legitimate to enter into data-sharing partnerships where there is not prior informed consent from the user,” said Roger McNamee, an early investor in Facebook. “No one should trust Facebook until they change their business model.”
  • Few companies have better data than Facebook and its rival, Google, whose popular products give them an intimate view into the daily lives of billions of people — and allow them to dominate the digital advertising market
  • Facebook has never sold its user data, fearful of user backlash and wary of handing would-be competitors a way to duplicate its most prized asset. Instead, internal documents show, it did the next best thing: granting other companies access to parts of the social network in ways that advanced its own interests.
  • as the social network has disclosed its data sharing deals with other kinds of businesses — including internet companies such as Yahoo — Facebook has labeled them integration partners, too
  • Among the revelations was that Facebook obtained data from multiple partners for a controversial friend-suggestion tool called “People You May Know.”
  • The feature, introduced in 2008, continues even though some Facebook users have objected to it, unsettled by its knowledge of their real-world relationships. Gizmodo and other news outlets have reported cases of the tool’s recommending friend connections between patients of the same psychiatrist, estranged family members, and a harasser and his victim.
  • The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.
  • agreements with about a dozen companies did. Some enabled partners to see users’ contact information through their friends — even after the social network, responding to complaints, said in 2014 that it was stripping all applications of that power.
  • Pam Dixon, executive director of the World Privacy Forum, a nonprofit privacy research group, said that Facebook would have little power over what happens to users’ information after sharing it broadly. “It travels,” Ms. Dixon said. “It could be customized. It could be fed into an algorithm and decisions could be made about you based on that data.”
  • Facebook’s agreement with regulators is a result of the company’s early experiments with data sharing. In late 2009, it changed the privacy settings of the 400 million people then using the service, making some of their information accessible to all of the internet. Then it shared that information, including users’ locations and religious and political leanings, with Microsoft and other partners.
  • But the privacy program faced some internal resistance from the start, according to four former Facebook employees with direct knowledge of the company’s efforts. Some engineers and executives, they said, considered the privacy reviews an impediment to quick innovation and growth. And the core team responsible for coordinating the reviews — numbering about a dozen people by 2016 — was moved around within Facebook’s sprawling organization, sending mixed signals about how seriously the company took it, the ex-employees said.
  • Microsoft officials said that Bing was using the data to build profiles of Facebook users on Microsoft servers. They declined to provide details, other than to say the information was used in “feature development” and not for advertising. Microsoft has since deleted the data, the officials said.
  • For some advocates, the torrent of user data flowing out of Facebook has called into question not only Facebook’s compliance with the F.T.C. agreement, but also the agency’s approach to privacy regulation.
  • “We brought Facebook under the regulatory authority of the F.T.C. after a tremendous amount of work. The F.T.C. has failed to act.”
  • Facebook, in turn, used contact lists from the partners, including Amazon, Yahoo and the Chinese company Huawei — which has been flagged as a security threat by American intelligence officials — to gain deeper insight into people’s relationships and suggest more connections, the records show.
  • Facebook records show Yandex had access in 2017 to Facebook’s unique user IDs even after the social network stopped sharing them with other applications, citing privacy risks. A spokeswoman for Yandex, which was accused last year by Ukraine’s security service of funneling its user data to the Kremlin, said the company was unaware of the access
  • In October, Facebook said Yandex was not an integration partner. But in early December, as The Times was preparing to publish this article, Facebook told congressional lawmakers that it was
  • But federal regulators had reason to know about the partnerships — and to question whether Facebook was adequately safeguarding users’ privacy. According to a letter that Facebook sent this fall to Senator Ron Wyden, the Oregon Democrat, PricewaterhouseCoopers reviewed at least some of Facebook’s data partnerships.
  • The first assessment, sent to the F.T.C. in 2013, found only “limited” evidence that Facebook had monitored those partners’ use of data. The finding was redacted from a public copy of the assessment, which gave Facebook’s privacy program a passing grade over all.
  • Mr. Wyden and other critics have questioned whether the assessments — in which the F.T.C. essentially outsources much of its day-to-day oversight to companies like PricewaterhouseCoopers — are effective. As with other businesses under consent agreements with the F.T.C., Facebook pays for and largely dictated the scope of its assessments, which are limited mostly to documenting that Facebook has conducted the internal privacy reviews it claims it had
  • Facebook officials said that while the social network audited partners only rarely, it managed them closely.
Javier E

Opinion | It's Time to Break Up Facebook - The New York Times - 1 views

  • For many people today, it’s hard to imagine government doing much of anything right, let alone breaking up a company like Facebook. This isn’t by coincidence.
  • Starting in the 1970s, a small but dedicated group of economists, lawyers and policymakers sowed the seeds of our cynicism. Over the next 40 years, they financed a network of think tanks, journals, social clubs, academic centers and media outlets to teach an emerging generation that private interests should take precedence over public ones
  • Their gospel was simple: “Free” markets are dynamic and productive, while government is bureaucratic and ineffective. By the mid-1980s, they had largely managed to relegate energetic antitrust enforcement to the history books.
  • ...51 more annotations...
  • This shift, combined with business-friendly tax and regulatory policy, ushered in a period of mergers and acquisitions that created megacorporations
  • In the past 20 years, more than 75 percent of American industries, from airlines to pharmaceuticals, have experienced increased concentration, and the average size of public companies has tripled. The results are a decline in entrepreneurship, stalled productivity growth, and higher prices and fewer choices for consumers.
  • Because Facebook so dominates social networking, it faces no market-based accountability. This means that every time Facebook messes up, we repeat an exhausting pattern: first outrage, then disappointment and, finally, resignation.
  • Over a decade later, Facebook has earned the prize of domination. It is worth half a trillion dollars and commands, by my estimate, more than 80 percent of the world’s social networking revenue. It is a powerful monopoly, eclipsing all of its rivals and erasing competition from the social networking category.
  • Facebook’s monopoly is also visible in its usage statistics. About 70 percent of American adults use social media, and a vast majority are on Facebook products
  • Over two-thirds use the core site, a third use Instagram, and a fifth use WhatsApp.
  • As a result of all this, would-be competitors can’t raise the money to take on Facebook. Investors realize that if a company gets traction, Facebook will copy its innovations, shut it down or acquire it for a relatively modest sum
  • Facebook’s dominance is not an accident of history. The company’s strategy was to beat every competitor in plain view, and regulators and the government tacitly — and at times explicitly — approved
  • The F.T.C.’s biggest mistake was to allow Facebook to acquire Instagram and WhatsApp. In 2012, the newer platforms were nipping at Facebook’s heels because they had been built for the smartphone, where Facebook was still struggling to gain traction. Mark responded by buying them, and the F.T.C. approved.
  • Neither Instagram nor WhatsApp had any meaningful revenue, but both were incredibly popular. The Instagram acquisition guaranteed Facebook would preserve its dominance in photo networking, and WhatsApp gave it a new entry into mobile real-time messaging.
  • When it hasn’t acquired its way to dominance, Facebook has used its monopoly position to shut out competing companies or has copied their technology.
  • In 2014, the rules favored curiosity-inducing “clickbait” headlines. In 2016, they enabled the spread of fringe political views and fake news, which made it easier for Russian actors to manipulate the American electorate.
  • As markets become more concentrated, the number of new start-up businesses declines. This holds true in other high-tech areas dominated by single companies, like search (controlled by Google) and e-commerce (taken over by Amazon)
  • I don’t blame Mark for his quest for domination. He has demonstrated nothing more nefarious than the virtuous hustle of a talented entrepreneur
  • It’s on our government to ensure that we never lose the magic of the invisible hand. How did we allow this to happen
  • a narrow reliance on whether or not consumers have experienced price gouging fails to take into account the full cost of market domination
  • It doesn’t recognize that we also want markets to be competitive to encourage innovation and to hold power in check. And it is out of step with the history of antitrust law. Two of the last major antitrust suits, against AT&T and IBM in the 1980s, were grounded in the argument that they had used their size to stifle innovation and crush competition.
  • “It is a disservice to the laws and their intent to retain such a laserlike focus on price effects as the measure of all that antitrust was meant to do.”
  • Facebook is the perfect case on which to reverse course, precisely because Facebook makes its money from targeted advertising, meaning users do not pay to use the service. But it is not actually free, and it certainly isn’t harmless.
  • We pay for Facebook with our data and our attention, and by either measure it doesn’t come cheap.
  • The choice is mine, but it doesn’t feel like a choice. Facebook seeps into every corner of our lives to capture as much of our attention and data as possible and, without any alternative, we make the trade.
  • The vibrant marketplace that once drove Facebook and other social media companies to compete to come up with better products has virtually disappeared. This means there’s less chance of start-ups developing healthier, less exploitative social media platforms. It also means less accountability on issues like privacy.
  • The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.
  • Facebook engineers write algorithms that select which users’ comments or experiences end up displayed in the News Feeds of friends and family. These rules are proprietary and so complex that many Facebook employees themselves don’t understand them.
  • What started out as lighthearted entertainment has become the primary way that people of all ages communicate online.
  • In January 2018, Mark announced that the algorithms would favor non-news content shared by friends and news from “trustworthy” sources, which his engineers interpreted — to the confusion of many — as a boost for anything in the category of “politics, crime, tragedy.”
  • As if Facebook’s opaque algorithms weren’t enough, last year we learned that Facebook executives had permanently deleted their own messages from the platform, erasing them from the inboxes of recipients; the justification was corporate security concerns.
  • No one at Facebook headquarters is choosing what single news story everyone in America wakes up to, of course. But they do decide whether it will be an article from a reputable outlet or a clip from “The Daily Show,” a photo from a friend’s wedding or an incendiary call to kill others.
  • Mark knows that this is too much power and is pursuing a twofold strategy to mitigate it. He is pivoting Facebook’s focus toward encouraging more private, encrypted messaging that Facebook’s employees can’t see, let alone control
  • Second, he is hoping for friendly oversight from regulators and other industry executives.
  • In an op-ed essay in The Washington Post in March, he wrote, “Lawmakers often tell me we have too much power over speech, and I agree.” And he went even further than before, calling for more government regulation — not just on speech, but also on privacy and interoperability, the ability of consumers to seamlessly leave one network and transfer their profiles, friend connections, photos and other data to another.
  • I don’t think these proposals were made in bad faith. But I do think they’re an attempt to head off the argument that regulators need to go further and break up the company. Facebook isn’t afraid of a few more rules. It’s afraid of an antitrust case and of the kind of accountability that real government oversight would bring.
  • We don’t expect calcified rules or voluntary commissions to work to regulate drug companies, health care companies, car manufacturers or credit card providers. Agencies oversee these industries to ensure that the private market works for the public good. In these cases, we all understand that government isn’t an external force meddling in an organic market; it’s what makes a dynamic and fair market possible in the first place. This should be just as true for social networking as it is for air travel or pharmaceuticals.
  • Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy.
  • First, Facebook should be separated into multiple companies. The F.T.C., in conjunction with the Justice Department, should enforce antitrust laws by undoing the Instagram and WhatsApp acquisitions and banning future acquisitions for several years.
  • How would a breakup work? Facebook would have a brief period to spin off the Instagram and WhatsApp businesses, and the three would become distinct companies, most likely publicly traded.
  • Facebook is indeed more valuable when there are more people on it: There are more connections for a user to make and more content to be shared. But the cost of entering the social network business is not that high. And unlike with pipes and electricity, there is no good argument that the country benefits from having only one dominant social networking company.
  • others worry that the breakup of Facebook or other American tech companies could be a national security problem. Because advancements in artificial intelligence require immense amounts of data and computing power, only large companies like Facebook, Google and Amazon can afford these investments, they say. If American companies become smaller, the Chinese will outpace us.
  • The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people.
  • But the biggest winners would be the American people. Imagine a competitive market in which they could choose among one network that offered higher privacy standards, another that cost a fee to join but had little advertising and another that would allow users to customize and tweak their feeds as they saw fit
  • The cost of breaking up Facebook would be next to zero for the government, and lots of people stand to gain economically. A ban on short-term acquisitions would ensure that competitors, and the investors who take a bet on them, would have the space to flourish. Digital advertisers would suddenly have multiple companies vying for their dollars.
  • The Europeans have made headway on privacy with the General Data Protection Regulation, a law that guarantees users a minimal level of protection. A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time
  • The agency should also be charged with guaranteeing basic interoperability across platforms.
  • Finally, the agency should create guidelines for acceptable speech on social media
  • We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.
  • These are difficult challenges. I worry that government regulators will not be able to keep up with the pace of digital innovation
  • I worry that more competition in social networking might lead to a conservative Facebook and a liberal one, or that newer social networks might be less secure if government regulation is weak
  • Professor Wu has written that this “policeman at the elbow” led IBM to steer clear “of anything close to anticompetitive conduct, for fear of adding to the case against it.”
  • Finally, an aggressive case against Facebook would persuade other behemoths like Google and Amazon to think twice about stifling competition in their own sectors, out of fear that they could be next.
  • The alternative is bleak. If we do not take action, Facebook’s monopoly will become even more entrenched. With much of the world’s personal communications in hand, it can mine that data for patterns and trends, giving it an advantage over competitors for decades to come.
  • This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.
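One excerpt above notes that Facebook engineers write the algorithms that decide whose posts surface in the News Feed, under rules so complex that many employees do not understand them. A minimal sketch of what such a ranking step can look like follows; the features, weights, and predicted probabilities are hypothetical stand-ins, not Facebook’s actual system.

```python
# Illustrative only: a toy feed ranker. The features, weights, and predicted
# probabilities are hypothetical stand-ins, not Facebook's actual system.
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    from_close_friend: bool
    predicted_like_prob: float     # assumed output of some engagement model
    predicted_comment_prob: float
    hours_since_posted: float

def score(post: Post) -> float:
    # Weighted sum of predicted engagement, boosted for close friends and
    # decayed as the post ages.
    s = 1.0 * post.predicted_like_prob + 4.0 * post.predicted_comment_prob
    if post.from_close_friend:
        s *= 1.5
    return s / (1.0 + post.hours_since_posted)

def rank_feed(candidates: List[Post]) -> List[Post]:
    # Highest-scoring posts go to the top of the feed.
    return sorted(candidates, key=score, reverse=True)
```

The sketch is only meant to show where the power sits: whoever chooses the weights decides what billions of people see first.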
Javier E

What Mark Zuckerberg Didn't Say About What Facebook Knows About You - WSJ - 0 views

  • When testifying before the Senate Tuesday, Mr. Zuckerberg said, “I think everyone should have control over how their information is used.” He also said, “You have full access to understand all—every piece of information that Facebook might know about you—and you can get rid of all of it.”
  • Not exactly. There are important classes of information Facebook collects on us that we can’t control. We don’t get to “opt in” or remove every specific piece. Often, we aren’t even informed of their existence—except in the abstract—and we aren’t shown how the social network uses this harvested information.
  • The website log is a good example, in part because of its sheer mass. The browsing histories of hundreds of millions—possibly billions—of people are gathered by a variety of advertising trackers, which Facebook has been offering to web publishers ever since it introduced the “Like” button in 2009.
  • They’ve become, as predicted, a nearly web-wide system for tracking all users—even when you don’t click the button.
  • “If you downloaded this file [of sites Facebook knows you visited], it would look like a quarter to half your browsing history,” Mr. Garcia-Martinez adds.
  • Another reason Facebook doesn’t give you this data: The company claims recovering it from its databases is difficult.
  • In one case, it took Facebook 106 days to deliver to a Belgian mathematician, Paul-Olivier Dehaye, all the data the company had gathered on him through its most common tracking system. Facebook doesn’t say how long it stores this information.
  • When you opt out of interest-based ads, the system that uses your browsing history to target you, Facebook continues tracking you anyway. It just no longer uses the data to show you ads.
  • There is more data Facebook collects that it doesn’t explain. It encourages users to upload their phone contacts, including names, phone numbers and email addresses
  • Facebook never discloses if such personal information about you has been uploaded by other users from their contact lists, how many times that might have happened or who might have uploaded it.
  • This data enables Facebook not only to keep track of active users across its multiple products, but also to fill in the missing links. If three people named Smith all upload contact info for the same fourth Smith, chances are this person is related
  • Facebook now knows that person exists, even if he or she has never been on Facebook. And of course, people without Facebook accounts certainly can’t see what information the company has in these so-called shadow profiles.
  • “In general, we collect data on people who have not signed up for Facebook for security purposes,” Mr. Zuckerberg told Congress
  • There’s also a form of location data you can’t control unless you delete your whole account. This isn’t the app’s easy-to-turn-off GPS tracking. It’s the string of IP addresses, a form of device identification on the internet, that can show where your computer or phone is each time it connects to Facebook.
  • Facebook says it uses your IP address to target ads when you are near a specific place, but as you can see in your downloaded Facebook data, the log of stored IP addresses can go back years.
  • Location is a powerful signal for Facebook, allowing it to infer how you are connected to other people, even if you don’t identify them as family members, co-workers or lovers
  • All this data, plus the elements Facebook lets you control, can potentially reveal everything from your wealth to whether you are depressed.
  • That level of precision is at the heart of Facebook’s recent troubles: Just because Facebook uses it to accomplish a seemingly innocent task—in Mr. Zuckerberg’s words, making ad “experiences better, and more relevant”— doesn’t mean we shouldn’t be worried.
  • Regulators the world over are coming to similar conclusions: Our personal data has become too sensitive—and too lucrative—to be left without restraints in the hands of self-interested corporations.
  • Facebook, Alphabet Inc.’s Google and a host of smaller companies that compete with and support the giants in the digital ad space have become addicted to the kind of information that helps microtarget ads.
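Several excerpts above describe how the “Like” button grew into a near web-wide tracking system. The underlying mechanism is straightforward to sketch: when a publisher embeds a third-party widget, each page view makes the visitor’s browser request that widget from the third party, carrying any cookie set for the third party’s domain along with the address of the embedding page. The endpoint below (written with Flask purely for brevity) is a hypothetical illustration of that pattern, not Facebook’s implementation.

```python
# Illustrative only: a hypothetical third-party widget endpoint that logs a
# browsing history keyed by cookie. Not Facebook's code.
from flask import Flask, request

app = Flask(__name__)
browsing_log = {}  # cookie id -> list of pages where the widget was loaded

@app.route("/widgets/like-button")
def like_button():
    visitor = request.cookies.get("tracker_id", "no-cookie")
    page = request.args.get("href") or request.referrer or "unknown"
    browsing_log.setdefault(visitor, []).append(page)
    # The visible button is almost incidental; the log entry above is what
    # accumulates into a browsing history, whether or not anyone clicks.
    return "<button>Like</button>"
```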
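Another excerpt describes how contact uploads let the company infer the existence of, and the connections around, people who never signed up. A toy version of that inference, with entirely hypothetical data:

```python
# Illustrative only: if several users upload the same contact, that contact
# is very likely a real person with ties to all of them (a "shadow profile").
from collections import defaultdict

uploads = {  # uploader -> contacts from their phone (hypothetical)
    "alice": [{"name": "Dan Smith", "phone": "+1-555-0100"}],
    "bob":   [{"name": "Dan Smith", "phone": "+1-555-0100"}],
    "carol": [{"name": "D. Smith",  "phone": "+1-555-0100"}],
}

known_by = defaultdict(set)  # phone number -> users who have this contact
for uploader, contacts in uploads.items():
    for contact in contacts:
        known_by[contact["phone"]].add(uploader)

for phone, uploaders in known_by.items():
    if len(uploaders) >= 3:
        print(f"{phone}: likely a real person connected to {sorted(uploaders)}")
```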
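A further excerpt notes that the stored log of IP addresses is itself a location signal that can reveal relationships. A minimal, hypothetical sketch of that kind of inference:

```python
# Illustrative only: accounts that appear behind the same IP address may
# share a household, office, or relationship. Data is hypothetical.
from collections import defaultdict
from itertools import combinations

ip_log = [  # (account, ip) observations over time
    ("user_a", "203.0.113.7"),
    ("user_b", "203.0.113.7"),
    ("user_c", "198.51.100.9"),
    ("user_a", "203.0.113.7"),
]

by_ip = defaultdict(set)
for account, ip in ip_log:
    by_ip[ip].add(account)

for ip, accounts in by_ip.items():
    for a, b in combinations(sorted(accounts), 2):
        print(f"{a} and {b} share {ip}: possible household or workplace tie")
```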
Javier E

Washington Monthly | How to Fix Facebook-Before It Fixes Us - 0 views

  • Smartphones changed the advertising game completely. It took only a few years for billions of people to have an all-purpose content delivery system easily accessible sixteen hours or more a day. This turned media into a battle to hold users’ attention as long as possible.
  • And it left Facebook and Google with a prohibitive advantage over traditional media: with their vast reservoirs of real-time data on two billion individuals, they could personalize the content seen by every user. That made it much easier to monopolize user attention on smartphones and made the platforms uniquely attractive to advertisers. Why pay a newspaper in the hopes of catching the attention of a certain portion of its audience, when you can pay Facebook to reach exactly those people and no one else?
  • Wikipedia defines an algorithm as “a set of rules that precisely defines a sequence of operations.” Algorithms appear value neutral, but the platforms’ algorithms are actually designed with a specific value in mind: maximum share of attention, which optimizes profits.
  • They do this by sucking up and analyzing your data, using it to predict what will cause you to react most strongly, and then giving you more of that.
  • Algorithms that maximize attention give an advantage to negative messages. People tend to react more to inputs that land low on the brainstem. Fear and anger produce a lot more engagement and sharing than joy
  • The result is that the algorithms favor sensational content over substance.
  • for mass media, this was constrained by one-size-fits-all content and by the limitations of delivery platforms. Not so for internet platforms on smartphones. They have created billions of individual channels, each of which can be pushed further into negativity and extremism without the risk of alienating other audience members
  • On Facebook, it’s your news feed, while on Google it’s your individually customized search results. The result is that everyone sees a different version of the internet tailored to create the illusion that everyone else agrees with them.
  • It took Brexit for me to begin to see the danger of this dynamic. I’m no expert on British politics, but it seemed likely that Facebook might have had a big impact on the vote because one side’s message was perfect for the algorithms and the other’s wasn’t. The “Leave” campaign made an absurd promise—there would be savings from leaving the European Union that would fund a big improvement in the National Health System—while also exploiting xenophobia by casting Brexit as the best way to protect English culture and jobs from immigrants. It was too-good-to-be-true nonsense mixed with fearmongering.
  • Facebook was a much cheaper and more effective platform for Leave in terms of cost per user reached. And filter bubbles would ensure that people on the Leave side would rarely have their questionable beliefs challenged. Facebook’s model may have had the power to reshape an entire continent.
  • Tristan Harris, formerly the design ethicist at Google. Tristan had just appeared on 60 Minutes to discuss the public health threat from social networks like Facebook. An expert in persuasive technology, he described the techniques that tech platforms use to create addiction and the ways they exploit that addiction to increase profits. He called it “brain hacking.”
  • The most important tool used by Facebook and Google to hold user attention is filter bubbles. The use of algorithms to give consumers “what they want” leads to an unending stream of posts that confirm each user’s existing beliefs
  • Continuous reinforcement of existing beliefs tends to entrench those beliefs more deeply, while also making them more extreme and resistant to contrary facts
  • No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil.
  • Facebook takes the concept one step further with its “groups” feature, which encourages like-minded users to congregate around shared interests or beliefs. While this ostensibly provides a benefit to users, the larger benefit goes to advertisers, who can target audiences even more effectively.
  • We theorized that the Russians had identified a set of users susceptible to its message, used Facebook’s advertising tools to identify users with similar profiles, and used ads to persuade those people to join groups dedicated to controversial issues. Facebook’s algorithms would have favored Trump’s crude message and the anti-Clinton conspiracy theories that thrilled his supporters, with the likely consequence that Trump and his backers paid less than Clinton for Facebook advertising per person reached.
  • The ads were less important, though, than what came next: once users were in groups, the Russians could have used fake American troll accounts and computerized “bots” to share incendiary messages and organize events.
  • Trolls and bots impersonating Americans would have created the illusion of greater support for radical ideas than actually existed.
  • Real users “like” posts shared by trolls and bots and share them on their own news feeds, so that small investments in advertising and memes posted to Facebook groups would reach tens of millions of people.
  • A similar strategy prevailed on other platforms, including Twitter. Both techniques, bots and trolls, take time and money to develop—but the payoff would have been huge.
  • 2016 was just the beginning. Without immediate and aggressive action from Washington, bad actors of all kinds would be able to use Facebook and other platforms to manipulate the American electorate in future elections.
  • Renee DiResta, an expert in how conspiracy theories spread on the internet. Renee described how bad actors plant a rumor on sites like 4chan and Reddit, leverage the disenchanted people on those sites to create buzz, build phony news sites with “press” versions of the rumor, push the story onto Twitter to attract the real media, then blow up the story for the masses on Facebook.
  • It was sophisticated hacker technique, but not expensive. We hypothesized that the Russians were able to manipulate tens of millions of American voters for a sum less than it would take to buy an F-35 fighter jet.
  • Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.
  • Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions.
  • To the contrary: the platforms help people self-segregate into like-minded filter bubbles, reducing the risk of exposure to challenging ideas.
  • No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.
  • Facebook and Google are now so large that traditional tools of regulation may no longer be effective.
  • The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.
  • It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election.
  • We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost
  • Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.
  • Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.
  • Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.
  • Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.
  • This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader.
  • decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.
  • Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session
  • This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.
  • We also need regulatory fixes. Here are a few ideas.
  • First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed.
  • At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause.
  • Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition.
  • An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.
  • This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants.
  • There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.
  • Third, the platforms must be transparent about who is behind political and issues-based communication.
  • Transparency with respect to those who sponsor political advertising of all kinds is a step toward rebuilding trust in our political institutions.
  • Fourth, the platforms must be more transparent about their algorithms. Users deserve to know why they see what they see in their news feeds and search results. If Facebook and Google had to be up-front about the reason you’re seeing conspiracy theories—namely, that it’s good for business—they would be far less likely to stick to that tactic
  • Allowing third parties to audit the algorithms would go even further toward maintaining transparency. Facebook and Google make millions of editorial choices every hour and must accept responsibility for the consequences of those choices. Consumers should also be able to see what attributes are causing advertisers to target them.
  • Fifth, the platforms should be required to have a more equitable contractual relationship with users. Facebook, Google, and others have asserted unprecedented rights with respect to end-user license agreements (EULAs), the contracts that specify the relationship between platform and user.
  • All software platforms should be required to offer a legitimate opt-out, one that enables users to stick with the prior version if they do not like the new EULA.
  • “Forking” platforms between old and new versions would have several benefits: increased consumer choice, greater transparency on the EULA, and more care in the rollout of new functionality, among others. It would limit the risk that platforms would run massive social experiments on millions—or billions—of users without appropriate prior notification. Maintaining more than one version of their services would be expensive for Facebook, Google, and the rest, but in software that has always been one of the costs of success. Why should this generation get a pass?
  • Sixth, we need a limit on the commercial exploitation of consumer data by internet platforms. Customers understand that their “free” use of platforms like Facebook and Google gives the platforms license to exploit personal data. The problem is that platforms are using that data in ways consumers do not understand, and might not accept if they did.
  • Not only do the platforms use your data on their own sites, but they also lease it to third parties to use all over the internet. And they will use that data forever, unless someone tells them to stop.
  • There should be a statute of limitations on the use of consumer data by a platform and its customers. Perhaps that limit should be ninety days, perhaps a year. But at some point, users must have the right to renegotiate the terms of how their data is used.
  • Seventh, consumers, not the platforms, should own their own data. In the case of Facebook, this includes posts, friends, and events—in short, the entire social graph. Users created this data, so they should have the right to export it to other social networks.
  • It would be analogous to the regulation of the AT&T monopoly’s long-distance business, which led to lower prices and better service for consumers.
  • Eighth, and finally, we should consider that the time has come to revive the country’s traditional approach to monopoly. Since the Reagan era, antitrust law has operated under the principle that monopoly is not a problem so long as it doesn’t result in higher prices for consumers.
  • Under that framework, Facebook and Google have been allowed to dominate several industries—not just search and social media but also email, video, photos, and digital ad sales, among others—increasing their monopolies by buying potential rivals like YouTube and Instagram.
  • While superficially appealing, this approach ignores costs that don’t show up in a price tag. Addiction to Facebook, YouTube, and other platforms has a cost. Election manipulation has a cost. Reduced innovation and shrinkage of the entrepreneurial economy has a cost. All of these costs are evident today. We can quantify them well enough to appreciate that the costs to consumers of concentration on the internet are unacceptably high.
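Several excerpts above argue that attention-maximizing algorithms end up favoring fear and anger because those emotions generate the most engagement. A toy scoring function makes the dynamic concrete; the weights are hypothetical, chosen only to show how counting negative-emotion signals more heavily lifts outrage over calmer material.

```python
# Illustrative only: hypothetical engagement weights. If signals of strong
# negative emotion count for more than a plain like, posts that provoke anger
# outrank calmer posts even with fewer total interactions.
WEIGHTS = {"like": 1, "share": 3, "comment": 3, "angry": 5}

def engagement_score(signals: dict) -> int:
    return sum(WEIGHTS.get(kind, 1) * count for kind, count in signals.items())

calm_post    = {"like": 900, "comment": 50}                # 900 + 150 = 1050
outrage_post = {"like": 200, "angry": 150, "share": 80}    # 200 + 750 + 240 = 1190

print(engagement_score(calm_post), engagement_score(outrage_post))
# The outrage post wins the ranking despite far fewer people engaging with it.
```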
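The excerpts also argue that users, not platforms, should own their data, including the ability to move their social graph elsewhere. What a portable export might look like is easy to sketch; the schema below is hypothetical, and the point is only that posts, friends, and events can be serialized to an open format that another service could import.

```python
# Illustrative only: a hypothetical, portable export of a user's social graph
# in plain JSON, the kind of format an interoperability rule might require.
import json
from datetime import datetime, timezone

export = {
    "exported_at": datetime.now(timezone.utc).isoformat(),
    "profile": {"user_id": "u_123", "display_name": "Example User"},
    "friends": ["u_456", "u_789"],
    "posts": [{"created": "2019-05-01T12:00:00Z", "text": "Hello, world"}],
    "events": [{"name": "Neighborhood book club", "date": "2019-06-15"}],
}

with open("social_graph_export.json", "w") as f:
    json.dump(export, f, indent=2)
```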
Javier E

In India, Facebook Struggles to Combat Misinformation and Hate Speech - The New York Times - 0 views

  • On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India. For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.
  • The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month. “Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.
  • The report was one of dozens of studies and memos written by Facebook employees grappling with the effects of the platform on India. They provide stark evidence of one of the most serious criticisms levied by human rights activists and politicians against the world-spanning company: It moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on issues once they occur.
  • Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.
  • The documents include reports on how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections
  • They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.
  • Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts,
  • Eighty-seven percent of the company’s global budget for time spent on classifying misinformation is earmarked for the United States, while only 13 percent is set aside for the rest of the world — even though North American users make up only 10 percent of the social network’s daily active users
  • That lopsided focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook installed measures to demote misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.
  • In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content
  • In India, “there is definitely a question about resourcing” for Facebook, but the answer is not “just throwing more money at the problem,” said Katie Harbath, who spent 10 years at Facebook as a director of public policy, and worked directly on securing India’s national elections. Facebook, she said, needs to find a solution that can be applied to countries around the world.
  • Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.
  • After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.
  • Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined. After the researcher shared her case study with co-workers, her colleagues commented on the posted report that they were concerned about misinformation about the upcoming elections in India
  • According to a memo written after the trip, one of the key requests from users in India was that Facebook “take action on types of misinfo that are connected to real-world harm, specifically politics and religious group tension.”
  • The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed.
  • The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.
  • Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.
  • A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.
  • Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist paramilitary group. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.
  • Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.
  • Of India’s 22 officially recognized languages, Facebook said it has trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.
Javier E

Where Countries Are Tinderboxes and Facebook Is a Match - The New York Times - 0 views

  • For months, we had been tracking riots and lynchings around the world linked to misinformation and hate speech on Facebook, which pushes whatever content keeps users on the site longest — a potentially damaging practice in countries with weak institutions.
  • Time and again, communal hatreds overrun the newsfeed — the primary portal for news and information for many users — unchecked as local media are displaced by Facebook and governments find themselves with little leverage over the company
  • Some users, energized by hate speech and misinformation, plot real-world attacks.
  • A reconstruction of Sri Lanka’s descent into violence, based on interviews with officials, victims and ordinary users caught up in online anger, found that Facebook’s newsfeed played a central role in nearly every step from rumor to killing.
  • Facebook officials, they say, ignored repeated warnings of the potential for violence, resisting pressure to hire moderators or establish emergency points of contact.
  • Sri Lankans say they see little evidence of change. And in other countries, as Facebook expands, analysts and activists worry they, too, may see violence.
  • As Facebook pushes into developing countries, it tends to be initially received as a force for good. In Sri Lanka, it keeps families in touch even as many work abroad. It provides for unprecedented open expression and access to information. Government officials say it was essential for the democratic transition that swept them into office in 2015.
  • where institutions are weak or undeveloped, Facebook’s newsfeed can inadvertently amplify dangerous tendencies. Designed to maximize user time on site, it promotes whatever wins the most attention. Posts that tap into negative, primal emotions like anger or fear, studies have found, produce the highest engagement, and so proliferate.
  • In developing countries, Facebook is often perceived as synonymous with the internet and reputable sources are scarce, allowing emotionally charged rumors to run rampant. Shared among trusted friends and family members, they can become conventional wisdom.
  • “There needs to be some kind of engagement with countries like Sri Lanka by big companies who look at us only as markets,” he said. “We’re a society, we’re not just a market.”
  • Last year, in rural Indonesia, rumors spread on Facebook and WhatsApp, a Facebook-owned messaging tool, that gangs were kidnapping local children and selling their organs. Some messages included photos of dismembered bodies or fake police fliers. Almost immediately, locals in nine villages lynched outsiders they suspected of coming for their children.
  • Near-identical social media rumors have also led to attacks in India and Mexico. Lynchings are increasingly filmed and posted back to Facebook, where they go viral as grisly tutorials.
  • One post declared, “Kill all Muslims, don’t even save an infant.” A prominent extremist urged his followers to descend on the city of Kandy to “reap without leaving an iota behind.”
  • where people do not feel they can rely on the police or courts to keep them safe, research shows, panic over a perceived threat can lead some to take matters into their own hands — to lynch.
  • “You report to Facebook, they do nothing,” one of the researchers, Amalini De Sayrah, said. “There’s incitements to violence against entire communities and Facebook says it doesn’t violate community standards.”
  • In government offices across town, officials “felt a sense of helplessness,” Sudarshana Gunawardana, the head of public information, recounted. Before Facebook, he said, officials facing communal violence “could ask media heads to be sensible, they could have their own media strategy.”
  • now it was as if his country’s information policies were set at Facebook headquarters in Menlo Park, Calif. The officials rushed out statements debunking the sterilization rumors but could not match Facebook’s influence
  • Desperate, the researchers flagged the video and subsequent posts using Facebook’s on-site reporting tool. Though they and government officials had repeatedly asked Facebook to establish direct lines, the company had insisted this tool would be sufficient, they said. But nearly every report got the same response: the content did not violate Facebook’s standards.
  • Facebook’s most consequential impact may be in amplifying the universal tendency toward tribalism. Posts dividing the world into “us” and “them” rise naturally, tapping into users’ desire to belong.
  • Its gamelike interface rewards engagement, delivering a dopamine boost when users accrue likes and responses, training users to indulge behaviors that win affirmation
  • And because its algorithm unintentionally privileges negativity, the greatest rush comes by attacking outsiders: The other sports team. The other political party. The ethnic minority.
  • Mass media has long been used to mobilize mass violence. Facebook, by democratizing communication tools, gives anyone with a smartphone the ability to broadcast hate.
  • Facebook did not create Sri Lanka’s history of ethnic distrust any more than it created anti-Rohingya sentiment in Myanmar.
  • In India, Facebook-based misinformation has been linked repeatedly to religious violence, including riots in 2012 that left several dead, foretelling what has since become a wider trend.
  • “We don’t completely blame Facebook,” said Harindra Dissanayake, a presidential adviser in Sri Lanka. “The germs are ours, but Facebook is the wind, you know?”
  • Mr. Kumarasinghe died on March 3. Online emotions surged into calls for action: attend the funeral to show support. Sinhalese arrived by the busload, fanning out to nearby towns. Online, they migrated from Facebook to private WhatsApp groups, where they could plan in secret.
Javier E

Why Facebook won't let you turn off its news feed algorithm - The Washington Post - 0 views

  • In at least two experiments over the years, Facebook has explored what happens when it turns off its controversial news feed ranking system — the software that decides for each user which posts they’ll see and in what order, internal documents show. That leaves users to see all the posts from all of their friends in simple, chronological order.
  • The internal research documents, some previously unreported, help to explain why Facebook seems so wedded to its automated ranking system, known as the news feed algorithm.
  • previously reported internal documents, which Haugen provided to regulators and media outlets, including The Washington Post, have shown how Facebook crafts its ranking system to keep users hooked, sometimes at the cost of angering or misinforming them.
  • In testimony to U.S. Congress and abroad, whistleblower Frances Haugen has pointed to the algorithm as central to the social network’s problems, arguing that it systematically amplifies and rewards hateful, divisive, misleading and sometimes outright false content by putting it at the top of users’ feeds.
  • The political push raises an old question for Facebook: Why not just give users the power to turn off their feed ranking algorithms voluntarily? Would letting users opt to see every post from the people they follow, in chronological order, be so bad?
  • The documents suggest that Facebook’s defense of algorithmic rankings stems not only from its business interests, but from a paternalistic conviction, backed by data, that its sophisticated personalization software knows what users want better than the users themselves
  • Since 2009, three years after it launched the news feed, Facebook has used software that predicts which posts each user will find most interesting and places those at the top of their feeds while burying others. That system, which has evolved in complexity to take in as many as 10,000 pieces of information about each post, has fueled the news feed’s growth into a dominant information source.
  • The proliferation of false information, conspiracy theories and partisan propaganda on Facebook and other social networks has led some to wonder whether we wouldn’t all be better off with a simpler, older system: one that simply shows people all the messages, pictures and videos from everyone they follow, in the order they were posted.
  • That was more or less how Instagram worked until 2016, and Twitter until 2017.
  • But Facebook has long resisted it.
  • they appear to have been informed mostly by data on user engagement, at least until recently
  • That employee, who said they had worked on and studied the news feed for two years, went on to question whether automated ranking might also come with costs that are harder to measure than the benefits. “Even asking this question feels slightly blasphemous at Facebook,” they added.
  • “Whenever we’ve tried to compare ranked and unranked feeds, ranked feeds just seem better,” wrote an employee in a memo titled, “Is ranking good?”, which was posted to the company’s internal network, Facebook Workplace, in 2018
  • In 2014, another internal report, titled “Feed ranking is good,” summarized the results of tests that found allowing users to turn off the algorithm led them to spend less time in their news feeds, post less often and interact less.
  • Without an algorithm deciding which posts to show at the top of users’ feeds, concluded the report’s author, whose name was redacted, “Facebook would probably be shrinking.”
  • there’s a catch: The setting only applies for as long as you stay logged in. When you leave and come back, the ranking algorithm will be back on.
  • What many users may not realize is that Facebook actually does offer an option to see a mostly chronological feed, called “most recent,”
  • The longer Facebook left the user’s feed in chronological order, the less time they spent on it, the less they posted, and the less often they returned to Facebook.
  • A separate report from 2018, first described by Alex Kantrowitz’s newsletter Big Technology, found that turning off the algorithm unilaterally for a subset of Facebook users, and showing them posts mostly in the order they were posted, led to “massive engagement drops.” Notably, it also found that users saw more low-quality content in their feeds, at least at first, although the company’s researchers were able to mitigate that with more aggressive “integrity” measures.
  • Nick Clegg, the company’s vice president of global affairs, said in a TV interview last month that if Facebook were to remove the news feed algorithm, “the first thing that would happen is that people would see more, not less, hate speech; more, not less, misinformation; more, not less, harmful content. Why? Because those algorithmic systems precisely are designed like a great sort of giant spam filter to identify and deprecate and downgrade bad content.”
  • because the algorithm has always been there, Facebook users haven’t been given the time or the tools to curate their feeds for themselves in thoughtful ways. In other words, Facebook has never really given a chronological news feed a fair shot to succeed
  • Some critics say that’s a straw-man argument. Simply removing automated rankings for a subset of users, on a social network that has been built to rely heavily on those systems, is not the same as designing a service to work well without them,
  • Ben Grosser, a professor of new media at University of Illinois at Urbana-Champaign. Those users’ feeds are no longer curated, but the posts they’re seeing are still influenced by the algorithm’s reward systems. That is, they’re still seeing content from people and publishers who are vying for the likes, shares and comments that drive Facebook’s recommendations.
  • “My experience from watching a chronological feed within a social network that isn’t always trying to optimize for growth is that a lot of these problems” — such as hate speech, trolling and manipulative media — “just don’t exist.”
  • Facebook has not taken an official stand on the legislation that would require social networks to offer a chronological feed option, but Clegg said in an op-ed last month that the company is open to regulation around algorithms, transparency, and user controls. Twitter, for its part, signaled potential support for the bills.
  • “I think users have the right to expect social media experiences free of recommendation algorithms,” Maréchal added. “As a user, I want to have as much control over my own experience as possible, and recommendation algorithms take that control away from me.”
  • “Only companies themselves can do the experiments to find the answers. And as talented as industry researchers are, we can’t trust executives to make decisions in the public interest based on that research, or to let the public and policymakers access that research.”
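The internal experiments described above amount to comparing two orderings of the same candidate posts: one sorted by a predicted-engagement score, one sorted purely by recency. A minimal illustration of the difference, with hypothetical posts and scores:

```python
# Illustrative only: the same posts ordered by a hypothetical engagement
# prediction versus strictly by time posted.
posts = [
    {"id": "p1", "posted_at": "2021-11-01T11:45", "predicted_engagement": 0.10},
    {"id": "p2", "posted_at": "2021-11-01T09:00", "predicted_engagement": 0.80},
    {"id": "p3", "posted_at": "2021-11-01T10:15", "predicted_engagement": 0.40},
]

ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)
chronological = sorted(posts, key=lambda p: p["posted_at"], reverse=True)

print([p["id"] for p in ranked])         # ['p2', 'p3', 'p1']
print([p["id"] for p in chronological])  # ['p1', 'p3', 'p2']
```

Per the findings quoted above, users spend less time with the second ordering, which is why the company keeps defaulting to the first.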
Javier E

Inside Facebook's (Totally Insane, Unintentionally Gigantic, Hyperpartisan) Political-M... - 1 views

  • According to the company, its site is used by more than 200 million people in the United States each month, out of a total population of 320 million. A 2016 Pew study found that 44 percent of Americans read or watch news on Facebook.
  • we can know, based on these facts alone, that Facebook is hosting a huge portion of the political conversation in America.
  • Using a tool called CrowdTangle, which tracks engagement for Facebook pages across the network, you can see which pages are most shared, liked and commented on, and which pages dominate the conversation around election topics.
  • Individually, these pages have meaningful audiences, but cumulatively, their audience is gigantic: tens of millions of people. On Facebook, they rival the reach of their better-funded counterparts in the political media, whether corporate giants like CNN or The New York Times, or openly ideological web operations like Breitbart or Mic.
  • these new publishers are happy to live inside the world that Facebook has created. Their pages are accommodated but not actively courted by the company and are not a major part of its public messaging about media. But they are, perhaps, the purest expression of Facebook’s design and of the incentives coded into its algorithm — a system that has already reshaped the web and has now inherited, for better or for worse, a great deal of America’s political discourse.
  • In 2010, Facebook released widgets that publishers could embed on their sites, reminding readers to share, and these tools were widely deployed. By late 2012, when Facebook passed a billion users, referrals from the social network were sending visitors to publishers’ websites at rates sometimes comparable to Google, the web’s previous de facto distribution hub. Publishers took note of what worked on Facebook and adjusted accordingly.
  • While web publishers have struggled to figure out how to take advantage of Facebook’s audience, these pages have thrived. Unburdened of any allegiance to old forms of news media and the practice, or performance, of any sort of ideological balance, native Facebook page publishers have a freedom that more traditional publishers don’t: to engage with Facebook purely on its terms.
  • Rafael Rivero is an acquaintance of Provost’s who, with his twin brother, Omar, runs a page called Occupy Democrats, which passed three million followers in June. This accelerating growth is attributed by Rivero, and by nearly every left-leaning page operator I spoke with, not just to interest in the election but especially to one campaign in particular: “Bernie Sanders is the Facebook candidate,
  • Now that the nomination contest is over, Rivero has turned to making anti-Trump content. A post from earlier this month got straight to the point: “Donald Trump is unqualified, unstable and unfit to lead. Share if you agree!” More than 40,000 people did.“It’s like a meme war,” Rivero says, “and politics is being won and lost on social media.”
  • truly Facebook-native political pages have begun to create and refine a new approach to political news: cherry-picking and reconstituting the most effective tactics and tropes from activism, advocacy and journalism into a potent new mixture. This strange new class of media organization slots seamlessly into the news feed and is especially notable in what it asks, or doesn’t ask, of its readers. The point is not to get them to click on more stories or to engage further with a brand. The point is to get them to share the post that’s right in front of them. Everything else is secondary.
  • The flood of visitors aligned with two core goals of most media companies: to reach people and to make money. But as Facebook’s growth continued, its influence was intensified by broader trends in internet use, primarily the use of smartphones, on which Facebook became more deeply enmeshed with users’ daily routines. Soon, it became clear that Facebook wasn’t just a source of readership; it was, increasingly, where readers lived.
  • For media companies, the ability to reach an audience is fundamentally altered, made greater in some ways and in others more challenging. For a dedicated Facebook user, a vast array of sources, spanning multiple media and industries, is now processed through the same interface and sorting mechanism, alongside updates from friends, family, brands and celebrities.
  • All have eventually run up against the same reality: A company that can claim nearly every internet-using adult as a user is less a partner than a context — a self-contained marketplace to which you have been granted access but which functions according to rules and incentives that you cannot control.
  • It is a framework built around personal connections and sharing, where value is both expressed and conferred through the concept of engagement. Of course, engagement, in one form or another, is what media businesses have always sought, and provocation has always sold news. But now the incentives are literalized in buttons and written into software.
  • Each day, according to Facebook’s analytics, posts from the Make America Great page are seen by 600,000 to 1.7 million people. In July, articles posted to the page, which has about 450,000 followers, were shared, commented on or liked more than four million times, edging out, for example, the Facebook page of USA Today
  • Nicoloff’s business model is not dissimilar from the way most publishers use Facebook: build a big following, post links to articles on an outside website covered in ads and then hope the math works out in your favor. For many, it doesn’t: Content is expensive, traffic is unpredictable and website ads are both cheap and alienating to readers.
  • In July, visitors arriving to Nicoloff’s website produced a little more than $30,000 in revenue. His costs, he said, total around $8,000, partly split between website hosting fees and advertising buys on Facebook itself.
  • of course, there’s the content, which, at a few dozen posts a day, Nicoloff is far too busy to produce himself. “I have two people in the Philippines who post for me,” Nicoloff said, “a husband-and-wife combo.” From 9 a.m. Eastern time to midnight, the contractors scour the internet for viral political stories, many explicitly pro-Trump. If something seems to be going viral elsewhere, it is copied to their site and promoted with an urgent headline.
  • In the end, Nicoloff takes home what he jokingly described as a “doctor’s salary” — in a good month, more than $20,000.
  • In their angry, cascading comment threads, Make America Great’s followers express no such ambivalence. Nearly every page operator I spoke to was astonished by the tone their commenters took, comparing them to things like torch-wielding mobs and sharks in a feeding frenzy
  • A dozen or so of the sites are published in-house, but posts from the company’s small team of writers are free to be shared among the entire network. The deal for a would-be Liberty Alliance member is this: You bring the name and the audience, and the company will build you a prefab site, furnish it with ads, help you fill it with content and keep a cut of the revenue. Coca told me the company brought in $12 million in revenue last year.
  • Because the pages are run independently, the editorial product is varied. But it is almost universally tuned to the cadences and styles that seem to work best on partisan Facebook. It also tracks closely to conservative Facebook media’s big narratives, which, in turn, track with the Trump campaign’s messaging: Hillary Clinton is a crook and possibly mentally unfit; ISIS is winning; Black Lives Matter is the real racist movement; Donald Trump alone can save us; the system — all of it — is rigged.
  • It’s an environment that’s at best indifferent and at worst hostile to traditional media brands; but for this new breed of page operator, it’s mostly upside. In front of largely hidden and utterly sympathetic audiences, incredible narratives can take shape, before emerging, mostly formed, into the national discourse.
  • How much of what happens on the platform is a reflection of a political mood and widely held beliefs, simply captured in a new medium, and how much of it might be created, or intensified, by the environment it provides? What is Facebook doing to our politics?
  • for the page operators, the question is irrelevant to the task at hand. Facebook’s primacy is a foregone conclusion, and the question of Facebook’s relationship to political discourse is absurd — they’re one and the same. As Rafael Rivero put it to me, “Facebook is where it’s all happening.”
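The page-level comparisons above rest on CrowdTangle-style engagement tracking. Without assuming anything about CrowdTangle’s actual API, the core measurement is easy to sketch: sum likes, shares, and comments per page and rank the pages by the total. The data below is hypothetical.

```python
# Illustrative only: aggregating post-level engagement into page-level totals,
# the kind of comparison an engagement-tracking tool enables. Hypothetical data.
from collections import Counter

posts = [
    {"page": "page_a", "likes": 12000, "shares": 4000, "comments": 900},
    {"page": "page_b", "likes": 8000,  "shares": 9000, "comments": 2500},
    {"page": "page_c", "likes": 300,   "shares": 40,   "comments": 15},
]

totals = Counter()
for p in posts:
    totals[p["page"]] += p["likes"] + p["shares"] + p["comments"]

for page, engagement in totals.most_common():
    print(page, engagement)   # pages with the most engagement first
```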
Javier E

Facebook Executives Shut Down Efforts to Make the Site Less Divisive - WSJ - 0 views

  • A Facebook Inc. team had a blunt message for senior executives. The company’s algorithms weren’t bringing people together. They were driving people apart.
  • “Our algorithms exploit the human brain’s attraction to divisiveness,” read a slide from a 2018 presentation. “If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”
  • That presentation went to the heart of a question dogging Facebook almost since its founding: Does its platform aggravate polarization and tribal behavior? The answer it found, in some cases, was yes.
  • ...27 more annotations...
  • in the end, Facebook’s interest was fleeting. Mr. Zuckerberg and other senior executives largely shelved the basic research, according to previously unreported internal documents and people familiar with the effort, and weakened or blocked efforts to apply its conclusions to Facebook products.
  • At Facebook, “There was this soul-searching period after 2016 that seemed to me this period of really sincere, ‘Oh man, what if we really did mess up the world?’”
  • Another concern, they and others said, was that some proposed changes would have disproportionately affected conservative users and publishers, at a time when the company faced accusations from the right of political bias.
  • Americans were drifting apart on fundamental societal issues well before the creation of social media, decades of Pew Research Center surveys have shown. But 60% of Americans think the country’s biggest tech companies are helping further divide the country, while only 11% believe they are uniting it, according to a Gallup-Knight survey in March.
  • Facebook policy chief Joel Kaplan, who played a central role in vetting proposed changes, argued at the time that efforts to make conversations on the platform more civil were “paternalistic,” said people familiar with his comments.
  • The high number of extremist groups was concerning, the presentation says. Worse was Facebook’s realization that its algorithms were responsible for their growth. The 2016 presentation states that “64% of all extremist group joins are due to our recommendation tools” and that most of the activity came from the platform’s “Groups You Should Join” and “Discover” algorithms: “Our recommendation systems grow the problem.”
  • In a sign of how far the company has moved, Mr. Zuckerberg in January said he would stand up “against those who say that new types of communities forming on social media are dividing us.” People who have heard him speak privately said he argues social media bears little responsibility for polarization.
  • Fixing the polarization problem would be difficult, requiring Facebook to rethink some of its core products. Most notably, the project forced Facebook to consider how it prioritized “user engagement”—a metric involving time spent, likes, shares and comments that for years had been the lodestar of its system.
  • Even before the teams’ 2017 creation, Facebook researchers had found signs of trouble. A 2016 presentation that names as author a Facebook researcher and sociologist, Monica Lee, found extremist content thriving in more than one-third of large German political groups on the platform.
  • Swamped with racist, conspiracy-minded and pro-Russian content, the groups were disproportionately influenced by a subset of hyperactive users, the presentation notes. Most of them were private or secret.
  • One proposal Mr. Uribe’s team championed, called “Sparing Sharing,” would have reduced the spread of content disproportionately favored by hyperactive users, according to people familiar with it. Its effects would be heaviest on content favored by users on the far right and left. Middle-of-the-road users would gain influence.
  • The Common Ground team sought to tackle the polarization problem directly, said people familiar with the team. Data scientists involved with the effort found some interest groups—often hobby-based groups with no explicit ideological alignment—brought people from different backgrounds together constructively. Other groups appeared to incubate impulses to fight, spread falsehoods or demonize a population of outsiders.
  • Mr. Pariser said that started to change after March 2018, when Facebook got in hot water after disclosing that Cambridge Analytica, the political-analytics startup, improperly obtained Facebook data about tens of millions of people. The shift has gained momentum since, he said: “The internal pendulum swung really hard to ‘the media hates us no matter what we do, so let’s just batten down the hatches.’ ”
  • Building these features and combating polarization might come at a cost of lower engagement, the Common Ground team warned in a mid-2018 document, describing some of its own proposals as “antigrowth” and requiring Facebook to “take a moral stance.”
  • Taking action would require Facebook to form partnerships with academics and nonprofits to give credibility to changes affecting public conversation, the document says. This was becoming difficult as the company slogged through controversies after the 2016 presidential election.
  • Asked to combat fake news, spam, clickbait and inauthentic users, the employees looked for ways to diminish the reach of such ills. One early discovery: Bad behavior came disproportionately from a small pool of hyperpartisan users.
  • A second finding in the U.S. saw a larger infrastructure of accounts and publishers on the far right than on the far left. Outside observers were documenting the same phenomenon. The gap meant even seemingly apolitical actions such as reducing the spread of clickbait headlines—along the lines of “You Won’t Believe What Happened Next”—affected conservative speech more than liberal content in aggregate.
  • Every significant new integrity-ranking initiative had to seek the approval of not just engineering managers but also representatives of the public policy, legal, marketing and public-relations departments.
  • “Engineers that were used to having autonomy maybe over-rotated a bit” after the 2016 election to address Facebook’s perceived flaws, she said. The meetings helped keep that in check. “At the end of the day, if we didn’t reach consensus, we’d frame up the different points of view, and then they’d be raised up to Mark.”
  • Disapproval from Mr. Kaplan’s team or Facebook’s communications department could scuttle a project, said people familiar with the effort. Negative policy-team reviews killed efforts to build a classification system for hyperpolarized content. Likewise, the Eat Your Veggies process shut down efforts to suppress clickbait about politics more than on other topics.
  • Under Facebook’s engagement-based metrics, a user who likes, shares or comments on 1,500 pieces of content has more influence on the platform and its algorithms than one who interacts with just 15 posts, allowing “super-sharers” to drown out less-active users (see the code sketch after this list)
  • Accounts with hyperactive engagement were far more partisan on average than normal Facebook users, and they were more likely to behave suspiciously, sometimes appearing on the platform as much as 20 hours a day and engaging in spam-like behavior. The behavior suggested some were either people working in shifts or bots.
  • “We’re explicitly not going to build products that attempt to change people’s beliefs,” one 2018 document states. “We’re focused on products that increase empathy, understanding, and humanization of the ‘other side.’ ”
  • The debate got kicked up to Mr. Zuckerberg, who heard out both sides in a short meeting, said people briefed on it. His response: Do it, but cut the weighting by 80%. Mr. Zuckerberg also signaled he was losing interest in the effort to recalibrate the platform in the name of social good, they said, asking that they not bring him something like that again.
  • Mr. Uribe left Facebook and the tech industry within the year. He declined to discuss his work at Facebook in detail but confirmed his advocacy for the Sparing Sharing proposal. He said he left Facebook because of his frustration with company executives and their narrow focus on how integrity changes would affect American politics
  • While proposals like his did disproportionately affect conservatives in the U.S., he said, in other countries the opposite was true.
  • The tug of war was resolved in part by the growing furor over the Cambridge Analytica scandal. In a September 2018 reorganization of Facebook’s newsfeed team, managers told employees the company’s priorities were shifting “away from societal good to individual value,” said people present for the discussion. If users wanted to routinely view or post hostile content about groups they didn’t like, Facebook wouldn’t suppress it if the content didn’t specifically violate the company’s rules.
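To make the engagement-weighting dynamic concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not Facebook's ranking code: the event format, the damping exponent, and the reading of "cut the weighting by 80%" as a damping value of 0.2 are all assumptions.

```python
# Hypothetical sketch of engagement-based ranking vs. a "Sparing Sharing"-style
# correction. All names and weights are illustrative assumptions.
from collections import Counter

def rank_posts(events, damping=0.0):
    """Score posts from (user_id, post_id) engagement events.

    damping=0.0 reproduces pure engagement counting: a user generating 1,500
    likes/shares/comments has 100x the influence of one generating 15.
    damping=1.0 gives every user roughly equal total influence, diluting
    "super-sharers"; damping=0.2 is one possible reading of an 80% cut to the
    proposed reweighting.
    """
    activity = Counter(user for user, _ in events)   # events per user
    scores = Counter()
    for user, post in events:
        scores[post] += 1.0 / (activity[user] ** damping)
    return scores.most_common()                      # highest score first

# One hyperactive account pushing post "A" vs. thirty casual users liking "B".
events = [("power_user", "A")] * 50 + [(f"casual_{i}", "B") for i in range(30)]
print(rank_posts(events, damping=0.0))   # "A" ranks first (pure engagement)
print(rank_posts(events, damping=1.0))   # "B" ranks first (super-sharer diluted)
```

On this toy model, the dispute the excerpts describe comes down to how aggressively a single damping parameter is applied.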
Javier E

How Facebook Failed the World - The Atlantic - 0 views

  • In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced false information about conspiracy theories and vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.
  • these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources.
  • Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.
  • ...23 more annotations...
  • According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”
  • As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.
  • last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users.
  • Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech.
  • The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents.
  • “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats.
  • In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebooklike internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.”
  • To hear this employee tell it, the problem was structural: Employees who are primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”
  • And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. and Europe—a choice, the documents show, that means problematic content is seldom detected.
  • Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.
  • A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult.
  • “I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”
  • Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”
  • Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence
  • It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible.
  • Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of whack-a-mole, one with no winners and a lot of sore arms.
  • Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.
  • A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016. “​​Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”
  • Zhang's memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.”
  • Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.
  • “I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”
  • What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform. It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.
  • solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.
Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • ...15 more annotations...
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Facebook's Push for Facial Recognition Prompts Privacy Alarms - The New York Times - 0 views

  • Facial recognition works by scanning faces of unnamed people in photos or videos and then matching codes of their facial patterns to those in a database of named people. Facebook has said that users are in charge of that process, telling them: “You control face recognition.” (A rough sketch of this matching step appears in code after this list.)
  • But critics said people cannot actually control the technology — because Facebook scans their faces in photos even when their facial recognition setting is turned off.
  • Rochelle Nadhiri, a Facebook spokeswoman, said its system analyzes faces in users’ photos to check whether they match with those who have their facial recognition setting turned on. If the system cannot find a match, she said, it does not identify the unknown face and immediately deletes the facial data
  • ...12 more annotations...
  • In the European Union, a tough new data protection law called the General Data Protection Regulation now requires companies to obtain explicit and “freely given” consent before collecting sensitive information like facial data. Some critics, including the former government official who originally proposed the new law, contend that Facebook tried to improperly influence user consent by promoting facial recognition as an identity protection tool.
  • People could turn it off. But privacy experts said Facebook had neither obtained users’ opt-in consent for the technology nor explicitly informed them that the company could benefit from scanning their photos
  • Separately, privacy and consumer groups lodged a complaint with the Federal Trade Commission in April saying Facebook added facial recognition services, like the feature to help identify impersonators, without obtaining prior consent from people before turning it on. The groups argued that Facebook violated a 2011 consent decree that prohibits it from deceptive privacy practices
  • Critics said Facebook took an early lead in consumer facial recognition services partly by turning on the technology as the default option for users. In 2010, it introduced a photo-labeling feature called Tag Suggestions that used face-matching software to suggest the names of people in users’ photos.
  • “Facebook is somehow threatening me that, if I do not buy into face recognition, I will be in danger,” said Viviane Reding, the former justice commissioner of the European Commission who is now a member of the European Parliament. “It goes completely against the European law because it tries to manipulate consent.”
  • “When Tag Suggestions asks you ‘Is this Jill?’ you don’t think you are annotating faces to improve Facebook’s face recognition algorithm,” said Brian Brackeen, the chief executive of Kairos, a facial recognition company. “Even the premise is an unfair use of people’s time and labor.”
  • The huge trove of identified faces, he added, enabled Facebook to quickly develop one of the world’s most powerful commercial facial recognition engines. In 2014, Facebook researchers said they had trained face-matching software “on the largest facial dataset to date, an identity labeled dataset of four million facial images.”
  • Facebook may only be getting started with its facial recognition services. The social network has applied for various patents, many of them still under consideration, which show how it could use the technology to track its online users in the real world.
  • One patent application, published last November, described a system that could detect consumers within stores and match those shoppers’ faces with their social networking profiles. Then it could analyze the characteristics of their friends, and other details, using the information to determine a “trust level” for each shopper. Consumers deemed “trustworthy” could be eligible for special treatment, like automatic access to merchandise in locked display cases, the document said.
  • Another Facebook patent filing described how cameras near checkout counters could capture shoppers’ faces, match them with their social networking profiles and then send purchase confirmation messages to their phones
  • But legal filings in the class-action suit hint at the technology’s importance to Facebook’s business.
  • If the suit were to move forward, Facebook’s lawyers argued in a recent court document, “the reputational and economic costs to Facebook will be irreparable.”
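The matching step described at the top of these excerpts, comparing codes of facial patterns against a database of named people, can be sketched roughly as below. The toy embeddings, the cosine-similarity comparison, and the 0.6 threshold are assumptions for illustration; the article does not describe Facebook's actual model.

```python
# Rough sketch of face matching: an unnamed face is reduced to a numeric
# template and compared against stored templates of named people. Embeddings,
# similarity measure, and threshold are illustrative assumptions.
import numpy as np

def best_match(unknown, database, threshold=0.6):
    """Return the closest name, or None if nothing is similar enough
    (the path where, per Facebook, the facial data is deleted)."""
    best_name, best_score = None, -1.0
    for name, template in database.items():
        # Cosine similarity between the two template vectors.
        score = float(np.dot(unknown, template) /
                      (np.linalg.norm(unknown) * np.linalg.norm(template)))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy 4-dimensional "faceprints"; real systems use hundreds of dimensions.
database = {
    "jill": np.array([0.9, 0.1, 0.3, 0.2]),
    "omar": np.array([0.1, 0.8, 0.4, 0.4]),
}
print(best_match(np.array([0.88, 0.12, 0.31, 0.19]), database))  # -> jill
print(best_match(np.array([-0.5, -0.5, 0.1, 0.9]), database))    # -> None
```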
Javier E

Alex Stamos, Facebook Data Security Chief, To Leave Amid Outcry - The New York Times - 0 views

  • One central tension at Facebook has been that of the legal and policy teams versus the security team. The security team generally pushed for more disclosure about how nation states had misused the site, but the legal and policy teams have prioritized business imperatives, said the people briefed on the matter.
  • “The people whose job is to protect the user always are fighting an uphill battle against the people whose job is to make money for the company,” said Sandy Parakilas, who worked at Facebook enforcing privacy and other rules until 2012 and now advises a nonprofit organization called the Center for Humane Technology, which is looking at the effect of technology on people.
  • Mr. Stamos said in a statement on Monday, “These are really challenging issues, and I’ve had some disagreements with all of my colleagues, including other executives.” On Twitter, he said he was “still fully engaged with my work at Facebook” and acknowledged that his role has changed, without addressing his future plans.
  • ...13 more annotations...
  • Mr. Stamos joined Facebook from Yahoo in June 2015. He and other Facebook executives, such as Ms. Sandberg, disagreed early on over how proactive the social network should be in policing its own platform, said the people briefed on the matter.
  • Mr. Stamos first put together a group of engineers to scour Facebook for Russian activity in June 2016, the month the Democratic National Committee announced it had been attacked by Russian hackers, the current and former employees said.
  • By November 2016, the team had uncovered evidence that Russian operatives had aggressively pushed DNC leaks and propaganda on Facebook. That same month, Mr. Zuckerberg publicly dismissed the notion that fake news influenced the 2016 election, calling it a “pretty crazy idea”
  • In the ensuing months, Facebook’s security team found more Russian disinformation and propaganda on its site, according to the current and former employees. By the spring of 2017, deciding how much Russian interference to disclose publicly became a major source of contention within the company.
  • Mr. Stamos pushed to disclose as much as possible, while others including Elliot Schrage, Facebook’s vice president of communications and policy, recommended not naming Russia without more ironclad evidence, said the current and former employees.
  • A detailed memorandum Mr. Stamos wrote in early 2017 describing Russian interference was scrubbed for mentions of Russia and winnowed into a blog post last April that outlined, in hypothetical terms, how Facebook could be manipulated by a foreign adversary, they said. Russia was only referenced in a vague footnote. That footnote acknowledged that Facebook’s findings did not contradict a declassified January 2017 report in which the director of national intelligence concluded Russia had sought to undermine the United States election, and Hillary Clinton in particular.
  • By last September, after Mr. Stamos’s investigation had revealed further Russian interference, Facebook was forced to reverse course. That month, the company disclosed that beginning in June 2015, Russians had paid Facebook $100,000 to run roughly 3,000 divisive ads to show the American electorate.
  • The public reaction caused some at Facebook to recoil at revealing more, said the current and former employees. Since the 2016 election, Facebook has paid unusual attention to the reputations of Mr. Zuckerberg and Ms. Sandberg, conducting polls to track how they are viewed by the public, said Tavis McGinn, who was recruited to the company last April and headed the executive reputation efforts through September 2017.
  • Mr. McGinn, who now heads Honest Data, which has done polling about Facebook’s reputation in different countries, said Facebook is “caught in a Catch-22.”
  • “Facebook cares so much about its image that the executives don’t want to come out and tell the whole truth when things go wrong,” he said. “But if they don’t, it damages their image.”
  • Mr. McGinn said he left Facebook after becoming disillusioned with the company’s conduct.
  • By December 2017, Mr. Stamos, who reports to Facebook’s general counsel, proposed that he report directly to higher-ups. Facebook executives rejected that proposal and instead reassigned Mr. Stamos’s team, splitting the security team between its product team, overseen by Guy Rosen, and infrastructure team, overseen by Pedro Canahuati, according to current and former employees.
  • “I told them, ‘Your business is based on trust, and you’re losing trust,’” said Mr. McNamee, a founder of the Center for Humane Technology. “They were treating it as a P.R. problem, when it’s a business problem. I couldn’t believe these guys I once knew so well had gotten so far off track.”
Javier E

Obama tried to give Zuckerberg a wake-up call over fake news on Facebook - The Washingt... - 0 views

  • There has been a rising bipartisan clamor, meanwhile, for new regulation of a tech industry that, amid a historic surge in wealth and power over the past decade, has largely had its way in Washington despite concerns raised by critics about its behavior.
  • In particular, momentum is building in Congress and elsewhere in the federal government for a law requiring tech companies — like newspapers, television stations and other traditional carriers of campaign messages — to disclose who buys political ads and how much they spend on them.
  • “There is no question that the idea that Silicon Valley is the darling of our markets and of our society — that sentiment is definitely turning,” said Tim O’Reilly, an adviser to tech executives and chief executive of the influential Silicon Valley-based publisher O’Reilly Media.
  • ...14 more annotations...
  • the Russian disinformation effort has proven far harder to track and combat because Russian operatives were taking advantage of Facebook’s core functions, connecting users with shared content and with targeted native ads to shape the political environment in an unusually contentious political season, say people familiar with Facebook’s response.
  • Unlike the Islamic State, what Russian operatives posted on Facebook was, for the most part, indistinguishable from legitimate political speech. The difference was the accounts that were set up to spread the misinformation and hate were illegitimate.
  • Facebook’s cyber experts found evidence that members of APT28 were setting up a series of shadowy accounts — including a persona known as Guccifer 2.0 and a Facebook page called DCLeaks — to promote stolen emails and other documents during the presidential race. Facebook officials once again contacted the FBI to share what they had seen.
  • The sophistication of the Russian tactics caught Facebook off-guard. Its highly regarded security team had erected formidable defenses against traditional cyber attacks but failed to anticipate that Facebook users — deploying easily available automated tools such as ad micro-targeting — pumped skillfully crafted propaganda through the social network without setting off any alarm bells.
  • One of the theories to emerge from their post-mortem was that Russian operatives who were directed by the Kremlin to support Trump may have taken advantage of Facebook and other social media platforms to direct their messages to American voters in key demographic areas in order to increase enthusiasm for Trump and suppress support for Clinton.
  • the intelligence agencies had little data on Russia’s use of Facebook and other U.S.-based social media platforms, in part because of rules designed to protect the privacy of communications between Americans.
  • “It is our responsibility,” he wrote, “to amplify the good effects [of the Facebook platform] and mitigate the bad — to continue increasing diversity while strengthening our common understanding so our community can create the greatest positive impact on the world.”
  • The extent of Facebook’s internal self-examination became clear in April, when Facebook Chief Security Officer Alex Stamos co-authored a 13-page white paper detailing the results of a sprawling research effort that included input from experts from across the company, who in some cases also worked to build new software aimed specifically at detecting foreign propaganda.
  • “Facebook sits at a critical juncture,” Stamos wrote in the paper, adding that the effort focused on “actions taken by organized actors (governments or non-state actors) to distort domestic or foreign political sentiment, most frequently to achieve a strategic and/or geopolitical outcome.” He described how the company had used a technique known as machine learning to build specialized data-mining software that can detect patterns of behavior — for example, the repeated posting of the same content — that malevolent actors might use. (A simplified sketch of this repeated-content signal appears in code after this list.)
  • The software tool was given a secret designation, and Facebook is now deploying it and others in the run-up to elections around the world. It was used in the French election in May, where it helped disable 30,000 fake accounts, the company said. It was put to the test again on Sunday when Germans went to the polls. Facebook declined to share the software tool’s code name. 
  • Officials said Stamos underlined to Warner the magnitude of the challenge Facebook faced policing political content that looked legitimate. Stamos told Warner that Facebook had found no accounts that used advertising but agreed with the senator that some probably existed. The difficulty for Facebook was finding them.
  • Technicians then searched for “indicators” that would link those ads to Russia. To narrow down the search further, Facebook zeroed in on a Russian entity known as the Internet Research Agency, which had been publicly identified as a troll farm.
  • By early August, Facebook had identified more than 3,000 ads addressing social and political issues that ran in the United States between 2015 and 2017 and that appear to have come from accounts associated with the Internet Research Agency.
  • Congressional investigators say the disclosure only scratches the surface. One called Facebook’s discoveries thus far “the tip of the iceberg.” Nobody really knows how many accounts are out there and how to prevent more of them from being created to shape the next election — and turn American society against itself.
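One of the behavioral signals Stamos's white paper mentions, the repeated posting of the same content, can be illustrated with a deliberately simple counter. This is an assumed, rule-based toy; the article says Facebook built machine-learning tools, which this hash-based sketch does not attempt to reproduce.

```python
# Toy detector for one signal named above: identical (or trivially edited)
# content pushed by many different accounts. Hypothetical and simplified;
# not the machine-learning system the white paper describes.
import hashlib
from collections import defaultdict

def normalize(text):
    """Crude normalization so lightly edited copies hash to the same value."""
    return " ".join(text.lower().split())

def flag_repeated_content(posts, min_accounts=3):
    """posts: iterable of (account_id, text). Returns content pushed by
    at least min_accounts distinct accounts, with the accounts involved."""
    accounts_by_hash = defaultdict(set)
    sample_text = {}
    for account, text in posts:
        digest = hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()
        accounts_by_hash[digest].add(account)
        sample_text[digest] = text
    return [(sample_text[d], sorted(accts))
            for d, accts in accounts_by_hash.items() if len(accts) >= min_accounts]

posts = [
    ("troll_1", "Candidate X was secretly FUNDED by foreign banks!"),
    ("troll_2", "candidate x was secretly funded   by foreign banks!"),
    ("troll_3", "Candidate X was secretly funded by foreign banks!"),
    ("regular_user", "Here is my honest take on last night's debate."),
]
print(flag_repeated_content(posts))  # flags the copied claim and its three accounts
```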
carolinehayter

'Stop Lying': Muslim Rights Group Sues Facebook Over Claims It Removes Hate Groups : NPR - 0 views

  • Frustrated with what it sees as a lack of progress, Muslim Advocates on Thursday filed a consumer protection lawsuit against Facebook, Zuckerberg and Sandberg, among other executives, demanding the social network start taking anti-Muslim activity more seriously.
  • The suit alleges that statements made by the executives about the removal of hateful and violent content have misled people into believing that Facebook is doing more than it actually is to combat anti-Muslim bigotry on the world's largest social network.
  • The suit cites research from Elon University professor Megan Squire, who found that anti-Muslim bias serves "as a common denominator among hate groups around the world" on Facebook. Squire, in 2018, alerted the company to more than 200 anti-Muslim groups on its platform. According to the suit, half of them remain active.
  • ...12 more annotations...
  • "We do not allow hate groups on Facebook overall. So if there is a group that their primary purpose or a large part of what they do is spreading hate, we will ban them from the platform overall," Zuckerberg told Congress in 2018. Facebook's Community Standards ban hate speech, violent and graphic content and "dangerous individuals and organizations," like an organized hate group.
  • Lawyers for Muslim Advocates say Facebook's passivity flies in the face of statements Zuckerberg has made to Congress that if something runs afoul of Facebook's rules, the company will remove it.
  • A year earlier, Muslim Advocates provided Facebook a list of 26 anti-Muslim hate groups. Nineteen of them remain active today, according to the suit.
  • "This is not, 'Oh a couple of things are falling through the cracks,'" Bauer said. "This is pervasive content that persists despite academics pointing it out, nonprofits pointing it out. Facebook has made a decision to not take this material down."
  • The lawsuit is asking a judge to declare the statements made by Facebook executives about its content moderation policies fraudulent misrepresentations.
  • It seeks an order preventing Facebook officials from making such remarks.
  • "A corporation is not entitled to exaggerate or misrepresent the safety of a product to drive up sales,
  • Since 2013, officials from Muslim Advocates have met with Facebook leadership, including Zuckerberg, "to educate them about the dangers of allowing anti-Muslim content to flourish on the platform," the suit says. But in the group's view, Facebook never lived up to its promises. Had the company done so, the group alleges in the lawsuit, "it would have significantly reduced the extent to which its platform encouraged and enabled anti-Muslim violence."
  • In the lawsuit, the group says it told Facebook that a militia group, the Texas Patriot Network, was using the platform to organize an armed protest at a Muslim convention in Houston in 2019. It took Facebook 24 hours to take the event down. The Texas Patriot Network is still active on the social network.
  • The suit also referenced an August 2020 event in Milwaukee, Wis. People gathered in front of a mosque and yelled hateful, threatening slurs against Muslims. It was broadcast live on Facebook. The video was removed days later after Muslims Advocates alerted Facebook to the content.
  • It pointed to the Christchurch mass shooting in New Zealand, which left 51 people dead. The shooter live-streamed the massacre on Facebook.
  • “Civil rights advocates have expressed alarm,” the outside auditors wrote, “that Muslims feel under siege on Facebook.”
katherineharron

Facebook is allowing politicians to lie openly. It's time to regulate (Opinion) - CNN - 0 views

  • At the center of the exchange was a tussle between Sen. Elizabeth Warren, who has been pushing for the break-up of tech giants like Facebook and Google, and Sen. Kamala Harris, who pointedly asked whether Warren would join her in demanding that Twitter suspend President Donald Trump's account on the platform.
  • This is a highly-charged and heavily politicized question, particularly for Democratic candidates. Last month, Facebook formalized a bold new policy that shocked many observers, announcing that the company would not seek to fact-check or censor politicians -- including in the context of paid political advertising, and even during an election season. Over the past few days, this decree has pushed US political advertising into something like the Wild West: President Donald Trump, who will likely face the Democratic candidate in next year's general election, has already taken the opportunity to spread political lies with no accountability.
  • This new Facebook policy opens a frightening new world for political communication — and for national politics. It is now the case that leading politicians can openly spread political lies without repercussion. Indeed, the Trump campaign was already spreading other falsehoods through online advertising immediately before Facebook made its announcement — and as one might predict, most of those advertisements have not been removed from the platform.
  • ...6 more annotations...
  • Should our politicians fail to reform regulations for internet platforms and digital advertising, our political future will be at risk. The 2016 election revealed the tremendous harm to the American democratic process that can result from coordinated misinformation campaigns; 2020 will be far worse if we do nothing to contain the capacity for politicians to lie on social media.
  • Warren responded to the Trump ad with a cheeky point: In an ad she has circulated over Facebook, she claims that "Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election." Later in the ad, she acknowledges this is a falsehood, and contends that "what [Mark] Zuckerberg has done is given Donald Trump free rein to lie on his platform — and then to pay Facebook gobs of money to push out their lies to American voters."
  • It is disconcerting to think that by fiat, Facebook can deem a political ad to be dishonest because it contains fake buttons (which can deceive the viewer into clicking on a survey button when in fact there is no interactive feature in the ad), but the company will refuse to take action against ads containing widely-debunked political lies, even during an American presidential election.
  • Facebook has one principal counterargument against regulation: that the company must maintain strong commitments to free speech and freedom of political expression. This came across in Mark Zuckerberg's speech at Georgetown University on Thursday, in which he described social media as a kind of "Fifth Estate" and characterized politicians' calls to take action as an attempt to restrict freedom of expression. Quoting at times from Frederick Douglass and Supreme Court jurisprudence, Zuckerberg said "we are at a crossroads" and asserted: "When it's not absolutely clear what to do, we should err on the side of free expression."
  • Unfortunately for Facebook, this argument holds little water. If you determine that an ad containing a fake button is non-compliant because it "[entices] users to select an answer," then you certainly should not knowingly broadcast ads that entice voters to unwittingly consume publicly-known lies -- whether they are distributed by the President or any other politician. Indeed, as one official in Biden's presidential campaign has noted, Zuckerberg's argumentation amounts to an insidious "choice to cloak Facebook's policy in a feigned concern for free expression" to "use the Constitution as a shield for his company's bottom line."
  • If Facebook cannot take appropriate action and remove paid political lies from its platform, the only answer must be earnest regulation of the company -- regulation that forces Facebook to be transparent about the nature of political ads and prevents it from propagating political falsehoods, even if they are enthusiastically distributed by President Trump.