
Home/ History Readings/ Group items tagged platform


Javier E

The Antitrust Case Against Facebook, Google and Amazon - WSJ - 0 views

  • A growing number of critics think these tech giants need to be broken up or regulated as Standard Oil and AT&T once were.
  • antitrust regulators have a narrow test: Does their size leave consumers worse off?
  • By that standard, there isn’t a clear case for going after big tech—at least for now. They are driving down prices and rolling out new and often improved products and services every week.
  • That may not be true in the future: If market dominance means fewer competitors and less innovation, consumers will be worse off than if those companies had been restrained. “The impact on innovation can be the most important competitive effect” in an antitrust case
  • Yet Google’s monopoly means some features and prices that competitors offered never made it in front of customers. Yelp Inc., which in 2004 began aggregating detailed information and user reviews of local services, such as restaurants and stores, claims Google altered its search results to hurt Yelp and help its own competing service. While Yelp survived, it has retreated from Europe, and several similar local search services have faded.
  • In a 2005 paper, Mr. Scherer found that Standard Oil was indeed a prolific generator of patents in its early years, but that slowed once it achieved dominance.
  • Standard Oil and AT&T used trusts, regulations and patents to keep out or co-opt competitors. They were respected but unloved.
  • By contrast, Google and Facebook give away their main product, while Amazon undercuts traditional retailers so aggressively it may be holding down inflation. None enjoys a government-sanctioned monopoly; all invest prodigiously in new products.
  • All are among the public’s most loved brands, according to polls by Morning Consult.
  • Yet there are also important parallels. The monopolies of old and of today were built on proprietary technology and physical networks that drove down costs while locking in customers, erecting formidable barriers to entry.
  • If they’re imposing a cost, it may not be what customers pay but the products they never see.
  • When the federal government sued to break up Standard Oil, the Supreme Court acknowledged business acumen was important to the company’s early success, but concluded that was eventually supplanted by a single-minded determination to drive others out of the market.
  • Amazon hasn’t yet reached the same market share as Google or Facebook, but its position is arguably even more impregnable because it enjoys both physical and technological barriers to entry. Its roughly 75 fulfillment centers and state-of-the-art logistics (including robots) put it closer, in time and space, to customers than any other online retailer.
  • “Just like people joined Facebook because everyone else was on Facebook, the biggest competitive advantage AT&T had was that it was interconnected,”
  • Early in the 20th century, AT&T began buying up local competitors and refusing to connect independent exchanges to its long-distance lines, arousing antitrust complaints. By the 1920s, it was allowed to become a monopoly in exchange for universal service in the communities it served. By 1939, the company carried more than 90% of calls.
  • After AT&T was broken up into separate local and long-distance companies in 1982, telecommunication innovation blossomed, spreading to digital switching, fiber optics, cellphones—and the internet.
  • “There should be hundreds of Yelps. There’s not. No one is pitching investors to build a service that relies on discovery through Facebook or Google to grow, because venture capitalists think it’s a poor bet.”
  • At that same hearing Jeffrey Katz, then the chief executive of Nextag, responded, “That is like saying move to Panama if you don’t like the tax rate in America. It’s a fake choice because no one has Google’s scope or capabilities and consumers won’t, don’t, and in fact can’t jump.”
  • In 2013 the U.S. Federal Trade Commission concluded that even if Google had hurt competitors, it was to serve consumers better, and declined to bring a case. Since then, comparison sites such as Nextag have largely faded.
  • The different outcomes hinge in part on different approaches. European regulators are more likely to see a shrinking pool of competitors as inherently bad for both competition and consumers. American regulators are more open to the possibility that it could be natural and benign.
  • Internet platforms have high fixed and minimal operating costs, which favors consolidation into a few deep-pocketed competitors. And the more customers a platform has, the more useful it is to each individual customer—the “network effect.”
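The two forces in that bullet can be sketched in a few lines of Python. This is a toy model, not from the article: the pairwise-connections formula is the standard Metcalfe's-law approximation, and all numbers are illustrative.

```python
# Toy model of the two forces named above: high fixed costs favor
# consolidation, and the "network effect" makes a platform more
# useful as it grows. Pairwise connections follow the standard
# Metcalfe's-law approximation: n * (n - 1) / 2.

def possible_connections(users: int) -> int:
    """Number of user pairs who could interact."""
    return users * (users - 1) // 2

def cost_per_user(fixed_cost: float, marginal_cost: float, users: int) -> float:
    """Average cost falls as a high fixed cost is spread over more users."""
    return fixed_cost / users + marginal_cost

# Doubling the user base roughly quadruples potential connections.
print(possible_connections(2_000) / possible_connections(1_000))  # ~4.0

# Meanwhile average cost keeps falling, favoring the largest platform.
print(cost_per_user(1_000_000, 0.01, 1_000))      # 1000.01
print(cost_per_user(1_000_000, 0.01, 1_000_000))  # 1.01
```

Together the quadratic benefit curve and the falling cost curve are why, in this stylized picture, the market tips toward a few deep-pocketed platforms.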
  • But a platform that confers monopoly in one market can be leveraged to dominate another. Facebook’s existing user base enabled it to become the world’s largest photo-sharing site through its purchase of Instagram in 2012 and the largest instant-messaging provider through its purchase of WhatsApp in 2014. It is also muscling into virtual reality through its acquisition of Oculus VR in 2014 and anonymous polling with its purchase of TBH last year.
  • Once a company like Google or Facebook has critical mass, “the venture capital looks elsewhere,” says Roger McNamee of Elevation Partners, a technology-focused private-equity firm. “There’s no point taking on someone with a three- or four-year head start.”
  • when Google launched its own comparison business, Google Shopping, those sites found themselves dropping deeper into Google’s search results. They accused Google of changing its algorithm to favor its own results. The company responded that its algorithm was designed to give customers the results they want.
  • As the dominant platform for third-party online sales, Amazon also has access to data it can use to decide what products to sell itself. In 2016 Capitol Forum, a news service that investigates anticompetitive behavior, reported that when a shopper views an Amazon private-label clothing brand, the accompanying list of items labeled “Customers Who Bought This Item Also Bought,” is also dominated by Amazon’s private-label brands. This, it says, restricts competing sellers’ access to a prime marketing space
  • In the face of such accusations, the probability of regulatory action—for now—looks low, largely because U.S. regulators have a relatively high bar to clear: Do consumers suffer?
  • “We think consumer welfare is the right standard,” Bruce Hoffman, the FTC’s acting director of the bureau of competition, recently told a panel on antitrust law and innovation. “We have tried other standards. They were dismal failures.”
  • What would remedies look like? Since Big Tech owes its network effects to data, one often-proposed fix is to give users ownership of their own data: the “social graph” of connections on Facebook, or their search history on Google and Amazon. They could then take it to a competitor.
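The data-portability remedy above presumes a user's social graph and history can be exported in a neutral format a competitor could ingest. A minimal sketch, assuming a simple JSON representation; the field names are hypothetical, not any real export API:

```python
import json

# Hypothetical portable export of one user's data: the "social graph"
# of connections plus a search history, as the proposed remedy imagines.
user_data = {
    "user": "alice",
    "connections": ["bob", "carol"],
    "search_history": ["local restaurants", "running shoes"],
}

# Serialize to a neutral format...
exported = json.dumps(user_data, indent=2)

# ...which a competing service could ingest unchanged.
imported = json.loads(exported)
print(imported["connections"])  # ['bob', 'carol']
```

The hard part of the remedy is not the serialization shown here but agreeing on a common schema and compelling platforms to honor it.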
  • A more drastic remedy would be to block acquisitions of companies that might one day be a competing platform. British regulators let Facebook buy Instagram in part because Instagram didn’t sell ads, which they argued made them different businesses. In fact, Facebook used Instagram to engage users longer and thus sell more ads
  • Ben Thompson wrote in his technology newsletter Stratechery: Building a network is “extremely difficult, but, once built, nearly impregnable. The only possible antidote is another network that draws away the one scarce resource: attention.” Thus, maintaining competition on the internet requires keeping “social networks in separate competitive companies.”
  • How sound are these premises? Google’s and Facebook’s access to that data and network effects might seem like an impregnable barrier, but the same appeared to be true of America Online’s membership, Yahoo’s search engine and Apple’s iTunes store, note two economists, David Evans and Richard Schmalensee, in a recent paper. All saw their dominance recede in the face of disruptive competition.
  • It’s possible Microsoft might have become the dominant company in search and mobile without the scrutiny the federal antitrust case brought. Throughout history, entrepreneurs have often needed the government’s help to dislodge a monopolist—and may one day need it again.
Javier E

Christchurch mosque killer's theories seeping into mainstream, report warns | World new... - 0 views

  • Researchers have found that organised far-right networks are pushing a conspiracy known as the “great replacement” theory to the extent that references to it online have doubled in four years, with more than 1.5 million on Twitter alone, a total that is rising exponentially.
  • The theory emerged in France in 2014 and has become a dominant concept of the extreme right, focusing on a paranoia that white people are being wiped out through migration and violence. It received increased scrutiny after featuring in the manifesto of the gunman who killed 51 people in the Christchurch attacks in New Zealand in March.
  • Now the Institute for Strategic Dialogue (ISD), a UK-based counter-extremist organisation, has found that the once-obscure ideology has moved into mainstream politics and is now referenced by figures including US president Donald Trump, Italian interior minister Matteo Salvini and Björn Höcke of the German Alternative für Deutschland (AfD).
  • Despite its French origins, the ISD’s analysis has revealed that the theory is becoming more prevalent internationally, with English-speaking countries now accounting for 33% of online discussion.
  • She said that of the 10 most influential Twitter accounts propagating the ideology, eight were French. The other two were Trump’s account and the extreme-right site Defend Europa.
  • The study reveals that alternative social media platforms, image boards, fringe forums and encrypted chat channels are instrumental in diffusing influential ideologies that propagate hatred and violence. Far-right propagandists primarily use mainstream platforms such as Facebook, YouTube and Twitter as avenues to disseminate material to audiences, while fringe platforms remain safe havens for the initiated to radicalise further.
  • The new media ecosystem has been used, for instance, to promote the fear of a “white genocide”, a topic that is active across unregulated image-board threads on 8chan and 4chan, censorship-free discussion platforms such as Voat, ultra-libertarian social-media sites such as Gab and Minds, and closed-chat channels
  • Defined as a form of ethnic cleansing through the forced deportation of minority communities, the concept of “remigration” has been a particularly fevered subject. Since 2014, the volume of tweets featuring the word has surged, rising from 66,000 in 2014 to 150,000 in 2018.
  • Jacob Davey, co-author of the report at ISD, said: “Social media platforms are built to promote clickbait content to get more users liking, sharing and commenting. This research shows how the extreme right is exploiting this to boost hateful content in the form of memes, distorted statistics and pseudo-scientific studies.
  • “The far right is able to take ownership of the ‘grey zone’ around contentious issues like migration because politicians and society are less willing to take on the role of thought leaders in these areas for fear of public outcry and outrage,” said Ebner.
Javier E

AT&T and The Platform Monopolies - Talking Points Memo - 0 views

  • I was struck by how frank and specific Stephenson was in focusing on the need to compete with the platform monopolies, and specifically about the central role of data. And to be clear about what that means, the pervasiveness of AT&T on mobile and other distribution paths gives them data of various sorts that can be leveraged for a competitive position in advertising, which means ad targeting.
  • Essentially, he argued that only by combining a company with a dominant position in distribution (AT&T) with a content company (Time Warner) could anyone hope to compete with the platform monopolies Google and Facebook in the advertising business.
carolinehayter

'Stop Lying': Muslim Rights Group Sues Facebook Over Claims It Removes Hate Groups : NPR - 0 views

  • Frustrated with what it sees as a lack of progress, Muslim Advocates on Thursday filed a consumer protection lawsuit against Facebook, Zuckerberg and Sandberg, among other executives, demanding the social network start taking anti-Muslim activity more seriously.
  • The suit alleges that statements made by the executives about the removal of hateful and violent content have misled people into believing that Facebook is doing more than it actually is to combat anti-Muslim bigotry on the world's largest social network.
  • The suit cites research from Elon University professor Megan Squire, who found that anti-Muslim bias serves "as a common denominator among hate groups around the world" on Facebook. Squire, in 2018, alerted the company to more than 200 anti-Muslim groups on its platform. According to the suit, half of them remain active.
  • "We do not allow hate groups on Facebook overall. So if there is a group that their primary purpose or a large part of what they do is spreading hate, we will ban them from the platform overall," Zuckerberg told Congress in 2018. Facebook's Community Standards ban hate speech, violent and graphic content and "dangerous individuals and organizations," like an organized hate group.
  • Lawyers for Muslim Advocates say Facebook's passivity flies in the face of statements Zuckerberg has made to Congress that if something runs afoul of Facebook's rules, the company will remove it.
  • A year earlier, Muslim Advocates provided Facebook a list of 26 anti-Muslim hate groups. Nineteen of them remain active today, according to the suit.
  • "This is not, 'Oh a couple of things are falling through the cracks,'" Bauer said. "This is pervasive content that persists despite academics pointing it out, nonprofits pointing it out. Facebook has made a decision to not take this material down."
  • The lawsuit is asking a judge to declare the statements made by Facebook executives about its content moderation policies fraudulent misrepresentations.
  • It seeks an order preventing Facebook officials from making such remarks.
  • "A corporation is not entitled to exaggerate or misrepresent the safety of a product to drive up sales,
  • Since 2013, officials from Muslim Advocates have met with Facebook leadership, including Zuckerberg, "to educate them about the dangers of allowing anti-Muslim content to flourish on the platform," the suit says. But in the group's view, Facebook never lived up to its promises. Had the company done so, the group alleges in the lawsuit, "it would have significantly reduced the extent to which its platform encouraged and enabled anti-Muslim violence."
  • In the lawsuit, the group says it told Facebook that a militia group, the Texas Patriot Network, was using the platform to organize an armed protest at a Muslim convention in Houston in 2019. It took Facebook 24 hours to take the event down. The Texas Patriot Network is still active on the social network.
  • The suit also referenced an August 2020 event in Milwaukee, Wis. People gathered in front of a mosque and yelled hateful, threatening slurs against Muslims. It was broadcast live on Facebook. The video was removed days later after Muslim Advocates alerted Facebook to the content.
  • It pointed to the Christchurch mass shooting in New Zealand, which left 51 people dead. The shooter live-streamed the massacre on Facebook.
  • “Civil rights advocates have expressed alarm,” the outside auditors wrote, “that Muslims feel under siege on Facebook.”
katherineharron

Facebook is allowing politicians to lie openly. It's time to regulate (Opinion) - CNN - 0 views

  • At the center of the exchange was a tussle between Sen. Elizabeth Warren, who has been pushing for the break-up of tech giants like Facebook and Google, and Sen. Kamala Harris, who pointedly asked whether Warren would join her in demanding that Twitter suspend President Donald Trump's account on the platform.
  • This is a highly charged and heavily politicized question, particularly for Democratic candidates. Last month, Facebook formalized a bold new policy that shocked many observers, announcing that the company would not seek to fact-check or censor politicians -- including in the context of paid political advertising, and even during an election season. Over the past few days, this decree has pushed US political advertising into something like the Wild West: President Donald Trump, who will likely face the Democratic candidate in next year's general election, has already taken the opportunity to spread political lies with no accountability.
  • This new Facebook policy opens a frightening new world for political communication — and for national politics. It is now the case that leading politicians can openly spread political lies without repercussion. Indeed, the Trump campaign was already spreading other falsehoods through online advertising immediately before Facebook made its announcement — and as one might predict, most of those advertisements have not been removed from the platform.
  • Should our politicians fail to reform regulations for internet platforms and digital advertising, our political future will be at risk. The 2016 election revealed the tremendous harm to the American democratic process that can result from coordinated misinformation campaigns; 2020 will be far worse if we do nothing to contain the capacity for politicians to lie on social media.
  • Warren responded to the Trump ad with a cheeky point: In an ad she has circulated over Facebook, she claims that "Mark Zuckerberg and Facebook just endorsed Donald Trump for re-election." Later in the ad, she acknowledges this is a falsehood, and contends that "what [Mark] Zuckerberg has done is given Donald Trump free rein to lie on his platform — and then to pay Facebook gobs of money to push out their lies to American voters."
  • It is disconcerting to think that by fiat, Facebook can deem a political ad to be dishonest because it contains fake buttons (which can deceive the viewer into clicking on a survey button when in fact there is no interactive feature in the ad), but the company will refuse to take action against ads containing widely-debunked political lies, even during an American presidential election.
  • Facebook has one principal counterargument against regulation: that the company must maintain strong commitments to free speech and freedom of political expression. This came across in Mark Zuckerberg's speech at Georgetown University on Thursday, in which he described social media as a kind of "Fifth Estate" and characterized politicians' calls to take action as an attempt to restrict freedom of expression. Quoting at times from Frederick Douglass and Supreme Court jurisprudence, Zuckerberg said "we are at a crossroads" and asserted: "When it's not absolutely clear what to do, we should err on the side of free expression."
  • Unfortunately for Facebook, this argument holds little water. If you determine that an ad containing a fake button is non-compliant because it "[entices] users to select an answer," then you certainly should not knowingly broadcast ads that entice voters to unwittingly consume publicly-known lies -- whether they are distributed by the President or any other politician. Indeed, as one official in Biden's presidential campaign has noted, Zuckerberg's argumentation amounts to an insidious "choice to cloak Facebook's policy in a feigned concern for free expression" to "use the Constitution as a shield for his company's bottom line."
  • If Facebook cannot take appropriate action and remove paid political lies from its platform, the only answer must be earnest regulation of the company -- regulation that forces Facebook to be transparent about the nature of political ads and prevents it from propagating political falsehoods, even if they are enthusiastically distributed by President Trump.
leilamulveny

Twitter Bans President Trump's Personal Account Permanently - WSJ - 0 views

  • citing the risk of further incitement of violence and closing off one of his main communication tools following the attack on the U.S. Capitol by a mob of his followers.
  • pressure on the platforms to do more to prevent additional violence.
  • Twitter had initially suspended Mr. Trump from posting on a temporary basis that Wednesday night, saying his tweets had violated its policies. The social-media company allowed him to resume posting on Thursday. Facebook Inc., which temporarily suspended Mr. Trump’s account after the riot, said Thursday that it would extend that action indefinitely—and at least through the end of Mr. Trump’s term. Many critics of the president had called on Twitter to take more severe action as well.
  • “Twitter employees have coordinated with the Democrats and the Radical Left in removing my account from their platform, to silence me — and YOU, the 75,000,000 great patriots who voted for me,” the posts said. They added: “We have been negotiating with various other sites, and will have a big announcement soon, while we also look at the possibilities of building out our own platform in the near future. We will not be SILENCED!”
  • Twitter removed those new tweets from the @POTUS account soon after they were posted, saying the move was consistent with its policy against using other accounts to try to evade a suspension. “For government accounts, such as @POTUS and @WhiteHouse, we will not suspend those accounts permanently but will take action to limit their use,” a Twitter representative said.
  • Twitter’s and Facebook’s actions to shut off two of the largest megaphones Mr. Trump has relied on for years to communicate with the public highlight the difficult position social-media platforms face in regulating controversial content.
  • Mr. Trump had more than 88 million followers on Twitter and more than 35 million on Facebook.
  • Google said it acted because of “continued posting in the Parler app that seeks to incite ongoing violence in the U.S.,” which violated its requirements for sufficient moderation of egregious content for apps it distributes.
  • “In light of this ongoing and urgent public safety threat, we are suspending the app’s listings from the Play Store until it addresses these issues,” a Google representative said.
  • Mr. Trump had tweeted three times since regaining account access Thursday. In his first post, he tweeted a video condemning the violence at the Capitol and acknowledging that a new administration would be inaugurated Jan. 20, without specifically naming Mr. Biden and Vice President-elect Kamala Harris.
  • They will not be disrespected or treated unfairly in any way, shape or form!!!”
  • Mr. Trump won more than 74 million votes, seven million fewer than Joe Biden received.
  • “These two Tweets must be read in the context of broader events in the country and the ways in which the President’s statements can be mobilized by different audiences, including to incite violence, as well as in the context of the pattern of behavior from this account in recent weeks.”
  • Twitter earlier Friday shut off the accounts of Michael Flynn, Mr. Trump’s former national security adviser, and Sidney Powell, a lawyer who worked alongside Mr. Trump’s legal team. The company also said Friday that it suspended several accounts associated with the far-right conspiracy group QAnon for violating its policy on coordinated harmful activity.
  • “the world’s largest social media companies finally do the right thing and deplatform the inciter-in-chief before another person is killed or another cherished piece of our democracy is violated.”
Javier E

Facebook Is a Doomsday Machine - The Atlantic - 0 views

  • megadeath is not the only thing that makes the Doomsday Machine petrifying. The real terror is in its autonomy, this idea that it would be programmed to detect a series of environmental inputs, then to act, without human interference. “There is no chance of human intervention, control, and final decision,” wrote the military strategist Herman Kahn in his 1960 book, On Thermonuclear War, which laid out the hypothetical for a Doomsday Machine. The concept was to render nuclear war unwinnable, and therefore unthinkable.
  • No machine should be that powerful by itself—but no one person should be either.
  • so far, somewhat miraculously, we have figured out how to live with the bomb. Now we need to learn how to survive the social web.
  • There’s a notion that the social web was once useful, or at least that it could have been good, if only we had pulled a few levers: some moderation and fact-checking here, a bit of regulation there, perhaps a federal antitrust lawsuit. But that’s far too sunny and shortsighted a view.
  • Today’s social networks, Facebook chief among them, were built to encourage the things that make them so harmful. It is in their very architecture.
  • I realized only recently that I’ve been thinking far too narrowly about the problem.
  • Megascale is nearly the existential threat that megadeath is. No single machine should be able to control the fate of the world’s population—and that’s what both the Doomsday Machine and Facebook are built to do.
  • Facebook does not exist to seek truth and report it, or to improve civic health, or to hold the powerful to account, or to represent the interests of its users, though these phenomena may be occasional by-products of its existence.
  • The company’s early mission was to “give people the power to share and make the world more open and connected.” Instead, it took the concept of “community” and sapped it of all moral meaning.
  • Facebook—along with Google and YouTube—is perfect for amplifying and spreading disinformation at lightning speed to global audiences.
  • Facebook decided that it needed not just a very large user base, but a tremendous one, unprecedented in size. That decision set Facebook on a path to escape velocity, to a tipping point where it can harm society just by existing.
  • No one, not even Mark Zuckerberg, can control the product he made. I’ve come to realize that Facebook is not a media company. It’s a Doomsday Machine.
  • Scale and engagement are valuable to Facebook because they’re valuable to advertisers. These incentives lead to design choices such as reaction buttons that encourage users to engage easily and often, which in turn encourage users to share ideas that will provoke a strong response.
  • Every time you click a reaction button on Facebook, an algorithm records it, and sharpens its portrait of who you are.
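A crude illustration of the profiling loop described above, entirely hypothetical; real targeting models are vastly more complex than a weighted counter:

```python
from collections import Counter

# Each reaction click updates a running interest profile; stronger
# reactions get a heavier weight in this toy model.
profile = Counter()

def record_reaction(profile: Counter, topic: str, reaction: str) -> None:
    weight = 2 if reaction in {"love", "angry"} else 1
    profile[topic] += weight

clicks = [("politics", "angry"), ("sports", "like"),
          ("politics", "love"), ("politics", "like")]
for topic, reaction in clicks:
    record_reaction(profile, topic, reaction)

# The sharpened "portrait": whatever dominates the counts.
print(profile.most_common(1))  # [('politics', 5)]
```

Even this caricature shows the dynamic the article describes: high-arousal reactions pull the profile, and therefore the targeting, toward the content that provokes them.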
  • The hyper-targeting of users, made possible by reams of their personal data, creates the perfect environment for manipulation—by advertisers, by political campaigns, by emissaries of disinformation, and of course by Facebook itself, which ultimately controls what you see and what you don’t see on the site.
  • there aren’t enough moderators speaking enough languages, working enough hours, to stop the biblical flood of shit that Facebook unleashes on the world, because 10 times out of 10, the algorithm is faster and more powerful than a person.
  • At megascale, this algorithmically warped personalized informational environment is extraordinarily difficult to moderate in a meaningful way, and extraordinarily dangerous as a result.
  • These dangers are not theoretical, and they’re exacerbated by megascale, which makes the platform a tantalizing place to experiment on people
  • Even after U.S. intelligence agencies identified Facebook as a main battleground for information warfare and foreign interference in the 2016 election, the company has failed to stop the spread of extremism, hate speech, propaganda, disinformation, and conspiracy theories on its site.
  • it wasn’t until October of this year, for instance, that Facebook announced it would remove groups, pages, and Instagram accounts devoted to QAnon, as well as any posts denying the Holocaust.
  • In the days after the 2020 presidential election, Zuckerberg authorized a tweak to the Facebook algorithm so that high-accuracy news sources such as NPR would receive preferential visibility in people’s feeds, and hyper-partisan pages such as Breitbart News’s and Occupy Democrats’ would be buried, according to The New York Times, offering proof that Facebook could, if it wanted to, turn a dial to reduce disinformation—and offering a reminder that Facebook has the power to flip a switch and change what billions of people see online.
  • reducing the prevalence of content that Facebook calls “bad for the world” also reduces people’s engagement with the site. In its experiments with human intervention, the Times reported, Facebook calibrated the dial so that just enough harmful content stayed in users’ news feeds to keep them coming back for more.
  • Facebook’s stated mission—to make the world more open and connected—has always seemed, to me, phony at best, and imperialist at worst.
  • Facebook is a borderless nation-state, with a population of users nearly as big as China and India combined, and it is governed largely by secret algorithms
  • How much real-world violence would never have happened if Facebook didn’t exist? One of the people I’ve asked is Joshua Geltzer, a former White House counterterrorism official who is now teaching at Georgetown Law. In counterterrorism circles, he told me, people are fond of pointing out how good the United States has been at keeping terrorists out since 9/11. That’s wrong, he said. In fact, “terrorists are entering every single day, every single hour, every single minute” through Facebook.
  • Evidence of real-world violence can be easily traced back to both Facebook and 8kun. But 8kun doesn’t manipulate its users or the informational environment they’re in. Both sites are harmful. But Facebook might actually be worse for humanity.
  • In previous eras, U.S. officials could at least study, say, Nazi propaganda during World War II, and fully grasp what the Nazis wanted people to believe. Today, “it’s not a filter bubble; it’s a filter shroud,” Geltzer said. “I don’t even know what others with personalized experiences are seeing.”
  • Mary McCord, the legal director at the Institute for Constitutional Advocacy and Protection at Georgetown Law, told me that she thinks 8kun may be more blatant in terms of promoting violence but that Facebook is “in some ways way worse” because of its reach. “There’s no barrier to entry with Facebook,” she said. “In every situation of extremist violence we’ve looked into, we’ve found Facebook postings. And that reaches tons of people. The broad reach is what brings people into the fold and normalizes extremism and makes it mainstream.” In other words, it’s the megascale that makes Facebook so dangerous.
  • Facebook’s megascale gives Zuckerberg an unprecedented degree of influence over the global population. If he isn’t the most powerful person on the planet, he’s very near the top.
  • “The thing he oversees has such an effect on cognition and people’s beliefs, which can change what they do with their nuclear weapons or their dollars.”
  • Facebook’s new oversight board, formed in response to backlash against the platform and tasked with making decisions concerning moderation and free expression, is an extension of that power. “The first 10 decisions they make will have more effect on speech in the country and the world than the next 10 decisions rendered by the U.S. Supreme Court,” Geltzer said. “That’s power. That’s real power.”
  • Facebook is also a business, and a place where people spend time with one another. Put it this way: If you owned a store and someone walked in and started shouting Nazi propaganda or recruiting terrorists near the cash register, would you, as the shop owner, tell all of the other customers you couldn’t possibly intervene?
  • In 2004, Zuckerberg said Facebook ran advertisements only to cover server costs. But over the next two years Facebook completely upended and redefined the entire advertising industry. The pre-social web destroyed classified ads, but the one-two punch of Facebook and Google decimated local news and most of the magazine industry—publications fought in earnest for digital pennies, which had replaced print dollars, and social giants scooped them all up anyway.
  • In other words, if the Dunbar number for running a company or maintaining a cohesive social life is 150 people, the magic number for a functional social platform is maybe 20,000 people. Facebook now has 2.7 billion monthly users.
  • in 2007, Zuckerberg said something in an interview with the Los Angeles Times that now takes on a much darker meaning: “The things that are most powerful aren’t the things that people would have done otherwise if they didn’t do them on Facebook. Instead, it’s the things that would never have happened otherwise.”
  • We’re still in the infancy of this century’s triple digital revolution of the internet, smartphones, and the social web, and we find ourselves in a dangerous and unstable informational environment, powerless to resist forces of manipulation and exploitation that we know are exerted on us but remain mostly invisible
  • The Doomsday Machine offers a lesson: We should not accept this current arrangement. No single machine should be able to control so many people.
  • we need a new philosophical and moral framework for living with the social web—a new Enlightenment for the information age, and one that will carry us back to shared reality and empiricism.
  • localized approach is part of what made megascale possible. Early constraints around membership—the requirement at first that users attended Harvard, and then that they attended any Ivy League school, and then that they had an email address ending in .edu—offered a sense of cohesiveness and community. It made people feel more comfortable sharing more of themselves. And more sharing among clearly defined demographics was good for business.
  • we need to adopt a broader view of what it will take to fix the brokenness of the social web. That will require challenging the logic of today’s platforms—and first and foremost challenging the very concept of megascale as a way that humans gather.
  • The web’s existing logic tells us that social platforms are free in exchange for a feast of user data; that major networks are necessarily global and centralized; that moderators make the rules. None of that need be the case.
  • We need people who dismantle these notions by building alternatives. And we need enough people to care about these other alternatives to break the spell of venture capital and mass attention that fuels megascale and creates fatalism about the web as it is now.
  • We must also find ways to repair the aspects of our society and culture that the social web has badly damaged. This will require intellectual independence, respectful debate, and the same rebellious streak that helped establish Enlightenment values centuries ago.
  • Right now, too many people are allowing algorithms and tech giants to manipulate them, and reality is slipping from our grasp as a result. This century’s Doomsday Machine is here, and humming along.
Javier E

How Facebook Failed the World - The Atlantic - 0 views

  • In the United States, Facebook has facilitated the spread of misinformation, hate speech, and political polarization. It has algorithmically surfaced false information about conspiracy theories and vaccines, and was instrumental in the ability of an extremist mob to attempt a violent coup at the Capitol. That much is now painfully familiar.
  • these documents show that the Facebook we have in the United States is actually the platform at its best. It’s the version made by people who speak our language and understand our customs, who take our civic problems seriously because those problems are theirs too. It’s the version that exists on a free internet, under a relatively stable government, in a wealthy democracy. It’s also the version to which Facebook dedicates the most moderation resources.
  • Elsewhere, the documents show, things are different. In the most vulnerable parts of the world—places with limited internet access, where smaller user numbers mean bad actors have undue influence—the trade-offs and mistakes that Facebook makes can have deadly consequences.
  • ...23 more annotations...
  • According to the documents, Facebook is aware that its products are being used to facilitate hate speech in the Middle East, violent cartels in Mexico, ethnic cleansing in Ethiopia, extremist anti-Muslim rhetoric in India, and sex trafficking in Dubai. It is also aware that its efforts to combat these things are insufficient. A March 2021 report notes, “We frequently observe highly coordinated, intentional activity … by problematic actors” that is “particularly prevalent—and problematic—in At-Risk Countries and Contexts”; the report later acknowledges, “Current mitigation strategies are not enough.”
  • As recently as late 2020, an internal Facebook report found that only 6 percent of Arabic-language hate content on Instagram was detected by Facebook’s systems. Another report that circulated last winter found that, of material posted in Afghanistan that was classified as hate speech within a 30-day range, only 0.23 percent was taken down automatically by Facebook’s tools. In both instances, employees blamed company leadership for insufficient investment.
  • last year, according to the documents, only 13 percent of Facebook’s misinformation-moderation staff hours were devoted to the non-U.S. countries in which it operates, whose populations comprise more than 90 percent of Facebook’s users.
  • Among the consequences of that pattern, according to the memo: The Hindu-nationalist politician T. Raja Singh, who posted to hundreds of thousands of followers on Facebook calling for India’s Rohingya Muslims to be shot—in direct violation of Facebook’s hate-speech guidelines—was allowed to remain on the platform despite repeated requests to ban him, including from the very Facebook employees tasked with monitoring hate speech.
  • The granular, procedural, sometimes banal back-and-forth exchanges recorded in the documents reveal, in unprecedented detail, how the most powerful company on Earth makes its decisions. And they suggest that, all over the world, Facebook’s choices are consistently driven by public perception, business risk, the threat of regulation, and the specter of “PR fires,” a phrase that appears over and over in the documents.
  • “It’s an open secret … that Facebook’s short-term decisions are largely motivated by PR and the potential for negative attention,” an employee named Sophie Zhang wrote in a September 2020 internal memo about Facebook’s failure to act on global misinformation threats.
  • In a memo dated December 2020 and posted to Workplace, Facebook’s very Facebooklike internal message board, an employee argued that “Facebook’s decision-making on content policy is routinely influenced by political considerations.”
  • To hear this employee tell it, the problem was structural: Employees who are primarily tasked with negotiating with governments over regulation and national security, and with the press over stories, were empowered to weigh in on conversations about building and enforcing Facebook’s rules regarding questionable content around the world. “Time and again,” the memo quotes a Facebook researcher saying, “I’ve seen promising interventions … be prematurely stifled or severely constrained by key decisionmakers—often based on fears of public and policy stakeholder responses.”
  • And although Facebook users post in at least 160 languages, the company has built robust AI detection in only a fraction of those languages, the ones spoken in large, high-profile markets such as the U.S. and Europe—a choice, the documents show, that means problematic content is seldom detected.
  • A 2020 Wall Street Journal article reported that Facebook’s top public-policy executive in India had raised concerns about backlash if the company were to do so, saying that cracking down on leaders from the ruling party might make running the business more difficult.
  • Employees weren’t placated. In dozens and dozens of comments, they questioned the decisions Facebook had made regarding which parts of the company to involve in content moderation, and raised doubts about its ability to moderate hate speech in India. They called the situation “sad” and Facebook’s response “inadequate,” and wondered about the “propriety of considering regulatory risk” when it comes to violent speech.
  • “I have a very basic question,” wrote one worker. “Despite having such strong processes around hate speech, how come there are so many instances that we have failed? It does speak on the efficacy of the process.”
  • Two other employees said that they had personally reported certain Indian accounts for posting hate speech. Even so, one of the employees wrote, “they still continue to thrive on our platform spewing hateful content.”
  • Taken together, Frances Haugen’s leaked documents show Facebook for what it is: a platform racked by misinformation, disinformation, conspiracy thinking, extremism, hate speech, bullying, abuse, human trafficking, revenge porn, and incitements to violence
  • It is a company that has pursued worldwide growth since its inception—and then, when called upon by regulators, the press, and the public to quell the problems its sheer size has created, it has claimed that its scale makes completely addressing those problems impossible.
  • Instead, Facebook’s 60,000-person global workforce is engaged in a borderless, endless, ever-bigger game of whack-a-mole, one with no winners and a lot of sore arms.
  • Zhang details what she found in her nearly three years at Facebook: coordinated disinformation campaigns in dozens of countries, including India, Brazil, Mexico, Afghanistan, South Korea, Bolivia, Spain, and Ukraine. In some cases, such as in Honduras and Azerbaijan, Zhang was able to tie accounts involved in these campaigns directly to ruling political parties. In the memo, posted to Workplace the day Zhang was fired from Facebook for what the company alleged was poor performance, she says that she made decisions about these accounts with minimal oversight or support, despite repeated entreaties to senior leadership. On multiple occasions, she said, she was told to prioritize other work.
  • A Facebook spokesperson said that the company tries “to keep people safe even if it impacts our bottom line,” adding that the company has spent $13 billion on safety since 2016. “​​Our track record shows that we crack down on abuse abroad with the same intensity that we apply in the U.S.”
  • Zhang's memo, though, paints a different picture. “We focus upon harm and priority regions like the United States and Western Europe,” she wrote. But eventually, “it became impossible to read the news and monitor world events without feeling the weight of my own responsibility.”
  • Indeed, Facebook explicitly prioritizes certain countries for intervention by sorting them into tiers, the documents show. Zhang “chose not to prioritize” Bolivia, despite credible evidence of inauthentic activity in the run-up to the country’s 2019 election. That election was marred by claims of fraud, which fueled widespread protests; more than 30 people were killed and more than 800 were injured.
  • “I have blood on my hands,” Zhang wrote in the memo. By the time she left Facebook, she was having trouble sleeping at night. “I consider myself to have been put in an impossible spot—caught between my loyalties to the company and my loyalties to the world as a whole.”
  • What happened in the Philippines—and in Honduras, and Azerbaijan, and India, and Bolivia—wasn’t just that a very large company lacked a handle on the content posted to its platform. It was that, in many cases, a very large company knew what was happening and failed to meaningfully intervene.
  • solving problems for users should not be surprising. The company is under the constant threat of regulation and bad press. Facebook is doing what companies do, triaging and acting in its own self-interest.
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • ...4 more annotations...
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

The Irrational Consumer: Why Economics Is Dead Wrong About How We Make Choices - Derek ... - 0 views

  • Derek Thompson is a senior editor at The Atlantic, where he oversees business coverage for the website. He has also written for Slate, BusinessWeek, and the Daily Beast, and has appeared as a guest on radio and television networks, including NPR, the BBC, CNBC, and MSNBC.
  • First, making a choice is physically exhausting, literally, so that somebody forced to make a number of decisions in a row is likely to get lazy and dumb.
  • Second, having too many choices can make us less likely to come to a conclusion. In a famous study of the so-called "paradox of choice", psychologists Mark Lepper and Sheena Iyengar found that customers presented with six jam varieties were more likely to buy one than customers offered a choice of 24.
  • ...7 more annotations...
  • neurologists are finding that many of the biases behavioral economists perceive in decision-making start in our brains. "Brain studies indicate that organisms seem to be on a hedonic treadmill, quickly habituating to homeostasis," McFadden writes. In other words, perhaps our preference for the status quo isn't just figuratively our heads, but also literally sculpted by the hand of evolution inside of our brains.
  • The third check against the theory of the rational consumer is the fact that we're social animals. We let our friends and family and tribes do our thinking for us
  • Many of our mistakes stem from a central "availability bias." Our brains are computers, and we like to access recently opened files, even though many decisions require a deep body of information that might require some searching. Cheap example: We remember the first, last, and peak moments of certain experiences.
  • The popular psychological theory of "hyperbolic discounting" says people don't properly evaluate rewards over time. The theory seeks to explain why many groups -- nappers, procrastinators, Congress -- take rewards now and pain later, over and over again. But neurology suggests that it hardly makes sense to speak of "the brain," in the singular, because it's two very different parts of the brain that process choices for now and later. The choice to delay gratification is mostly processed in the frontal system. But studies show that the choice to do something immediately gratifying is processed in a different system, the limbic system, which is more viscerally connected to our behavior, our "reward pathways," and our feelings of pain and pleasure.
  • the final message is that neither the physiology of pleasure nor the methods we use to make choices are as simple or as single-minded as the classical economists thought. A lot of behavior is consistent with pursuit of self-interest, but in novel or ambiguous decision-making environments there is a good chance that our habits will fail us and inconsistencies in the way we process information will undo us.
  • Our brains seem to operate like committees, assigning some tasks to the limbic system, others to the frontal system. The "switchboard" does not seem to achieve complete, consistent communication between different parts of the brain. Pleasure and pain are experienced in the limbic system, but not on one fixed "utility" or "self-interest" scale. Pleasure and pain have distinct neural pathways, and these pathways adapt quickly to homeostasis, with sensation coming from changes rather than levels
  • Social networks are sources of information, on what products are available, what their features are, and how your friends like them. If the information is accurate, this should help you make better choices. On the other hand, it also makes it easier for you to follow the crowd rather than engaging in the due diligence of collecting and evaluating your own information and playing it against your own preferences
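  • The hyperbolic-discounting idea quoted above can be made concrete with a small numerical sketch. This is an illustration, not the article's own model: the two standard one-parameter discount functions are shown, and the rate values (`rate=0.1`, `k=0.5`) are arbitrary choices picked to make the preference reversal visible.

```python
from math import exp

def exponential_discount(amount, delay, rate=0.1):
    # Classical form V = A * exp(-r * D): relative preferences between
    # two delayed rewards stay consistent no matter how far away they are.
    return amount * exp(-rate * delay)

def hyperbolic_discount(amount, delay, k=0.5):
    # Hyperbolic form V = A / (1 + k * D): value drops steeply for
    # near-term delays and shallowly for distant ones, producing the
    # "reward now, pain later" pattern the article describes.
    return amount / (1 + k * delay)

# Choice pair: $100 immediately vs. $110 after one day --
# then the same pair pushed 30 days into the future.
print(hyperbolic_discount(100, 0))    # immediate $100
print(hyperbolic_discount(110, 1))    # $110 tomorrow
print(hyperbolic_discount(100, 30))   # $100 in 30 days
print(hyperbolic_discount(110, 31))   # $110 in 31 days
```

Under the hyperbolic curve the $100-now option wins the first comparison but loses the second, even though only the starting date changed: that preference reversal is exactly the inconsistency hyperbolic discounting was proposed to explain, and the exponential discounter never exhibits it.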
oliviaodon

Ban World Leaders from Twitter - The Atlantic - 0 views

  • Before 2017, a president taking to Twitter to taunt a nuclear power would’ve been unthinkable. But Tuesday, Donald Trump, whose bygone impulsiveness contributed to two failed marriages and the bankruptcies of numerous businesses, engaged in a geopolitical boasting contest with North Korea, sacrificing the benefits of considered diplomacy to satiate his impulsiveness and need for attention:
  • This may be the most irresponsible tweet in history.
  • “The good news is, other countries won’t take talk like this too seriously because they understand Trump is a small man who blusters to make himself feel potent. That’s also the bad news; there’s nowhere left to go rhetorically when we need to signal that we’re serious.” Most likely, that’s the fallout.
  • ...5 more annotations...
  • By now these truths are self-evident:Twitter was designed to lower barriers to communication and encourage impulsive, off-the-cuff comments—and at that the platform has been wildly successful. Twitter routinely stokes needless conflict. Countless people who use Twitter routinely publish words that are ill-considered.
  • Having global leaders tweeting gives humanity nothing commensurate with the risks we bear so that the powerful can communicate this way.
  • Banning world leaders from the platform might be a loss for them, but it would be a clear win for humanity: minuscule costs with conceivably civilization-saving benefits.
  • in Trump’s case, there is an absurdity to allowing him to continue tweeting. The platform is now banning people with a few thousand followers to prevent the harm of online harassment—yet it abides a president taunting an erratic totalitarian with an arsenal that could kill millions in minutes if a war were to break out? “You may not make specific threats of violence,” Twitter’s rules state. Mutually assured destruction may well be a necessary evil in our world; communicating it to hostile regimes in a careful, deliberate, responsible manner is part of being president of the United States as most Americans conceive of it; but Twitter is surely within its rights to declare that its platform is neither the time nor the place for such communications––which surely constitute a threat of violence––given the strengths, weaknesses, and limits baked into what it has designed.
  • Twitter should give the people what they want, and ban the most elite of the political elites once and for all. Or if it won’t, it must at least tell the public, in advance of future catastrophe: Would it let a president tweet literally anything? If not, where is the line?
Javier E

Fight the Future - The Triad - 0 views

  • In large part because our major tech platforms reduced the coefficient of friction (μ for my mechanics nerd posse) to basically zero. QAnons crept out of the dark corners of the web—obscure boards like 4chan and 8kun—and got into the mainstream platforms YouTube, Facebook, Instagram, and Twitter.
  • Why did QAnon spread like wildfire in America?
  • These platforms not only made it easy for conspiracy nuts to share their crazy, but they used algorithms that actually boosted the spread of crazy, acting as a force multiplier.
  • ...24 more annotations...
  • So it sounds like a simple fix: Impose more friction at the major platform level and you’ll clean up the public square.
  • But it’s not actually that simple because friction runs counter to the very idea of the internet.
  • The fundamental precept of the internet is that it reduces marginal costs to zero. And this fact is why the design paradigm of the internet is to continually reduce friction experienced by users to zero, too. Because if the second unit of everything is free, then the internet has a vested interest in pushing that unit in front of your eyeballs as smoothly as possible.
  • the internet is “broken,” but rather it’s been functioning exactly as it was designed to:
  • Perhaps more than any other job in the world, you do not want the President of the United States to live in a frictionless state of posting. The Presidency is not meant to be a frictionless position, and the United States government is not a frictionless entity, much to the chagrin of many who have tried to change it. Prior to this administration, decisions were closely scrutinized for, at the very least, legality, along with the impact on diplomacy, general norms, and basic grammar. This kind of legal scrutiny and due diligence is also a kind of friction--one that we now see has a lot of benefits. 
  • The deep lesson here isn’t about Donald Trump. It’s about the collision between the digital world and the real world.
  • In the real world, marginal costs are not zero. And so friction is a desirable element in helping to get to the optimal state. You want people to pause before making decisions.
  • described friction this summer as: “anything that inhibits user action within a digital interface, particularly anything that requires an additional click or screen.” For much of my time in the technology sector, friction was almost always seen as the enemy, a force to be vanquished. A “frictionless” experience was generally held up as the ideal state, the optimal product state.
  • Trump was riding the ultimate frictionless optimized engagement Twitter experience: he rode it all the way to the presidency, and then he crashed the presidency into the ground.
  • From a metrics and user point of view, the abstract notion of the President himself tweeting was exactly what Twitter wanted in its original platonic ideal. Twitter has been built to incentivize someone like Trump to engage and post
  • The other day we talked a little bit about how fighting disinformation, extremism, and online cults is like fighting a virus: There is no “cure.” Instead, what you have to do is create enough friction that the rate of spread becomes slow.
  • Our challenge is that when human and digital design comes into conflict, the artificial constraints we impose should be on the digital world to become more in service to us. Instead, we’ve let the digital world do as it will and tried to reconcile ourselves to the havoc it wreaks.
  • And one of the lessons of the last four years is that when you prize the digital design imperatives—lack of friction—over the human design imperatives—a need for friction—then bad things can happen.
  • We have an ongoing conflict between the design precepts of humans and the design precepts of computers.
  • Anyone who works with computers learns to fear their capacity to forget. Like so many things with computers, memory is strictly binary. There is either perfect recall or total oblivion, with nothing in between. It doesn't matter how important or trivial the information is. The computer can forget anything in an instant. If it remembers, it remembers for keeps.
  • This doesn't map well onto human experience of memory, which is fuzzy. We don't remember anything with perfect fidelity, but we're also not at risk of waking up having forgotten our own name. Memories tend to fade with time, and we remember only the more salient events.
  • And because we live in a time when storage grows ever cheaper, we learn to save everything, log everything, and keep it forever. You never know what will come in useful. Deleting is dangerous.
  • Our lives have become split between two worlds with two very different norms around memory.
  • [A] lot of what's wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.
  • The digital world is designed to never forget anything. It has perfect memory. Forever. So that one time you made a crude joke 20 years ago? It can now ruin your life.
  • Memory in the carbon-based world is imperfect. People forget things. That can be annoying if you’re looking for your keys but helpful if you’re trying to broker peace between two cultures. Or simply become a better person than you were 20 years ago.
  • The digital and carbon-based worlds have different design parameters. Marginal cost is one of them. Memory is another.
  • 2. Forget Me Now
  • 1. Fix Tech, Fix America
aidenborst

Talk of overturning the 2020 election on new social media platforms used by QAnon follo... - 0 views

  • Online conversation among Trump supporters and QAnon followers on new and emerging social media platforms is creating concern on Capitol Hill that President Donald Trump's continued perpetuation of the falsehood that the 2020 election was stolen could soon incite further violence, three congressional sources tell CNN.
  • "It's a great day when we start seeing evidence of the plan coming together! He just told us it won't be long now," wrote another.
  • Trump's comments to right-wing media outlets in recent weeks have played directly into the false belief among some of his supporters that he will be reinstated as president in the coming months.
  • ...9 more annotations...
  • Officials are careful to stress that much of it falls under First Amendment free speech protections.
  • Major social media platforms like Facebook and Twitter suspended the accounts of influential peddlers of election conspiracy theories after the January 6 insurrection at the US Capitol, including Trump himself.
  • "He doesn't have to wait until 2024 people, he's coming back this year, everything is going to be reversed," one Telegram user commented on the clip.
  • The social messaging platform Telegram has emerged as a particular source of concern among law enforcement officials, the congressional sources say.
  • "It's going to be a very interesting time in our country," he said. "How do you govern when you lost?"
  • "We The People will take action," one Telegram user commented in reaction to a clip of the interview.
  • "Trump knows what happens. Biden administration will be removed," commented another, while one warned, "He just told us things are about to get very ugly all over America. These thugs aren't going to take this news very well! Be prepared!"
  • Telegram was founded in Russia in 2013 and quickly became popular as a propaganda and organizing tool for members of ISIS. The company did take some steps to tackle ISIS content.
  • In the days before the insurrection, Watkins was retweeted multiple times by Trump. He was suspended by Twitter after January 6. Now, Watkins posts daily to his 200,000 followers on Telegram in Arizona -- continually casting doubt on the election result.
cartergramiak

Opinion | The Site Trump Could Run to Next - The New York Times - 0 views

  • Facebook and Twitter have kicked Donald Trump off their platforms and Amazon Web Services removed Parler from its cloud. But there’s another popular platform that markets itself as the destination for free speech: Substack.
  • With more than 250,000 unique individuals paying for the newsletters on its platform, Substack is a lot smaller than Twitter or Facebook. Still, it’s a rapidly growing space for big media personalities who want to reach their audience directly.
  • So should media companies be worried about the competition?
  • On this episode of “Sway,” Kara Swisher speaks to Chris Best, the chief executive and a co-founder of Substack, about content moderation on his platform and asks whether Substack is going to destroy media gatekeepers or just turn into one of them.
katherineharron

The Joe Biden campaign intensifies its criticism of Facebook - CNNPolitics - 0 views

  • The Biden campaign ramped up its criticism of Facebook in a second letter to the social media giant, calling for the platform to reject another false anti-Joe Biden ad.
  • In Thursday's letter, Biden campaign manager Greg Schultz blasted Facebook for what he called a "deeply flawed" policy, saying it gives "blanket permission" for candidates to use the platform to "mislead American voters all while Facebook profits from their advertising dollars." The ad was paid for by a political action committee called The Committee to Defend the President.
  • "The ad you wrote us about is currently inactive on our platform and is not running. Should it become active it would then be sent to third party fact checkers where they would have the option to review it," Harbath said.
  • The ad, which is now inactive, accuses the 2020 Democratic presidential candidate of "blackmailing" US allies, alleging impropriety while he was vice president and his son Hunter served on the board of a Ukrainian energy company. There is no evidence of wrongdoing by Joe or Hunter Biden. The ad ends with the narrator saying, "Send Quid Pro Joe Biden into retirement."
  • Warren recently ran a false ad on Facebook deliberately to draw attention to the issue. Facebook then tweeted at Warren that the "FCC doesn't want broadcast companies censoring candidates' speech." It continued, "We agree it's better to let voters -- not companies -- decide."
  • Under current policy, Facebook exempts ads by politicians from third-party fact-checking -- a loophole, both Biden and Warren allege, that allows Zuckerberg to continue taking money from President Donald Trump's campaign and its supporters despite Trump's ads spreading lies about Biden and his son. While it allows politicians to run ads with false information, its policy does not allow PACs to do so. Facebook does not fact-check ads from PACs before they run on the platform.
rerobinson03

Fringe Groups Splinter Online After Facebook and Twitter Bans - The New York Times - 0 views

  • In the days since rioters stormed Capitol Hill, fringe groups like armed militias, QAnon conspiracy theorists and far-right supporters of President Trump have vowed to continue their fight in hundreds of conversations on a range of internet platforms.
  • Some of the organizers have moved to encrypted messaging apps like Telegram and Signal, which cannot be as easily monitored as social media platforms.
  • Adding to the muddle, when Twitter and Facebook kicked Mr. Trump off their platforms last week, they made it harder for organizers to rally around a singular voice.
  • Just hours after rioters were cleared from the Capitol on Wednesday, there was already discussion about what would happen next on Parler and Gab, another social-media platform that has become popular with the far right.
  • Andrew Torba, chief executive of Gab, said: “As we have communicated to our partners in law enforcement, we have adopted a heightened security posture in the lead-up to the inauguration and are ready to respond quickly to any request law enforcement may make of us during the period.”
Javier E

Opinion | It's Time to Break Up Facebook - The New York Times - 1 views

  • For many people today, it’s hard to imagine government doing much of anything right, let alone breaking up a company like Facebook. This isn’t by coincidence.
  • Starting in the 1970s, a small but dedicated group of economists, lawyers and policymakers sowed the seeds of our cynicism. Over the next 40 years, they financed a network of think tanks, journals, social clubs, academic centers and media outlets to teach an emerging generation that private interests should take precedence over public ones
  • Their gospel was simple: “Free” markets are dynamic and productive, while government is bureaucratic and ineffective. By the mid-1980s, they had largely managed to relegate energetic antitrust enforcement to the history books.
  • This shift, combined with business-friendly tax and regulatory policy, ushered in a period of mergers and acquisitions that created megacorporations
  • In the past 20 years, more than 75 percent of American industries, from airlines to pharmaceuticals, have experienced increased concentration, and the average size of public companies has tripled. The results are a decline in entrepreneurship, stalled productivity growth, and higher prices and fewer choices for consumers.
  • Because Facebook so dominates social networking, it faces no market-based accountability. This means that every time Facebook messes up, we repeat an exhausting pattern: first outrage, then disappointment and, finally, resignation.
  • Over a decade later, Facebook has earned the prize of domination. It is worth half a trillion dollars and commands, by my estimate, more than 80 percent of the world’s social networking revenue. It is a powerful monopoly, eclipsing all of its rivals and erasing competition from the social networking category.
  • Facebook’s monopoly is also visible in its usage statistics. About 70 percent of American adults use social media, and a vast majority are on Facebook products
  • Over two-thirds use the core site, a third use Instagram, and a fifth use WhatsApp.
  • As a result of all this, would-be competitors can’t raise the money to take on Facebook. Investors realize that if a company gets traction, Facebook will copy its innovations, shut it down or acquire it for a relatively modest sum
  • Facebook’s dominance is not an accident of history. The company’s strategy was to beat every competitor in plain view, and regulators and the government tacitly — and at times explicitly — approved
  • The F.T.C.’s biggest mistake was to allow Facebook to acquire Instagram and WhatsApp. In 2012, the newer platforms were nipping at Facebook’s heels because they had been built for the smartphone, where Facebook was still struggling to gain traction. Mark responded by buying them, and the F.T.C. approved.
  • Neither Instagram nor WhatsApp had any meaningful revenue, but both were incredibly popular. The Instagram acquisition guaranteed Facebook would preserve its dominance in photo networking, and WhatsApp gave it a new entry into mobile real-time messaging.
  • When it hasn’t acquired its way to dominance, Facebook has used its monopoly position to shut out competing companies or has copied their technology.
  • In 2014, the rules favored curiosity-inducing “clickbait” headlines. In 2016, they enabled the spread of fringe political views and fake news, which made it easier for Russian actors to manipulate the American electorate.
  • As markets become more concentrated, the number of new start-up businesses declines. This holds true in other high-tech areas dominated by single companies, like search (controlled by Google) and e-commerce (taken over by Amazon)
  • I don’t blame Mark for his quest for domination. He has demonstrated nothing more nefarious than the virtuous hustle of a talented entrepreneur
  • It’s on our government to ensure that we never lose the magic of the invisible hand. How did we allow this to happen
  • a narrow reliance on whether or not consumers have experienced price gouging fails to take into account the full cost of market domination
  • It doesn’t recognize that we also want markets to be competitive to encourage innovation and to hold power in check. And it is out of step with the history of antitrust law. Two of the last major antitrust suits, against AT&T and IBM in the 1980s, were grounded in the argument that they had used their size to stifle innovation and crush competition.
  • It is a disservice to the laws and their intent to retain such a laserlike focus on price effects as the measure of all that antitrust was meant to do.”
  • Facebook is the perfect case on which to reverse course, precisely because Facebook makes its money from targeted advertising, meaning users do not pay to use the service. But it is not actually free, and it certainly isn’t harmless.
  • We pay for Facebook with our data and our attention, and by either measure it doesn’t come cheap.
  • The choice is mine, but it doesn’t feel like a choice. Facebook seeps into every corner of our lives to capture as much of our attention and data as possible and, without any alternative, we make the trade.
  • The vibrant marketplace that once drove Facebook and other social media companies to compete to come up with better products has virtually disappeared. This means there’s less chance of start-ups developing healthier, less exploitative social media platforms. It also means less accountability on issues like privacy.
  • The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.
  • Facebook engineers write algorithms that select which users’ comments or experiences end up displayed in the News Feeds of friends and family. These rules are proprietary and so complex that many Facebook employees themselves don’t understand them.
  • What started out as lighthearted entertainment has become the primary way that people of all ages communicate online.
  • In January 2018, Mark announced that the algorithms would favor non-news content shared by friends and news from “trustworthy” sources, which his engineers interpreted — to the confusion of many — as a boost for anything in the category of “politics, crime, tragedy.”
  • As if Facebook’s opaque algorithms weren’t enough, last year we learned that Facebook executives had permanently deleted their own messages from the platform, erasing them from the inboxes of recipients; the justification was corporate security concerns.
  • No one at Facebook headquarters is choosing what single news story everyone in America wakes up to, of course. But they do decide whether it will be an article from a reputable outlet or a clip from “The Daily Show,” a photo from a friend’s wedding or an incendiary call to kill others.
  • Mark knows that this is too much power and is pursuing a twofold strategy to mitigate it. He is pivoting Facebook’s focus toward encouraging more private, encrypted messaging that Facebook’s employees can’t see, let alone control
  • Second, he is hoping for friendly oversight from regulators and other industry executives.
  • In an op-ed essay in The Washington Post in March, he wrote, “Lawmakers often tell me we have too much power over speech, and I agree.” And he went even further than before, calling for more government regulation — not just on speech, but also on privacy and interoperability, the ability of consumers to seamlessly leave one network and transfer their profiles, friend connections, photos and other data to another.
  • I don’t think these proposals were made in bad faith. But I do think they’re an attempt to head off the argument that regulators need to go further and break up the company. Facebook isn’t afraid of a few more rules. It’s afraid of an antitrust case and of the kind of accountability that real government oversight would bring.
  • We don’t expect calcified rules or voluntary commissions to work to regulate drug companies, health care companies, car manufacturers or credit card providers. Agencies oversee these industries to ensure that the private market works for the public good. In these cases, we all understand that government isn’t an external force meddling in an organic market; it’s what makes a dynamic and fair market possible in the first place. This should be just as true for social networking as it is for air travel or pharmaceuticals.
  • Just breaking up Facebook is not enough. We need a new agency, empowered by Congress to regulate tech companies. Its first mandate should be to protect privacy.
  • First, Facebook should be separated into multiple companies. The F.T.C., in conjunction with the Justice Department, should enforce antitrust laws by undoing the Instagram and WhatsApp acquisitions and banning future acquisitions for several years.
  • How would a breakup work? Facebook would have a brief period to spin off the Instagram and WhatsApp businesses, and the three would become distinct companies, most likely publicly traded.
  • Facebook is indeed more valuable when there are more people on it: There are more connections for a user to make and more content to be shared. But the cost of entering the social network business is not that high. And unlike with pipes and electricity, there is no good argument that the country benefits from having only one dominant social networking company.
  • others worry that the breakup of Facebook or other American tech companies could be a national security problem. Because advancements in artificial intelligence require immense amounts of data and computing power, only large companies like Facebook, Google and Amazon can afford these investments, they say. If American companies become smaller, the Chinese will outpace us.
  • The American government needs to do two things: break up Facebook’s monopoly and regulate the company to make it more accountable to the American people.
  • But the biggest winners would be the American people. Imagine a competitive market in which they could choose among one network that offered higher privacy standards, another that cost a fee to join but had little advertising and another that would allow users to customize and tweak their feeds as they saw fit
  • The cost of breaking up Facebook would be next to zero for the government, and lots of people stand to gain economically. A ban on short-term acquisitions would ensure that competitors, and the investors who take a bet on them, would have the space to flourish. Digital advertisers would suddenly have multiple companies vying for their dollars.
  • The Europeans have made headway on privacy with the General Data Protection Regulation, a law that guarantees users a minimal level of protection. A landmark privacy bill in the United States should specify exactly what control Americans have over their digital information, require clearer disclosure to users and provide enough flexibility to the agency to exercise effective oversight over time
  • The agency should also be charged with guaranteeing basic interoperability across platforms.
  • Finally, the agency should create guidelines for acceptable speech on social media
  • We will have to create similar standards that tech companies can use. These standards should of course be subject to the review of the courts, just as any other limits on speech are. But there is no constitutional right to harass others or live-stream violence.
  • These are difficult challenges. I worry that government regulators will not be able to keep up with the pace of digital innovation
  • I worry that more competition in social networking might lead to a conservative Facebook and a liberal one, or that newer social networks might be less secure if government regulation is weak
  • Professor Wu has written that this “policeman at the elbow” led IBM to steer clear “of anything close to anticompetitive conduct, for fear of adding to the case against it.”
  • Finally, an aggressive case against Facebook would persuade other behemoths like Google and Amazon to think twice about stifling competition in their own sectors, out of fear that they could be next.
  • The alternative is bleak. If we do not take action, Facebook’s monopoly will become even more entrenched. With much of the world’s personal communications in hand, it can mine that data for patterns and trends, giving it an advantage over competitors for decades to come.
  • This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.
Javier E

Opinion | What to Do About Facebook, and What Not to Do - The New York Times - 0 views

  • Facebook’s alarming power. The company is among the largest collectors of humanity’s most private information, one of the planet’s most-trafficked sources of news, and it seems to possess the ability, in some degree, to alter public discourse. Worse, essentially all of Facebook’s power is vested in Zuckerberg alone.
  • This feels intolerable; as the philosopher Kanye West put it, “No one man should have all that power.”
  • Persily proposes piercing the black box before we do anything else. He has written draft legislation that would compel large tech platforms to provide to outside researchers a range of data about what users see on the service, how they engage with it, and what information the platform provides to advertisers and governments.
  • Nathaniel Persily, a professor at Stanford Law School, has a neat way of describing the most basic problem in policing Facebook: “At present,” Persily has written, “we do not know even what we do not know” about social media’s effect on the world.
  • Rashad Robinson, president of the civil rights advocacy group Color of Change, favored another proposed law, the Algorithmic Justice and Online Platform Transparency Act, which would also require that platforms release data about how they collect and use personal information about, among other demographic categories, users’ race, ethnicity, sex, religion, gender identity, sexual orientation and disability status, in order to show whether their systems are being applied in discriminatory ways.
  • one idea as “unsexy but important”: Educating the public to resist believing everything they see online.
  • What we need, then, is something like a society-wide effort to teach people how to process digital information.
  • In his new book, “Tech Panic: Why We Shouldn’t Fear Facebook and the Future,” Robby Soave, an editor at Reason magazine, argues that the media and lawmakers have become too worked up about the dangers posed by Facebook. He doesn’t disagree that the company’s rise has had some terrible effects, but he worries that some proposals could exacerbate Facebook’s dominance — a point with which I agree.
  • But Soave will probably get what he wants. As long as there’s wide disagreement among politicians about how to address Facebook’s ills, doing nothing might be the likeliest outcome.
Javier E

What the War on Terror Cost America | Foreign Affairs - 0 views

  • At a joint session of Congress on September 20, 2001, U.S. President George W. Bush announced a new type of war, a “war on terror.” He laid out its terms: “We will direct every resource at our command—every means of diplomacy, every tool of intelligence, every instrument of law enforcement, every financial influence, and every necessary weapon of war—to the disruption and to the defeat of the global terror network.” Then he described what that defeat might look like: “We will starve terrorists of funding, turn them one against another, drive them from place to place until there is no refuge or no rest.”
  • If Bush’s words outlined the essential objectives of the global war on terror, 20 years later, the United States has largely achieved them. Osama bin Laden is dead. The surviving core members of al Qaeda are dispersed and weak. Bin Laden’s successor, Ayman al-Zawahiri, communicates only through rare propaganda releases, and al Qaeda’s most powerful offshoot, the Islamic State (or ISIS), has seen its territorial holdings dwindle to insignificance in Iraq and Syria.
  • Most important, however, is the United States’ success in securing its homeland.
  • Since 9/11, the United States has suffered, on average, six deaths per year due to jihadi terrorism. (To put this in perspective, in 2019, an average of 39 Americans died every day from overdoses involving prescription opioids.) If the goal of the global war on terror was to prevent significant acts of terrorism, particularly in the United States, then the war has succeeded.
  • But at what cost?
  • Every war the United States has fought, beginning with the American Revolution, has required an economic model to sustain it with sufficient bodies and cash.
  • Like its predecessors, the war on terror came with its own model: the war was fought by an all-volunteer military and paid for largely through deficit spending.
  • It should be no surprise that this model, which by design anesthetized a majority of Americans to the costs of conflict, delivered them their longest war; in his September 20, 2001, speech, when describing how Americans might support the war effort, Bush said, “I ask you to live your lives and hug your children.”
  • This model has also had a profound effect on American democracy, one that is only being fully understood 20 years later.
  • Funding the war through deficit spending allowed it to fester through successive administrations with hardly a single politician ever mentioning the idea of a war tax. Meanwhile, other forms of spending—from financial bailouts to health care and, most recently, a pandemic recovery stimulus package—generate breathless debate.
  • Technological and social changes have numbed them to its human cost. The use of drone aircraft and other platforms has facilitated the growing automation of combat, which allows the U.S. military to kill remotely. This development has further distanced Americans from the grim costs of war
  • the absence of a draft has allowed the U.S. government to outsource its wars to a military caste, an increasingly self-segregated portion of society, opening up a yawning civil-military divide as profound as any that American society has ever known.
  • For now, the military remains one of the most trusted institutions in the United States and one of the few that the public sees as having no overt political bias. How long will this trust last under existing political conditions? As partisanship taints every facet of American life, it would seem to be only a matter of time before that infection spreads to the U.S. military.
  • From Caesar’s Rome to Napoleon’s France, history shows that when a republic couples a large standing military with dysfunctional domestic politics, democracy doesn’t last long. The United States today meets both conditions.
  • Historically, this has invited the type of political crisis that leads to military involvement (or even intervention) in domestic politics.
  • How imminent is the threat from these states? When it comes to legacy military platforms—aircraft carriers, tanks, fighter planes—the United States continues to enjoy a healthy technological dominance over its near-peer competitors. But its preferred platforms might not be the right ones. Long-range land-based cruise missiles could render large aircraft carriers obsolete. Advances in cyberoffense could make tech-reliant fighter aircraft too vulnerable to fly
  • It is not difficult to imagine a more limited counterterrorism campaign in Afghanistan that might have brought bin Laden to justice or a strategy to contain Saddam Hussein’s Iraq that would not have involved a full-scale U.S. invasion. The long, costly counterinsurgency campaigns that followed in each country were wars of choice.
  • Both proved to be major missteps when it came to achieving the twin goals of bringing the perpetrators of 9/11 to justice and securing the homeland. In fact, at several moments over the past two decades, the wars set back those objectives
  • Few years proved to be more significant in the war on terror than 2011. Aside from being the year bin Laden was killed, it also was the year the Arab Spring took off and the year U.S. troops fully withdrew from Iraq. If the great strategic blunder of the Bush administration was to put troops into Iraq, then the great strategic blunder of the Obama administration was to pull all of them out. Both missteps created power vacuums. The first saw the flourishing of al Qaeda in Iraq; the second gave birth to that group’s successor, ISIS.
  • But what makes the war on terror different from other wars is that victory has never been based on achieving a positive outcome; the goal has been to prevent a negative one.
  • How, then, do you declare victory? How do you prove a negative?
  • The wars in Afghanistan and Iraq represented a familiar type of war, with an invasion to topple a government and liberate a people, followed by a long occupation and counterinsurgency campaigns.
  • In addition to blood and treasure, there is another metric by which the war on terror can be judged: opportunity cost
  • For the past two decades, while Washington was repurposing the U.S. military to engage in massive counterinsurgency campaigns and precision counterterrorism operations, Beijing was busy building a military to fight and defeat a peer-level competitor.
  • Today, the Chinese navy is the largest in the world. It boasts 350 commissioned warships to the U.S. Navy’s roughly 290.
  • it now seems inevitable that the two countries’ militaries will one day reach parity. China has spent 20 years building a chain of artificial islands throughout the South China Sea that can effectively serve as a defensive line of unsinkable aircraft carriers.
  • Culturally, China has become more militaristic, producing hypernationalist content such as the Wolf Warrior action movies.
  • After the century opened with 9/11, conventional wisdom had it that nonstate actors would prove to be the greatest threat to U.S. national security
  • Nonstate actors have compromised national security not by attacking the United States but by diverting its attention away from state actors. It is these classic antagonists—China, Iran, North Korea, and Russia—that have expanded their capabilities and antipathies in the face of a distracted United States.
  • it may seem odd to separate the wars in Afghanistan and Iraq from the war on terror,
  • The greatest minds in the U.S. military have now, finally, turned their attention to these concerns, with the U.S. Marine Corps, for example, shifting its entire strategic focus to a potential conflict with China. But it may be too late.
  • Americans’ fatigue—and rival countries’ recognition of it—has limited the United States’ strategic options. As a result, presidents have adopted policies of inaction, and American credibility has eroded.
  • When Obama went to legislators to gain support for a military strike against the Assad regime, he encountered bipartisan war fatigue that mirrored the fatigue of voters, and he called off the attack. The United States’ redline had been crossed, without incident or reprisal.
  • Fatigue may seem like a “soft” cost of the war on terror, but it is a glaring strategic liability.
  • This proved to be true during the Cold War when, at the height of the Vietnam War, in 1968, the Soviets invaded Czechoslovakia, and when, in the war’s aftermath, in 1979, the Soviets invaded Afghanistan. Because it was embroiled in a war in the first case and reeling from it in the second, the United States could not credibly deter Soviet military aggression
  • It is no coincidence that China, for instance, has felt empowered to infringe on Hong Kong’s autonomy and commit brazen human rights abuses against its minority Uyghur population. When American power recedes, other states fill the vacuum.
  • U.S. adversaries have also learned to obfuscate their aggression. The cyberwar currently being waged from Russia is one example, with the Russian government claiming no knowledge of the spate of ransomware attacks emanating from within its borders. With Taiwan, likewise, Chinese aggression probably wouldn’t manifest in conventional military ways. Beijing is more likely to take over the island through gradual annexation, akin to what it has done with Hong Kong, than stage an outright invasion.
  • From time to time, people have asked in what ways the war changed me. I have never known how to answer this question because ultimately the war didn’t change me; the war made me
  • Today, I have a hard time remembering what the United States used to be like. I forget what it was like to be able to arrive at the airport just 20 minutes before a flight. What it was like to walk through a train station without armed police meandering around the platforms. Or what it was like to believe—particularly in those heady years right after the Cold War—that the United States’ version of democracy would remain ascendant for all time and that the world had reached “the end of history.”
  • Today, the United States is different; it is skeptical of its role in the world, more clear-eyed about the costs of war despite having experienced those costs only in predominantly tangential ways. Americans’ appetite to export their ideals abroad is also diminished, particularly as they struggle to uphold those ideals at home, whether in violence around the 2020 presidential election, the summer of 2020’s civil unrest, or even the way the war on terror compromised the country through scandals from Abu Ghraib prison to Edward Snowden’s leaks. A United States in which Band of Brothers has near-universal appeal is a distant memory.
  • When I told him that even though we might have lost the war in Afghanistan, our generation could still claim to have won the war on terror, he was skeptical. We debated the issue but soon let it drop. The next day, I received an email from him. A southerner and a lover of literature, he had sent me the following, from The Sound and the Fury:
  • No battle is ever won. . . . They are not even fought. The field only reveals to man his own folly and despair, and victory is an illusion of philosophers and fools.
criscimagnael

Jan. 6 Committee Subpoenas Twitter, Meta, Alphabet and Reddit - The New York Times - 0 views

  • The House committee investigating the Jan. 6 attack on the Capitol issued subpoenas on Thursday to four major social media companies — Alphabet, Meta, Reddit and Twitter — criticizing them for allowing extremism to spread on their platforms and saying they have failed to cooperate adequately with the inquiry.
  • In letters accompanying the subpoenas, the panel named Facebook, a unit of Meta, and YouTube, which is owned by Alphabet’s Google subsidiary, as among the worst offenders that contributed to the spread of misinformation and violent extremism.
  • The committee sent letters in August to 15 social media companies — including sites where misinformation about election fraud spread, such as the pro-Trump website TheDonald.win — seeking documents pertaining to efforts to overturn the election and any domestic violent extremists associated with the Jan. 6 rally and attack.
  • ...16 more annotations...
  • “It’s disappointing that after months of engagement, we still do not have the documents and information necessary to answer those basic questions,”
  • In the days after the attack, Reddit banned a discussion forum dedicated to former President Donald J. Trump, where tens of thousands of Mr. Trump’s supporters regularly convened to express solidarity with him.
  • In the year since the events of Jan. 6, social media companies have been heavily scrutinized for whether their sites played an instrumental role in organizing the attack.
  • In the months surrounding the 2020 election, employees inside Meta raised warning signs that Facebook posts and comments containing “combustible election misinformation” were spreading quickly across the social network, according to a cache of documents and photos reviewed by The New York Times.
  • Frances Haugen, a former Facebook employee turned whistle-blower, said the company relaxed its safeguards too quickly after the election, which allowed the platform to be used in organizing the storming of the Capitol.
  • On Twitter, many of Mr. Trump’s followers used the site to amplify and spread false allegations of election fraud while connecting with other Trump supporters and conspiracy theorists. And on YouTube, some users broadcast the events of Jan. 6 using the platform’s video streaming technology.
  • Meta said that it had “produced documents to the committee on a schedule committee staff requested — and we will continue to do so.”
  • The committee said letters to the four firms accompanied the subpoenas. The panel said YouTube served as a platform for “significant communications by its users that were relevant to the planning and execution of the Jan. 6 attack on the United States Capitol,” including livestreams of the attack as it was taking place.
  • The panel said Facebook and other Meta platforms were used to share messages of “hate, violence and incitement; to spread misinformation, disinformation and conspiracy theories around the election; and to coordinate or attempt to coordinate the Stop the Steal movement.”
  • “Meta has declined to commit to a deadline for producing or even identifying these materials,” Mr. Thompson wrote to Mark Zuckerberg, Meta’s chief executive.
  • The panel said it was focused on Reddit because the platform hosted the r/The_Donald subreddit community, which grew substantially before migrating in 2020 to the website TheDonald.win, which ultimately hosted significant discussion and planning related to the Jan. 6 attack.
  • “Unfortunately, the select committee believes Twitter has failed to disclose critical information,” the panel stated.
  • Big Tech and Washington have a history of butting heads. Some Republicans have accused sites including Facebook, Instagram and Twitter of silencing conservative voices.
  • The Federal Trade Commission is investigating whether a number of tech companies have grown too big, and in the process abused their market power to stifle competition. And a bipartisan group of senators and representatives continues to say sites like Facebook and YouTube are not doing enough to curb the spread of misinformation and conspiracy theories.
  • After months of discussions with the companies, only the four large corporations were issued subpoenas on Thursday, because the committee said the firms were “unwilling to commit to voluntarily and expeditiously” cooperating with its work.
  • The panel has interviewed more than 340 witnesses and issued dozens of subpoenas, including for bank and phone records.