Home/ TOK Friends/ Group items tagged platform


Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • ...15 more annotations...
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Why the Past 10 Years of American Life Have Been Uniquely Stupid - The Atlantic

  • Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust), strong institutions, and shared stories.
  • Social media has weakened all three.
  • gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will.
  • ...118 more annotations...
  • the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.
  • Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom
  • That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers.
  • “Like” and “Share” buttons quickly became standard features of most other platforms.
  • Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well.
  • Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.
  • By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous”
  • If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.
  • This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment,
  • As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.
  • It was just this kind of twitchy and explosive spread of anger that James Madison had tried to protect us from as he was drafting the U.S. Constitution.
  • The Framers of the Constitution were excellent social psychologists. They knew that democracy had an Achilles’ heel because it depended on the collective judgment of the people, and democratic communities are subject to “the turbulency and weakness of unruly passions.”
  • The key to designing a sustainable republic, therefore, was to build in mechanisms to slow things down, cool passions, require compromise, and give leaders some insulation from the mania of the moment while still holding them accountable to the people periodically, on Election Day.
  • The tech companies that enhanced virality from 2009 to 2012 brought us deep into Madison’s nightmare.
  • a less quoted yet equally important insight, about democracy’s vulnerability to triviality.
  • Madison notes that people are so prone to factionalism that “where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.”
  • Social media has both magnified and weaponized the frivolous.
  • It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust.
  • a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.
  • when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side
  • The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia).
  • The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.
  • When people lose trust in institutions, they lose trust in the stories told by those institutions. That’s particularly true of the institutions entrusted with the education of children.
  • Facebook and Twitter make it possible for parents to become outraged every day over a new snippet from their children’s history lessons––and math lessons and literature selections, and any new pedagogical shifts anywhere in the country
  • The motives of teachers and administrators come into question, and overreaching laws or curricular reforms sometimes follow, dumbing down education and reducing trust in it further.
  • young people educated in the post-Babel era are less likely to arrive at a coherent story of who we are as a people, and less likely to share any such story with those who attended different schools or who were educated in a different decade.
  • former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book, The Revolt of the Public. Gurri’s analysis focused on the authority-subverting effects of information’s exponential growth, beginning with the internet in the 1990s. Writing nearly a decade ago, Gurri could already see the power of social media as a universal solvent, breaking down bonds and weakening institutions everywhere it reached.
  • he notes a constructive feature of the pre-digital era: a single “mass audience,” all consuming the same content, as if they were all looking into the same gigantic mirror at the reflection of their own society.
  • The digital revolution has shattered that mirror, and now the public inhabits those broken pieces of glass. So the public isn’t one thing; it’s highly fragmented, and it’s basically mutually hostile
  • Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.
  • I think we can date the fall of the tower to the years between 2011 (Gurri’s focal year of “nihilistic” protests) and 2015, a year marked by the “great awokening” on the left and the ascendancy of Donald Trump on the right.
  • Twitter can overpower all the newspapers in the country, and stories cannot be shared (or at least trusted) across more than a few adjacent fragments—so truth cannot achieve widespread adherence.
  • After Babel, nothing really means anything anymore––at least not in a way that is durable and on which people widely agree.
  • Politics After Babel
  • “Politics is the art of the possible,” the German statesman Otto von Bismarck said in 1867. In a post-Babel democracy, not much may be possible.
  • The ideological distance between the two parties began increasing faster in the 1990s. Fox News and the 1994 “Republican Revolution” converted the GOP into a more combative party.
  • So cross-party relationships were already strained before 2009. But the enhanced virality of social media thereafter made it more hazardous to be seen fraternizing with the enemy or even failing to attack the enemy with sufficient vigor.
  • What changed in the 2010s? Let’s revisit that Twitter engineer’s metaphor of handing a loaded gun to a 4-year-old. A mean tweet doesn’t kill anyone; it is an attempt to shame or punish someone publicly while broadcasting one’s own virtue, brilliance, or tribal loyalties. It’s more a dart than a bullet
  • from 2009 to 2012, Facebook and Twitter passed out roughly 1 billion dart guns globally. We’ve been shooting one another ever since.
  • The group furthest to the right, the “devoted conservatives,” comprised 6 percent of the U.S. population.
  • the warped “accountability” of social media has also brought injustice—and political dysfunction—in three ways.
  • First, the dart guns of social media give more power to trolls and provocateurs while silencing good citizens.
  • a small subset of people on social-media platforms are highly concerned with gaining status and are willing to use aggression to do so.
  • Across eight studies, Bor and Petersen found that being online did not make most people more aggressive or hostile; rather, it allowed a small number of aggressive people to attack a much larger set of victims. Even a small number of jerks were able to dominate discussion forums,
  • Additional research finds that women and Black people are harassed disproportionately, so the digital public square is less welcoming to their voices.
  • Second, the dart guns of social media give more power and voice to the political extremes while reducing the power and voice of the moderate majority.
  • The “Hidden Tribes” study, by the pro-democracy group More in Common, surveyed 8,000 Americans in 2017 and 2018 and identified seven groups that shared beliefs and behaviors.
  • Social media has given voice to some people who had little previously, and it has made it easier to hold powerful people accountable for their misdeeds
  • The group furthest to the left, the “progressive activists,” comprised 8 percent of the population. The progressive activists were by far the most prolific group on social media: 70 percent had shared political content over the previous year. The devoted conservatives followed, at 56 percent.
  • These two extreme groups are similar in surprising ways. They are the whitest and richest of the seven groups, which suggests that America is being torn apart by a battle between two subsets of the elite who are not representative of the broader society.
  • they are the two groups that show the greatest homogeneity in their moral and political attitudes.
  • likely a result of thought-policing on social media:
  • political extremists don’t just shoot darts at their enemies; they spend a lot of their ammunition targeting dissenters or nuanced thinkers on their own team.
  • Finally, by giving everyone a dart gun, social media deputizes everyone to administer justice with no due process. Platforms like Twitter devolve into the Wild West, with no accountability for vigilantes.
  • Enhanced-virality platforms thereby facilitate massive collective punishment for small or imagined offenses, with real-world consequences, including innocent people losing their jobs and being shamed into suicide
  • we don’t get justice and inclusion; we get a society that ignores context, proportionality, mercy, and truth.
  • Since the tower fell, debates of all kinds have grown more and more confused. The most pervasive obstacle to good thinking is confirmation bias, which refers to the human tendency to search only for evidence that confirms our preferred beliefs
  • search engines were supercharging confirmation bias, making it far easier for people to find evidence for absurd beliefs and conspiracy theories
  • The most reliable cure for confirmation bias is interaction with people who don’t share your beliefs. They confront you with counterevidence and counterargument.
  • In his book The Constitution of Knowledge, Jonathan Rauch describes the historical breakthrough in which Western societies developed an “epistemic operating system”—that is, a set of institutions for generating knowledge from the interactions of biased and cognitively flawed individuals
  • English law developed the adversarial system so that biased advocates could present both sides of a case to an impartial jury.
  • Newspapers full of lies evolved into professional journalistic enterprises, with norms that required seeking out multiple sides of a story, followed by editorial review, followed by fact-checking.
  • Universities evolved from cloistered medieval institutions into research powerhouses, creating a structure in which scholars put forth evidence-backed claims with the knowledge that other scholars around the world would be motivated to gain prestige by finding contrary evidence.
  • Part of America’s greatness in the 20th century came from having developed the most capable, vibrant, and productive network of knowledge-producing institutions in all of human history
  • But this arrangement, Rauch notes, “is not self-maintaining; it relies on an array of sometimes delicate social settings and understandings, and those need to be understood, affirmed, and protected.”
  • This, I believe, is what happened to many of America’s key institutions in the mid-to-late 2010s. They got stupider en masse because social media instilled in their members a chronic fear of getting darted
  • it was so pervasive that it established new behavioral norms backed by new policies seemingly overnight
  • Participants in our key institutions began self-censoring to an unhealthy degree, holding back critiques of policies and ideas—even those presented in class by their students—that they believed to be ill-supported or wrong.
  • The stupefying process plays out differently on the right and the left because their activist wings subscribe to different narratives with different sacred values.
  • The “Hidden Tribes” study tells us that the “devoted conservatives” score highest on beliefs related to authoritarianism. They share a narrative in which America is eternally under threat from enemies outside and subversives within; they see life as a battle between patriots and traitors.
  • they are psychologically different from the larger group of “traditional conservatives” (19 percent of the population), who emphasize order, decorum, and slow rather than radical change.
  • The traditional punishment for treason is death, hence the battle cry on January 6: “Hang Mike Pence.”
  • Right-wing death threats, many delivered by anonymous accounts, are proving effective in cowing traditional conservatives
  • The wave of threats delivered to dissenting Republican members of Congress has similarly pushed many of the remaining moderates to quit or go silent, giving us a party ever more divorced from the conservative tradition, constitutional responsibility, and reality.
  • The stupidity on the right is most visible in the many conspiracy theories spreading across right-wing media and now into Congress.
  • The Democrats have also been hit hard by structural stupidity, though in a different way. In the Democratic Party, the struggle between the progressive wing and the more moderate factions is open and ongoing, and often the moderates win.
  • The problem is that the left controls the commanding heights of the culture: universities, news organizations, Hollywood, art museums, advertising, much of Silicon Valley, and the teachers’ unions and teaching colleges that shape K–12 education. And in many of those institutions, dissent has been stifled:
  • Liberals in the late 20th century shared a belief that the sociologist Christian Smith called the “liberal progress” narrative, in which America used to be horrifically unjust and repressive, but, thanks to the struggles of activists and heroes, has made (and continues to make) progress toward realizing the noble promise of its founding.
  • It is also the view of the “traditional liberals” in the “Hidden Tribes” study (11 percent of the population), who have strong humanitarian values, are older than average, and are largely the people leading America’s cultural and intellectual institutions.
  • when the newly viralized social-media platforms gave everyone a dart gun, it was younger progressive activists who did the most shooting, and they aimed a disproportionate number of their darts at these older liberal leaders.
  • Confused and fearful, the leaders rarely challenged the activists or their nonliberal narrative in which life at every institution is an eternal battle among identity groups over a zero-sum pie, and the people on top got there by oppressing the people on the bottom. This new narrative is rigidly egalitarian––focused on equality of outcomes, not of rights or opportunities. It is unconcerned with individual rights.
  • The universal charge against people who disagree with this narrative is not “traitor”; it is “racist,” “transphobe,” “Karen,” or some related scarlet letter marking the perpetrator as one who hates or harms a marginalized group.
  • The punishment that feels right for such crimes is not execution; it is public shaming and social death.
  • anyone on Twitter had already seen dozens of examples teaching the basic lesson: Don’t question your own side’s beliefs, policies, or actions. And when traditional liberals go silent, as so many did in the summer of 2020, the progressive activists’ more radical narrative takes over as the governing narrative of an organization.
  • This is why so many epistemic institutions seemed to “go woke” in rapid succession that year and the next, beginning with a wave of controversies and resignations at The New York Times and other newspapers, and continuing on to social-justice pronouncements by groups of doctors and medical associations
  • The problem is structural. Thanks to enhanced-virality social media, dissent is punished within many of our institutions, which means that bad ideas get elevated into official policy.
  • In a 2018 interview, Steve Bannon, the former adviser to Donald Trump, said that the way to deal with the media is “to flood the zone with shit.” He was describing the “firehose of falsehood” tactic pioneered by Russian disinformation programs to keep Americans confused, disoriented, and angry.
  • artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence.
  • Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods—whether through text, images, or deep-fake videos—will quickly become inconceivably easy. (She co-wrote the essay with GPT-3.)
  • American factions won’t be the only ones using AI and social media to generate attack content; our adversaries will too.
  • In the 20th century, America’s shared identity as the country leading the fight to make the world safe for democracy was a strong force that helped keep the culture and the polity together.
  • In the 21st century, America’s tech companies have rewired the world and created products that now appear to be corrosive to democracy, obstacles to shared understanding, and destroyers of the modern tower.
  • What changes are needed?
  • I can suggest three categories of reforms––three goals that must be achieved if democracy is to remain viable in the post-Babel era.
  • We must harden democratic institutions so that they can withstand chronic anger and mistrust, reform social media so that it becomes less socially corrosive, and better prepare the next generation for democratic citizenship in this new age.
  • Harden Democratic Institutions
  • we must reform key institutions so that they can continue to function even if levels of anger, misinformation, and violence increase far above those we have today.
  • Reforms should reduce the outsize influence of angry extremists and make legislators more responsive to the average voter in their district.
  • One example of such a reform is to end closed party primaries, replacing them with a single, nonpartisan, open primary from which the top several candidates advance to a general election that also uses ranked-choice voting
  • A second way to harden democratic institutions is to reduce the power of either political party to game the system in its favor, for example by drawing its preferred electoral districts or selecting the officials who will supervise elections
  • These jobs should all be done in a nonpartisan way.
  • Reform Social Media
  • Social media’s empowerment of the far left, the far right, domestic trolls, and foreign agents is creating a system that looks less like democracy and more like rule by the most aggressive.
  • it is within our power to reduce social media’s ability to dissolve trust and foment structural stupidity. Reforms should limit the platforms’ amplification of the aggressive fringes while giving more voice to what More in Common calls “the exhausted majority.”
  • the main problem with social media is not that some people post fake or toxic stuff; it’s that fake and outrage-inducing content can now attain a level of reach and influence that was not possible before
  • Perhaps the biggest single change that would reduce the toxicity of existing platforms would be user verification as a precondition for gaining the algorithmic amplification that social media offers.
  • One of the first orders of business should be compelling the platforms to share their data and their algorithms with academic researchers.
  • Prepare the Next Generation
  • Childhood has become more tightly circumscribed in recent generations––with less opportunity for free, unstructured play; less unsupervised time outside; more time online. Whatever else the effects of these shifts, they have likely impeded the development of abilities needed for effective self-governance for many young adults
  • Depression makes people less likely to want to engage with new people, ideas, and experiences. Anxiety makes new things seem more threatening. As these conditions have risen and as the lessons on nuanced social behavior learned through free play have been delayed, tolerance for diverse viewpoints and the ability to work out disputes have diminished among many young people
  • Students did not just say that they disagreed with visiting speakers; some said that those lectures would be dangerous, emotionally devastating, a form of violence. Because rates of teen depression and anxiety have continued to rise into the 2020s, we should expect these views to continue in the generations to follow, and indeed to become more severe.
  • The most important change we can make to reduce the damaging effects of social media on children is to delay entry until they have passed through puberty.
  • The age should be raised to at least 16, and companies should be held responsible for enforcing it.
  • Let them out to play. Stop starving children of the experiences they most need to become good citizens: free play in mixed-age groups of children with minimal adult supervision
  • while social media has eroded the art of association throughout society, it may be leaving its deepest and most enduring marks on adolescents. A surge in rates of anxiety, depression, and self-harm among American teens began suddenly in the early 2010s. (The same thing happened to Canadian and British teens, at the same time.) The cause is not known, but the timing points to social media as a substantial contributor—the surge began just as the large majority of American teens became daily users of the major platforms.
  • What would it be like to live in Babel in the days after its destruction? We know. It is a time of confusion and loss. But it is also a time to reflect, listen, and build.
  • In recent years, Americans have started hundreds of groups and organizations dedicated to building trust and friendship across the political divide, including BridgeUSA, Braver Angels (on whose board I serve), and many others listed at BridgeAlliance.us. We cannot expect Congress and the tech companies to save us. We must change ourselves and our communities.
  • when we look away from our dysfunctional federal government, disconnect from social media, and talk with our neighbors directly, things seem more hopeful. Most Americans in the More in Common report are members of the “exhausted majority,” which is tired of the fighting and is willing to listen to the other side and compromise. Most Americans now see that social media is having a negative impact on the country, and are becoming more aware of its damaging effects on children.
Javier E

Instagram's Algorithm Delivers Toxic Video Mix to Adults Who Follow Children - WSJ

  • Instagram’s Reels video service is designed to show users streams of short videos on topics the system decides will interest them, such as sports, fashion or humor. 
  • The Meta Platforms-owned social app does the same thing for users its algorithm decides might have a prurient interest in children, testing by The Wall Street Journal showed.
  • The Journal sought to determine what Instagram’s Reels algorithm would recommend to test accounts set up to follow only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.
  • ...30 more annotations...
  • “Our systems are effective at reducing harmful content, and we’ve invested billions in safety, security and brand suitability solutions,” said Samantha Stetson, a Meta vice president who handles relations with the advertising industry. She said the prevalence of inappropriate content on Instagram is low, and that the company invests heavily in reducing it.
  • The Journal set up the test accounts after observing that the thousands of followers of such young people’s accounts often include large numbers of adult men, and that many of the accounts who followed those children also had demonstrated interest in sex content related to both children and adults
  • The Journal also tested what the algorithm would recommend after its accounts followed some of those users as well, which produced more-disturbing content interspersed with ads.
  • The Canadian Centre for Child Protection, a child-protection group, separately ran similar tests on its own, with similar results.
  • Meta said the Journal’s tests produced a manufactured experience that doesn’t represent what billions of users see. The company declined to comment on why the algorithms compiled streams of separate videos showing children, sex and advertisements, but a spokesman said that in October it introduced new brand safety tools that give advertisers greater control over where their ads appear, and that Instagram either removes or reduces the prominence of four million videos suspected of violating its standards each month. 
  • The Journal reported in June that algorithms run by Meta, which owns both Facebook and Instagram, connect large communities of users interested in pedophilic content. The Meta spokesman said a task force set up after the Journal’s article has expanded its automated systems for detecting users who behave suspiciously, taking down tens of thousands of such accounts each month. The company also is participating in a new industry coalition to share signs of potential child exploitation.
  • Following what it described as Meta’s unsatisfactory response to its complaints, Match began canceling Meta advertising for some of its apps, such as Tinder, in October. It has since halted all Reels advertising and stopped promoting its major brands on any of Meta’s platforms. “We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” said Match spokeswoman Justine Sacco.
  • Even before the 2020 launch of Reels, Meta employees understood that the product posed safety concerns, according to former employees.
  • Robbie McKay, a spokesman for Bumble, said it “would never intentionally advertise adjacent to inappropriate content,” and that the company is suspending its ads across Meta’s platforms.
  • Meta created Reels to compete with TikTok, the video-sharing platform owned by Beijing-based ByteDance. Both products feed users a nonstop succession of videos posted by others, and make money by inserting ads among them. Both companies’ algorithms show to a user videos the platforms calculate are most likely to keep that user engaged, based on his or her past viewing behavior
  • The Journal reporters set up the Instagram test accounts as adults on newly purchased devices and followed the gymnasts, cheerleaders and other young influencers. The tests showed that following only the young girls triggered Instagram to begin serving videos from accounts promoting adult sex content alongside ads for major consumer brands, such as one for Walmart that ran after a video of a woman exposing her crotch. 
  • When the test accounts then followed some users who followed those same young people’s accounts, they yielded even more disturbing recommendations. The platform served a mix of adult pornography and child-sexualizing material, such as a video of a clothed girl caressing her torso and another of a child pantomiming a sex act.
  • Experts on algorithmic recommendation systems said the Journal’s tests showed that while gymnastics might appear to be an innocuous topic, Meta’s behavioral tracking has discerned that some Instagram users following preteen girls will want to engage with videos sexualizing children, and then directs such content toward them.
  • Current and former Meta employees said in interviews that the tendency of Instagram algorithms to aggregate child sexualization content from across its platform was known internally to be a problem. Once Instagram pigeonholes a user as interested in any particular subject matter, they said, its recommendation systems are trained to push more related content to them.
  • Preventing the system from pushing noxious content to users interested in it, they said, requires significant changes to the recommendation algorithms that also drive engagement for normal users. Company documents reviewed by the Journal show that the company’s safety staffers are broadly barred from making changes to the platform that might reduce daily active users by any measurable amount.
  • The test accounts showed that advertisements were regularly added to the problematic Reels streams. Ads encouraging users to visit Disneyland for the holidays ran next to a video of an adult acting out having sex with her father, and another of a young woman in lingerie with fake blood dripping from her mouth. An ad for Hims ran shortly after a video depicting an apparently anguished woman in a sexual situation along with a link to what was described as “the full video.”
  • Instagram’s system served jarring doses of salacious content to those test accounts, including risqué footage of children as well as overtly sexual adult videos—and ads for some of the biggest U.S. brands.
  • Part of the problem is that automated enforcement systems have a harder time parsing video content than text or still images. Another difficulty arises from how Reels works: Rather than showing content shared by users’ friends, the way other parts of Instagram and Facebook often do, Reels promotes videos from sources they don’t follow
  • In an analysis conducted shortly before the introduction of Reels, Meta’s safety staff flagged the risk that the product would chain together videos of children and inappropriate content, according to two former staffers. Vaishnavi J, Meta’s former head of youth policy, described the safety review’s recommendation as: “Either we ramp up our content detection capabilities, or we don’t recommend any minor content,” meaning any videos of children.
  • At the time, TikTok was growing rapidly, drawing the attention of Instagram’s young users and the advertisers targeting them. Meta didn’t adopt either of the safety analysis’s recommendations at that time, according to J.
  • Stetson, Meta’s liaison with digital-ad buyers, disputed that Meta had neglected child safety concerns ahead of the product’s launch. “We tested Reels for nearly a year before releasing it widely, with a robust set of safety controls and measures,” she said. 
  • After initially struggling to maximize the revenue potential of its Reels product, Meta has improved how its algorithms recommend content and personalize video streams for users
  • Among the ads that appeared regularly in the Journal’s test accounts were those for “dating” apps and livestreaming platforms featuring adult nudity, massage parlors offering “happy endings” and artificial-intelligence chatbots built for cybersex. Meta’s rules are supposed to prohibit such ads.
  • The Journal informed Meta in August about the results of its testing. In the months since then, tests by both the Journal and the Canadian Centre for Child Protection show that the platform continued to serve up a series of videos featuring young children, adult content and apparent promotions for child sex material hosted elsewhere. 
  • As of mid-November, the center said Instagram is continuing to steadily recommend what the nonprofit described as “adults and children doing sexual posing.”
  • Meta hasn’t offered a timetable for resolving the problem or explained how in the future it would restrict the promotion of inappropriate content featuring children. 
  • The Journal’s test accounts found that the problem even affected Meta-related brands. Ads for the company’s WhatsApp encrypted chat service and Meta’s Ray-Ban Stories glasses appeared next to adult pornography. An ad for Lean In Girls, the young women’s empowerment nonprofit run by former Meta Chief Operating Officer Sheryl Sandberg, ran directly before a promotion for an adult sex-content creator who often appears in schoolgirl attire. Sandberg declined to comment. 
  • Through its own tests, the Canadian Centre for Child Protection concluded that Instagram was regularly serving videos and pictures of clothed children who also appear in the National Center for Missing and Exploited Children’s digital database of images and videos confirmed to be child abuse sexual material. The group said child abusers often use the images of the girls to advertise illegal content for sale in dark-web forums.
  • The nature of the content—sexualizing children without generally showing nudity—reflects the way that social media has changed online child sexual abuse, said Lianna McDonald, executive director for the Canadian center. The group has raised concerns about the ability of Meta’s algorithms to essentially recruit new members of online communities devoted to child sexual abuse, where links to illicit content in more private forums proliferate.
  • “Time and time again, we’ve seen recommendation algorithms drive users to discover and then spiral inside of these online child exploitation communities,” McDonald said, calling it disturbing that ads from major companies were subsidizing that process.
Javier E

The Economic Case for Regulating Social Media - The New York Times - 0 views

  • Social media platforms like Facebook, YouTube and Twitter generate revenue by using detailed behavioral information to direct ads to individual users.
  • this bland description of their business model fails to convey even a hint of its profound threat to the nation’s political and social stability.
  • legislators in Congress to propose the breakup of some tech firms, along with other traditional antitrust measures. But the main hazard posed by these platforms is not aggressive pricing, abusive service or other ills often associated with monopoly.
  • ...16 more annotations...
  • Instead, it is their contribution to the spread of misinformation, hate speech and conspiracy theories.
  • digital platforms, since the marginal cost of serving additional consumers is essentially zero. Because the initial costs of producing a platform’s content are substantial, and because any company’s first goal is to remain solvent, it cannot just give stuff away. Even so, when price exceeds marginal cost, competition relentlessly pressures rival publishers to cut prices — eventually all the way to zero. This, in a nutshell, is the publisher’s dilemma in the digital age.
  • These firms make money not by charging for access to content but by displaying it with finely targeted ads based on the specific types of things people have already chosen to view. If the conscious intent were to undermine social and political stability, this business model could hardly be a more effective weapon.
  • The algorithms that choose individual-specific content are crafted to maximize the time people spend on a platform
  • As the developers concede, Facebook’s algorithms are addictive by design and exploit negative emotional triggers. Platform addiction drives earnings, and hate speech, lies and conspiracy theories reliably boost addiction.
  • the subscription model isn’t fully efficient: Any positive fee would inevitably exclude at least some who would value access but not enough to pay the fee
  • a conservative think tank, says, for example, that government has no business second-guessing people’s judgments about what to post or read on social media.
  • That position would be easier to defend in a world where individual choices had no adverse impact on others. But negative spillover effects are in fact quite common
  • individual and collective incentives about what to post or read on social media often diverge sharply.
  • There is simply no presumption that what spreads on these platforms best serves even the individual’s own narrow interests, much less those of society as a whole.
  • a simpler step may hold greater promise: Platforms could be required to abandon that model in favor of one relying on subscriptions, whereby members gain access to content in return for a modest recurring fee.
  • Major newspapers have done well under this model, which is also making inroads in book publishing. The subscription model greatly weakens the incentive to offer algorithmically driven addictive content provided by individuals, editorial boards or other sources.
  • Careful studies have shown that Facebook’s algorithms have increased political polarization significantly
  • More worrisome, those excluded would come disproportionately from low-income groups. Such objections might be addressed specifically — perhaps with a modest tax credit to offset subscription fees — or in a more general way, by making the social safety net more generous.
  • Adam Smith, the 18th-century Scottish philosopher widely considered the father of economics, is celebrated for his “invisible hand” theory, which describes conditions under which market incentives promote socially benign outcomes. Many of his most ardent admirers may view steps to constrain the behavior of social media platforms as regulatory overreach.
  • But Smith’s remarkable insight was actually more nuanced: Market forces often promote society’s welfare, but not always. Indeed, as he saw clearly, individual interests are often squarely at odds with collective aspirations, and in many such instances it is in society’s interest to intervene. The current information crisis is a case in point.
Javier E

Opinion | Gen Z Has Regrets - The New York Times - 0 views

  • Was social media a good invention? One way to quantify the value of a product is to find out how many of the people who use it wish it had never been invented. Feelings of regret or resentment are common with addictive products (cigarettes, for example) and addictive activities like gambling, even if most users say they enjoy them.
  • What about social media platforms? They achieved global market penetration faster than almost any product in history. The category took hold in the early aughts with Friendster, MySpace and the one that rose to dominance: Facebook. By 2020, more than half of all humans were using some form of social media
  • But it turns out that it can be hard for people who don’t like social media to avoid it, because when everyone else is on it, the abstainers begin to miss out on information, trends and gossip
  • ...25 more annotations...
  • if this were any normal product we’d assume that people love it and are grateful to the companies that provide it to them — without charge, no less.
  • Turning to their own lives, 52 percent of the total sample say social media has benefited their lives, and 29 percent say it has hurt them personally.
  • Is it more like walkie-talkies, where hardly anyone wished they had never been invented? Or is it more like cigarettes, where smokers often say they enjoy smoking, but more than 71 percent of smokers (in one 2014 survey) regret ever starting?
  • So what does Gen Z really think about social media?
  • We recently collaborated on a nationally representative survey of 1,006 Gen Z adults (ages 18-27). We asked them online about their own social media use, about their views on the effects of social media on themselves and on society and about what kinds of reforms they’d support.
  • First, the number of hours spent on social media each day is astonishing. Over 60 percent of our respondents said they spend at least four hours a day, with 23 percent saying they spend seven or more hours each day using social media.
  • Second, our respondents recognize the harm that social media causes society, with 60 percent saying it has a negative impact (versus 32 percent who say it has a positive impact).
  • This is especially painful for adolescents, whose social networks have migrated, since the early 2010s, onto a few giant platforms. Nearly all American teenagers use social media regularly, and they spend an average of nearly five hours a day just on these platforms.
  • Although the percentage citing specific personal benefits was usually higher than those citing harms, this was less true for women and L.G.B.T.Q. respondents
  • For example, 37 percent of respondents said social media had a negative impact on their emotional health, with significantly more women (44 percent) than men (31 percent), and with more L.G.B.T.Q. (47 percent) than non-L.G.B.T.Q. respondents (35 percent) saying so. We have found this pattern — that social media disproportionately hurts young people from historically disadvantaged groups — in a wide array of surveys.
  • And even when more respondents cite more benefits than harms, that does not justify the unregulated distribution of a consumer product that is hurting — damaging, really — millions of children and young adults
  • Our survey shows that many Gen Z-ers see substantial dangers and costs from social media. A majority of them want better and safer platforms, and many don’t think these platforms are suitable for children
  • If any other consumer product was causing serious harm to more than one out of every 10 of its young users, there would be a tidal wave of state and federal legislation to ban or regulate it.
  • Turning to the ultimate test of regret versus gratitude: We asked respondents to tell us, for various platforms and products, if they wished that it “was never invented.”
  • Five items produced relatively low levels of regret: YouTube (15 percent), Netflix (17 percent), the internet itself (17 percent), messaging apps (19 percent) and the smartphone (21 percent).
  • We interpret these low numbers as indicating that Gen Z does not heavily regret the basic communication, storytelling and information-seeking functions of the internet.
  • responses were different for the main social media platforms that parents and Gen Z itself worry about most. Many more respondents wished these products had never been invented: Instagram (34 percent), Facebook (37 percent), Snapchat (43 percent), and the most regretted platforms of all: TikTok (47 percent) and X/Twitter (50 percent).
  • We’re not just talking about sad feelings from FOMO or social comparison. We’re talking about a range of documented risks that affect heavy users, including sleep deprivation, body image distortion, depression, anxiety, exposure to content promoting suicide and eating disorders, sexual predation and sextortion, and “problematic use,” which is the term psychologists use to describe compulsive overuse that interferes with success in other areas of life.
  • Forty-five percent of Gen Z-ers report that they “would not or will not allow my child to have a smartphone before reaching high school age (i.e. about 14 years old)”
  • 57 percent support the idea that parents should restrict their child’s access to smartphones before that age
  • Although only 36 percent support social media bans for those under the age of 16, 69 percent support a law requiring social media companies to develop a child-safe option for users under 18.
  • This high level of support is true across race, gender, social class and sexual orientation.
  • it has important implications for the House of Representatives, which is considering just such a bill, the Kids Online Safety Act. The bill would, among other things, disable addictive product features, require tech companies to offer young users the option to use non-personalized algorithmic feeds and mandate that platforms default to the safest settings possible for accounts believed to be held by minors.
  • imagine if walkie-talkies were harming millions of young people. Imagine if more than a third of young people wished that walkie-talkies didn’t exist, yet still felt compelled to use them for five hours every day.
  • We’d insist that the manufacturers make their products safer and less addictive for kids. Social media companies must be held to the same standard: Either fix their products to ensure the safety of young users or stop providing them to children altogether.
katherineharron

Snapchat to stop promoting Trump after controversial posts - CNN - 0 views

  • Snapchat (SNAP) will no longer promote President Donald Trump's account on its platform in the wake of his controversial comments on ongoing protests across the US, the company announced Wednesday.
  • "We are not currently promoting the President's content on Snapchat's Discover platform. We will not amplify voices who incite racial violence and injustice by giving them free promotion on Discover," Rachel Racusen, a spokesperson for Snap, Snapchat's parent company, said in a statement Wednesday.
  • "Racial violence and injustice have no place in our society and we stand together with all who seek peace, love, equality, and justice in America,"
  • ...3 more annotations...
  • In a memo to Snap staff on Sunday, the company's CEO, Evan Spiegel, wrote, "we simply cannot promote accounts in America that are linked to people who incite racial violence, whether they do so on or off our platform. Our Discover content platform is a curated platform, where we decide what we promote. We have spoken time and again about working hard to make a positive impact, and we will walk the talk with the content we promote on Snapchat. We may continue to allow divisive people to maintain an account on Snapchat, as long as the content that is published on Snapchat is consistent with our community guidelines, but we will not promote that account or content in any way."
  • "There are plenty of debates to be had about the future of our country and the world. But there is simply no room for debate in our country about the value of human life and the importance of a constant struggle for freedom, equality, and justice. We are standing with all those who stand for peace, love, and justice and we will use our platform to promote good rather than evil."
  • Last week, Twitter for the first time affixed a fact-check label to multiple Trump tweets about mail-in ballots and days later put a warning label on a tweet from Trump about the protest, in which he warned: "when the looting starts, the shooting starts." Facebook chose not to act on identical posts that appeared on its platform.
criscimagnael

Social media's toxic content can harm teens | News | Harvard T.H. Chan School of Public... - 0 views

  • social media platforms—especially image-based platforms like Instagram—have very harmful effects on teen mental health, especially for teens struggling with body image, anxiety, depression, and eating disorders.
  • we know that Instagram, with its algorithmically-driven feeds of content tailored to each user’s engagement patterns, can draw vulnerable teens into a dangerous spiral of negative social comparison and hook them onto unrealistic ideals of appearance and body size and shape.
  • Keep in mind that this is not just about putting teens in a bad mood. Over time, with exposure to harmful content on social media, the negative impacts add up. And we now have more cause for worry than ever, with the pandemic worsening mental health stressors and social isolation for teens, pushing millions of youth to increase their social media use. We are witnessing dramatic increases in clinical level depression, anxiety, and suicidality, and eating disorders cases have doubled or even tripled at children’s hospitals across the country.
  • ...5 more annotations...
  • The company knows that strong negative emotions, which can be provoked by negative social comparison, keep users’ attention longer than other emotions—and Instagram’s algorithms are expressly designed to push teens toward toxic content so that they stay on the platform.
  • Instagram is peddling a false narrative that the platform is simply a reflection of its users’ interests and experiences, without distortion or manipulation by the platform. But Instagram knows full well that this is not true. In fact, their very business model is predicated on how much they can manipulate users’ behavior to boost engagement and extend time spent on the platform, which the platform then monetizes to sell to advertisers.
  • The business model, which has proven itself to be exquisitely profitable, is self-reinforcing for investors and top management.
  • Although it’s a real struggle for parents to keep their kids off social media, they can set limits on its use, for instance by requiring that everyone’s phones go into a basket at mealtimes and at bedtime.
  • With all that we know today about the harmful effects of social media and its algorithms, combined with the powerful stories of teens, parents, and community advocates, we may finally have the opportunity to get meaningful federal regulation in place.
Javier E

[Six Questions] | Astra Taylor on The People's Platform: Taking Back Power and Culture ... - 1 views

  • Astra Taylor, a cultural critic and the director of the documentaries Zizek! and Examined Life, challenges the notion that the Internet has brought us into an age of cultural democracy. While some have hailed the medium as a platform for diverse voices and the free exchange of information and ideas, Taylor shows that these assumptions are suspect at best. Instead, she argues, the new cultural order looks much like the old: big voices overshadow small ones, content is sensationalist and powered by advertisements, quality work is underfunded, and corporate giants like Google and Facebook rule. The Internet does offer promising tools, Taylor writes, but a cultural democracy will be born only if we work collaboratively to develop the potential of this powerful resource
  • Most people don’t realize how little information can be conveyed in a feature film. The transcripts of both of my movies are probably equivalent in length to a Harper’s cover story.
  • why should Amazon, Apple, Facebook, and Google get a free pass? Why should we expect them to behave any differently over the long term? The tradition of progressive media criticism that came out of the Frankfurt School, not to mention the basic concept of political economy (looking at the way business interests shape the cultural landscape), was nowhere to be seen, and that worried me. It’s not like political economy became irrelevant the second the Internet was invented.
  • ...15 more annotations...
  • How do we reconcile our enjoyment of social media even as we understand that the corporations who control them aren’t always acting in our best interests?
  • That was because the underlying economic conditions hadn’t been changed or “disrupted,” to use a favorite Silicon Valley phrase. Google has to serve its shareholders, just like NBCUniversal does. As a result, many of the unappealing aspects of the legacy-media model have simply carried over into a digital age — namely, commercialism, consolidation, and centralization. In fact, the new system is even more dependent on advertising dollars than the one that preceded it, and digital advertising is far more invasive and ubiquitous
  • the popular narrative — new communications technologies would topple the establishment and empower regular people — didn’t accurately capture reality. Something more complex and predictable was happening. The old-media dinosaurs weren’t dying out, but were adapting to the online environment; meanwhile the new tech titans were coming increasingly to resemble their predecessors
  • I use lots of products that are created by companies whose business practices I object to and that don’t act in my best interests, or the best interests of workers or the environment — we all do, since that’s part of living under capitalism. That said, I refuse to invest so much in any platform that I can’t quit without remorse
  • these services aren’t free even if we don’t pay money for them; we pay with our personal data, with our privacy. This feeds into the larger surveillance debate, since government snooping piggybacks on corporate data collection. As I argue in the book, there are also negative cultural consequences (e.g., when advertisers are paying the tab we get more of the kind of culture marketers like to associate themselves with and less of the stuff they don’t) and worrying social costs. For example, the White House and the Federal Trade Commission have both recently warned that the era of “big data” opens new avenues of discrimination and may erode hard-won consumer protections.
  • I’m resistant to the tendency to place this responsibility solely on the shoulders of users. Gadgets and platforms are designed to be addictive, with every element from color schemes to headlines carefully tested to maximize clickability and engagement. The recent news that Facebook tweaked its algorithms for a week in 2012, showing hundreds of thousands of users only “happy” or “sad” posts in order to study emotional contagion — in other words, to manipulate people’s mental states — is further evidence that these platforms are not neutral. In the end, Facebook wants us to feel the emotion of wanting to visit Facebook frequently
  • social inequalities that exist in the real world remain meaningful online. What are the particular dangers of discrimination on the Internet?
  • That it’s invisible or at least harder to track and prove. We haven’t figured out how to deal with the unique ways prejudice plays out over digital channels, and that’s partly because some folks can’t accept the fact that discrimination persists online. (After all, there is no sign on the door that reads Minorities Not Allowed.)
  • just because the Internet is open doesn’t mean it’s equal; offline hierarchies carry over to the online world and are even amplified there. For the past year or so, there has been a lively discussion taking place about the disproportionate and often outrageous sexual harassment women face simply for entering virtual space and asserting themselves there — research verifies that female Internet users are dramatically more likely to be threatened or stalked than their male counterparts — and yet there is very little agreement about what, if anything, can be done to address the problem.
  • What steps can we take to encourage better representation of independent and non-commercial media? We need to fund it, first and foremost. As individuals this means paying for the stuff we believe in and want to see thrive. But I don’t think enlightened consumption can get us where we need to go on its own. I’m skeptical of the idea that we can shop our way to a better world. The dominance of commercial media is a social and political problem that demands a collective solution, so I make an argument for state funding and propose a reconceptualization of public media. More generally, I’m struck by the fact that we use these civic-minded metaphors, calling Google Books a “library” or Twitter a “town square” — or even calling social media “social” — but real public options are off the table, at least in the United States. We hand the digital commons over to private corporations at our peril.
  • 6. You advocate for greater government regulation of the Internet. Why is this important?
  • I’m for regulating specific things, like Internet access, which is what the fight for net neutrality is ultimately about. We also need stronger privacy protections and restrictions on data gathering, retention, and use, which won’t happen without a fight.
  • I challenge the techno-libertarian insistence that the government has no productive role to play and that it needs to keep its hands off the Internet for fear that it will be “broken.” The Internet and personal computing as we know them wouldn’t exist without state investment and innovation, so let’s be real.
  • there’s a pervasive and ill-advised faith that technology will promote competition if left to its own devices (“competition is a click away,” tech executives like to say), but that’s not true for a variety of reasons. The paradox of our current media landscape is this: our devices and consumption patterns are ever more personalized, yet we’re simultaneously connected to this immense, opaque, centralized infrastructure. We’re all dependent on a handful of firms that are effectively monopolies — from Time Warner and Comcast on up to Google and Facebook — and we’re seeing increased vertical integration, with companies acting as both distributors and creators of content. Amazon aspires to be the bookstore, the bookshelf, and the book. Google isn’t just a search engine, a popular browser, and an operating system; it also invests in original content
  • So it’s not that the Internet needs to be regulated but that these big tech corporations need to be subject to governmental oversight. After all, they are reaching farther and farther into our intimate lives. They’re watching us. Someone should be watching them.
Javier E

Can Social Networks Do Better? We Don't Know Because They Haven't Tried - Talking Point... - 0 views

  • it’s not fair to say it’s Facebook or a Facebook problem. Facebook is just the latest media and communications medium. We hardly blame the technology of the book for spreading anti-Semitism via the notorious Protocols of the Elders of Zion
  • But of course, it’s not that simple. Social media platforms have distinct features that earlier communications media did not. The interactive nature of the media, the collection of data which is then run through algorithms and artificial intelligence creates something different.
  • All social media platforms are engineered with two basic goals: maximize the time you spend on the platform and make advertising as effective and thus as lucrative as possible. This means that social media can never be simply a common carrier, a distribution technology that has no substantial influence over the nature of the communication that travels over it.
  • ...5 more annotations...
  • it’s a substantial difference which deprives social media platforms of the kind of hands-off logic that would make it ridiculous to say phones are bad or the phone company is responsible if planning for a mass murder was carried out over the phone.
  • the Internet doesn’t ‘do’ anything more than make the distribution of information more efficient and radically lower the formal, informal and financial barriers to entry that used to stand in the way of various marginalized ideas.
  • Social media can never plead innocence like this because the platforms are designed to addict you and convince you of things.
  • If the question is: what can social media platforms do to protect against government-backed subversion campaigns like the one we saw in the 2016 campaign the best answer is, we don’t know. And we don’t know for a simple reason: they haven’t tried.
  • The point is straightforward: the mass collection of data, harnessed to modern computing power and the chance to amass unimaginable wealth has spurred vast technological innovation.
Javier E

An Unholy Alliance Between Ye, Musk, and Trump - The Atlantic - 0 views

  • Musk, Trump, and Ye are after something different: They are all obsessed with setting the rules of public spaces.
  • An understandable consensus began to form on the political left that large social networks, but especially Facebook, helped Trump rise to power. The reasons were multifaceted: algorithms that gave a natural advantage to the most shameless users, helpful marketing tools that the campaign made good use of, a confusing tangle of foreign interference (the efficacy of which has always been tough to suss out), and a basic attentional architecture that helps polarize and pit Americans against one another (no foreign help required).
  • The misinformation industrial complex—a loosely knit network of researchers, academics, journalists, and even government entities—coalesced around this moment. Different phases of the backlash homed in on bots, content moderation, and, after the Cambridge Analytica scandal, data privacy
  • ...15 more annotations...
  • the broad theme was clear: Social-media platforms are the main communication tools of the 21st century, and they matter.
  • With Trump at the center, the techlash morphed into a culture war with a clear partisan split. One could frame the position from the left as: We do not want these platforms to give a natural advantage to the most shameless and awful people who stoke resentment and fear to gain power
  • On the right, it might sound more like: We must preserve the power of the platforms to let outsiders have a natural advantage (by stoking fear and resentment to gain power).
  • the political world realized that platforms and content-recommendation engines decide which cultural objects get amplified. The left found this troubling, whereas the right found it to be an exciting prospect and something to leverage, exploit, and manipulate via the courts
  • Crucially, both camps resent the power of the technology platforms and believe the companies have a negative influence on our discourse and politics by either censoring too much or not doing enough to protect users and our political discourse.
  • one outcome of the techlash has been an incredibly facile public understanding of content moderation and a whole lot of culture warring.
  • Musk and Ye aren’t so much buying into the right’s overly simplistic Big Tech culture war as they are hijacking it for their own purposes; Trump, meanwhile, is mostly just mad
  • Each one casts himself as an antidote to a heavy-handed, censorious social-media apparatus that is either captured by progressive ideology or merely pressured into submission by it. But none of them has any understanding of thorny First Amendment or content-moderation issues.
  • They embrace a shallow posture of free-speech maximalism—the very kind that some social-media-platform founders first espoused, before watching their sites become overrun with harassment, spam, and other hateful garbage that drives away both users and advertisers
  • for those who can hit the mark without getting banned, social media is a force multiplier for cultural and political relevance and a way around gatekeeping media.
  • Musk, Ye, and Trump rely on their ability to pick up their phones, go direct, and say whatever they want
  • the moment they butt up against rules or consequences, they begin to howl about persecution and unfair treatment. The idea of being treated similarly to the rest of a platform’s user base is so galling to these men that they declare the entire system to be broken.
  • they also demonstrate how being the Main Character of popular and political culture can totally warp perspective. They’re so blinded by their own outlying experiences across social media that, in most cases, they hardly know what it is they’re buying
  • These are projects motivated entirely by grievance and conflict. And so they are destined to amplify grievance and conflict
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • ...39 more annotations...
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. 
  • “There’s a grading-your-own-homework problem,”
  • Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
Javier E

Social Media and the Devolution of Friendship: Full Essay (Pts I & II) » Cybo... - 1 views

  • social networking sites create pressure to put time and effort into tending weak ties, and how it can be impossible to keep up with them all. Personally, I also find it difficult to keep up with my strong ties. I’m a great “pick up where we left off” friend, as are most of the people closest to me (makes sense, right?). I’m decidedly sub-awesome, however, at being in constant contact with more than a few people at a time.
  • the devolution of friendship. As I explain over the course of this essay, I link the devolution of friendship to—but do not “blame” it on—the affordances of various social networking platforms, especially (but not exclusively) so-called “frictionless sharing” features.
  • I’m using the word here in the same way that people use it to talk about the devolution of health care. One example of devolution of health care is some outpatient surgeries: patients are allowed to go home after their operations, but they still require a good deal of post-operative care such as changing bandages, irrigating wounds, administering medications, etc. Whereas before these patients would stay in the hospital and nurses would perform the care-labor necessary for their recoveries, patients must now find their own caregivers (usually family members or friends; sometimes themselves) to perform free care-labor. In this context, devolution marks the shift of labor and responsibility away from the medical establishment and onto the patient; within the patient-medical establishment collaboration, the patient must now provide a greater portion of the necessary work. Similarly, in some ways, we now expect our friends to do a greater portion of the work of being friends with us.
  • ...13 more annotations...
  • Through social media, “sharing with friends” is rationalized to the point of relentless efficiency. The current apex of such rationalization is frictionless sharing: we no longer need to perform the labor of telling our individual friends about what we read online, or of copy-pasting links and emailing them to “the list,” or of clicking a button for one-step posting of links on our Facebook walls. With frictionless sharing, all we have to do is look, or listen; what we’ve read or watched or listened to is then “shared” or “scrobbled” to our Facebook, Twitter, Tumblr, or whatever other online profiles. Whether we share content actively or passively, however, we feel as though we’ve done our half of the friendship-labor by ‘pushing’ the information to our walls, streams, and tumblelogs. It’s then up to our friends to perform their halves of the friendship-labor by ‘pulling’ the information we share from those platforms.
  • We’re busy people; we like the idea of making one announcement on Facebook and being done with it, rather than having to repeat the same story over and over again to different friends individually. We also like not always having to think about which friends might like which stories or songs; we like the idea of sharing with all of our friends at once, and then letting them sort out amongst themselves who is and isn’t interested. Though social media can create burdensome expectations to keep up with strong ties, weak ties, and everyone in between, social media platforms can also be very efficient. Using the same moment of friendship-labor to tend multiple friendships at once kills more birds with fewer stones.
  • sometimes we like the devolution of friendship. When we have to ‘pull’ friendship-content instead of receiving it in a ‘push’, we can pick and choose which content items to pull. We can ignore the baby pictures, or the pet pictures, or the sushi pictures—whatever it is our friends post that we only pretend to care about
  • I’ve been thinking since, however, on what it means to view our friends as “generalized others.” I may now feel like less of like “creepy stalker” when I click on a song in someone’s Spotify feed, but I don’t exactly feel ‘shared with’ either. Far as I know, I’ve never been SpotiVaguebooked (or SubSpotified?); I have no reason to think anyone is speaking to me personally as they listen to music, or as they choose not to disable scrobbling (if they make that choice consciously at all). I may have been granted the opportunity to view something, but it doesn’t follow that what I’m viewing has anything to do with me unless I choose to make it about me. Devolved friendship means it’s not up to us to interact with our friends personally; instead it’s now up to our friends to make our generalized broadcasts personal.
  • While I won’t go so far as to say they’re definitely ‘problems,’ there are two major things about devolved friendship that I think are worth noting. The first is the non-uniform rationalization of friendship-labor, and the second is the depersonalization of friendship-labor.
  • Within devolved friendship interactions, it takes less effort to be polite while secretly waiting for someone to please just stop talking.
  • The second thing worth noting is that devolved friendship is also depersonalized friendship.
  • Personal interaction doesn’t just happen on Spotify, and since I was hoping Spotify would be the New Porch, I initially found Spotify to be somewhat lonely-making. It’s the mutual awareness of presence that gives companionate silence its warmth, whether in person or across distance. The silence within Spotify’s many sounds, on the other hand, felt more like being on the outside looking in. This isn’t to say that Spotify can’t be social in a more personal way; once I started sending tracks to my friends, a few of them started sending tracks in return. But it took a lot more work to get to that point, which gets back to the devolution of friendship (as I explain below).
  • In short, “sharing” has become a lot easier and a lot more efficient, but “being shared with” has become much more time-consuming, demanding, and inefficient (especially if we don’t ignore most of our friends most of the time). Given this, expecting our friends to keep up with our social media content isn’t expecting them to meet us halfway; it’s asking them to take on the lion’s share of staying in touch with us. Our jobs (in this role) have gotten easier; our friends’ jobs have gotten harder.
  • When we consider the lopsided rationalization of ‘sharing’ and ‘shared with,’ as well as the depersonalization of frictionless sharing and generalized broadcasting, what becomes clear is this: the social media deck is stacked in such a way as to make being ‘a self’ easier and more rewarding than being ‘a friend.’
  • It’s easy to share, to broadcast, to put our selves and our tastes and our identity performances out into the world for others to consume; what feedback and friendship we get in return comes in response to comparatively little effort and investment from us. It takes a lot more work, however, to do the consumption, to sift through everything all (or even just some) of our friends produce, to do the work of connecting to our friends’ generalized broadcasts so that we can convert their depersonalized shares into meaningful friendship-labor.
  • We may be prosumers of social media, but the reward structures of social media sites encourage us to place greater emphasis on our roles as share-producers—even though many of us probably spend more time consuming shared content than producing it. There’s a reason for this, of course; the content we produce (for free) is what fuels every last ‘Web 2.0’ machine, and its attendant self-centered sociality is the linchpin of the peculiarly Silicon Valley concept of “Social” (something Nathan Jurgenson and I discuss together in greater detail here). It’s not super-rewarding to be one of ten people who “like” your friend’s shared link, but it can feel rewarding to get 10 “likes” on something you’ve shared—even if you have hundreds or thousands of ‘friends.’ Sharing is easy; dealing with all that shared content is hard.
  • But I wonder sometimes if the shifts in expectation that accompany devolved friendship don’t migrate across platforms and contexts in ways we don’t always see or acknowledge. Social media affects how we see the world—and how we feel about being seen in the world—even when we’re not engaged directly with social media websites. It’s not a stretch, then, to imagine that the affordances of social media platforms might also affect how we see friendship and our obligations as friends most generally.
peterconnelly

Opinion: Texas' new social media law affects all of us - CNN - 0 views

  • Earlier this month, a federal appeals court ruled that a Texas law, which allows residents to sue social media companies for "censoring" what they post, could go into effect.
  • The biggest challenge facing social media companies today is doing exactly what HB 20 seems to disallow: removing misinformation and hate speech.
  • They can remove toxic content like misinformation and hate speech and get tied up in bottomless, costly lawsuits. They can let their platforms turn into cesspools of hate and misinformation and watch people stop using them altogether. Or they can just stop offering their services in Texas, which also exposes them to potential liability since the law makes it illegal for social media platforms to discriminate against Texans based on their location.
  • ...3 more annotations...
  • The state law, referred to as HB 20, makes it illegal for large social networks like Facebook and Twitter to "block, ban, remove, de-platform, demonetize, de-boost, restrict, deny equal access or visibility to, or otherwise discriminate against expression."
  • HB 20 does carve out exemptions, including those that allow social networks to remove content that "directly incites criminal activity or consists of specific threats of violence targeted against a person or group" based on certain characteristics, or that "is unlawful expression."
  • We need to fix social networks by removing toxic content. This month's appeals court ruling does the exact opposite and could even deal a fatal blow to social media as we know it. The only thing worse than not fixing the social platforms we have now would be to see them be subject to a constant slew of lawsuits or devolve into platforms that become bastions of hate speech and misinformation. Let's hope Congress doesn't let us down.
criscimagnael

Social media CEO hopes to 'remove any temptation for bad behavior' from its platform - 0 views

  • A new platform being beta-tested hopes to re-introduce an element of what many social media companies have seemingly lost sight of amid industry controversies — maintaining a sense of community.
  • There are no likes. There are no followers. There are no algorithms tracking you. We've removed any temptation for bad behavior that you'll find on other platforms
  • However, Austin stressed how her platform will be very different from Twitter and other social media networks where online discord can create toxic digital environments.
  • ...2 more annotations...
  • "In terms of moderation, we have a lot of thoughts — we're trying to move very slowly and intentionally around what the experience is like," Austin said. "I will say, unequivocally, that we won't allow any language that revolves around hate and all the encompassing ways of what that can mean."
  • "[We're] definitely [going] for scale, but I think in scale there can be very different groups inside that space," Austin said. "I think fragmentation has been happening since the beginning of the internet, we just had these larger conglomerates come in and bring everyone to that space."
Javier E

When a Shitposter Runs a Social Media Platform - The Bulwark - 0 views

  • This is an unfortunate and pernicious pattern. Musk often refers to himself as moderate or independent, but he routinely treats far-right fringe figures as people worth taking seriously—and, more troublingly, as reliable sources of information.
  • By doing so, he boosts their messages: A message retweeted by or receiving a reply from Musk will potentially be seen by millions of people.
  • Also, people who pay for Musk’s Twitter Blue badges get a lift in the algorithm when they tweet or reply; because of the way Twitter Blue became a culture war front, its subscribers tend to skew to the right
  • ...19 more annotations...
  • The important thing to remember amid all this, and the thing that has changed the game when it comes to the free speech/content moderation conversation, is that Elon Musk himself loves conspiracy theories
  • The media isn’t just unduly critical—a perennial sore spot for Musk—but “all news is to some degree propaganda,” meaning he won’t label actual state-affiliated propaganda outlets on his platform to distinguish their stories from those of the New York Times.
  • In his mind, they’re engaged in the same activity, so he strikes the faux-populist note that the people can decide for themselves what is true, regardless of objectively very different track records from different sources.
  • Musk’s “just asking questions” maneuver is a classic Trump tactic that enables him to advertise conspiracy theories while maintaining a sort of deniability.
  • At what point should we infer that he’s taking the concerns of someone like Loomer seriously not despite but because of her unhinged beliefs?
  • Musk’s skepticism seems largely to extend to criticism of the far-right, while his credulity for right-wing sources is boundless.
  • This is part of the argument for content moderation that limits the dispersal of bullshit: People simply don’t have the time, energy, or inclination to seek out the boring truth when stimulated by some online outrage.
  • Refuting bullshit requires some technological literacy, perhaps some policy knowledge, but most of all it requires time and a willingness to challenge your own prior beliefs, two things that are in precious short supply online.
  • Brandolini’s Law holds that the amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
  • Here we can return to the example of Loomer’s tweet. People did fact-check her, but it hardly matters: Following Musk’s reply, she ended up receiving over 5 million views, an exponentially larger online readership than is normal for her. In the attention economy, this counts as a major win. “Thank you so much for posting about this, @elonmusk!” she gushed in response to his reply. “I truly appreciate it.”
  • the problem isn’t limited to elevating Loomer. Musk had his own stock of misinformation to add to the pile. After interacting with her account, Musk followed up last Tuesday by tweeting out a 2021 Federalist article claiming that Facebook founder Mark Zuckerberg had “bought” the 2020 election, an allegation previously raised by Trump and others, and which Musk had also brought up during his recent interview with Tucker Carlson.
  • If Zuckerberg wanted to use his vast fortune to tip the election, it would have been vastly more efficient to create a super PAC with targeted get-out-the-vote operations and advertising. Notwithstanding legitimate criticisms one can make about Facebook’s effect on democracy, and whatever Zuckerberg’s motivations, you have to squint hard to see this as something other than a positive act addressing a real problem.
  • It’s worth mentioning that the refutations I’ve just sketched of the conspiratorial claims made by Loomer and Musk come out to around 1,200 words. The tweets they wrote, read by millions, consisted of fewer than a hundred words in total. That’s Brandolini’s Law in action—an illustration of why Musk’s cynical free-speech-over-all approach amounts to a policy in favor of disinformation and against democracy.
  • Moderation is a subject where Zuckerberg’s actions provide a valuable point of contrast with Musk. Through Facebook’s independent oversight board, which has the power to overturn the company’s own moderation decisions, Zuckerberg has at least made an effort to have credible outside actors inform how Facebook deals with moderation issues
  • Meanwhile, we are still waiting on the content moderation council that Elon Musk promised last October:
  • The problem is about to get bigger than unhinged conspiracy theorists occasionally receiving a profile-elevating reply from Musk. Twitter is the venue that Tucker Carlson, whom advertisers fled and Fox News fired after it agreed to pay $787 million to settle a lawsuit over its election lies, has chosen to make his comeback. Carlson and Musk are natural allies: They share an obsessive anti-wokeness, a conspiratorial mindset, and an unaccountable sense of grievance peculiar to rich, famous, and powerful men who have taken it upon themselves to rail against the “elites,” however idiosyncratically construed
  • If the rumors are true that Trump is planning to return to Twitter after an exclusivity agreement with Truth Social expires in June, Musk’s social platform might be on the verge of becoming a gigantic rec room for the populist right.
  • These days, Twitter increasingly feels like a neighborhood where the amiable guy-next-door is gone and you suspect his replacement has a meth lab in the basement.
  • even if Twitter’s increasingly broken information environment doesn’t sway the results, it is profoundly damaging to our democracy that so many people have lost faith in our electoral system. The sort of claims that Musk is toying with in his feed these days do not help. It is one thing for the owner of a major source of information to be indifferent to the content that gets posted to that platform. It is vastly worse for an owner to actively fan the flames of disinformation and doubt.
Javier E

The Israel-Hamas War Shows Just How Broken Social Media Has Become - The Atlantic - 0 views

  • major social platforms have grown less and less relevant in the past year. In response, some users have left for smaller competitors such as Bluesky or Mastodon. Some have simply left. The internet has never felt more dense, yet there seem to be fewer reliable avenues to find a signal in all the noise. One-stop information destinations such as Facebook or Twitter are a thing of the past. The global town square—once the aspirational destination that social-media platforms would offer to all of us—lies in ruins, its architecture choked by the vines and tangled vegetation of a wild informational jungle
  • Musk has turned X into a deepfake version of Twitter—a facsimile of the once-useful social network, altered just enough so as to be disorienting, even terrifying.
  • At the same time, Facebook’s user base began to erode, and the company’s transparency reports revealed that the most popular content circulating on the platform was little more than viral garbage—a vast wasteland of CBD promotional content and foreign tabloid clickbait.
  • ...4 more annotations...
  • What’s left, across all platforms, is fragmented. News and punditry are everywhere online, but audiences are siloed; podcasts are more popular than ever, and millions of younger people online have turned to influencers and creators on Instagram and especially TikTok as trusted sources of news.
  • Social media, especially Twitter, has sometimes been an incredible news-gathering tool; it has also been terrible and inefficient, a game of do your own research that involves batting away bullshit and parsing half truths, hyperbole, outright lies, and invaluable context from experts on the fly. Social media’s greatest strength is thus its original sin: These sites are excellent at making you feel connected and informed, frequently at the expense of actually being informed.
  • At the center of these pleas for a Twitter alternative is a feeling that a fundamental promise has been broken. In exchange for our time, our data, and even our well-being, we uploaded our most important conversations onto platforms designed for viral advertising—all under the implicit understanding that social media could provide an unparalleled window to the world.
  • What comes next is impossible to anticipate, but it’s worth considering the possibility that the centrality of social media as we’ve known it for the past 15 years has come to an end—that this particular window to the world is being slammed shut.
Javier E

Why Silicon Valley can't fix itself | News | The Guardian - 1 views

  • After decades of rarely apologising for anything, Silicon Valley suddenly seems to be apologising for everything. They are sorry about the trolls. They are sorry about the bots. They are sorry about the fake news and the Russians, and the cartoons that are terrifying your kids on YouTube. But they are especially sorry about our brains.
  • Sean Parker, the former president of Facebook – who was played by Justin Timberlake in The Social Network – has publicly lamented the “unintended consequences” of the platform he helped create: “God only knows what it’s doing to our children’s brains.”
  • Parker, Rosenstein and the other insiders now talking about the harms of smartphones and social media belong to an informal yet influential current of tech critics emerging within Silicon Valley. You could call them the “tech humanists”. Amid rising public concern about the power of the industry, they argue that the primary problem with its products is that they threaten our health and our humanity.
  • ...52 more annotations...
  • It is clear that these products are designed to be maximally addictive, in order to harvest as much of our attention as they can. Tech humanists say this business model is both unhealthy and inhumane – that it damages our psychological well-being and conditions us to behave in ways that diminish our humanity
  • The main solution that they propose is better design. By redesigning technology to be less addictive and less manipulative, they believe we can make it healthier – we can realign technology with our humanity and build products that don’t “hijack” our minds.
  • its most prominent spokesman is executive director Tristan Harris, a former “design ethicist” at Google who has been hailed by the Atlantic magazine as “the closest thing Silicon Valley has to a conscience”. Harris has spent years trying to persuade the industry of the dangers of tech addiction.
  • In February, Pierre Omidyar, the billionaire founder of eBay, launched a related initiative: the Tech and Society Solutions Lab, which aims to “maximise the tech industry’s contributions to a healthy society”.
  • the tech humanists are making a bid to become tech’s loyal opposition. They are using their insider credentials to promote a particular diagnosis of where tech went wrong and of how to get it back on track
  • The real reason tech humanism matters is because some of the most powerful people in the industry are starting to speak its idiom. Snap CEO Evan Spiegel has warned about social media’s role in encouraging “mindless scrambles for friends or unworthy distractions”,
  • In short, the effort to humanise computing produced the very situation that the tech humanists now consider dehumanising: a wilderness of screens where digital devices chase every last instant of our attention.
  • After years of ignoring their critics, industry leaders are finally acknowledging that problems exist. Tech humanists deserve credit for drawing attention to one of those problems – the manipulative design decisions made by Silicon Valley.
  • these decisions are only symptoms of a larger issue: the fact that the digital infrastructures that increasingly shape our personal, social and civic lives are owned and controlled by a few billionaires
  • Because it ignores the question of power, the tech-humanist diagnosis is incomplete – and could even help the industry evade meaningful reform
  • Taken up by leaders such as Zuckerberg, tech humanism is likely to result in only superficial changes
  • they will not address the origin of that anger. If anything, they will make Silicon Valley even more powerful.
  • To the litany of problems caused by “technology that extracts attention and erodes society”, the text asserts that “humane design is the solution”. Drawing on the rhetoric of the “design thinking” philosophy that has long suffused Silicon Valley, the website explains that humane design “starts by understanding our most vulnerable human instincts so we can design compassionately”
  • this language is not foreign to Silicon Valley. On the contrary, “humanising” technology has long been its central ambition and the source of its power. It was precisely by developing a “humanised” form of computing that entrepreneurs such as Steve Jobs brought computing into millions of users’ everyday lives
  • Facebook had a new priority: maximising “time well spent” on the platform, rather than total time spent. By “time well spent”, Zuckerberg means time spent interacting with “friends” rather than businesses, brands or media sources. He said the News Feed algorithm was already prioritising these “more meaningful” activities.
  • They believe we can use better design to make technology serve human nature rather than exploit and corrupt it. But this idea is drawn from the same tradition that created the world that tech humanists believe is distracting and damaging us.
  • Tech humanists say they want to align humanity and technology. But this project is based on a deep misunderstanding of the relationship between humanity and technology: namely, the fantasy that these two entities could ever exist in separation.
  • The story of our species began when we began to make tools
  • All of which is to say: humanity and technology are not only entangled, they constantly change together.
  • This is not just a metaphor. Recent research suggests that the human hand evolved to manipulate the stone tools that our ancestors used
  • The ways our bodies and brains change in conjunction with the tools we make have long inspired anxieties that “we” are losing some essential qualities
  • Yet as we lose certain capacities, we gain new ones.
  • The nature of human nature is that it changes. It cannot, therefore, serve as a stable basis for evaluating the impact of technology
  • Yet the assumption that it doesn’t change serves a useful purpose. Treating human nature as something static, pure and essential elevates the speaker into a position of power. Claiming to tell us who we are, they tell us how we should be.
  • Holding humanity and technology separate clears the way for a small group of humans to determine the proper alignment between them
  • Harris and his fellow tech humanists also frequently invoke the language of public health. The Center for Humane Technology’s Roger McNamee has gone so far as to call public health “the root of the whole thing”, and Harris has compared using Snapchat to smoking cigarettes
  • The public-health framing casts the tech humanists in a paternalistic role. Resolving a public health crisis requires public health expertise. It also precludes the possibility of democratic debate. You don’t put the question of how to treat a disease up for a vote – you call a doctor.
  • They also remain confined to the personal level, aiming to redesign how the individual user interacts with technology rather than tackling the industry’s structural failures. Tech humanism fails to address the root cause of the tech backlash: the fact that a small handful of corporations own our digital lives and strip-mine them for profit.
  • This is a fundamentally political and collective issue. But by framing the problem in terms of health and humanity, and the solution in terms of design, the tech humanists personalise and depoliticise it.
  • Far from challenging Silicon Valley, tech humanism offers Silicon Valley a useful way to pacify public concerns without surrendering any of its enormous wealth and power.
  • these principles could make Facebook even more profitable and powerful, by opening up new business opportunities. That seems to be exactly what Facebook has planned.
  • reported that total time spent on the platform had dropped by around 5%, or about 50m hours per day. But, Zuckerberg said, this was by design: in particular, it was in response to tweaks to the News Feed that prioritised “meaningful” interactions with “friends” rather than consuming “public content” like video and news. This would ensure that “Facebook isn’t just fun, but also good for people’s well-being”
  • Zuckerberg said he expected those changes would continue to decrease total time spent – but “the time you do spend on Facebook will be more valuable”. This may describe what users find valuable – but it also refers to what Facebook finds valuable
  • not all data is created equal. One of the most valuable sources of data to Facebook is used to inform a metric called “coefficient”. This measures the strength of a connection between two users – Zuckerberg once called it “an index for each relationship”
  • Facebook records every interaction you have with another user – from liking a friend’s post or viewing their profile, to sending them a message. These activities provide Facebook with a sense of how close you are to another person, and different activities are weighted differently.
  • Messaging, for instance, is considered the strongest signal. It’s reasonable to assume that you’re closer to somebody you exchange messages with than somebody whose post you once liked.
  • Why is coefficient so valuable? Because Facebook uses it to create a Facebook they think you will like: it guides algorithmic decisions about what content you see and the order in which you see it. It also helps improve ad targeting, by showing you ads for things liked by friends with whom you often interact
  • emphasising time well spent means creating a Facebook that prioritises data-rich personal interactions that Facebook can use to make a more engaging platform.
  • “time well spent” means Facebook can monetise more efficiently. It can prioritise the intensity of data extraction over its extensiveness. This is a wise business move, disguised as a concession to critics
  • industrialists had to find ways to make the time of the worker more valuable – to extract more money from each moment rather than adding more moments. They did this by making industrial production more efficient: developing new technologies and techniques that squeezed more value out of the worker and stretched that value further than ever before.
  • there is another way of thinking about how to live with technology – one that is both truer to the history of our species and useful for building a more democratic future. This tradition does not address “humanity” in the abstract, but as distinct human beings, whose capacities are shaped by the tools they use.
  • It sees us as hybrids of animal and machine – as “cyborgs”, to quote the biologist and philosopher of science Donna Haraway.
  • The cyborg way of thinking, by contrast, tells us that our species is essentially technological. We change as we change our tools, and our tools change us. But even though our continuous co-evolution with our machines is inevitable, the way it unfolds is not. Rather, it is determined by who owns and runs those machines. It is a question of power
  • The various scandals that have stoked the tech backlash all share a single source. Surveillance, fake news and the miserable working conditions in Amazon’s warehouses are profitable. If they were not, they would not exist. They are symptoms of a profound democratic deficit inflicted by a system that prioritises the wealth of the few over the needs and desires of the many.
  • If being technological is a feature of being human, then the power to shape how we live with technology should be a fundamental human right
  • The decisions that most affect our technological lives are far too important to be left to Mark Zuckerberg, rich investors or a handful of “humane designers”. They should be made by everyone, together.
  • Rather than trying to humanise technology, then, we should be trying to democratise it. We should be demanding that society as a whole gets to decide how we live with technology
  • What does this mean in practice? First, it requires limiting and eroding Silicon Valley’s power.
  • Antitrust laws and tax policy offer useful ways to claw back the fortunes Big Tech has built on common resources
  • democratic governments should be making rules about how those firms are allowed to behave – rules that restrict how they can collect and use our personal data, for instance, like the General Data Protection Regulation
  • This means developing publicly and co-operatively owned alternatives that empower workers, users and citizens to determine how they are run.
  • we might demand that tech firms pay for the privilege of extracting our data, so that we can collectively benefit from a resource we collectively create.
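The “coefficient” metric described in the excerpts above (a weighted sum of interaction signals between two users, with messaging weighted most heavily) can be sketched in a few lines. To be clear, the weights, signal names, and ranking rule below are illustrative assumptions; Facebook’s actual model is not public.

```python
# Illustrative sketch of a "coefficient"-style relationship score:
# a weighted sum of interactions between two users, where stronger
# signals (messaging) count more than weaker ones (liking a post).
# All weights and signal names here are invented for illustration.

INTERACTION_WEIGHTS = {
    "message": 5.0,       # strongest signal, per the excerpt
    "comment": 3.0,
    "like": 1.0,
    "profile_view": 0.5,
}

def coefficient(interactions):
    """interactions: list of (signal_type, count) pairs for one user pair."""
    return sum(INTERACTION_WEIGHTS.get(signal, 0.0) * count
               for signal, count in interactions)

def rank_feed(posts, scores):
    """Order a feed so posts from higher-coefficient authors come first."""
    return sorted(posts, key=lambda p: scores.get(p["author"], 0.0),
                  reverse=True)
```

Ranking the feed by such a score is what ties “time well spent” back to data extraction: the interactions that raise the score are the same data-rich signals the excerpt describes as most valuable to the platform.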
Javier E

New Foils for the Right: Google and Facebook - The New York Times - 0 views

  • In a sign of escalation, Peter Schweizer, a right-wing journalist known for his investigations into Hillary Clinton, plans to release a new film focusing on technology companies and their role in filtering the news.
  • The documentary, which has not been previously reported, dovetails with concerns raised in recent weeks by right-wing groups about censorship on digital media — a new front in a rapidly evolving culture war.
  • The critique from conservatives, in contrast, casts the big tech companies as censorious and oppressive, all too eager to stifle right-wing content in an effort to mollify liberal critics.
  • ...9 more annotations...
  • Big Tech is easily associated with West Coast liberalism and Democratic politics, making it a fertile target for the right. And operational opacity at Facebook, Google and Twitter, which are reluctant to reveal details about their algorithms and internal policies, can leave them vulnerable, too.
  • “There’s not even a real basis to establish objective research about what’s happening on Facebook, because it’s closed.”
  • And former President Barack Obama said at an off-the-record conference at the Massachusetts Institute of Technology last month that he worried Americans were living in “entirely different realities” and that large tech companies like Facebook were “not just an invisible platform, they’re shaping our culture in powerful ways.” The contents of the speech were published by Reason magazine.
  • “There are political activists in all of these companies that want to actively push a liberal agenda,” he said. “Why does it matter? Because these companies are so ubiquitous and powerful that they are controlling all the means of mass communication.”
  • He is also the president of the Government Accountability Institute, a conservative nonprofit organization. He and Mr. Bannon founded it with funding from the family of Robert Mercer, the billionaire hedge fund manager and donor to Donald J. Trump’s presidential campaign.
  • Jeffrey A. Zucker, the president of CNN, derided Google and Facebook as “monopolies” and called for regulators to step in during a speech in Spain last month, saying the tech hegemony is “the biggest issue facing the growth of journalism in the years ahead.”
  • The panelists accused social media platforms of delisting their videos or stripping them of advertising. Such charges have long been staples of far-right online discourse, especially among YouTubers, but Mr. Schweizer’s project is poised to bring such arguments to a new — and potentially larger — audience.
  • The Facebook adjustment has affected virtually every media organization that is partly dependent on the platform for audiences, but it appears to have hit some harder than others. They include right-wing sites like Gateway Pundit and the millennial-focused Independent Journal Review, which was forced to lay off staff members last month.
  • The social news giant BuzzFeed recently bought ads on Facebook with the message, “Facebook is taking the news out of your News Feed, but we’ve got you covered,” directing users to download its app. Away from the political scrum, the viral lifestyle site LittleThings, once a top publisher on the platform, announced last week that it would cease operations, blaming “a full-on catastrophic update” to Facebook’s revised algorithms.
Javier E

Fight the Future - The Triad - 1 views

  • In large part because our major tech platforms reduced the coefficient of friction (μ for my mechanics nerd posse) to basically zero. QAnons crept out of the dark corners of the web—obscure boards like 4chan and 8kun—and got into the mainstream platforms YouTube, Facebook, Instagram, and Twitter.
  • Why did QAnon spread like wildfire in America?
  • These platforms not only made it easy for conspiracy nuts to share their crazy, but they used algorithms that actually boosted the spread of crazy, acting as a force multiplier.
  • ...24 more annotations...
  • So it sounds like a simple fix: Impose more friction at the major platform level and you’ll clean up the public square.
  • But it’s not actually that simple because friction runs counter to the very idea of the internet.
  • The fundamental precept of the internet is that it reduces marginal costs to zero. And this fact is why the design paradigm of the internet is to continually reduce friction experienced by users to zero, too. Because if the second unit of everything is free, then the internet has a vested interest in pushing that unit in front of your eyeballs as smoothly as possible.
  • It’s not that the internet is “broken”; rather, it’s been functioning exactly as it was designed to:
  • Perhaps more than any other job in the world, you do not want the President of the United States to live in a frictionless state of posting. The Presidency is not meant to be a frictionless position, and the United States government is not a frictionless entity, much to the chagrin of many who have tried to change it. Prior to this administration, decisions were closely scrutinized for, at the very least, legality, along with the impact on diplomacy, general norms, and basic grammar. This kind of legal scrutiny and due diligence is also a kind of friction--one that we now see has a lot of benefits. 
  • The deep lesson here isn’t about Donald Trump. It’s about the collision between the digital world and the real world.
  • In the real world, marginal costs are not zero. And so friction is a desirable element in helping to get to the optimal state. You want people to pause before making decisions.
  • described friction this summer as: “anything that inhibits user action within a digital interface, particularly anything that requires an additional click or screen.” For much of my time in the technology sector, friction was almost always seen as the enemy, a force to be vanquished. A “frictionless” experience was generally held up as the ideal state, the optimal product state.
  • Trump was riding the ultimate frictionless optimized engagement Twitter experience: he rode it all the way to the presidency, and then he crashed the presidency into the ground.
  • From a metrics and user point of view, the abstract notion of the President himself tweeting was exactly what Twitter wanted in its original platonic ideal. Twitter has been built to incentivize someone like Trump to engage and post
  • The other day we talked a little bit about how fighting disinformation, extremism, and online cults is like fighting a virus: There is no “cure.” Instead, what you have to do is create enough friction that the rate of spread becomes slow.
  • Our challenge is that when human and digital design comes into conflict, the artificial constraints we impose should be on the digital world to become more in service to us. Instead, we’ve let the digital world do as it will and tried to reconcile ourselves to the havoc it wreaks.
  • And one of the lessons of the last four years is that when you prize the digital design imperatives—lack of friction—over the human design imperatives—a need for friction—then bad things can happen.
  • We have an ongoing conflict between the design precepts of humans and the design precepts of computers.
  • Anyone who works with computers learns to fear their capacity to forget. Like so many things with computers, memory is strictly binary. There is either perfect recall or total oblivion, with nothing in between. It doesn't matter how important or trivial the information is. The computer can forget anything in an instant. If it remembers, it remembers for keeps.
  • This doesn't map well onto human experience of memory, which is fuzzy. We don't remember anything with perfect fidelity, but we're also not at risk of waking up having forgotten our own name. Memories tend to fade with time, and we remember only the more salient events.
  • And because we live in a time when storage grows ever cheaper, we learn to save everything, log everything, and keep it forever. You never know what will come in useful. Deleting is dangerous.
  • Our lives have become split between two worlds with two very different norms around memory.
  • [A] lot of what's wrong with the Internet has to do with memory. The Internet somehow contrives to remember too much and too little at the same time, and it maps poorly on our concepts of how memory should work.
  • The digital world is designed to never forget anything. It has perfect memory. Forever. So that one time you made a crude joke 20 years ago? It can now ruin your life.
  • Memory in the carbon-based world is imperfect. People forget things. That can be annoying if you’re looking for your keys but helpful if you’re trying to broker peace between two cultures. Or simply become a better person than you were 20 years ago.
  • The digital and carbon-based worlds have different design parameters. Marginal cost is one of them. Memory is another.
  • 2. Forget Me Now
  • 1. Fix Tech, Fix America
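The virus analogy in the excerpts above (you cannot “cure” disinformation, only add enough friction that its rate of spread slows) can be made concrete with a toy branching model: content spreads when its effective reproduction number, contacts per post times the probability each contact re-shares, exceeds 1. Friction does not delete content; it lowers the re-share probability. All numbers below are invented for illustration.

```python
# Toy branching model of "friction as epidemiology": a post "infects"
# new sharers at rate R = contacts_per_post * share_probability.
# Friction (extra clicks, warnings, delays) reduces share_probability,
# which can push R below 1 so the cascade dies out on its own.

def reproduction_number(contacts_per_post, share_probability):
    return contacts_per_post * share_probability

def total_reach(r, seed=1, steps=5):
    """Cumulative posts after `steps` sharing generations, starting from `seed`."""
    total, current = seed, seed
    for _ in range(steps):
        current *= r
        total += current
    return total

frictionless = reproduction_number(50, 0.04)    # R ~ 2.0: exponential spread
with_friction = reproduction_number(50, 0.016)  # R ~ 0.8: cascade dies out
```

The point of the sketch is that friction works multiplicatively: it does not need to stop any individual share, only to nudge the average below the R = 1 threshold.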
Javier E

Facebook and Twitter Dodge a 2016 Repeat, and Ignite a 2020 Firestorm - The New York Times - 1 views

  • It’s true that banning links to a story published by a 200-year-old American newspaper — albeit one that is now a Rupert Murdoch-owned tabloid — is a more dramatic step than cutting off WikiLeaks or some lesser-known misinformation purveyor. Still, it’s clear that what Facebook and Twitter were actually trying to prevent was not free expression, but a bad actor using their services as a conduit for a damaging cyberattack or misinformation.
  • These decisions get made quickly, in the heat of the moment, and it’s possible that more contemplation and debate would produce more satisfying choices. But time is a luxury these platforms don’t always have. In the past, they have been slow to label or remove dangerous misinformation about Covid-19, mail-in voting and more, and have only taken action after the bad posts have gone viral, defeating the purpose.
  • That left the companies with three options, none of them great. Option A: They could treat the Post’s article as part of a hack-and-leak operation, and risk a backlash if it turned out to be more innocent. Option B: They could limit the article’s reach, allowing it to stay up but choosing not to amplify it until more facts emerged. Or, Option C: They could do nothing, and risk getting played again by a foreign actor seeking to disrupt an American election.
  • ...8 more annotations...
  • On Wednesday, several prominent Republicans, including Mr. Trump, repeated their calls for Congress to repeal Section 230 of the Communications Decency Act, a law that shields tech platforms from many lawsuits over user-generated content.
  • That leaves the companies in a precarious spot. They are criticized when they allow misinformation to spread. They are also criticized when they try to prevent it.
  • Perhaps the strangest idea to emerge in the past couple of days, though, is that these services are only now beginning to exert control over what we see. Representative Doug Collins, Republican of Georgia, made this point in a letter to Mark Zuckerberg, the chief executive of Facebook, in which he derided the social network for using “its monopoly to control what news Americans have access to.”
  • The truth, of course, is that tech platforms have been controlling our information diets for years, whether we realized it or not. Their decisions were often buried in obscure “community standards” updates, or hidden in tweaks to the black-box algorithms that govern which posts users see.
  • Their leaders have always been editors masquerading as engineers.
  • What’s happening now is simply that, as these companies move to rid their platforms of bad behavior, their influence is being made more visible.
  • Rather than letting their algorithms run amok (which is an editorial choice in itself), they’re making high-stakes decisions about flammable political misinformation in full public view, with human decision makers who can be debated and held accountable for their choices.
  • After years of inaction, Facebook and Twitter are finally starting to clean up their messes. And in the process, they’re enraging the powerful people who have thrived under the old system.