Home/ TOK Friends/ Group items tagged offense

knudsenlu

Quinn Norton: The New York Times Fired My Doppelgänger - The Atlantic - 0 views

  • Quinn Norton
  • The day before Valentine’s Day, social media created a bizarro-world version of me. I have seen strange ideas about me online before, but this doppelgänger was so far from resembling me that I told friends and loved ones I didn’t want to even try to rebut it. It was a leading question turned into a human form. The net created a person with my name and face, but with so little relationship to me, she could have been an invader from an alternate universe.
  • It started when The New York Times hired me for its editorial board. In January, the Times sought me out because, editorial leaders told me, the Times as an institution is struggling with understanding how technology is shifting society and politics. We talked for a while. I discussed my work, my beliefs, and my background.
  • I was hesitant with the Times. They were far out of my comfort zone, but I felt that the people I was talking to had a sincerity greater than their confusion. Nothing that has happened since then has dissuaded me from that impression.
  • If you’re reading this, especially on the internet, you are the teacher for those institutions at a local, national, and global level. I understand that you didn’t ask for this position. Neither did I. History doesn’t ask you if you want to be born in a time of upheaval, it just tells you when you are. When the backlash began, I got the call from the person who had sought me out and recruited me. The fear I heard in that shaky voice coming through my mobile phone was unmistakable. It was the fear of a mob, of the unknown, and of the idea that maybe they had gotten it wrong and done something terrible. I have felt all of those things. Many of us have. It’s not a place of strength, even when it seems to be coming from someone standing in a place of power. The Times didn’t know what the internet was doing—tearing down a new hire, exposing a fraud, threatening them—everything seemed to be in the mix.
  • I had even written about context collapse myself, but that hadn’t saved me from falling into it, and then hurting other people I didn’t mean to hurt. This particular collapse didn’t create much of a doppelgänger, but it did find me spending a morning as a defensive jerk. I’m very sorry for that dumb mistake. It helped me learn a lesson: Be damn sure when you make angry statements. Check them out long enough that, even if the statements themselves are still angry, you are not angry by the time you make them. Again and again, I have learned this: Don’t internet angry. If you’re angry, internet later.
  • I think if I’d gotten to write for the Times as part of their editorial board, this might have been different. I might have been in a position to show how our media doppelgängers get invented, and how we can unwind them. It takes time and patience. It doesn’t come from denying the doppelgänger—there’s nothing there to deny. I was accused of homophobia because of the in-group language I used with anons when I worked with them. (“Anons” refers to people who identify as part of the activist collective Anonymous.) I was accused of racism for use of taboo language, mainly in a nine-year-old retweet in support of Obama. Intentions aside, it wasn’t a great tweet, and I was probably overemotional when I retweeted it.
  • In late 2015 I woke up a little before 6 a.m., jet-lagged in New York, and started looking at Twitter. There was a hashtag, I don’t remember if it was trending or just in my timeline, called #whitegirlsaremagic. I clicked on it, and found it was racist and sexist dross. It was being promulgated in opposition to another hashtag, #blackgirlsaremagic. I clicked on that, and found a few model shots and borderline soft-core porn of black women. Armed with this impression, I set off to tweet in righteous anger about how much I disliked women being reduced to sex objects regardless of race. I was not just wrong in this moment, I was incoherently wrong. I had made my little mental model of what #blackgirlsaremagic was, and I had no clue that I had no clue what I was talking about. My 60-second impression of #whitegirlsaremagic was dead-on, but #blackgirlsaremagic didn’t fit in the last few tweets my browser had loaded.
  • I had been a victim of something the sociologists Alice Marwick and danah boyd call context collapse, where people create online culture meant for one in-group, but exposed to any number of out-groups without its original context by social-media platforms, where it can be recontextualized easily and accidentally.
  • Not everyone believes loving engagement is the best way to fight evil beliefs, but it has a good track record. Not everyone is in a position to engage safely with racists, sexists, anti-Semites, and homophobes, but for those who are, it’s a powerful tool. Engagement is not the one true answer to the societal problems destabilizing America today, but there is no one true answer. The way forward is as multifarious and diverse as America is, and a method of nonviolent confrontation and accountability, arising from my pacifism, is what I can bring to helping my society.
  • Here is your task, person on the internet, reader of journalism, speaker to the world on social media: You make the world now, in a way that you never did before. Your beliefs have a power they’ve never had in human history. You must learn to investigate with a scientific and loving mind not only what is true, but what is effective in the world. Right now we are a world of geniuses who constantly love to call each other idiots. But humanity is the most complicated thing we’ve found in the universe, and so far as we know, we’re the only thing even looking. We are miracles by the billions with powers and luxuries beyond the dreams of kings of old.
  • We are powerful creatures, but power must come with gentleness and responsibility. No one prepared us for this, no one trained us, no one came before us with an understanding of our world. There were hints, and wise people, and I lean on and cherish them. But their philosophies and imaginations can only take us so far. We have to build our own philosophies and imagine great futures for our world in order to have any futures at all. Let mercy guide us forward in these troubled times. Let yourself imagine, because imagination is the wellspring of hope. Here, in the beginning of the 21st century, hope is our duty to the future.
tongoscar

Vagueness | The First Amendment Encyclopedia - 0 views

shared by tongoscar on 03 Nov 19
  • A law that defines a crime in vague terms is likely to raise due-process issues.
  • Vague laws raise problems with due process
  • a law is unconstitutionally vague when people “of common intelligence must necessarily guess at its meaning.”
  • Thus, in overturning a California loitering law that required persons who wander or loiter on the streets to provide “credible and reliable” identification in Kolender v. Lawson (1983), the Supreme Court explained that “the void-for-vagueness doctrine requires that a penal statute define the criminal offense with sufficient definiteness that ordinary people can understand what conduct is prohibited and in a manner that does not encourage arbitrary and discriminatory treatment.”
  • the requirement that every law clearly define and articulate “the right to be observed, and the wrongs to be eschewed. . . .”
  • These examples undoubtedly were known to early American commentators and jurists, who often reiterated the importance of clarity in criminal statutes. James Madison in Federalist No. 62 warns of the “calamitous” results if laws are “so incoherent that they cannot be understood. . . .” In an early federal court case, United States v. Sharp (1815), the Court argued that laws that “create crimes, ought to be so explicit in themselves, or by reference to some other standard, that all men, subject to their penalties, may know what acts it is their duty to avoid.”
  • Court has shown three reasons vague statutes are unconstitutional
  • First, due process requires that a law provide fair warning, giving “persons of ordinary intelligence a reasonable opportunity to know what is prohibited, so that he may act accordingly.”
  • Second, the law must provide “explicit standards” to law enforcement officials, judges, and juries so as to avoid “arbitrary and discriminatory application.”
  • Third, a vague statute can “inhibit the exercise” of First Amendment freedoms and may cause speakers to “steer far wider of the unlawful zone . . . than if the boundaries of the forbidden areas were clearly marked.”
annabaldwin_

She's 26, and Brought Down Uber's C.E.O. What's Next? - The New York Times - 0 views

  • a “super-women-friendly” place to work, boasting 25 percent female engineers.
  • “I only use Lyft,” he said. “Did you read that interview with the C.E.O., Travis, where he talked about how Uber helps him get girls? He’s a misogynist. I could never use his product.”
  • The essay that shook Silicon Valley was called “Reflecting on One Very, Very Strange Year at Uber,” and Ms. Fowler began by noting that it was “a strange, fascinating, and slightly horrifying story.”
  • The blog post was notable for its dispassionate tone, as the young engineer who had left the company after a year walked the reader through everything that had gone very, very wrong in the brozilla culture of kegs, sexual coarseness and snaky competition.
  • Ms. Fowler took screen shots and reported the manager to human resources, thinking, “They’ll do the right thing.”
  • But they didn’t, explaining that the manager was “a high performer” and it was his first offense, something Ms. Fowler later discovered to be untrue.
  • “I didn’t really care if they branded me a troublemaker,” she says
  • In her memo, she says, “I knew I had to be super-careful about how I said it if I wanted anybody to take it seriously.
  • So I was disappointed in her because I expected her to be an advocate,”
katherineharron

The national security adviser says there's no systemic racism in policing. Studies sugg... - 0 views

  • When a Trump administration official said he doesn't think systemic racism exists in policing, many were stunned -- especially after studies have shown different races are often treated differently.
  • "There is no doubt that there are some racist police," O'Brien added. "I think they're the minority. I think they're the few bad apples, and we need to root them out."
  • "Of course there is" systemic racism, St. Paul Police Chief Todd Axtell said. "It's not just in police departments across this country. My goodness, there's systemic racism within pretty much everything in this country."
  • African-Americans are at greater risk of being killed by police, even though they are less likely to pose an objective threat to law enforcement, according to research by Northeastern University Professor Matt Miller.
  • Among those who were "unarmed and appeared to show no objective threat to police, nearly two-thirds of the victims were Hispanic or Black," the researchers found.
  • "there is profound racial disparity in the misdemeanor arrest rate for most -- but not all -- offense types,"
  • The 2017 study found that black residents were more likely to file complaints than white residents -- but "NCPD sustained complaints filed by Black residents only 31 percent of the time compared to sustaining complaints filed by White residents 50 percent of the time," the study says. "NCPD defines 'sustained' as 'the allegation is supported by sufficient evidence to justify a reasonable conclusion that the allegation is factual.'"
  • Latino youth are 65% more likely to be detained or committed than their white peers, according to a 2017 report from The Sentencing Project.
  • African Americans and whites use drugs at similar rates, but the imprisonment rate of African Americans for drug charges is almost 6 times that of whites, the NAACP said.
  • But they found that "blacks were 2.7 times more likely to be pulled over in an investigatory stop," NPR station KCUR reported. "Blacks were also subject to searches five times more often than white drivers."
  • But black drivers who had an infraction like a burnt out light were more often "questioned about what they were doing in a particular neighborhood, where they were heading, and whether they were carrying drugs," the report said. "Many were subject to vehicle searches."
  • "When you have the national security adviser saying he doesn't see systemic racism, well you know what? White folks also didn't see systemic racism even in the 1960s," Wise said.
  • "If white America didn't get it even when it was obvious in retrospect to everyone, what in the world would make the national security adviser believe that he or anyone else knows what they're talking about now? I think it probably stands to reason that black and brown folks know their reality better than we do."
  • "Especially after a tragedy like we saw in Minneapolis, we need to do two things -- take a hard look at our own actions and conduct, correct them where necessary, and to regain that trust by continuing to hold ourselves to the highest possible standard in a transparent way."
Javier E

Slate Suspends Podcast Host After Debate Over Racial Slur - The New York Times - 0 views

  • The online publication Slate has suspended a well-known podcast host after he debated with colleagues over whether people who are not Black should be able to quote a racial slur in some contexts.
  • he was suspended indefinitely on Monday after defending the use of the slur in certain contexts. He made his argument during a conversation last week with colleagues on the interoffice messaging platform Slack.
  • Slate staff members were discussing the resignation of Donald G. McNeil Jr., a reporter who said this month that he was resigning from The New York Times after he had used the slur during a discussion of racism while working as a guide on a student trip in 2019.
  • Mr. Pesca, who is white, said he felt there were contexts in which the slur could be used, according to screen shots of the Slack conversation that were shared with The Times
  • In November 2019, Slate introduced a policy that required podcast hosts and producers to discuss the use of racist terms in a pending episode, in or out of quoted material, before recording it.
  • Mr. Pesca explored the argument over the use of the slur in a 2019 podcast about a Black security guard who was fired for using it. In one recording of the episode, Mr. Pesca said, he used the term while quoting the man, but asked his producer to make a version without the term. After consulting with his producers and his supervisor, who objected to his quotation of the slur, they decided to go with the version without it, he said
  • “The version of the story with the offensive word never aired, and this is how I think the editorial process should go,” Mr. Pesca said in the interview.
  • No action was taken against him after a human resources investigation into his quotation of the slur, Mr. Pesca said
  • He said he had apologized to the producers involved.
  • Mr. Pesca said Mr. Check, the chief executive, and Jared Hohlt, Slate’s editor in chief, had brought up the previous instance of his quoting the slur when they spoke with him after the Slack conversation
  • Mr. Pesca, whose interview style at times seemed to embody Slate's contrarian brand, said he was told on Friday that he would be suspended for a week without pay. On Monday he was informed that the suspension was indefinite,
  • Mr. Pesca, who has worked at Slate for seven years, said he was “heartsick” over hurting his colleagues but added, “I hate the idea of things that are beyond debate and things that cannot be said.”
  • “I don’t think he did anything that merits discipline or consequences, and I think it’s an example of a kind of overreaction and a lack of judgment and perspective that is unfortunately spreading,”
  • Joel Anderson, a Black staff member at Slate who hosted the third season of the podcast “Slow Burn,” disagreed. “For Black employees, it’s an extremely small ask to not hear that particular slur and not have debate about whether it’s OK for white employees to use that particular slur,”
Javier E

Free Speech and Civic Virtue between "Fake News" and "Wokeness" | History News Network - 1 views

  • none of these arguments reaches past adversarial notions of democracy. They all characterize free speech as a matter of conflicting rights-claims and competing factions.
  • As long as political polarization precludes rational consensus, she argues, we are left to “[make] personal choices and pronouncements regarding what we are willing (or unwilling) to tolerate, in an attempt to slightly nudge the world in our preferred direction.” Notably, she makes no mention of how we might discern the validity of those preferences or how we might arbitrate between them in cases of conflict.
  • Free speech advocates are hypocritical or ignore some extenuating context, they claim, while those stifling disagreeable or offensive views are merely rectifying past injustices or paying their opponents back in kind, operating practically in a flawed public sphere.
  • It is telling, however, that the letter’s critics focus on speakers and what they deserve to say far more than the listening public and what we deserve to hear
  • In Free Speech and Its Relation to Self-Government (1948), Meiklejohn challenges us to approach public discourse from the perspective of the “good man”: that is to say, the virtuous citizen
  • One cannot appreciate the freedom of speech, he writes, unless one sees it as an act of collective deliberation, carried out by “a man who, in his political activities, is not merely fighting for what…he can get, but is eagerly and generously serving the common welfare”
  • Free speech is not only about discovering truth, or encouraging ethical individualism, or protecting minority opinions—liberals’ usual lines of defense—it is ultimately about binding our fate to others’ by “sharing” the truth with our fellow citizens
  • Sharing truth requires mutual respect and a jealous defense of intellectual freedom, so that “no idea, no opinion, no doubt, no belief, no counter belief, no relevant information” is withheld from the electorate
  • For their part, voters must judge these arguments individually, through introspection, virtue, and meditation on the common good. 
  • The “marketplace of ideas” is dangerous because it relieves citizens of exactly these duties. As Meiklejohn writes: “As separate thinkers, we have no obligation to test our thinking, to make sure that it is worthy of a citizen who is one of the ‘rulers of the nation.’ That testing is to be done, we believe, not by us, but by ‘the competition of the market.’”
  • this is precisely the sort of self-interested posturing that many on the Left resent in their opponents, but which they now propose to embrace as their own, casually accepting the notion that their fellow citizens are incapable of exercising public reason or considering alternative viewpoints with honesty, bravery, humility, and compassion. 
  • In practice, curtailing public speech is likely to worsen polarization and further empower dominant cultural interests. As an ideal (or a lack thereof), it undermines the intelligibility and mutual respect that form the very basis of citizenship.
  • political polarization has induced Americans to abandon “truth-directed methods of persuasion”—such as argumentation and evidence—for a form of non-rational “messaging,” in which “every speech act is classified as friend or foe… and in which very little faith exists as to the rational faculties of those being spoken to.”
  • “In such a context,” she writes, “even the cry for ‘free speech’ invites a nonliteral interpretation, as being nothing but the most efficient way for its advocates to acquire or consolidate power.”
  • Segments of the Right have pushed this sort of political messaging to its cynical extremes—taking Donald Trump’s statements “seriously but not literally” or taking antagonistic positions simply to “own the libs.”
  • Rather than assuming the supremacy of our own opinions or aspersing the motives of those with whom we disagree, our duty as Americans is to think with, learn from, and correct each other.
  • some critics of the Harper’s letter seem eager to reduce all public debate to a form of power politics
  • Trans activist Julia Serano merely punctuates the tendency when she writes that calls for free speech represent a “misconception that we, as a society, are all in the midst of some grand rational debate, and that marginalized people simply need to properly plea our case for acceptance, and once we do, reason-minded people everywhere will eventually come around. This notion is utterly ludicrous.”
  • one could say that critics of the Harper’s letter take the “bad man” as their unit of analysis. By their lights, all participants in public debate are prejudiced, particular, and self-interested
Javier E

The Age of Social Media Is Ending - The Atlantic - 0 views

  • Slowly and without fanfare, around the end of the aughts, social media took its place. The change was almost invisible, but it had enormous consequences. Instead of facilitating the modest use of existing connections—largely for offline life (to organize a birthday party, say)—social software turned those connections into a latent broadcast channel. All at once, billions of people saw themselves as celebrities, pundits, and tastemakers.
  • A global broadcast network where anyone can say anything to anyone else as often as possible, and where such people have come to think they deserve such a capacity, or even that withholding it amounts to censorship or suppression—that’s just a terrible idea from the outset. And it’s a terrible idea that is entirely and completely bound up with the concept of social media itself: systems erected and used exclusively to deliver an endless stream of content.
  • “social media,” a name so familiar that it has ceased to bear meaning. But two decades ago, that term didn’t exist
  • a “web 2.0” revolution in “user-generated content,” offering easy-to-use, easily adopted tools on websites and then mobile apps. They were built for creating and sharing “content,”
  • As the original name suggested, social networking involved connecting, not publishing. By connecting your personal network of trusted contacts (or “strong ties,” as sociologists call them) to others’ such networks (via “weak ties”), you could surface a larger network of trusted contacts
  • The whole idea of social networks was networking: building or deepening relationships, mostly with people you knew. How and why that deepening happened was largely left to the users to decide.
  • That changed when social networking became social media around 2009, between the introduction of the smartphone and the launch of Instagram. Instead of connection—forging latent ties to people and organizations we would mostly ignore—social media offered platforms through which people could publish content as widely as possible, well beyond their networks of immediate contacts.
  • Social media turned you, me, and everyone into broadcasters (if aspirational ones). The results have been disastrous but also highly pleasurable, not to mention massively profitable—a catastrophic combination.
  • A social network is an idle, inactive system—a Rolodex of contacts, a notebook of sales targets, a yearbook of possible soul mates. But social media is active—hyperactive, really—spewing material across those networks instead of leaving them alone until needed.
  • The authors propose social media as a system in which users participate in “information exchange.” The network, which had previously been used to establish and maintain relationships, becomes reinterpreted as a channel through which to broadcast.
  • The toxicity of social media makes it easy to forget how truly magical this innovation felt when it was new. From 2004 to 2009, you could join Facebook and everyone you’d ever known—including people you’d definitely lost track of—was right there, ready to connect or reconnect. The posts and photos I saw characterized my friends’ changing lives, not the conspiracy theories that their unhinged friends had shared with them
  • Twitter, which launched in 2006, was probably the first true social-media site, even if nobody called it that at the time. Instead of focusing on connecting people, the site amounted to a giant, asynchronous chat room for the world. Twitter was for talking to everyone—which is perhaps one of the reasons journalists have flocked to it
  • on Twitter, anything anybody posted could be seen instantly by anyone else. And furthermore, unlike posts on blogs or images on Flickr or videos on YouTube, tweets were short and low-effort, making it easy to post many of them a week or even a day.
  • soon enough, all social networks became social media first and foremost. When groups, pages, and the News Feed launched, Facebook began encouraging users to share content published by others in order to increase engagement on the service, rather than to provide updates to friends. LinkedIn launched a program to publish content across the platform, too. Twitter, already principally a publishing platform, added a dedicated “retweet” feature, making it far easier to spread content virally across user networks.
  • When we look back at this moment, social media had already arrived in spirit if not by name. RSS readers offered a feed of blog posts to catch up on, complete with unread counts. MySpace fused music and chatter; YouTube did it with video (“Broadcast Yourself”)
  • From being asked to review every product you buy to believing that every tweet or Instagram image warrants likes or comments or follows, social media produced a positively unhinged, sociopathic rendition of human sociality.
  • Other services arrived or evolved in this vein, among them Reddit, Snapchat, and WhatsApp, all far more popular than Twitter. Social networks, once latent routes for possible contact, became superhighways of constant content
  • Although you can connect the app to your contacts and follow specific users, on TikTok, you are more likely to simply plug into a continuous flow of video content that has oozed to the surface via algorithm.
  • In the social-networking era, the connections were essential, driving both content creation and consumption. But the social-media era seeks the thinnest, most soluble connections possible, just enough to allow the content to flow.
  • This is also why journalists became so dependent on Twitter: It’s a constant stream of sources, events, and reactions—a reporting automat, not to mention an outbound vector for media tastemakers to make tastes.
  • “influencer” became an aspirational role, especially for young people for whom Instagram fame seemed more achievable than traditional celebrity—or perhaps employment of any kind.
  • social-media operators discovered that the more emotionally charged the content, the better it spread across its users’ networks. Polarizing, offensive, or just plain fraudulent information was optimized for distribution. By the time the platforms realized and the public revolted, it was too late to turn off these feedback loops.
  • The ensuing disaster was multipart.
  • Rounding up friends or business contacts into a pen in your online profile for possible future use was never a healthy way to understand social relationships.
  • when social networking evolved into social media, user expectations escalated. Driven by venture capitalists’ expectations and then Wall Street’s demands, the tech companies—Google and Facebook and all the rest—became addicted to massive scale
  • Social media showed that everyone has the potential to reach a massive audience at low cost and high gain—and that potential gave many people the impression that they deserve such an audience.
  • On social media, everyone believes that anyone to whom they have access owes them an audience: a writer who posted a take, a celebrity who announced a project, a pretty girl just trying to live her life, that anon who said something afflictive
  • When network connections become activated for any reason or no reason, then every connection seems worthy of traversing.
  • people just aren’t meant to talk to one another this much. They shouldn’t have that much to say, they shouldn’t expect to receive such a large audience for that expression, and they shouldn’t suppose a right to comment or rejoinder for every thought or notion either.
  • Facebook and all the rest enjoyed a massive rise in engagement and the associated data-driven advertising profits that the attention-driven content economy created. The same phenomenon also created the influencer economy, in which individual social-media users became valuable as channels for distributing marketing messages or product sponsorships by means of their posts’ real or imagined reach
  • That’s no surprise, I guess, given that the model was forged in the fires of Big Tech companies such as Facebook, where sociopathy is a design philosophy.
  • If change is possible, carrying it out will be difficult, because we have adapted our lives to conform to social media’s pleasures and torments. It’s seemingly as hard to give up on social media as it was to give up smoking en masse
  • Quitting that habit took decades of regulatory intervention, public-relations campaigning, social shaming, and aesthetic shifts. At a cultural level, we didn’t stop smoking just because the habit was unpleasant or uncool or even because it might kill us. We did so slowly and over time, by forcing social life to suffocate the practice. That process must now begin in earnest for social media.
  • Something may yet survive the fire that would burn it down: social networks, the services’ overlooked, molten core. It was never a terrible idea, at least, to use computers to connect to one another on occasion, for justified reasons, and in moderation
  • The problem came from doing so all the time, as a lifestyle, an aspiration, an obsession. The offer was always too good to be true, but it’s taken us two decades to realize the Faustian nature of the bargain.
  • when I first wrote about downscale, the ambition seemed necessary but impossible. It still feels unlikely—but perhaps newly plausible.
  • To win the soul of social life, we must learn to muzzle it again, across the globe, among billions of people. To speak less, to fewer people and less often–and for them to do the same to you, and everyone else as well
  • We cannot make social media good, because it is fundamentally bad, deep in its very structure. All we can do is hope that it withers away, and play our small part in helping abandon it.
Javier E

His Job Was to Make Instagram Safe for Teens. His 14-Year-Old Showed Him What the App W... - 0 views

  • The experience of young users on Meta’s Instagram—where Bejar had spent the previous two years working as a consultant—was especially acute. In a subsequent email to Instagram head Adam Mosseri, one statistic stood out: One in eight users under the age of 16 said they had experienced unwanted sexual advances on the platform over the previous seven days.
  • For Bejar, that finding was hardly a surprise. His daughter and her friends had been receiving unsolicited penis pictures and other forms of harassment on the platform since the age of 14, he wrote, and Meta’s systems generally ignored their reports—or responded by saying that the harassment didn’t violate platform rules.
  • “I asked her why boys keep doing that,” Bejar wrote to Zuckerberg and his top lieutenants. “She said if the only thing that happens is they get blocked, why wouldn’t they?”
  • ...39 more annotations...
  • For the well-being of its users, Bejar argued, Meta needed to change course, focusing less on a flawed system of rules-based policing and more on addressing such bad experiences
  • The company would need to collect data on what upset users and then work to combat the source of it, nudging those who made others uncomfortable to improve their behavior and isolating communities of users who deliberately sought to harm others.
  • “I am appealing to you because I believe that working this way will require a culture shift,” Bejar wrote to Zuckerberg—the company would have to acknowledge that its existing approach to governing Facebook and Instagram wasn’t working.
  • During and after Bejar’s time as a consultant, Meta spokesman Andy Stone said, the company has rolled out several product features meant to address some of the Well-Being Team’s findings. Those features include warnings to users before they post comments that Meta’s automated systems flag as potentially offensive, and reminders to be kind when sending direct messages to users like content creators who receive a large volume of messages. 
  • Meta’s classifiers were reliable enough to remove only a low single-digit percentage of hate speech with any degree of precision.
  • Bejar was floored—all the more so when he learned that virtually all of his daughter’s friends had been subjected to similar harassment. “DTF?” a user they’d never met would ask, using shorthand for a vulgar proposition. Instagram acted so rarely on reports of such behavior that the girls no longer bothered reporting them. 
  • Meta’s own statistics suggested that big problems didn’t exist. 
  • Meta had come to approach governing user behavior as an overwhelmingly automated process. Engineers would compile data sets of unacceptable content—things like terrorism, pornography, bullying or “excessive gore”—and then train machine-learning models to screen future content for similar material.
  • While users could still flag things that upset them, Meta shifted resources away from reviewing them. To discourage users from filing reports, internal documents from 2019 show, Meta added steps to the reporting process. Meta said the changes were meant to discourage frivolous reports and educate users about platform rules. 
  • The outperformance of Meta’s automated enforcement relied on what Bejar considered two sleights of hand. The systems didn’t catch anywhere near the majority of banned content—only the majority of what the company ultimately removed
  • “Please don’t talk about my underage tits,” Bejar’s daughter shot back before reporting his comment to Instagram. A few days later, the platform got back to her: The insult didn’t violate its community guidelines.
  • Also buttressing Meta’s statistics were rules written narrowly enough to ban only unambiguously vile material. Meta’s rules didn’t clearly prohibit adults from flooding the comments section on a teenager’s posts with kiss emojis or posting pictures of kids in their underwear, inviting their followers to “see more” in a private Facebook Messenger group. 
  • “Mark personally values freedom of expression first and foremost and would say this is a feature and not a bug,” Rosen responded
  • Narrow rules and unreliable automated enforcement systems left a lot of room for bad behavior—but they made the company’s child-safety statistics look pretty good according to Meta’s metric of choice: prevalence.
  • Defined as the percentage of content viewed worldwide that explicitly violates a Meta rule, prevalence was the company’s preferred measuring stick for the problems users experienced.
  • According to prevalence, child exploitation was so rare on the platform that it couldn’t be reliably estimated, less than 0.05%, the threshold for functional measurement. Content deemed to encourage self-harm, such as eating disorders, was just as minimal, and rule violations for bullying and harassment occurred in just eight of 10,000 views. 
  • “There’s a grading-your-own-homework problem,”
  • Meta defines what constitutes harmful content, so it shapes the discussion of how successful it is at dealing with it.”
  • It could reconsider its AI-generated “beauty filters,” which internal research suggested made both the people who used them and those who viewed the images more self-critical
  • the team built a new questionnaire called BEEF, short for “Bad Emotional Experience Feedback.”
  • A recurring survey of issues 238,000 users had experienced over the past seven days, the effort identified problems with prevalence from the start: Users were 100 times more likely to tell Instagram they’d witnessed bullying in the last week than Meta’s bullying-prevalence statistics indicated they should.
  • “People feel like they’re having a bad experience or they don’t,” one presentation on BEEF noted. “Their perception isn’t constrained by policy.”
  • they seemed particularly common among teens on Instagram.
  • Among users under the age of 16, 26% recalled having a bad experience in the last week due to witnessing hostility against someone based on their race, religion or identity
  • More than a fifth felt worse about themselves after viewing others’ posts, and 13% had experienced unwanted sexual advances in the past seven days. 
  • The vast gap between the low prevalence of content deemed problematic in the company’s own statistics and what users told the company they experienced suggested that Meta’s definitions were off, Bejar argued
  • To minimize content that teenagers told researchers made them feel bad about themselves, Instagram could cap how much beauty- and fashion-influencer content users saw.
  • Proving to Meta’s leadership that the company’s prevalence metrics were missing the point was going to require data the company didn’t have. So Bejar and a group of staffers from the Well-Being Team started collecting it
  • And it could build ways for users to report unwanted contacts, the first step to figuring out how to discourage them.
  • One experiment run in response to BEEF data showed that when users were notified that their comment or post had upset people who saw it, they often deleted it of their own accord. “Even if you don’t mandate behaviors,” said Krieger, “you can at least send signals about what behaviors aren’t welcome.”
  • But among the ranks of Meta’s senior middle management, Bejar and Krieger said, BEEF hit a wall. Managers who had made their careers on incrementally improving prevalence statistics weren’t receptive to the suggestion that the approach wasn’t working. 
  • After three decades in Silicon Valley, he understood that members of the company’s C-Suite might not appreciate a damning appraisal of the safety risks young users faced from its product—especially one citing the company’s own data. 
  • “This was the email that my entire career in tech trained me not to send,” he says. “But a part of me was still hoping they just didn’t know.”
  • “Policy enforcement is analogous to the police,” he wrote in the email Oct. 5, 2021—arguing that it’s essential to respond to crime, but that it’s not what makes a community safe. Meta had an opportunity to do right by its users and take on a problem that Bejar believed was almost certainly industrywide.
  • After Haugen’s airing of internal research, Meta had cracked down on the distribution of anything that would, if leaked, cause further reputational damage. With executives privately asserting that the company’s research division harbored a fifth column of detractors, Meta was formalizing a raft of new rules for employees’ internal communication.
  • Among the mandates for achieving “Narrative Excellence,” as the company called it, was to keep research data tight and never assert a moral or legal duty to fix a problem.
  • “I had to write about it as a hypothetical,” Bejar said. Rather than acknowledging that Instagram’s survey data showed that teens regularly faced unwanted sexual advances, the memo merely suggested how Instagram might help teens if they faced such a problem.
  • The hope that the team’s work would continue didn’t last. The company stopped conducting the specific survey behind BEEF, then laid off most everyone who’d worked on it as part of what Zuckerberg called Meta’s “year of efficiency.”
  • If Meta was to change, Bejar told the Journal, the effort would have to come from the outside. He began consulting with a coalition of state attorneys general who filed suit against the company late last month, alleging that the company had built its products to maximize engagement at the expense of young users’ physical and mental health. Bejar also got in touch with members of Congress about where he believes the company’s user-safety efforts fell short. 
Javier E

Why the Past 10 Years of American Life Have Been Uniquely Stupid - The Atlantic - 0 views

  • Social scientists have identified at least three major forces that collectively bind together successful democracies: social capital (extensive social networks with high levels of trust), strong institutions, and shared stories.
  • Social media has weakened all three.
  • gradually, social-media users became more comfortable sharing intimate details of their lives with strangers and corporations. As I wrote in a 2019 Atlantic article with Tobias Rose-Stockwell, they became more adept at putting on performances and managing their personal brand—activities that might impress others but that do not deepen friendships in the way that a private phone conversation will.
  • ...118 more annotations...
  • the stage was set for the major transformation, which began in 2009: the intensification of viral dynamics.
  • Before 2009, Facebook had given users a simple timeline––a never-ending stream of content generated by their friends and connections, with the newest posts at the top and the oldest ones at the bottom
  • That began to change in 2009, when Facebook offered users a way to publicly “like” posts with the click of a button. That same year, Twitter introduced something even more powerful: the “Retweet” button, which allowed users to publicly endorse a post while also sharing it with all of their followers.
  • “Like” and “Share” buttons quickly became standard features of most other platforms.
  • Facebook developed algorithms to bring each user the content most likely to generate a “like” or some other interaction, eventually including the “share” as well.
  • Later research showed that posts that trigger emotions––especially anger at out-groups––are the most likely to be shared.
  • By 2013, social media had become a new game, with dynamics unlike those in 2008. If you were skillful or lucky, you might create a post that would “go viral” and make you “internet famous”
  • If you blundered, you could find yourself buried in hateful comments. Your posts rode to fame or ignominy based on the clicks of thousands of strangers, and you in turn contributed thousands of clicks to the game.
  • This new game encouraged dishonesty and mob dynamics: Users were guided not just by their true preferences but by their past experiences of reward and punishment,
  • As a social psychologist who studies emotion, morality, and politics, I saw this happening too. The newly tweaked platforms were almost perfectly designed to bring out our most moralistic and least reflective selves. The volume of outrage was shocking.
  • It was just this kind of twitchy and explosive spread of anger that James Madison had tried to protect us from as he was drafting the U.S. Constitution.
  • The Framers of the Constitution were excellent social psychologists. They knew that democracy had an Achilles’ heel because it depended on the collective judgment of the people, and democratic communities are subject to “the turbulency and weakness of unruly passions.”
  • The key to designing a sustainable republic, therefore, was to build in mechanisms to slow things down, cool passions, require compromise, and give leaders some insulation from the mania of the moment while still holding them accountable to the people periodically, on Election Day.
  • The tech companies that enhanced virality from 2009 to 2012 brought us deep into Madison’s nightmare.
  • a less quoted yet equally important insight, about democracy’s vulnerability to triviality.
  • Madison notes that people are so prone to factionalism that “where no substantial occasion presents itself, the most frivolous and fanciful distinctions have been sufficient to kindle their unfriendly passions and excite their most violent conflicts.”
  • Social media has both magnified and weaponized the frivolous.
  • It’s not just the waste of time and scarce attention that matters; it’s the continual chipping-away of trust.
  • a democracy depends on widely internalized acceptance of the legitimacy of rules, norms, and institutions.
  • when citizens lose trust in elected leaders, health authorities, the courts, the police, universities, and the integrity of elections, then every decision becomes contested; every election becomes a life-and-death struggle to save the country from the other side
  • The most recent Edelman Trust Barometer (an international measure of citizens’ trust in government, business, media, and nongovernmental organizations) showed stable and competent autocracies (China and the United Arab Emirates) at the top of the list, while contentious democracies such as the United States, the United Kingdom, Spain, and South Korea scored near the bottom (albeit above Russia).
  • The literature is complex—some studies show benefits, particularly in less developed democracies—but the review found that, on balance, social media amplifies political polarization; foments populism, especially right-wing populism; and is associated with the spread of misinformation.
  • When people lose trust in institutions, they lose trust in the stories told by those institutions. That’s particularly true of the institutions entrusted with the education of children.
  • Facebook and Twitter make it possible for parents to become outraged every day over a new snippet from their children’s history lessons––and math lessons and literature selections, and any new pedagogical shifts anywhere in the country
  • The motives of teachers and administrators come into question, and overreaching laws or curricular reforms sometimes follow, dumbing down education and reducing trust in it further.
  • young people educated in the post-Babel era are less likely to arrive at a coherent story of who we are as a people, and less likely to share any such story with those who attended different schools or who were educated in a different decade.
  • former CIA analyst Martin Gurri predicted these fracturing effects in his 2014 book, The Revolt of the Public. Gurri’s analysis focused on the authority-subverting effects of information’s exponential growth, beginning with the internet in the 1990s. Writing nearly a decade ago, Gurri could already see the power of social media as a universal solvent, breaking down bonds and weakening institutions everywhere it reached.
  • he notes a constructive feature of the pre-digital era: a single “mass audience,” all consuming the same content, as if they were all looking into the same gigantic mirror at the reflection of their own society.
  • The digital revolution has shattered that mirror, and now the public inhabits those broken pieces of glass. So the public isn’t one thing; it’s highly fragmented, and it’s basically mutually hostile
  • Facebook, Twitter, YouTube, and a few other large platforms unwittingly dissolved the mortar of trust, belief in institutions, and shared stories that had held a large and diverse secular democracy together.
  • I think we can date the fall of the tower to the years between 2011 (Gurri’s focal year of “nihilistic” protests) and 2015, a year marked by the “great awokening” on the left and the ascendancy of Donald Trump on the right.
  • Twitter can overpower all the newspapers in the country, and stories cannot be shared (or at least trusted) across more than a few adjacent fragments—so truth cannot achieve widespread adherence.
  • After Babel, nothing really means anything anymore––at least not in a way that is durable and on which people widely agree.
  • Politics After Babel
  • “Politics is the art of the possible,” the German statesman Otto von Bismarck said in 1867. In a post-Babel democracy, not much may be possible.
  • The ideological distance between the two parties began increasing faster in the 1990s. Fox News and the 1994 “Republican Revolution” converted the GOP into a more combative party.
  • So cross-party relationships were already strained before 2009. But the enhanced virality of social media thereafter made it more hazardous to be seen fraternizing with the enemy or even failing to attack the enemy with sufficient vigor.
  • What changed in the 2010s? Let’s revisit that Twitter engineer’s metaphor of handing a loaded gun to a 4-year-old. A mean tweet doesn’t kill anyone; it is an attempt to shame or punish someone publicly while broadcasting one’s own virtue, brilliance, or tribal loyalties. It’s more a dart than a bullet
  • from 2009 to 2012, Facebook and Twitter passed out roughly 1 billion dart guns globally. We’ve been shooting one another ever since.
  • “devoted conservatives,” comprised 6 percent of the U.S. population.
  • the warped “accountability” of social media has also brought injustice—and political dysfunction—in three ways.
  • First, the dart guns of social media give more power to trolls and provocateurs while silencing good citizens.
  • a small subset of people on social-media platforms are highly concerned with gaining status and are willing to use aggression to do so.
  • Across eight studies, Bor and Petersen found that being online did not make most people more aggressive or hostile; rather, it allowed a small number of aggressive people to attack a much larger set of victims. Even a small number of jerks were able to dominate discussion forums,
  • Additional research finds that women and Black people are harassed disproportionately, so the digital public square is less welcoming to their voices.
  • Second, the dart guns of social media give more power and voice to the political extremes while reducing the power and voice of the moderate majority.
  • The “Hidden Tribes” study, by the pro-democracy group More in Common, surveyed 8,000 Americans in 2017 and 2018 and identified seven groups that shared beliefs and behaviors.
  • Social media has given voice to some people who had little previously, and it has made it easier to hold powerful people accountable for their misdeeds
  • The group furthest to the left, the “progressive activists,” comprised 8 percent of the population. The progressive activists were by far the most prolific group on social media: 70 percent had shared political content over the previous year. The devoted conservatives followed, at 56 percent.
  • These two extreme groups are similar in surprising ways. They are the whitest and richest of the seven groups, which suggests that America is being torn apart by a battle between two subsets of the elite who are not representative of the broader society.
  • they are the two groups that show the greatest homogeneity in their moral and political attitudes.
  • likely a result of thought-policing on social media:
  • political extremists don’t just shoot darts at their enemies; they spend a lot of their ammunition targeting dissenters or nuanced thinkers on their own team.
  • Finally, by giving everyone a dart gun, social media deputizes everyone to administer justice with no due process. Platforms like Twitter devolve into the Wild West, with no accountability for vigilantes.
  • Enhanced-virality platforms thereby facilitate massive collective punishment for small or imagined offenses, with real-world consequences, including innocent people losing their jobs and being shamed into suicide
  • we don’t get justice and inclusion; we get a society that ignores context, proportionality, mercy, and truth.
  • Since the tower fell, debates of all kinds have grown more and more confused. The most pervasive obstacle to good thinking is confirmation bias, which refers to the human tendency to search only for evidence that confirms our preferred beliefs
  • search engines were supercharging confirmation bias, making it far easier for people to find evidence for absurd beliefs and conspiracy theorie
  • The most reliable cure for confirmation bias is interaction with people who don’t share your beliefs. They confront you with counterevidence and counterargument.
  • In his book The Constitution of Knowledge, Jonathan Rauch describes the historical breakthrough in which Western societies developed an “epistemic operating system”—that is, a set of institutions for generating knowledge from the interactions of biased and cognitively flawed individuals
  • English law developed the adversarial system so that biased advocates could present both sides of a case to an impartial jury.
  • Newspapers full of lies evolved into professional journalistic enterprises, with norms that required seeking out multiple sides of a story, followed by editorial review, followed by fact-checking.
  • Universities evolved from cloistered medieval institutions into research powerhouses, creating a structure in which scholars put forth evidence-backed claims with the knowledge that other scholars around the world would be motivated to gain prestige by finding contrary evidence.
  • Part of America’s greatness in the 20th century came from having developed the most capable, vibrant, and productive network of knowledge-producing institutions in all of human history
  • But this arrangement, Rauch notes, “is not self-maintaining; it relies on an array of sometimes delicate social settings and understandings, and those need to be understood, affirmed, and protected.”
  • This, I believe, is what happened to many of America’s key institutions in the mid-to-late 2010s. They got stupider en masse because social media instilled in their members a chronic fear of getting darted
  • it was so pervasive that it established new behavioral norms backed by new policies seemingly overnight
  • Participants in our key institutions began self-censoring to an unhealthy degree, holding back critiques of policies and ideas—even those presented in class by their students—that they believed to be ill-supported or wrong.
  • The stupefying process plays out differently on the right and the left because their activist wings subscribe to different narratives with different sacred values.
  • The “Hidden Tribes” study tells us that the “devoted conservatives” score highest on beliefs related to authoritarianism. They share a narrative in which America is eternally under threat from enemies outside and subversives within; they see life as a battle between patriots and traitors.
  • they are psychologically different from the larger group of “traditional conservatives” (19 percent of the population), who emphasize order, decorum, and slow rather than radical change.
  • The traditional punishment for treason is death, hence the battle cry on January 6: “Hang Mike Pence.”
  • Right-wing death threats, many delivered by anonymous accounts, are proving effective in cowing traditional conservatives
  • The wave of threats delivered to dissenting Republican members of Congress has similarly pushed many of the remaining moderates to quit or go silent, giving us a party ever more divorced from the conservative tradition, constitutional responsibility, and reality.
  • The stupidity on the right is most visible in the many conspiracy theories spreading across right-wing media and now into Congress.
  • The Democrats have also been hit hard by structural stupidity, though in a different way. In the Democratic Party, the struggle between the progressive wing and the more moderate factions is open and ongoing, and often the moderates win.
  • The problem is that the left controls the commanding heights of the culture: universities, news organizations, Hollywood, art museums, advertising, much of Silicon Valley, and the teachers’ unions and teaching colleges that shape K–12 education. And in many of those institutions, dissent has been stifled:
  • Liberals in the late 20th century shared a belief that the sociologist Christian Smith called the “liberal progress” narrative, in which America used to be horrifically unjust and repressive, but, thanks to the struggles of activists and heroes, has made (and continues to make) progress toward realizing the noble promise of its founding.
  • It is also the view of the “traditional liberals” in the “Hidden Tribes” study (11 percent of the population), who have strong humanitarian values, are older than average, and are largely the people leading America’s cultural and intellectual institutions.
  • when the newly viralized social-media platforms gave everyone a dart gun, it was younger progressive activists who did the most shooting, and they aimed a disproportionate number of their darts at these older liberal leaders.
  • Confused and fearful, the leaders rarely challenged the activists or their nonliberal narrative in which life at every institution is an eternal battle among identity groups over a zero-sum pie, and the people on top got there by oppressing the people on the bottom. This new narrative is rigidly egalitarian––focused on equality of outcomes, not of rights or opportunities. It is unconcerned with individual rights.
  • The universal charge against people who disagree with this narrative is not “traitor”; it is “racist,” “transphobe,” “Karen,” or some related scarlet letter marking the perpetrator as one who hates or harms a marginalized group.
  • The punishment that feels right for such crimes is not execution; it is public shaming and social death.
  • anyone on Twitter had already seen dozens of examples teaching the basic lesson: Don’t question your own side’s beliefs, policies, or actions. And when traditional liberals go silent, as so many did in the summer of 2020, the progressive activists’ more radical narrative takes over as the governing narrative of an organization.
  • This is why so many epistemic institutions seemed to “go woke” in rapid succession that year and the next, beginning with a wave of controversies and resignations at The New York Times and other newspapers, and continuing on to social-justice pronouncements by groups of doctors and medical associations
  • The problem is structural. Thanks to enhanced-virality social media, dissent is punished within many of our institutions, which means that bad ideas get elevated into official policy.
  • In a 2018 interview, Steve Bannon, the former adviser to Donald Trump, said that the way to deal with the media is “to flood the zone with shit.” He was describing the “firehose of falsehood” tactic pioneered by Russian disinformation programs to keep Americans confused, disoriented, and angry.
  • artificial intelligence is close to enabling the limitless spread of highly believable disinformation. The AI program GPT-3 is already so good that you can give it a topic and a tone and it will spit out as many essays as you like, typically with perfect grammar and a surprising level of coherence.
  • Renée DiResta, the research manager at the Stanford Internet Observatory, explained that spreading falsehoods—whether through text, images, or deep-fake videos—will quickly become inconceivably easy. (She co-wrote the essay with GPT-3.)
  • American factions won’t be the only ones using AI and social media to generate attack content; our adversaries will too.
  • In the 20th century, America’s shared identity as the country leading the fight to make the world safe for democracy was a strong force that helped keep the culture and the polity together.
  • In the 21st century, America’s tech companies have rewired the world and created products that now appear to be corrosive to democracy, obstacles to shared understanding, and destroyers of the modern tower.
  • What changes are needed?
  • I can suggest three categories of reforms––three goals that must be achieved if democracy is to remain viable in the post-Babel era.
  • We must harden democratic institutions so that they can withstand chronic anger and mistrust, reform social media so that it becomes less socially corrosive, and better prepare the next generation for democratic citizenship in this new age.
  • Harden Democratic Institutions
  • we must reform key institutions so that they can continue to function even if levels of anger, misinformation, and violence increase far above those we have today.
  • Reforms should reduce the outsize influence of angry extremists and make legislators more responsive to the average voter in their district.
  • One example of such a reform is to end closed party primaries, replacing them with a single, nonpartisan, open primary from which the top several candidates advance to a general election that also uses ranked-choice voting
  • A second way to harden democratic institutions is to reduce the power of either political party to game the system in its favor, for example by drawing its preferred electoral districts or selecting the officials who will supervise elections
  • These jobs should all be done in a nonpartisan way.
  • Reform Social Media
  • Social media’s empowerment of the far left, the far right, domestic trolls, and foreign agents is creating a system that looks less like democracy and more like rule by the most aggressive.
  • it is within our power to reduce social media’s ability to dissolve trust and foment structural stupidity. Reforms should limit the platforms’ amplification of the aggressive fringes while giving more voice to what More in Common calls “the exhausted majority.”
  • the main problem with social media is not that some people post fake or toxic stuff; it’s that fake and outrage-inducing content can now attain a level of reach and influence that was not possible before
  • Perhaps the biggest single change that would reduce the toxicity of existing platforms would be user verification as a precondition for gaining the algorithmic amplification that social media offers.
  • One of the first orders of business should be compelling the platforms to share their data and their algorithms with academic researchers.
  • Prepare the Next Generation
  • Childhood has become more tightly circumscribed in recent generations––with less opportunity for free, unstructured play; less unsupervised time outside; more time online. Whatever else the effects of these shifts, they have likely impeded the development of abilities needed for effective self-governance for many young adults
  • Depression makes people less likely to want to engage with new people, ideas, and experiences. Anxiety makes new things seem more threatening. As these conditions have risen and as the lessons on nuanced social behavior learned through free play have been delayed, tolerance for diverse viewpoints and the ability to work out disputes have diminished among many young people
  • Students did not just say that they disagreed with visiting speakers; some said that those lectures would be dangerous, emotionally devastating, a form of violence. Because rates of teen depression and anxiety have continued to rise into the 2020s, we should expect these views to continue in the generations to follow, and indeed to become more severe.
  • The most important change we can make to reduce the damaging effects of social media on children is to delay entry until they have passed through puberty.
  • The age should be raised to at least 16, and companies should be held responsible for enforcing it.
  • Let them out to play. Stop starving children of the experiences they most need to become good citizens: free play in mixed-age groups of children with minimal adult supervision
  • while social media has eroded the art of association throughout society, it may be leaving its deepest and most enduring marks on adolescents. A surge in rates of anxiety, depression, and self-harm among American teens began suddenly in the early 2010s. (The same thing happened to Canadian and British teens, at the same time.) The cause is not known, but the timing points to social media as a substantial contributor—the surge began just as the large majority of American teens became daily users of the major platforms.
  • What would it be like to live in Babel in the days after its destruction? We know. It is a time of confusion and loss. But it is also a time to reflect, listen, and build.
  • In recent years, Americans have started hundreds of groups and organizations dedicated to building trust and friendship across the political divide, including BridgeUSA, Braver Angels (on whose board I serve), and many others listed at BridgeAlliance.us. We cannot expect Congress and the tech companies to save us. We must change ourselves and our communities.
  • when we look away from our dysfunctional federal government, disconnect from social media, and talk with our neighbors directly, things seem more hopeful. Most Americans in the More in Common report are members of the “exhausted majority,” which is tired of the fighting and is willing to listen to the other side and compromise. Most Americans now see that social media is having a negative impact on the country, and are becoming more aware of its damaging effects on children.
Javier E

For Chat-Based AI, We Are All Once Again Tech Companies' Guinea Pigs - WSJ - 0 views

  • The companies touting new chat-based artificial-intelligence systems are running a massive experiment—and we are the test subjects.
  • In this experiment, Microsoft, OpenAI and others are rolling out on the internet an alien intelligence that no one really understands, which has been granted the ability to influence our assessment of what’s true in the world.
  • Companies have been cautious in the past about unleashing this technology on the world. In 2019, OpenAI decided not to release an earlier version of the underlying model that powers both ChatGPT and the new Bing because the company’s leaders deemed it too dangerous to do so, they said at the time.
  • Microsoft leaders felt “enormous urgency” for it to be the company to bring this technology to market, because others around the world are working on similar tech but might not have the resources or inclination to build it as responsibly, says Sarah Bird, a leader on Microsoft’s responsible AI team.
  • One common starting point for such models is what is essentially a download or “scrape” of most of the internet. In the past, these language models were used to try to understand text, but the new generation of them, part of the revolution in “generative” AI, uses those same models to create texts by trying to guess, one word at a time, the most likely word to come next in any given sequence.
  • Wide-scale testing gives Microsoft and OpenAI a big competitive edge by enabling them to gather huge amounts of data about how people actually use such chatbots. Both the prompts users input into their systems, and the results their AIs spit out, can then be fed back into a complicated system—which includes human content moderators paid by the companies—to improve it.
  • Being first to market with a chat-based AI gives these companies a huge initial lead over companies that have been slower to release their own chat-based AIs, such as Google.
  • rarely has an experiment like Microsoft and OpenAI’s been rolled out so quickly, and at such a broad scale.
  • Among those who build and study these kinds of AIs, Mr. Altman’s case for experimenting on the global public has inspired responses ranging from raised eyebrows to condemnation.
  • The fact that we’re all guinea pigs in this experiment doesn’t mean it shouldn’t be conducted, says Nathan Lambert, a research scientist at the AI startup Huggingface.
  • “I would kind of be happier with Microsoft doing this experiment than a startup, because Microsoft will at least address these issues when the press cycle gets really bad,” says Dr. Lambert. “I think there are going to be a lot of harms from this kind of AI, and it’s better people know they are coming,” he adds.
  • Others, particularly those who study and advocate for the concept of “ethical AI” or “responsible AI,” argue that the global experiment Microsoft and OpenAI are conducting is downright dangerous
  • Celeste Kidd, a professor of psychology at University of California, Berkeley, studies how people acquire knowledge
  • Her research has shown that people learning about new things have a narrow window in which they form a lasting opinion. Seeing misinformation during this critical initial period of exposure to a new concept—such as the kind of misinformation that chat-based AIs can confidently dispense—can do lasting harm, she says.
  • Dr. Kidd likens OpenAI’s experimentation with AI to exposing the public to possibly dangerous chemicals. “Imagine you put something carcinogenic in the drinking water and you were like, ‘We’ll see if it’s carcinogenic.’ After, you can’t take it back—people have cancer now,”
  • Part of the challenge with AI chatbots is that they can sometimes simply make things up. Numerous examples of this tendency have been documented by users of both ChatGPT and OpenAI.
  • These models also tend to be riddled with biases that may not be immediately apparent to users. For example, they can express opinions gleaned from the internet as if they were verified facts
  • When millions are exposed to these biases across billions of interactions, this AI has the potential to refashion humanity’s views, at a global scale, says Dr. Kidd.
  • OpenAI has talked publicly about the problems with these systems, and how it is trying to address them. In a recent blog post, the company said that in the future, users might be able to select AIs whose “values” align with their own.
  • “We believe that AI should be a useful tool for individual people, and thus customizable by each user up to limits defined by society,” the post said.
  • Eliminating made-up information and bias from chat-based search engines is impossible given the current state of the technology, says Mark Riedl, a professor at Georgia Institute of Technology who studies artificial intelligence
  • He believes the release of these technologies to the public by Microsoft and OpenAI is premature. “We are putting out products that are still being actively researched at this moment,” he adds. 
  • in other areas of human endeavor—from new drugs and new modes of transportation to advertising and broadcast media—we have standards for what can and cannot be unleashed on the public. No such standards exist for AI, says Dr. Riedl.
  • To modify these AIs so that they produce outputs that humans find both useful and not-offensive, engineers often use a process called “reinforcement learning through human feedback.”
  • that’s a fancy way of saying that humans provide input to the raw AI algorithm, often by simply saying which of its potential responses to a query are better—and also which are not acceptable at all.
  • Microsoft’s and OpenAI’s globe-spanning experiments on millions of people are yielding a fire hose of data for both companies. User-entered prompts and the AI-generated results are fed back through a network of paid human AI trainers to further fine-tune the models,
  • Huggingface’s Dr. Lambert says that any company, including his own, that doesn’t have this river of real-world usage data helping it improve its AI is at a huge disadvantage
  • In chatbots, in some autonomous-driving systems, in the unaccountable AIs that decide what we see on social media, and now, in the latest applications of AI, again and again we are the guinea pigs on which tech companies are testing new technology.
  • It may be the case that there is no other way to roll out this latest iteration of AI—which is already showing promise in some areas—at scale. But we should always be asking, at times like these: At what price?
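The “reinforcement learning through human feedback” process described in the excerpts above can be illustrated with a toy sketch. Real systems fit a neural reward model to labelers’ pairwise judgments and then fine-tune the language model with reinforcement learning; the win-loss tally below is only a stand-in for that aggregation step, and the response names are hypothetical.

```python
# Toy sketch of the preference step in RLHF: human labelers compare pairs
# of candidate responses, and the comparisons are aggregated into a score
# used to prefer some outputs over others. (Not a real reward model.)
from collections import defaultdict

# Hypothetical candidate responses to a single prompt.
candidates = ["response_a", "response_b", "response_c"]

# Human judgments, each a (preferred, rejected) pair.
comparisons = [
    ("response_a", "response_b"),
    ("response_a", "response_c"),
    ("response_c", "response_b"),
]

# Tally wins and losses into a crude per-response score.
score = defaultdict(int)
for winner, loser in comparisons:
    score[winner] += 1
    score[loser] -= 1

# The "trained" preference: pick the response humans favored most often.
best = max(candidates, key=lambda r: score[r])
print(best)  # → response_a
```

In production this stream of comparisons is exactly the “fire hose of data” the article describes: every user prompt and rating becomes another labeled pair for the next round of fine-tuning.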
Javier E

Opinion | Noam Chomsky: The False Promise of ChatGPT - The New York Times - 0 views

  • we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
  • OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought
  • if machine learning programs like ChatGPT continue to dominate the field of A.I
  • , we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
  • It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
  • The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question
  • the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations
  • such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case
  • Those are the ingredients of explanation, the mark of true intelligence.
  • Here’s an example. Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.”
  • an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.
  • The crux of machine learning is description and prediction; it does not posit any causal mechanisms or physical laws
  • any human-style explanation is not necessarily correct; we are fallible. But this is part of what it means to think: To be right, it must be possible to be wrong. Intelligence consists not only of creative conjectures but also of creative criticism. Human-style thought is based on possible explanations and error correction, a process that gradually limits what possibilities can be rationally considered.
  • ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible.
  • Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
  • For this reason, the predictions of machine learning systems will always be superficial and dubious.
  • some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation). But this kind of prediction, even when successful, is pseudoscience
  • While scientists certainly seek theories that have a high degree of empirical corroboration, as the philosopher Karl Popper noted, “we do not seek highly probable theories but explanations; that is to say, powerful and highly improbable theories.”
  • The theory that apples fall to earth because mass bends space-time (Einstein’s view) is highly improbable, but it actually tells you why they fall. True intelligence is demonstrated in the ability to think and express improbable but insightful things.
  • This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism)
  • True intelligence is also capable of moral thinking
  • To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content
  • In 2016, for example, Microsoft’s Tay chatbot (a precursor to ChatGPT) flooded the internet with misogynistic and racist content, having been polluted by online trolls who filled it with offensive training data. How to solve the problem in the future? In the absence of a capacity to reason from moral principles, ChatGPT was crudely restricted by its programmers from contributing anything novel to controversial — that is, important — discussions. It sacrificed creativity for a kind of amorality.
  • Here, ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation. It summarizes the standard arguments in the literature by a kind of super-autocomplete, refuses to take a stand on anything, pleads not merely ignorance but lack of intelligence and ultimately offers a “just following orders” defense, shifting responsibility to its creators.
  • In short, ChatGPT and its brethren are constitutionally unable to balance creativity with constraint. They either overgenerate (producing both truths and falsehoods, endorsing ethical and unethical decisions alike) or undergenerate (exhibiting noncommitment to any decisions and indifference to consequences). Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.
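The next-word guessing that both this op-ed and the WSJ piece describe (“generating statistically probable outputs,” “the most likely word to come next”) can be shown in miniature with a bigram model: count which word follows which in a corpus, then generate greedily. This is a deliberately tiny illustration of the statistical principle, not a claim about how GPT-scale models are built; the corpus is invented.

```python
# Toy bigram "language model": like the scraped-text systems described
# above (at vastly smaller scale), it generates text by guessing, one
# word at a time, the most likely next word seen in its training data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the most frequent continuation observed in the corpus.
    return follows[word].most_common(1)[0][0]

# Generate greedily, one word at a time, starting from "the".
word, out = "the", ["the"]
for _ in range(4):
    word = most_likely_next(word)
    out.append(word)
print(" ".join(out))  # → the cat sat on the
```

Note what the model cannot do, which is Chomsky’s point: it extrapolates brute correlations from its data, but it has no way to say what could or could not follow, only what usually did.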