
Home/ History Readings/ Group items tagged harm


Javier E

How Public Health Took Part in Its Own Downfall - The Atlantic

  • when the coronavirus pandemic reached the United States, it found a public-health system in disrepair. That system, with its overstretched staff, meager budgets, crumbling buildings, and archaic equipment, could barely cope with sickness as usual, let alone with a new, fast-spreading virus.
  • By one telling, public health was a victim of its own success, its value shrouded by the complacency of good health
  • By a different account, the competing field of medicine actively suppressed public health, which threatened the financial model of treating illness in (insured) individuals
  • In fact, “public health has actively participated in its own marginalization,” Daniel Goldberg, a historian of medicine at the University of Colorado, told me. As the 20th century progressed, the field moved away from the idea that social reforms were a necessary part of preventing disease and willingly silenced its own political voice. By swimming along with the changing currents of American ideology, it drowned many of the qualities that made it most effective.
  • Germ theory offered a seductive new vision for defeating disease: Although the old public health “sought the sources of infectious disease in the surroundings of man; the new finds them in man himself,” wrote Hibbert Hill in The New Public Health in 1913
  • “They didn’t have to think of themselves as activists,” Rosner said. “It was so much easier to identify individual victims of disease and cure them than it was to rebuild a city.”
  • As public health moved into the laboratory, a narrow set of professionals associated with new academic schools began to dominate the once-broad field. “It was a way of consolidating power: If you don’t have a degree in public health, you’re not public health,”
  • Mastering the new science of bacteriology “became an ideological marker,” sharply differentiating an old generation of amateurs from a new one of scientifically minded professionals,
  • Hospitals, meanwhile, were becoming the centerpieces of American health care, and medicine was quickly amassing money and prestige by reorienting toward biomedical research
  • Public health began to self-identify as a field of objective, outside observers of society instead of agents of social change. It assumed a narrower set of responsibilities that included data collection, diagnostic services for clinicians, disease tracing, and health education.
  • Assuming that its science could speak for itself, the field pulled away from allies such as labor unions, housing reformers, and social-welfare organizations that had supported city-scale sanitation projects, workplace reforms, and other ambitious public-health projects.
  • That left public health in a precarious position—still in medicine’s shadow, but without the political base “that had been the source of its power,”
  • After World War II, biomedicine lived up to its promise, and American ideology turned strongly toward individualism.
  • Seeing poor health as a matter of personal irresponsibility rather than of societal rot became natural.
  • Even public health began to treat people as if they lived in a social vacuum. Epidemiologists now searched for “risk factors,” such as inactivity and alcohol consumption, that made individuals more vulnerable to disease and designed health-promotion campaigns that exhorted people to change their behaviors, tying health to willpower in a way that persists today.
  • Public health is now trapped in an unenviable bind. “If it conceives of itself too narrowly, it will be accused of lacking vision … If it conceives of itself too expansively, it will be accused of overreaching.”
  • “epidemiology isn’t a field of activists saying, ‘God, asbestos is terrible,’ but of scientists calculating the statistical probability of someone’s death being due to this exposure or that one.”
  • In 1971, Paul Cornely, then the president of the APHA and the first Black American to earn a Ph.D. in public health, said that “if the health organizations of this country have any concern about the quality of life of its citizens, they would come out of their sterile and scientific atmosphere and jump in the polluted waters of the real world where action is the basis for survival.”
  • a new wave of “social epidemiologists” once again turned their attention to racism, poverty, and other structural problems.
  • The biomedical view of health still dominates, as evidenced by the Biden administration’s focus on vaccines at the expense of masks, rapid tests, and other “nonpharmaceutical interventions.”
  • Public health has often been represented by leaders with backgrounds primarily in clinical medicine, who have repeatedly cast the pandemic in individualist terms: “Your health is in your own hands,” said the CDC’s director, Rochelle Walensky, in May
  • the pandemic has proved what public health’s practitioners understood well in the late 19th and early 20th century: how important the social side of health is. People can’t isolate themselves if they work low-income jobs with no paid sick leave, or if they live in crowded housing or prisons.
  • This approach appealed, too, to powerful industries with an interest in highlighting individual failings rather than the dangers of their products.
  • “Public health gains credibility from its adherence to science, and if it strays too far into political advocacy, it may lose the appearance of objectivity,”
  • In truth, public health is inescapably political, not least because it “has to make decisions in the face of rapidly evolving and contested evidence,” Fairchild told me. That evidence almost never speaks for itself, which means the decisions that arise from it must be grounded in values.
  • Those values, Fairchild said, should include equity and the prevention of harm to others, “but in our history, we lost the ability to claim these ethical principles.”
  • “Sick-leave policies, health-insurance coverage, the importance of housing … these things are outside the ability of public health to implement, but we should raise our voices about them,” said Mary Bassett, of Harvard, who was recently appointed as New York’s health commissioner. “I think we can get explicit.”
  • The future might lie in reviving the past, and reopening the umbrella of public health to encompass people without a formal degree or a job at a health department.
  • What if, instead, we thought of the Black Lives Matter movement as a public-health movement, the American Rescue Plan as a public-health bill, or decarceration, as the APHA recently stated, as a public-health goal? In this way of thinking, too, employers who institute policies that protect the health of their workers are themselves public-health advocates.
  • “We need to re-create alliances with others and help them to understand that what they are doing is public health.”
criscimagnael

Lawmakers Urge Big Tech to 'Mitigate Harm' of Suicide Site and Seek Justice Inquiry - T...

  • Lawmakers in Washington are prodding technology companies to limit the visibility and reduce the risks of a website that provides detailed instructions about suicide and asking the nation’s top law enforcement official to consider pursuing a Justice Department inquiry.
  • It is imperative that companies take the threat of such sites seriously and take appropriate steps to mitigate harm
  • On Monday, Senator Richard Blumenthal, Democrat of Connecticut, sent a letter to Google and Bing asking the companies to fully remove the suicide site from their search results — a step further than either search engine was willing to take.
  • Noting that other countries had taken steps to restrict access to the site, the lawmakers also asked about removing it from search results in the United States.
  • Members of the site are anonymous, but The Times identified 45 people who had spent time on the site and then killed themselves in the United States, the United Kingdom, Italy, Canada and Australia. Most of them were under 30, including several teenagers. The Times also found that more than 500 members of the site wrote so-called goodbye threads announcing how and when they planned to end their lives, and then never posted again.
  • And the new administrator made the site private, meaning that the content — including discussions about suicide methods, messages of support and thumbs-up emojis to those sharing plans to take their lives, and even real-time posts written by members narrating their attempts — is now visible only to members and not the public.
  • The site draws six million page views a month, and nearly half of all traffic is driven by online searches, according to data from Similarweb, a web analytics company.
  • Citing The Times’s reporting, Mr. Blumenthal wrote in his letter, addressed to Google’s chief executive, Sundar Pichai, that the content on the suicide site “makes the world a dark place for too many,” and that Google had the ability and legal authority to steer “people who are struggling away from this dangerous website.”
  • Google’s hands are not tied, and it has a responsibility to act,
  • The operators of the suicide site have long used Cloudflare, an American firm that provides cyberprotections, to obscure the name of its web host, making it difficult or impossible to know what company is providing those services.
  • After the article was published, on Dec. 9, Marquis announced on the site that he was resigning as an administrator, permanently deleting his account and turning over operation of the site to someone using the online name RainAndSadness.
  • In Uruguay, where assisting suicide is a crime, the Montevideo police have begun an inquiry in collaboration with a local prosecutor’s office in response to The Times’s investigation, said Javier Benech, a communications director for the office.
  • In the United States, while many states have laws against assisting suicide, they are often vague, do not explicitly address online activity and are rarely enforced.
  • Members of the suicide site who post instructions on how to die by suicide, or encouragement to follow through with it, could be vulnerable to criminal charges depending on the jurisdiction. But so far, no American law enforcement officials have pursued such cases in connection with the website. Federal law typically protects website operators from liability for users’ posts.
sidneybelleroche

Facebook whistleblower '60 Minutes' interview: Frances Haugen says the company prioriti...

  • The identity of the Facebook whistleblower who released tens of thousands of pages of internal research and documents — leading to a firestorm for the social media company in recent weeks — was revealed on "60 Minutes" Sunday night as Frances Haugen.
  • The 37-year-old former Facebook product manager who worked on civic integrity issues at the company says the documents show that Facebook knows its platforms are used to spread hate, violence and misinformation
  • Facebook over and over again chose to optimize for its own interests, like making more money," Haugen told "60 Minutes."
  • Haugen filed at least eight complaints with the Securities and Exchange Commission alleging that the company is hiding research about its shortcomings from investors and the public.
  • Haugen, who started at Facebook in 2019 after previously working for other tech giants like Google (GOOGL) and Pinterest (PINS), is set to testify on Tuesday before the Senate Subcommittee on Consumer Protection, Product Safety, and Data Security.
  • Facebook has aggressively pushed back against the reports, calling many of the claims "misleading" and arguing that its apps do more good than harm.
  • Facebook spokesperson Lena Pietsch said in a statement to CNN Business immediately following the "60 Minutes" interview: "We continue to make significant improvements to tackle the spread of misinformation and harmful content. To suggest we encourage bad content and do nothing is just not true."
  • Pietsch released a more than 700-word statement laying out what she called "missing facts" from the segment
  • Haugen said she believes Facebook Founder and CEO Mark Zuckerberg "never set out to make a hateful platform, but he has allowed choices to be made where the side effects of those choices are that hateful and polarizing content gets more distribution and more reach."
  • Haugen said she was recruited by Facebook in 2019 and took the job to work on addressing misinformation. But after the company decided to dissolve its civic integrity team shortly after the 2020 Presidential Election, her feelings about the company started to change.
  • The social media company's algorithm that's designed to show users content that they're most likely to engage with is responsible for many of its problems
  • Haugen said that while "no one at Facebook is malevolent ... the incentives are misaligned."
  • the more anger that they get exposed to, the more they interact and the more they consume."
Javier E

'Looking for the Good War' Says Our Nostalgia for World War II Has Done Real Harm - The...

  • Glib treatments of World War II have done real harm, she says, distorting our understanding of the past and consequently shaping how we approach the future. As “the last American military action about which there is anything like a positive consensus,” World War II is “the good war that served as prologue to three-quarters of a century of misbegotten ones.”
  • Like the cadets she teaches at West Point, civilians would do well to see World War II as something other than a buoyant tale of American goodness trouncing Nazi evil. Yes, she says up front, American involvement in the war was necessary. But she maintains that it’s been a national fantasy to presume that “necessary” has to mean the same thing as “good.”
  • The United States only entered the war after the attack on Pearl Harbor — and even then, Samet says, contemporary observers remarked on “a general American indifference to the fact that the world was on fire.”
  • The war in the Pacific was “begun in revenge and complicated by bitter racism,”
  • The fall of Saigon in 1975 may have temporarily hobbled the American strut of exceptionalism and invincibility, but the end of the Cold War and the beginning of Operation Desert Storm worked to restore some American confidence.
  • she also shows how Hollywood was quick to overwhelm the culture with its “habitual optimism.” The 1947 movie “The Hucksters,” for instance, begins with a veteran returning to the advertising business only to find himself feeling disgusted by it; the happily-ever-after ending comes not with him rejecting the industry but with his resolve to “sell good things, things that people should have, and sell them with dignity and taste.”
  • Surveying the records of the era, Samet contrasts this dehumanization with the portrayal of European fascists, who were more typically described as “gangsters.”
  • She ends with a chapter on the old Lost Cause mythology of the Civil War, which we have turned into “a kind of theme park,” suffused with symbolism and nostalgia, ignoring the expansionist wars this mythology later enabled. The country’s imperialist ambitions in the late-19th and early-20th centuries were promoted as a nationalist project that would finally unite the North and South against a foreign enemy.
Javier E

Opinion | The Right Is All Wrong About Masculinity - The New York Times

  • Indeed, the very definition of “masculinity” is up for grabs
  • In 2019, the American Psychological Association published guidelines that took direct aim at what it called “traditional masculinity — marked by stoicism, competitiveness, dominance and aggression” — declaring it to be, “on the whole, harmful.”
  • Aside from “dominance,” a concept with precious few virtuous uses, the other aspects of traditional masculinity the A.P.A. cited have important roles to play. Competitiveness, aggression and stoicism surely have their abuses, but they also can be indispensable in the right contexts. Thus, part of the challenge isn’t so much rejecting those characteristics as it is channeling and shaping them for virtuous purposes.
  • traditionally “masculine” virtues are not exclusively male. Women who successfully model these attributes are all around us
  • Rudyard Kipling’s famous poem “If—” is one of the purest distillations of restraint as a traditional manly virtue. It begins with the words “If you can keep your head when all about you / Are losing theirs and blaming it on you.” The entire work speaks of the necessity of calmness and courage.
  • Stoicism carried to excess can become a dangerous form of emotional repression, a stifling of necessary feelings. But the fact that the kind of patience and perseverance that marks stoicism can be taken too far is not to say that we should shun it. In times of conflict and crisis, it is the calm man or woman who can see clearly.
  • Hysteria plus cruelty is a recipe for violence. And that brings us back to Mr. Hawley. For all of its faults when taken to excess, the traditional masculinity of which he claims to be a champion would demand that he stand firm against a howling mob. Rather, he saluted it with a raised fist — and then ran from it when it got too close and too unruly.
  • Catastrophic rhetoric is omnipresent on the right. Let’s go back to the “groomer” smear. It’s a hallmark of right-wing rhetoric that if you disagree with the new right on any matter relating to sex or sexuality, you’re not just wrong; you’re a “groomer” or “soft on pedos.”
  • But conservative catastrophism is only one part of the equation. The other is meanspirited pettiness
  • Traditional masculinity says that people should meet a challenge with a level head and firm convictions. Right-wing culture says that everything is an emergency, and is to be combated with relentless trolling and hyperbolic insults.
  • Jonah Goldberg wrote an important piece cataloging the sheer pettiness of the young online right. “Everywhere I look these days,” he wrote, “I see young conservatives believing they should behave like jerks.” As Jonah noted, there are those who now believe it shows “courage and strength to be coarse or bigoted.”
  • If you spend much time at all on right-wing social media — especially Twitter these days — or listening to right-wing news outlets, you’ll be struck by the sheer hysteria of the rhetoric, the hair-on-fire sense of emergency that seems to dominate all discourse.
  • American men are in desperate need of virtuous purpose.
  • I reject the idea that traditional masculinity, properly understood, is, “on the whole, harmful.” I recognize that it can be abused, but it is good to confront life with a sense of proportion, with calm courage and conviction.
  • One of the best pieces of advice I’ve ever received reflects that wisdom. Early in my legal career, a retired federal judge read a brief that I’d drafted and admonished me to “write with regret, not outrage.”
  • Husband your anger, he told me. Have patience. Gain perspective. So then, when something truly is terrible, your outrage will mean something. It was the legal admonition against crying wolf.
Javier E

Whistleblower: Twitter misled investors, FTC and underplayed spam issues - Washington Post

  • Twitter executives deceived federal regulators and the company’s own board of directors about “extreme, egregious deficiencies” in its defenses against hackers, as well as its meager efforts to fight spam, according to an explosive whistleblower complaint from its former security chief.
  • The complaint from former head of security Peiter Zatko, a widely admired hacker known as “Mudge,” depicts Twitter as a chaotic and rudderless company beset by infighting, unable to properly protect its 238 million daily users including government agencies, heads of state and other influential public figures.
  • Among the most serious accusations in the complaint, a copy of which was obtained by The Washington Post, is that Twitter violated the terms of an 11-year-old settlement with the Federal Trade Commission by falsely claiming that it had a solid security plan. Zatko’s complaint alleges he had warned colleagues that half the company’s servers were running out-of-date and vulnerable software and that executives withheld dire facts about the number of breaches and lack of protection for user data, instead presenting directors with rosy charts measuring unimportant changes.
  • “Security and privacy have long been top companywide priorities at Twitter,” said Twitter spokeswoman Rebecca Hahn. She said that Zatko’s allegations appeared to be “riddled with inaccuracies” and that Zatko “now appears to be opportunistically seeking to inflict harm on Twitter, its customers, and its shareholders.” Hahn said that Twitter fired Zatko after 15 months “for poor performance and leadership.” Attorneys for Zatko confirmed he was fired but denied it was for performance or leadership.
  • the whistleblower document alleges the company prioritized user growth over reducing spam, though unwanted content made the user experience worse. Executives stood to win individual bonuses of as much as $10 million tied to increases in daily users, the complaint asserts, and nothing explicitly for cutting spam.
  • Chief executive Parag Agrawal was “lying” when he tweeted in May that the company was “strongly incentivized to detect and remove as much spam as we possibly can,” the complaint alleges.
  • Zatko described his decision to go public as an extension of his previous work exposing flaws in specific pieces of software and broader systemic failings in cybersecurity. He was hired at Twitter by former CEO Jack Dorsey in late 2020 after a major hack of the company’s systems.
  • “I felt ethically bound. This is not a light step to take,” said Zatko, who was fired by Agrawal in January. He declined to discuss what happened at Twitter, except to stand by the formal complaint. Under SEC whistleblower rules, he is entitled to legal protection against retaliation, as well as potential monetary rewards.
  • A person familiar with Zatko’s tenure said the company investigated Zatko’s security claims during his time there and concluded they were sensationalistic and without merit. Four people familiar with Twitter’s efforts to fight spam said the company deploys extensive manual and automated tools to both measure the extent of spam across the service and reduce it.
  • In 1998, Zatko had testified to Congress that the internet was so fragile that he and others could take it down with a half-hour of concentrated effort. He later served as the head of cyber grants at the Defense Advanced Research Projects Agency, the Pentagon innovation unit that had backed the internet’s invention.
  • Overall, Zatko wrote in a February analysis for the company attached as an exhibit to the SEC complaint, “Twitter is grossly negligent in several areas of information security. If these problems are not corrected, regulators, media and users of the platform will be shocked when they inevitably learn about Twitter’s severe lack of security basics.”
  • Zatko’s complaint says strong security should have been much more important to Twitter, which holds vast amounts of sensitive personal data about users. Twitter has the email addresses and phone numbers of many public figures, as well as dissidents who communicate over the service at great personal risk.
  • This month, an ex-Twitter employee was convicted of using his position at the company to spy on Saudi dissidents and government critics, passing their information to a close aide of Crown Prince Mohammed bin Salman in exchange for cash and gifts.
  • Zatko’s complaint says he believed the Indian government had forced Twitter to put one of its agents on the payroll, with access to user data at a time of intense protests in the country. The complaint said supporting information for that claim has gone to the National Security Division of the Justice Department and the Senate Select Committee on Intelligence. Another person familiar with the matter agreed that the employee was probably an agent.
  • “Take a tech platform that collects massive amounts of user data, combine it with what appears to be an incredibly weak security infrastructure and infuse it with foreign state actors with an agenda, and you’ve got a recipe for disaster,” Charles E. Grassley (R-Iowa), the top Republican on the Senate Judiciary Committee,
  • Many government leaders and other trusted voices use Twitter to spread important messages quickly, so a hijacked account could drive panic or violence. In 2013, a captured Associated Press handle falsely tweeted about explosions at the White House, sending the Dow Jones industrial average briefly plunging more than 140 points.
  • After a teenager managed to hijack the verified accounts of Obama, then-candidate Joe Biden, Musk and others in 2020, Twitter’s chief executive at the time, Jack Dorsey, asked Zatko to join him, saying that he could help the world by fixing Twitter’s security and improving the public conversation, Zatko asserts in the complaint.
  • The complaint — filed last month with the Securities and Exchange Commission and the Department of Justice, as well as the FTC — says thousands of employees still had wide-ranging and poorly tracked internal access to core company software, a situation that for years had led to embarrassing hacks, including the commandeering of accounts held by such high-profile users as Elon Musk and former presidents Barack Obama and Donald Trump.
  • But at Twitter Zatko encountered problems more widespread than he realized and leadership that didn’t act on his concerns, according to the complaint.
  • Twitter’s difficulties with weak security stretch back more than a decade before Zatko’s arrival at the company in November 2020. In a pair of 2009 incidents, hackers gained administrative control of the social network, allowing them to reset passwords and access user data. In the first, beginning around January of that year, hackers sent tweets from the accounts of high-profile users, including Fox News and Obama.
  • Several months later, a hacker was able to guess an employee’s administrative password after gaining access to similar passwords in their personal email account. That hacker was able to reset at least one user’s password and obtain private information about any Twitter user.
  • Twitter continued to suffer high-profile hacks and security violations, including in 2017, when a contract worker briefly took over Trump’s account, and in the 2020 hack, in which a Florida teen tricked Twitter employees and won access to verified accounts. Twitter then said it put additional safeguards in place.
  • This year, the Justice Department accused Twitter of asking users for their phone numbers in the name of increased security, then using the numbers for marketing. Twitter agreed to pay a $150 million fine for allegedly breaking the 2011 order, which barred the company from making misrepresentations about the security of personal data.
  • After Zatko joined the company, he found it had made little progress since the 2011 settlement, the complaint says. The complaint alleges that he was able to reduce the backlog of safety cases, including harassment and threats, from 1 million to 200,000, add staff and push to measure results.
  • But Zatko saw major gaps in what the company was doing to satisfy its obligations to the FTC, according to the complaint. In Zatko’s interpretation, according to the complaint, the 2011 order required Twitter to implement a Software Development Life Cycle program, a standard process for making sure new code is free of dangerous bugs. The complaint alleges that other employees had been telling the board and the FTC that they were making progress in rolling out that program to Twitter’s systems. But Zatko alleges that he discovered that it had been sent to only a tenth of the company’s projects, and even then treated as optional.
  • “If all of that is true, I don’t think there’s any doubt that there are order violations,” Vladeck, who is now a Georgetown Law professor, said in an interview. “It is possible that the kinds of problems that Twitter faced eleven years ago are still running through the company.”
  • “Agrawal’s Tweets and Twitter’s previous blog posts misleadingly imply that Twitter employs proactive, sophisticated systems to measure and block spam bots,” the complaint says. “The reality: mostly outdated, unmonitored, simple scripts plus overworked, inefficient, understaffed, and reactive human teams.”
  • One current and one former employee recalled that incident, when failures at two Twitter data centers drove concerns that the service could have collapsed for an extended period. “I wondered if the company would exist in a few days,” one of them said.
  • The current and former employees also agreed with the complaint’s assertion that past reports to various privacy regulators were “misleading at best.”
  • For example, they said the company implied that it had destroyed all data on users who asked, but the material had spread so widely inside Twitter’s networks, it was impossible to know for sure
  • As the head of security, Zatko says he also was in charge of a division that investigated users’ complaints about accounts, which meant that he oversaw the removal of some bots, according to the complaint. Spam bots — computer programs that tweet automatically — have long vexed Twitter. Unlike its social media counterparts, Twitter allows users to program bots to be used on its service: For example, the Twitter account @big_ben_clock is programmed to tweet “Bong Bong Bong” every hour in time with Big Ben in London. Twitter also allows people to create accounts without using their real identities, making it harder for the company to distinguish between authentic, duplicate and automated accounts.
  • In the complaint, Zatko alleges he could not get a straight answer when he sought what he viewed as an important data point: the prevalence of spam and bots across all of Twitter, not just among monetizable users.
  • Zatko cites a “sensitive source” who said Twitter was afraid to determine that number because it “would harm the image and valuation of the company.” He says the company’s tools for detecting spam are far less robust than implied in various statements.
  • The complaint also alleges that Zatko warned the board early in his tenure that overlapping outages in the company’s data centers could leave it unable to correctly restart its servers. That could have left the service down for months, or even have caused all of its data to be lost. That came close to happening in 2021, when an “impending catastrophic” crisis threatened the platform’s survival before engineers were able to save the day, the complaint says, without providing further details.
  • The four people familiar with Twitter’s spam and bot efforts said the engineering and integrity teams run software that samples thousands of tweets per day, and 100 accounts are sampled manually.
  • Some employees charged with executing the fight agreed that they had been short of staff. One said top executives showed “apathy” toward the issue.
  • Zatko’s complaint likewise depicts leadership dysfunction, starting with the CEO. Dorsey was largely absent during the pandemic, which made it hard for Zatko to get rulings on who should be in charge of what in areas of overlap and easier for rival executives to avoid collaborating, three current and former employees said.
  • For example, Zatko would encounter disinformation as part of his mandate to handle complaints, according to the complaint. To that end, he commissioned an outside report that found one of the disinformation teams had unfilled positions, yawning language deficiencies, and a lack of technical tools or the engineers to craft them. The authors said Twitter had no effective means of dealing with consistent spreaders of falsehoods.
  • Dorsey made little effort to integrate Zatko at the company, according to the three employees as well as two others familiar with the process who spoke on the condition of anonymity to describe sensitive dynamics. In 12 months, Zatko could manage only six one-on-one calls, all less than 30 minutes, with his direct boss Dorsey, who also served as CEO of payments company Square, now known as Block, according to the complaint. Zatko allegedly did almost all of the talking, and Dorsey said perhaps 50 words in the entire year to him. “A couple dozen text messages” rounded out their electronic communication, the complaint alleges.
  • Faced with such inertia, Zatko asserts that he was unable to solve some of the most serious issues, according to the complaint.
  • Some 30 percent of company laptops blocked automatic software updates carrying security fixes, and thousands of laptops had complete copies of Twitter’s source code, making them a rich target for hackers, it alleges.
  • A successful hacker takeover of one of those machines would have been able to sabotage the product with relative ease, because the engineers pushed out changes without being forced to test them first in a simulated environment, current and former employees said.
  • “It’s near-incredible that for something of that scale there would not be a development test environment separate from production and there would not be a more controlled source-code management process,” said Tony Sager, former chief operating officer of the National Security Agency’s cyberdefense wing, the Information Assurance division.
  • Sager is currently senior vice president at the nonprofit Center for Internet Security, where he leads a consensus effort to establish best security practices.
  • The complaint says that about half of Twitter’s roughly 7,000 full-time employees had wide access to the company’s internal software and that access was not closely monitored, giving them the ability to tap into sensitive data and alter how the service worked. Three current and former employees agreed that these were issues.
  • “A best practice is that you should only be authorized to see and access what you need to do your job, and nothing else,” said former U.S. chief information security officer Gregory Touhill. “If half the company has access to and can make configuration changes to the production environment, that exposes the company and its customers to significant risk.”
  • The complaint says Dorsey never encouraged anyone to mislead the board about the shortcomings, but that others deliberately left out bad news.
  • When Dorsey left in November 2021, a difficult situation worsened under Agrawal, who had been responsible for security decisions as chief technology officer before Zatko’s hiring, the complaint says.
  • An unnamed executive had prepared a presentation for the new CEO’s first full board meeting, according to the complaint. Zatko’s complaint calls the presentation deeply misleading.
  • The presentation showed that 92 percent of employee computers had security software installed — without mentioning that those installations determined that a third of the machines were insecure, according to the complaint.
  • Another graphic implied a downward trend in the number of people with overly broad access, based on the small subset of people who had access to the highest administrative powers, known internally as “God mode.” That number was in the hundreds. But the number of people with broad access to core systems, which Zatko had called out as a big problem after joining, had actually grown slightly and remained in the thousands.
  • The presentation included only a subset of serious intrusions or other security incidents, from a total Zatko estimated as one per week, and it said that the uncontrolled internal access to core systems was responsible for just 7 percent of incidents, when Zatko calculated the real proportion as 60 percent.
  • Zatko stopped the material from being presented at the Dec. 9, 2021 meeting, the complaint said. But over his continued objections, Agrawal let it go to the board’s smaller Risk Committee a week later.
  • Agrawal didn’t respond to requests for comment. In an email to employees after publication of this article, obtained by The Post, he said that privacy and security continues to be a top priority for the company, and he added that the narrative is “riddled with inconsistencies” and “presented without important context.”
  • On Jan. 4, Zatko reported internally that the Risk Committee meeting might have been fraudulent, which triggered an Audit Committee investigation.
  • Agrawal fired him two weeks later. But Zatko complied with the company’s request to spell out his concerns in writing, even without access to his work email and documents, according to the complaint.
  • Since Zatko’s departure, Twitter has plunged further into chaos with Musk’s takeover, which the two parties agreed to in May. The stock price has fallen, many employees have quit, and Agrawal has dismissed executives and frozen big projects.
  • Zatko said he hoped that by bringing new scrutiny and accountability, he could improve the company from the outside.
  • “I still believe that this is a tremendous platform, and there is huge value and huge risk, and I hope that looking back at this, the world will be a better place, in part because of this.”
Javier E

How Climate Change Is Changing Therapy - The New York Times - 0 views

  • Andrew Bryant can still remember when he thought of climate change as primarily a problem of the future. When he heard or read about troubling impacts, he found himself setting them in 2080, a year that, not so coincidentally, would be a century after his own birth. The changing climate, and all the challenges it would bring, were “scary and sad,” he said recently, “but so far in the future that I’d be safe.”
  • That was back when things were different, in the long-ago world of 2014 or so. The Pacific Northwest, where Bryant is a clinical social worker and psychotherapist treating patients in private practice in Seattle, is a largely affluent place that was once considered a potential refuge from climate disruption
  • “We’re lucky to be buffered by wealth and location,” Bryant said. “We are lucky to have the opportunity to look away.”
  • ...61 more annotations...
  • starting in the mid-2010s, those beloved blue skies began to disappear. First, the smoke came in occasional bursts, from wildfires in Canada or California or Siberia, and blew away when the wind changed direction. Within a few summers, though, it was coming in thicker, from more directions at once, and lasting longer.
  • Sometimes there were weeks when you were advised not to open your windows or exercise outside. Sometimes there were long stretches where you weren’t supposed to breathe the outside air at all.
  • Now lots of Bryant’s clients wanted to talk about climate change. They wanted to talk about how strange and disorienting and scary this new reality felt, about what the future might be like and how they might face it, about how to deal with all the strong feelings — helplessness, rage, depression, guilt — being stirred up inside them.
  • As a therapist, Bryant found himself unsure how to respond
  • while his clinical education offered lots of training in, say, substance abuse or family therapy, there was nothing about environmental crisis, or how to treat patients whose mental health was affected by it
  • Bryant immersed himself in the subject, joining and founding associations of climate-concerned therapists
  • eventually started a website, Climate & Mind, to serve as a sort of clearing house for other therapists searching for resources. Instead, the site became an unexpected window into the experience of would-be patients: Bryant found himself receiving messages from people around the world who stumbled across it while looking for help.
  • Over and over, he read the same story, of potential patients who’d gone looking for someone to talk to about climate change and other environmental crises, only to be told that they were overreacting — that their concern, and not the climate, was what was out of whack and in need of treatment.
  • “You come in and talk about how anxious you are that fossil-fuel companies continue to pump CO2 into the air, and your therapist says, ‘So, tell me about your mother.’”
  • In many of the messages, people asked Bryant for referrals to climate-focused therapists in Houston or Canada or Taiwan, wherever it was the writer lived.
  • his practice had shifted to reflect a new reality of climate psychology. His clients didn’t just bring up the changing climate incidentally, or during disconcerting local reminders; rather, many were activists or scientists or people who specifically sought out Bryant because of their concerns about the climate crisis.
  • could now turn to resources like the list maintained by the Climate Psychology Alliance North America, which contains more than 100 psychotherapists around the country who are what the organization calls “climate aware.”
  • But treating those fears also stirred up lots of complicated questions that no one was quite sure how to answer. The traditional focus of his field, Bryant said, could be oversimplified as “fixing the individual”: treating patients as separate entities working on their personal growth
  • It had been a challenging few years, Bryant told me when I first called to talk about his work. There were some ways in which climate fears were a natural fit in the therapy room, and he believed the field had coalesced around some answers that felt clear and useful
  • Climate change, by contrast, was a species-wide problem, a profound and constant reminder of how deeply intertwined we all are in complex systems — atmospheric, biospheric, economic — that are much bigger than us. It sometimes felt like a direct challenge to old therapeutic paradigms — and perhaps a chance to replace them with something better.
  • In one of climate psychology’s founding papers, published in 2011, Susan Clayton and Thomas J. Doherty posited that climate change would have “significant negative effects on mental health and well-being.” They described three broad types of possible impacts: the acute trauma of living through climate disasters; the corroding fear of a collapsing future; and the psychosocial decay that could damage the fabric of communities dealing with disruptive changes
  • All of these, they wrote, would make the climate crisis “as much a psychological and social phenomenon as a matter of biodiversity and geophysics.”
  • Many of these predictions have since been borne out
  • Studies have found rates of PTSD spiking in the wake of disasters, and in 2017 the American Psychological Association defined “ecoanxiety” as “a chronic fear of environmental doom.”
  • Climate-driven migration is on the rise, and so are stories of xenophobia and community mistrust.
  • According to a 2022 survey by Yale and George Mason University, a majority of Americans report that they spend time worrying about climate change.
  • Many say it has led to symptoms of depression or anxiety; more than a quarter make an active effort not to think about it.
  • There was little or no attention to the fact that living through, or helping to cause, a collapse of nature can also be mentally harmful.
  • In June, the Yale Journal of Biology and Medicine published a paper cautioning that the world at large was facing “a psychological condition of ‘systemic uncertainty,’” in which “difficult emotions arise not only from experiencing the ecological loss itself,” but also from the fact that our lives are inescapably embedded in systems that keep on making those losses worse.
  • Climate change, in other words, surrounds us with constant reminders of “ethical dilemmas and deep social criticism of modern society. In its essence, climate crisis questions the relationship of humans with nature and the meaning of being human in the Anthropocene.”
  • This is not an easy way to live.
  • Living within a context that is obviously unhealthful, he wrote, is painful: “a dimly intuited ‘fall’ from which we spend our lives trying to recover, a guilt we can never quite grasp or expiate” — a feeling of loss or dislocation whose true origins we look for, but often fail to see. This confusion leaves us feeling even worse.
  • When Barbara Easterlin first started studying environmental psychology 30 years ago, she told me, the focus of study was on ways in which cultivating a relationship with nature can be good for mental health
  • A poll by the American Psychiatric Association in the same year found that nearly half of Americans think climate change is already harming the nation’s mental health.
  • the field is still so new that it does not yet have evidence-tested treatments or standards of practice. Therapists sometimes feel as if they are finding the path as they go.
  • Rebecca Weston, a licensed clinical social worker practicing in New York and a co-president of the CPA-NA, told me that when she treats anxiety disorders, her goal is often to help the patient understand how much of their fear is internally produced — out of proportion to the reality they’re facing
  • climate anxiety is a different challenge, because people worried about climate change and environmental breakdown are often having the opposite experience: Their worries are rational and evidence-based, but they feel isolated and frustrated because they’re living in a society that tends to dismiss them.
  • One of the emerging tenets of climate psychology is that counselors should validate their clients’ climate-related emotions as reasonable, not pathological
  • it does mean validating that feelings like grief and fear and shame aren’t a form of sickness, but, as Weston put it, “are actually rational responses to a world that’s very scary and very uncertain and very dangerous for people
  • In the words of a handbook on climate psychology, “Paying heed to what is happening in our communities and across the globe is a healthier response than turning away in denial or disavowal.”
  • But this, too, raises difficult questions. “How much do we normalize people to the system we’re in?” Weston asked. “And is that the definition of health?
  • Or is the definition of health resisting the things that are making us so unhappy? That’s the profound tension within our field.”
  • “It seems to shift all the time, the sort of content and material that people are bringing in,” Alexandra Woollacott, a psychotherapist in Seattle, told the group. Sometimes it was a pervasive anxiety about the future, or trauma responses to fires or smoke or heat; other times, clients, especially young ones, wanted to vent their “sort of righteous anger and sense of betrayal” at the various powers that had built and maintained a society that was so destructive.
  • “I’m so glad that we have each other to process this,” she said, “because we’re humans living through this, too. I have my own trauma responses to it, I have my own grief process around it, I have my own fury at government and oil companies, and I think I don’t want to burden my clients with my own emotional response to it.”
  • In a field that has long emphasized boundaries, discouraging therapists from bringing their own issues or experiences into the therapy room, climate therapy offers a particular challenge: Separation can be harder when the problems at hand affect therapist and client alike
  • Some therapists I spoke to were worried about navigating the breakdown of barriers, while others had embraced it. “There is no place on the planet that won’t eventually be impacted, where client and therapist won’t be in it together,” a family therapist wrote in a CPA-NA newsletter. “Most therapists I know have become more vulnerable and self-disclosing in their practice.”
  • “If you look at or consider typical theoretical framings of something like post-traumatic growth, which is the understanding of this idea that people can sort of grow and become stronger and better after a traumatic event,” she said, then the climate crisis poses a dilemma because “there is no afterwards, right? There is no resolution anytime in our lifetimes to this crisis that we nonetheless have to build the capacities to face and to endure and to hopefully engage.”
  • many of her patients are also disconnected from the natural world, which means that they struggle to process or even recognize the grief and alienation that comes from living in a society that treats nature as other, a resource to be used and discarded.
  • “How,” she asked, “do you think about resilience apart from resolution?”
  • she believed this framing reflected and reinforced a bias inherent in a field that has long been most accessible to, and practiced by, the privileged. It was hardly new in the world, after all, to face the collapse of your entire way of life and still find ways to keep going.
  • Torres said that she sometimes takes her therapy sessions outside or asks patients to remember their earliest and deepest connections with animals or plants or places. She believes it will help if they learn to think of themselves “as rooted beings that aren’t just simply living in the human overlay on the environment.” It was valuable to recognize, she said, that “we are part of the land” and suffer when it suffers.
  • Torres described introducing her clients to methods — mindfulness, distress tolerance, emotion regulation — to help them manage acute feelings of stress or panic and to avoid the brittleness of burnout.
  • She also encourages them to narrativize the problem, including themselves as agents of change inside stories about how they came to be in this situation, and how they might make it different.
  • then she encourages them to find a community of other people who care about the same problems, with whom they could connect outside the therapy room. As Woollacott said earlier: “People who share your values. People who are committed to not looking away.”
  • Dwyer told the group that she had been thinking more about psychological adaptation as a form of climate mitigation
  • Therapy, she said, could be a way to steward human energy and creative capacities at a time when they’re most needed.
  • It was hard, Bryant told me when we first spoke, to do this sort of work without finding yourself asking bigger questions — namely, what was therapy actually about?
  • Many of the therapists I talked to spoke of their role not as “fixing” a patient’s problem or responding to a pathology, but simply giving their patients the tools to name and explore their most difficult emotions, to sit with painful feelings without instantly running away from them
  • many of the methods in their traditional tool kits continue to be useful in climate psychology. Anxiety and hopelessness and anger are all familiar territory, after all, with long histories of well-studied treatments.
  • They focused on trying to help patients develop coping skills and find meaning amid destabilization, to still see themselves as having agency and choice.
  • Weston, the therapist in New York, has had patients who struggle to be in a world that surrounds them with waste and trash, who experience panic because they can never find a place free of reminders of their society’s destruction
  • Weston said that she has trouble with the repeated refrain that a therapist and patient experiencing the same losses and dreads at the same time constitutes a major departure from traditional therapeutic practice
  • “I’m so excited by what you’re bringing in,” Woollacott replied. “I’m doing psychoanalytic training at the moment, and we study attachment theory” — how the stability of early emotional bonds affects future relationships and feelings of well-being. “But nowhere in the literature does it talk about our attachment to the land.”
  • Lately, Bryant told me, he’s been most excited about the work that happens outside the therapy room: places where groups of people gather to talk about their feelings and the future they’re facing
  • It was at such a meeting — a community event where people were brainstorming ways to adapt to climate chaos — that Weston, realizing she had concrete skills to offer, was inspired to rework her practice to focus on the challenge. She remembers finding the gathering empowering and energizing in a way she hadn’t experienced before. In such settings, it was automatic that people would feel embraced instead of isolated, natural that the conversation would start moving away from the individual and toward collective experiences and ideas.
  • There was no fully separate space, to be mended on its own. There was only a shared and broken world, and a community united in loving it.
Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic - 0 views

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • ...15 more annotations...
  • It tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. Facebook is reversing some of these steps now, but it cannot make people forget this toolbox exists in the future
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and introducing pop-up nudges to encourage users
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
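Millière’s memorize-then-generalize point can be caricatured in a few lines. This is a deliberately simplified sketch with invented names, not the actual experiment: a lookup table “trained” on a handful of addition problems is perfect on those exact problems but useless on anything new, while the rule the model eventually pivots to generalizes.

```python
# Caricature of memorization vs. rule-learning in a small model.
# `memorizer` corresponds to the early training phase (pure recall);
# `rule` corresponds to the concept the network learns once memorization
# stops improving its predictions.
train = {(2, 2): 4, (3, 5): 8, (1, 7): 8}  # hypothetical training problems

def memorizer(a: int, b: int):
    return train.get((a, b))  # returns None for any unseen problem

def rule(a: int, b: int) -> int:
    return a + b  # generalizes to problems never seen in training

print(memorizer(2, 2), memorizer(10, 20))  # recalls seen pairs, fails on new ones
print(rule(10, 20))                        # the learned rule handles both
```

The contrast is the whole point: memorization scores perfectly on the training set yet carries zero predictive power beyond it, which is why, as Millière suggests, a model under pressure to keep improving its predictions is eventually pushed toward the underlying concept.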
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Opinion | The OpenAI drama explains the human penchant for risk-taking - The Washington... - 0 views

  • Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bail out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.
  • Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots
  • Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.
  • ...13 more annotations...
  • OpenAI was trying to balance safety and development — a balance that became harder to maintain under the pressures of commercialization.
  • It was founded as a nonprofit by people who professed sincere concern about taking things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.
  • OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).
  • On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. Reportedly, the move was driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, has allegedly clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.
  • Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
  • Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.
  • a software company has little in the way of tangible assets; its people are its capital. And this capital looks willing to follow Altman to where the money is.
  • More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem
  • And that’s why we are probably not going to “solve” it so much as hope we don’t have to.
  • it’s also a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won.
  • When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen
  • But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.
  • Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends onboard. And like it or not, we’re all going with them.
Javier E

When the New York Times lost its way - 0 views

  • There are many reasons for Trump’s ascent, but changes in the American news media played a critical role. Trump’s manipulation and every one of his political lies became more powerful because journalists had forfeited what had always been most valuable about their work: their credibility as arbiters of truth and brokers of ideas, which for more than a century, despite all of journalism’s flaws and failures, had been a bulwark of how Americans govern themselves.
  • I think Sulzberger shares this analysis. In interviews and his own writings, including an essay earlier this year for the Columbia Journalism Review, he has defended “independent journalism”, or, as I understand him, fair-minded, truth-seeking journalism that aspires to be open and objective.
  • It’s good to hear the publisher speak up in defence of such values, some of which have fallen out of fashion not just with journalists at the Times and other mainstream publications but at some of the most prestigious schools of journalism.
  • ...204 more annotations...
  • All the empathy and humility in the world will not mean much against the pressures of intolerance and tribalism without an invaluable quality that Sulzberger did not emphasise: courage.
  • Sulzberger seems to underestimate the struggle he is in, that all journalism and indeed America itself is in
  • In describing the essential qualities of independent journalism in his essay, he unspooled a list of admirable traits – empathy, humility, curiosity and so forth. These qualities have for generations been helpful in contending with the Times’s familiar problem, which is liberal bias
  • on their own, these qualities have no chance against the Times’s new, more dangerous problem, which is in crucial respects the opposite of the old one.
  • The Times’s problem has metastasised from liberal bias to illiberal bias, from an inclination to favour one side of the national debate to an impulse to shut debate down altogether
  • the internet knocked the industry off its foundations. Local newspapers were the proving ground between college campuses and national newsrooms. As they disintegrated, the national news media lost a source of seasoned reporters and many Americans lost a journalism whose truth they could verify with their own eyes.
  • far more than when I set out to become a journalist, doing the work right today demands a particular kind of courage:
  • the moral and intellectual courage to take the other side seriously and to report truths and ideas that your own side demonises for fear they will harm its cause.
  • One of the glories of embracing illiberalism is that, like Trump, you are always right about everything, and so you are justified in shouting disagreement down.
  • leaders of many workplaces and boardrooms across America find that it is so much easier to compromise than to confront – to give a little ground today in the belief you can ultimately bring people around
  • This is how reasonable Republican leaders lost control of their party to Trump and how liberal-minded college presidents lost control of their campuses. And it is why the leadership of the New York Times is losing control of its principles.
  • Over the decades the Times and other mainstream news organisations failed plenty of times to live up to their commitments to integrity and open-mindedness. The relentless struggle against biases and preconceptions, rather than the achievement of a superhuman objective omniscience, is what mattered
  • . I thought, and still think, that no American institution could have a better chance than the Times, by virtue of its principles, its history, its people and its hold on the attention of influential Americans, to lead the resistance to the corruption of political and intellectual life, to overcome the encroaching dogmatism and intolerance.
  • As the country became more polarised, the national media followed the money by serving partisan audiences the versions of reality they preferred
  • This relationship proved self-reinforcing. As Americans became freer to choose among alternative versions of reality, their polarisation intensified.
  • as the top editors let bias creep into certain areas of coverage, such as culture, lifestyle and business, that made the core harder to defend and undermined the authority of even the best reporters.
  • There have been signs the Times is trying to recover the courage of its convictions
  • The paper was slow to display much curiosity about the hard question of the proper medical protocols for trans children; but once it did, the editors defended their coverage against the inevitable criticism.
  • As Sulzberger told me in the past, returning to the old standards will require agonising change. He saw that as the gradual work of many years, but I think he is mistaken. To overcome the cultural and commercial pressures the Times faces, particularly given the severe test posed by another Trump candidacy and possible presidency, its publisher and senior editors will have to be bolder than that.
  • As a Democrat from a family of Democrats, a graduate of Yale and a blossom of the imagined meritocracy, I had my first real chance, at Buchanan’s rallies, to see the world through the eyes of stalwart opponents of abortion, immigration and the relentlessly rising tide of modernity.
  • the Times is failing to face up to one crucial reason: that it has lost faith in Americans, too.
  • For now, to assert that the Times plays by the same rules it always has is to commit a hypocrisy that is transparent to conservatives, dangerous to liberals and bad for the country as a whole.
  • It makes the Times too easy for conservatives to dismiss and too easy for progressives to believe.
  • The reality is that the Times is becoming the publication through which America’s progressive elite talks to itself about an America that does not really exist.
  • It is hard to imagine a path back to saner American politics that does not traverse a common ground of shared fact.
  • It is equally hard to imagine how America’s diversity can continue to be a source of strength, rather than become a fatal flaw, if Americans are afraid or unwilling to listen to each other.
  • I suppose it is also pretty grandiose to think you might help fix all that. But that hope, to me, is what makes journalism worth doing.
  • Since Adolph Ochs bought the paper in 1896, one of the most inspiring things the Times has said about itself is that it does its work “without fear or favour”. That is not true of the institution today – it cannot be, not when its journalists are afraid to trust readers with a mainstream conservative argument such as Cotton’s, and its leaders are afraid to say otherwise.
  • Most important, the Times, probably more than any other American institution, could influence the way society approached debate and engagement with opposing views. If Times Opinion demonstrated the same kind of intellectual courage and curiosity that my colleagues at the Atlantic had shown, I hoped, the rest of the media would follow.
  • You did not have to go along with everything that any tribe said. You did not have to pretend that the good guys, much as you might have respected them, were right about everything, or that the bad guys, much as you might have disdained them, never had a point. You did not, in other words, ever have to lie.
  • This fundamental honesty was vital for readers, because it equipped them to make better, more informed judgments about the world. Sometimes it might shock or upset them by failing to conform to their picture of reality. But it also granted them the respect of acknowledging that they were able to work things out for themselves.
  • The Atlantic did not aspire to the same role as the Times. It did not promise to serve up the news of the day without any bias. But it was to opinion journalism what the Times’s reporting was supposed to be to news: honest and open to the world.
  • Those were the glory days of the blog, and we hit on the idea of creating a living op-ed page, a collective of bloggers with different points of view but a shared intellectual honesty who would argue out the meaning of the news of the day
  • They were brilliant, gutsy writers, and their disagreements were deep enough that I used to joke that my main work as editor was to prevent fistfights.
  • Under its owner, David Bradley, my colleagues and I distilled our purpose as publishing big arguments about big ideas
  • we also began producing some of the most important work in American journalism: Nicholas Carr on whether Google was “making us stupid”; Hanna Rosin on “the end of men”; Taylor Branch on “the shame of college sports”; Ta-Nehisi Coates on “the case for reparations”; Greg Lukianoff and Jonathan Haidt on “the coddling of the American mind”.
  • I was starting to see some effects of the new campus politics within the Atlantic. A promising new editor had created a digital form for aspiring freelancers to fill out, and she wanted to ask them to disclose their racial and sexual identity. Why? Because, she said, if we were to write about the trans community, for example, we would ask a trans person to write the story
  • There was a good argument for that, I acknowledged, and it sometimes might be the right answer. But as I thought about the old people, auto workers and abortion opponents I had learned from, I told her there was also an argument for correspondents who brought an outsider’s ignorance, along with curiosity and empathy, to the story.
  • A journalism that starts out assuming it knows the answers, it seemed to me then, and seems even more so to me now, can be far less valuable to the reader than a journalism that starts out with a humbling awareness that it knows nothing.
  • In the age of the internet it is hard even for a child to sustain an “innocent eye”, but the alternative for journalists remains as dangerous as ever, to become propagandists. America has more than enough of those already.
  • When I looked around the Opinion department, change was not what I perceived. Excellent writers and editors were doing excellent work. But the department’s journalism was consumed with politics and foreign affairs in an era when readers were also fascinated by changes in technology, business, science and culture.
  • Fairly quickly, though, I realised two things: first, that if I did my job as I thought it should be done, and as the Sulzbergers said they wanted me to do it, I would be too polarising internally ever to lead the newsroom; second, that I did not want that job, though no one but my wife believed me when I said that.
  • there was a compensating moral and psychological privilege that came with aspiring to journalistic neutrality and open-mindedness, despised as they might understandably be by partisans. Unlike the duelling politicians and advocates of all kinds, unlike the corporate chieftains and their critics, unlike even the sainted non-profit workers, you did not have to pretend things were simpler than they actually were
  • On the right and left, America’s elites now talk within their tribes, and get angry or contemptuous on those occasions when they happen to overhear the other conclave. If they could be coaxed to agree what they were arguing about, and the rules by which they would argue about it, opinion journalism could serve a foundational need of the democracy by fostering diverse and inclusive debate. Who could be against that?
  • The large staff of op-ed editors contained only a couple of women. Although the 11 columnists were individually admirable, only two of them were women and only one was a person of colour
  • Not only did they all focus on politics and foreign affairs, but during the 2016 campaign, no columnist shared, in broad terms, the worldview of the ascendant progressives of the Democratic Party, incarnated by Bernie Sanders. And only two were conservative.
  • This last fact was of particular concern to the elder Sulzberger. He told me the Times needed more conservative voices, and that its own editorial line had become predictably left-wing. “Too many liberals,” read my notes about the Opinion line-up from a meeting I had with him and Mark Thompson, then the chief executive, as I was preparing to rejoin the paper. “Even conservatives are liberals’ idea of a conservative.” The last note I took from that meeting was: “Can’t ignore 150m conservative Americans.”
  • As I knew from my time at the Atlantic, this kind of structural transformation can be frightening and even infuriating for those understandably proud of things as they are. It is hard on everyone
  • experience at the Atlantic also taught me that pursuing new ways of doing journalism in pursuit of venerable institutional principles created enthusiasm for change. I expected that same dynamic to allay concerns at the Times.
  • If Opinion published a wider range of views, it would help frame a set of shared arguments that corresponded to, and drew upon, the set of shared facts coming from the newsroom.
  • New progressive voices were celebrated within the Times. But in contrast to the Wall Street Journal and the Washington Post, conservative voices – even eloquent anti-Trump conservative voices – were despised, regardless of how many leftists might surround them.
  • The Opinion department mocked the paper’s claim to value diversity. It did not have a single black editor
  • Eventually, it sank in that my snotty joke was actually on me: I was the one ignorantly fighting a battle that was already lost. The old liberal embrace of inclusive debate that reflected the country’s breadth of views had given way to a new intolerance for the opinions of roughly half of American voters.
  • Out of naivety or arrogance, I was slow to recognise that at the Times, unlike at the Atlantic, these values were no longer universally accepted, let alone esteemed
  • After the 9/11 attacks, as the bureau chief in Jerusalem, I spent a lot of time in the Gaza Strip interviewing Hamas leaders, recruiters and foot soldiers, trying to understand and describe their murderous ideology. Some readers complained that I was providing a platform for terrorists, but there was never any objection from within the Times.
  • Our role, we knew, was to help readers understand such threats, and this required empathetic – not sympathetic – reporting. This is not an easy distinction but good reporters make it: they learn to understand and communicate the sources and nature of a toxic ideology without justifying it, much less advocating it.
  • Today’s newsroom turns that moral logic on its head, at least when it comes to fellow Americans. Unlike the views of Hamas, the views of many Americans have come to seem dangerous to engage in the absence of explicit condemnation
  • Focusing on potential perpetrators – “platforming” them by explaining rather than judging their views – is believed to empower them to do more harm.
  • After the profile of the Ohio man was published, media Twitter lit up with attacks on the article as “normalising” Nazism and white nationalism, and the Times convulsed internally. The Times wound up publishing a cringing editor’s note that hung the writer out to dry and approvingly quoted some of the criticism, including a tweet from a Washington Post opinion editor asking, “Instead of long, glowing profiles of Nazis/White nationalists, why don’t we profile the victims of their ideologies”?
  • the Times lacked the confidence to defend its own work
  • The editor’s note paraded the principle of publishing such pieces, saying it was important to “shed more light, not less, on the most extreme corners of American life”. But less light is what the readers got. As a reporter in the newsroom, you’d have to have been an idiot after that explosion to attempt such a profile
  • Empathetic reporting about Trump supporters became even more rare. It became a cliché among influential left-wing columnists and editors that blinkered political reporters interviewed a few Trump supporters in diners and came away suckered into thinking there was something besides racism that could explain anyone’s support for the man.
  • After a year spent publishing editorials attacking Trump and his policies, I thought it would be a demonstration of Timesian open-mindedness to give his supporters their say. Also, I thought the letters were interesting, so I turned over the entire editorial page to the Trump letters.
  • I wasn’t surprised that we got some criticism on Twitter. But I was astonished by the fury of my Times colleagues. I found myself facing an angry internal town hall, trying to justify what to me was an obvious journalistic decision
  • Didn’t he think other Times readers should understand the sources of Trump’s support? Didn’t he also see it was a wonderful thing that some Trump supporters did not just dismiss the Times as fake news, but still believed in it enough to respond thoughtfully to an invitation to share their views?
  • And if the Times could not bear to publish the views of Americans who supported Trump, why should it be surprised that those voters would not trust it?
  • Two years later, in 2020, Baquet acknowledged that in 2016 the Times had failed to take seriously the idea that Trump could become president partly because it failed to send its reporters out into America to listen to voters and understand “the turmoil in the country”. And, he continued, the Times still did not understand the views of many Americans
  • Speaking four months before we published the Cotton op-ed, he said that to argue that the views of such voters should not appear in the Times was “not journalistic”.
  • Conservative arguments in the Opinion pages reliably started uproars within the Times. Sometimes I would hear directly from colleagues who had the grace to confront me with their concerns; more often they would take to the company’s Slack channels or Twitter to advertise their distress in front of each other
  • This environment of enforced group-think, inside and outside the paper, was hard even on liberal opinion writers. One left-of-centre columnist told me that he was reluctant to appear in the New York office for fear of being accosted by colleagues.
  • An internal survey shortly after I left the paper found that barely half the staff, within an enterprise ostensibly devoted to telling the truth, agreed “there is a free exchange of views in this company” and “people are not afraid to say what they really think”.
  • Even columnists with impeccable leftist bona fides recoiled from tackling subjects when their point of view might depart from progressive orthodoxy.
  • The bias had become so pervasive, even in the senior editing ranks of the newsroom, as to be unconscious
  • Trying to be helpful, one of the top newsroom editors urged me to start attaching trigger warnings to pieces by conservatives. It had not occurred to him how this would stigmatise certain colleagues, or what it would say to the world about the Times’s own bias
  • By their nature, information bubbles are powerfully self-reinforcing, and I think many Times staff have little idea how closed their world has become, or how far they are from fulfilling their compact with readers to show the world “without fear or favour”
  • sometimes the bias was explicit: one newsroom editor told me that, because I was publishing more conservatives, he felt he needed to push his own department further to the left.
  • The Times’s failure to honour its own stated principles of openness to a range of views was particularly hard on the handful of conservative writers, some of whom would complain about being flyspecked and abused by colleagues. One day when I relayed a conservative’s concern about double standards to Sulzberger, he lost his patience. He told me to inform the complaining conservative that that’s just how it was: there was a double standard and he should get used to it.
  • A publication that promises its readers to stand apart from politics should not have different standards for different writers based on their politics. But I delivered the message. There are many things I regret about my tenure as editorial-page editor. That is the only act of which I am ashamed.
  • I began to think of myself not as a benighted veteran on a remote island, but as Rip Van Winkle. I had left one newspaper, had a pleasant dream for ten years, and returned to a place I barely recognised.
  • The new New York Times was the product of two shocks – sudden collapse, and then sudden success. The paper almost went bankrupt during the financial crisis, and the ensuing panic provoked a crisis of confidence among its leaders. Digital competitors like the HuffPost were gaining readers and winning plaudits within the media industry as innovative. They were the cool kids; Times folk were ink-stained wrinklies.
  • In its panic, the Times bought out experienced reporters and editors and began hiring journalists from publications like the HuffPost who were considered “digital natives” because they had never worked in print. This hiring quickly became easier, since most digital publications financed by venture capital turned out to be bad businesses
  • Though they might have lacked deep or varied reporting backgrounds, some of the Times’s new hires brought skills in video and audio; others were practised at marketing themselves – building their brands, as journalists now put it – in social media. Some were brilliant and fiercely honest, in keeping with the old aspirations of the paper.
  • critically, the Times abandoned its practice of acculturation, including those months-long assignments on Metro covering cops and crime or housing. Many new hires who never spent time in the streets went straight into senior writing and editing roles.
  • All these recruits arrived with their own notions of the purpose of the Times. To me, publishing conservatives helped fulfil the paper’s mission; to them, I think, it betrayed that mission.
  • then, to the shock and horror of the newsroom, Trump won the presidency. In his article for Columbia Journalism Review, Sulzberger cites the Times’s failure to take Trump’s chances seriously as an example of how “prematurely shutting down inquiry and debate” can allow “conventional wisdom to ossify in a way that blinds society.”
  • Many Times staff members – scared, angry – assumed the Times was supposed to help lead the resistance. Anxious for growth, the Times’s marketing team implicitly endorsed that idea, too.
  • As the number of subscribers ballooned, the marketing department tracked their expectations, and came to a nuanced conclusion. More than 95% of Times subscribers described themselves as Democrats or independents, and a vast majority of them believed the Times was also liberal
  • A similar majority applauded that bias; it had become “a selling point”, reported one internal marketing memo. Yet at the same time, the marketers concluded, subscribers wanted to believe that the Times was independent.
  • As that memo argued, even if the Times was seen as politically to the left, it was critical to its brand also to be seen as broadening its readers’ horizons, and that required “a perception of independence”.
  • Readers could cancel their subscriptions if the Times challenged their worldview by reporting the truth without regard to politics. As a result, the Times’s long-term civic value was coming into conflict with the paper’s short-term shareholder value
  • The Times has every right to pursue the commercial strategy that makes it the most money. But leaning into a partisan audience creates a powerful dynamic. Nobody warned the new subscribers to the Times that it might disappoint them by reporting truths that conflicted with their expectations
  • When your product is “independent journalism”, that commercial strategy is tricky, because too much independence might alienate your audience, while too little can lead to charges of hypocrisy that strike at the heart of the brand.
  • It became one of Dean Baquet’s frequent mordant jokes that he missed the old advertising-based business model, because, compared with subscribers, advertisers felt so much less sense of ownership over the journalism
  • The Times was slow to break it to its readers that there was less to Trump’s ties to Russia than they were hoping, and more to Hunter Biden’s laptop, that Trump might be right that covid came from a Chinese lab, that masks were not always effective against the virus, that shutting down schools for many months was a bad idea.
  • there has been a sea change over the past ten years in how journalists think about pursuing justice. The reporters’ creed used to have its foundation in liberalism, in the classic philosophical sense. The exercise of a reporter’s curiosity and empathy, given scope by the constitutional protections of free speech, would equip readers with the best information to form their own judgments. The best ideas and arguments would win out
  • The journalist’s role was to be a sworn witness; the readers’ role was to be judge and jury. In its idealised form, journalism was lonely, prickly, unpopular work, because it was only through unrelenting scepticism and questioning that society could advance. If everyone the reporter knew thought X, the reporter’s role was to ask: why X?
  • Illiberal journalists have a different philosophy, and they have their reasons for it. They are more concerned with group rights than individual rights, which they regard as a bulwark for the privileges of white men. They have seen the principle of free speech used to protect right-wing outfits like Project Veritas and Breitbart News and are uneasy with it.
  • They had their suspicions of their fellow citizens’ judgment confirmed by Trump’s election, and do not believe readers can be trusted with potentially dangerous ideas or facts. They are not out to achieve social justice as the knock-on effect of pursuing truth; they want to pursue it head-on
  • The term “objectivity” to them is code for ignoring the poor and weak and cosying up to power, as journalists often have done.
  • And they do not just want to be part of the cool crowd. They need to be
  • To be more valued by their peers and their contacts – and hold sway over their bosses – they need a lot of followers in social media. That means they must be seen to applaud the right sentiments of the right people in social media
  • The journalist from central casting used to be a loner, contrarian or a misfit. Now journalism is becoming another job for joiners, or, to borrow Twitter’s own parlance, “followers”, a term that mocks the essence of a journalist’s role.
  • The new newsroom ideology seems idealistic, yet it has grown from cynical roots in academia: from the idea that there is no such thing as objective truth; that there is only narrative, and that therefore whoever controls the narrative – whoever gets to tell the version of the story that the public hears – has the whip hand
  • What matters, in other words, is not truth and ideas in themselves, but the power to determine both in the public mind.
  • By contrast, the old newsroom ideology seems cynical on its surface. It used to bug me that my editors at the Times assumed every word out of the mouth of any person in power was a lie.
  • And the pursuit of objectivity can seem reptilian, even nihilistic, in its abjuration of a fixed position in moral contests. But the basis of that old newsroom approach was idealistic: the notion that power ultimately lies in truth and ideas, and that the citizens of a pluralistic democracy, not leaders of any sort, must be trusted to judge both.
  • Our role in Times Opinion, I used to urge my colleagues, was not to tell people what to think, but to help them fulfil their desire to think for themselves.
  • It seems to me that putting the pursuit of truth, rather than of justice, at the top of a publication’s hierarchy of values also better serves not just truth but justice, too
  • over the long term journalism that is not also sceptical of the advocates of any form of justice and the programmes they put forward, and that does not struggle honestly to understand and explain the sources of resistance, will not assure that those programmes will work, and it also has no legitimate claim to the trust of reasonable people who see the world very differently. Rather than advance understanding and durable change, it provokes backlash.
  • The impatience within the newsroom with such old ways was intensified by the generational failure of the Times to hire and promote women and non-white people
  • Pay attention if you are white at the Times and you will hear black editors speak of hiring consultants at their own expense to figure out how to get white staff to respect them
  • As wave after wave of pain and outrage swept through the Times, over a headline that was not damning enough of Trump or someone’s obnoxious tweets, I came to think of the people who were fragile, the ones who were caught up in Slack or Twitter storms, as people who had only recently discovered that they were white and were still getting over the shock.
  • Having concluded they had got ahead by working hard, it has been a revelation to them that their skin colour was not just part of the wallpaper of American life, but a source of power, protection and advancement.
  • I share the bewilderment that so many people could back Trump, given the things he says and does, and that makes me want to understand why they do: the breadth and diversity of his support suggests not just racism is at work. Yet these elite, well-meaning Times staff cannot seem to stretch the empathy they are learning to extend to people with a different skin colour to include those, of whatever race, who have different politics.
  • The digital natives were nevertheless valuable, not only for their skills but also because they were excited for the Times to embrace its future. That made them important allies of the editorial and business leaders as they sought to shift the Times to digital journalism and to replace staff steeped in the ways of print. Partly for that reason, and partly out of fear, the leadership indulged internal attacks on Times journalism, despite pleas from me and others, to them and the company as a whole, that Times folk should treat each other with more respect
  • My colleagues and I in Opinion came in for a lot of the scorn, but we were not alone. Correspondents in the Washington bureau and political reporters would take a beating, too, when they were seen as committing sins like “false balance” because of the nuance in their stories.
  • My fellow editorial and commercial leaders were well aware of how the culture of the institution had changed. As delighted as they were by the Times’s digital transformation they were not blind to the ideological change that came with it. They were unhappy with the bullying and group-think; we often discussed such cultural problems in the weekly meetings of the executive committee, composed of the top editorial and business leaders, including the publisher. Inevitably, these bitch sessions would end with someone saying a version of: “Well, at some point we have to tell them this is what we believe in as a newspaper, and if they don’t like it they should work somewhere else.” It took me a couple of years to realise that this moment was never going to come.
  • There is a lot not to miss about the days when editors like Boyd could strike terror in young reporters like me and Purdum. But the pendulum has swung so far in the other direction that editors now tremble before their reporters and even their interns. “I miss the old climate of fear,” Baquet used to say with a smile, in another of his barbed jokes.
  • I wish I’d pursued my point and talked myself out of the job. This contest over control of opinion journalism within the Times was not just a bureaucratic turf battle (though it was that, too)
  • The newsroom’s embrace of opinion journalism has compromised the Times’s independence, misled its readers and fostered a culture of intolerance and conformity.
  • The Opinion department is a relic of the era when the Times enforced a line between news and opinion journalism.
  • Editors in the newsroom did not touch opinionated copy, lest they be contaminated by it, and opinion journalists and editors kept largely to their own, distant floor within the Times building. Such fastidiousness could seem excessive, but it enforced an ethos that Times reporters owed their readers an unceasing struggle against bias in the news
  • But by the time I returned as editorial-page editor, more opinion columnists and critics were writing for the newsroom than for Opinion. As at the cable news networks, the boundaries between commentary and news were disappearing, and readers had little reason to trust that Times journalists were resisting rather than indulging their biases
  • The Times newsroom had added more cultural critics, and, as Baquet noted, they were free to opine about politics.
  • Departments across the Times newsroom had also begun appointing their own “columnists”, without stipulating any rules that might distinguish them from columnists in Opinion
  • (I checked to see if, since I left the Times, it had developed guidelines explaining the difference, if any, between a news columnist and opinion columnist. The paper’s spokeswoman, Danielle Rhoades Ha, did not respond to the question.)
  • The internet rewards opinionated work and, as news editors felt increasing pressure to generate page views, they began not just hiring more opinion writers but also running their own versions of opinionated essays by outside voices – historically, the province of Opinion’s op-ed department.
  • Yet because the paper continued to honour the letter of its old principles, none of this work could be labelled “opinion” (it still isn’t). After all, it did not come from the Opinion department.
  • And so a newsroom technology columnist might call for, say, unionisation of the Silicon Valley workforce, as one did, or an outside writer might argue in the business section for reparations for slavery, as one did, and to the average reader their work would appear indistinguishable from Times news articles.
  • By similarly circular logic, the newsroom’s opinion journalism breaks another of the Times’s commitments to its readers. Because the newsroom officially does not do opinion – even though it openly hires and publishes opinion journalists – it feels free to ignore Opinion’s mandate to provide a diversity of views
  • When I was editorial-page editor, there were a couple of newsroom columnists whose politics were not obvious. But the other newsroom columnists, and the critics, read as passionate progressives.
  • I urged Baquet several times to add a conservative to the newsroom roster of cultural critics. That would serve the readers by diversifying the Times’s analysis of culture, where the paper’s left-wing bias had become most blatant, and it would show that the newsroom also believed in restoring the Times’s commitment to taking conservatives seriously. He said this was a good idea, but he never acted on it
  • I couldn’t help trying the idea out on one of the paper’s top cultural editors, too: he told me he did not think Times readers would be interested in that point of view.
  • opinion was spreading through the newsroom in other ways. News desks were urging reporters to write in the first person and to use more “voice”, but few newsroom editors had experience in handling that kind of journalism, and no one seemed certain where “voice” stopped and “opinion” began
  • The Times magazine, meanwhile, became a crusading progressive publication
  • Baquet liked to say the magazine was Switzerland, by which he meant that it sat between the newsroom and Opinion. But it reported only to the news side. Its work was not labelled as opinion and it was free to omit conservative viewpoints.
  • This creep of politics into the newsroom’s journalism helped the Times beat back some of its new challengers, at least those on the left
  • Competitors like Vox and the HuffPost were blending leftish politics with reporting and writing it up conversationally in the first person. Imitating their approach, along with hiring some of their staff, helped the Times repel them. But it came at a cost. The rise of opinion journalism over the past 15 years changed the newsroom’s coverage and its culture
  • The tiny redoubt of never-Trump conservatives in Opinion is swamped daily not only by the many progressives in that department but their reinforcements among the critics, columnists and magazine writers in the newsroom
  • They are generally excellent, but their homogeneity means Times readers are being served a very restricted range of views, some of them presented as straight news by a publication that still holds itself out as independent of any politics.
  • And because the critics, newsroom columnists and magazine writers are the newsroom’s most celebrated journalists, they have disproportionate influence over the paper’s culture.
  • By saying that it still holds itself to the old standard of strictly separating its news and opinion journalists, the paper leads its readers further into the trap of thinking that what they are reading is independent and impartial – and this misleads them about their country’s centre of political and cultural gravity.
  • And yet the Times insists to the public that nothing has changed.
  • “Even though each day’s opinion pieces are typically among our most popular journalism and our columnists are among our most trusted voices, we believe opinion is secondary to our primary mission of reporting and should represent only a portion of a healthy news diet,” Sulzberger wrote in the Columbia Journalism Review. “For that reason, we’ve long kept the Opinion department intentionally small – it represents well under a tenth of our journalistic staff – and ensured that its editorial decision-making is walled off from the newsroom.”
  • When I was editorial-page editor, Sulzberger, who declined to be interviewed on the record for this article, worried a great deal about the breakdown in the boundaries between news and opinion
  • He told me once that he would like to restructure the paper to have one editor oversee all its news reporters, another all its opinion journalists and a third all its service journalists, the ones who supply guidance on buying gizmos or travelling abroad. Each of these editors would report to him
  • That is the kind of action the Times needs to take now to confront its hypocrisy and begin restoring its independence.
  • The Times could learn something from the Wall Street Journal, which has kept its journalistic poise
  • It has maintained a stricter separation between its news and opinion journalism, including its cultural criticism, and that has protected the integrity of its work.
  • After I was chased out of the Times, Journal reporters and other staff attempted a similar assault on their opinion department. Some 280 of them signed a letter listing pieces they found offensive and demanding changes in how their opinion colleagues approached their work. “Their anxieties aren’t our responsibility,” shrugged the Journal’s editorial board in a note to readers after the letter was leaked. “The signers report to the news editors or other parts of the business.” The editorial added, in case anyone missed the point, “We are not the New York Times.” That was the end of it.
  • Unlike the publishers of the Journal, however, Sulzberger is in a bind, or at least perceives himself to be
  • The confusion within the Times over its role, and the rising tide of intolerance among the reporters, the engineers, the business staff, even the subscribers – these are all problems he inherited, in more ways than one. He seems to feel constrained in confronting the paper’s illiberalism by the very source of his authority
  • The paradox is that in previous generations the Sulzbergers’ control was the bulwark of the paper’s independence.
  • if he is going to instil the principles he believes in, he needs to stop worrying so much about his powers of persuasion, and start using the power he is so lucky to have.
  • Shortly after we published the op-ed that Wednesday afternoon, some reporters tweeted their opposition to Cotton’s argument. But the real action was in the Times’s Slack channels, where reporters and other staff began not just venting but organising. They turned to the union to draw up a workplace complaint about the op-ed.
  • The next day, this reporter shared the byline on the Times story about the op-ed. That article did not mention that Cotton had distinguished between “peaceful, law-abiding protesters” and “rioters and looters”. In fact, the first sentence reported that Cotton had called for “the military to suppress protests against police violence”.
  • This was – and is – wrong. You don’t have to take my word for that. You can take the Times’s
  • Three days later in its article on my resignation it also initially reported that Cotton had called “for military force against protesters in American cities”. This time, after the article was published on the Times website, the editors scrambled to rewrite it, replacing “military force” with “military response” and “protesters” with “civic unrest”
  • That was a weaselly adjustment – Cotton wrote about criminality, not “unrest” – but the article at least no longer unambiguously misrepresented Cotton’s argument to make it seem he was in favour of crushing democratic protest. The Times did not publish a correction or any note acknowledging the story had been changed.
  • Seeking to influence the outcome of a story you cover, particularly without disclosing that to the reader, violates basic principles I was raised on at the Times
  • Rhoades Ha disputes my characterisation of the after-the-fact editing of the story about my resignation. She said the editors changed the story after it was published on the website in order to “refine” it and “add context”, and so the story did not merit a correction disclosing to the reader that changes had been made.
  • In retrospect what seems almost comical is that as the conflict over Cotton’s op-ed unfolded within the Times I acted as though it was on the level, as though the staff of the Times would have a good-faith debate about Cotton’s piece and the decision to publish it
  • Instead, people wanted to vent and achieve what they considered to be justice, whether through Twitter, Slack, the union or the news pages themselves
  • My colleagues in Opinion, together with the PR team, put together a series of connected tweets describing the purpose behind publishing Cotton’s op-ed. Rather than publish these tweets from the generic Times Opinion Twitter account, Sulzberger encouraged me to do it from my personal one, on the theory that this would humanise our defence. I doubted that would make any difference, but it was certainly my job to take responsibility. So I sent out the tweets, sticking my head in a Twitter bucket that clangs, occasionally, to this day
  • What is worth recalling now from the bedlam of the next two days? I suppose there might be lessons for someone interested in how not to manage a corporate crisis. I began making my own mistakes that Thursday. The union condemned our publication of Cotton, for supposedly putting journalists in danger, claiming that he had called on the military “to ‘detain’ and ‘subdue’ Americans protesting racism and police brutality” – again, a misrepresentation of his argument. The publisher called to tell me the company was experiencing its largest sick day in history; people were turning down job offers because of the op-ed, and, he said, some people were quitting. He had been expecting for some time that the union would seek a voice in editorial decision-making; he said he thought this was the moment the union was making its move. He had clearly changed his own mind about the value of publishing the Cotton op-ed.
  • I asked Dao to have our fact-checkers review the union’s claims. But then I went a step further: at the publisher’s request, I urged him to review the editing of the piece itself and come back to me with a list of steps we could have taken to make it better. Dao’s reflex – the correct one – was to defend the piece as published. He and three other editors of varying ages, genders and races had helped edit it; it had been fact-checked, as is all our work
  • This was my last failed attempt to have the debate within the Times that I had been seeking for four years, about why it was important to present Times readers with arguments like Cotton’s. The staff at the paper never wanted to have that debate. The Cotton uproar was the most extreme version of the internal reaction we faced whenever we published conservative arguments that were not simply anti-Trump. Yes, yes, of course we believe in the principle of publishing diverse views, my Times colleagues would say, but why this conservative? Why this argument?
  • I doubt these changes would have mattered, and to extract this list from Dao was to engage in precisely the hypocrisy I claimed to despise – that, in fact, I do despise. If Cotton needed to be held to such standards of politesse, so did everyone else. Headlines such as “Tom Cotton’s Fascist Op-ed”, the headline of a subsequent piece, should also have been tranquillised.
  • As that miserable Thursday wore on, Sulzberger, Baquet and I held a series of Zoom meetings with reporters and editors from the newsroom who wanted to discuss the op-ed. Though a handful of the participants were there to posture, these were generally constructive conversations. A couple of people, including Baquet, even had the guts to speak up in favour of publishing the op-ed
  • Two moments stick out. At one point, in answer to a question, Sulzberger and Baquet both said they thought the op-ed – as the Times union and many journalists were saying – had in fact put journalists in danger. That was the first time I realised I might be coming to the end of the road.
  • The other was when a pop-culture reporter asked if I had read the op-ed before it was published. I said I had not. He immediately put his head down and started typing, and I should have paid attention rather than moving on to the next question. He was evidently sharing the news with the company over Slack.
  • Every job review I had at the Times urged me to step back from the daily coverage to focus on the long term. (Hilariously, one review, urging me to move faster in upending the Opinion department, instructed me to take risks and “ask for forgiveness not permission”.)
  • I learned when these meetings were over that there had been a new eruption in Slack. Times staff were saying that Rubenstein had been the sole editor of the op-ed. In response, Dao had gone into Slack to clarify to the entire company that he had also edited it himself. But when the Times posted the news article that evening, it reported, “The Op-Ed was edited by Adam Rubenstein” and made no mention of Dao’s statement
  • Early that morning, I got an email from Sam Dolnick, a Sulzberger cousin and a top editor at the paper, who said he felt “we” – he could have only meant me – owed the whole staff “an apology for appearing to place an abstract idea like open debate over the value of our colleagues’ lives, and their safety”. He was worried that I and my colleagues had unintentionally sent a message to other people at the Times that: “We don’t care about their full humanity and their security as much as we care about our ideas.”
  • “I know you don’t like it when I talk about principles at a moment like this,” I began. But I viewed the journalism I had been doing, at the Times and before that at the Atlantic, in very different terms from the ones Dolnick presumed. “I don’t think of our work as an abstraction without meaning for people’s lives – quite the opposite,” I continued. “The whole point – the reason I do this – is to have an impact on their lives to the good. I have always believed that putting ideas, including potentially dangerous one[s], out in the public is vital to ensuring they are debated and, if dangerous, discarded.” It was, I argued, in “edge cases like this that principles are tested”, and if my position was judged wrong then “I am out of step with the times.” But, I concluded, “I don’t think of us as some kind of debating society without implications for the real world and I’ve never been unmindful of my colleagues’ humanity.”
  • in the end, one thing he and I surely agree on is that I was, in fact, out of step with the Times. It may have raised me as a journalist – and invested so much in educating me to what were once its standards – but I did not belong there any more.
  • Finally, I came up with something that felt true. I told the meeting that I was sorry for the pain that my leadership of Opinion had caused. What a pathetic thing to say. I did not think to add, because I’d lost track of this truth myself by then, that opinion journalism that never causes pain is not journalism. It can’t hope to move society forward
  • As I look back at my notes of that awful day, I don’t regret what I said. Even during that meeting, I was still hoping the blow-up might at last give me the chance either to win support for what I had been asked to do, or to clarify once and for all that the rules for journalism had changed at the Times.
  • But no one wanted to talk about that. Nor did they want to hear about all the voices of vulnerable or underprivileged people we had been showcasing in Opinion, or the ambitious new journalism we were doing. Instead, my Times colleagues demanded to know things such as the names of every editor who had had a role in the Cotton piece. Having seen what happened to Rubenstein I refused to tell them. A Slack channel had been set up to solicit feedback in real time during the meeting, and it was filling with hate. The meeting ran long, and finally came to a close after 90 minutes.
  • I tried to insist, as did Dao, that the note make clear the Cotton piece was within our editorial bounds. Sulzberger said he felt the Times could afford to be “silent” on that question. In the end the note went far further in repudiating the piece than I anticipated, saying it should never have been published at all. The next morning I was told to resign.
  • It was a terrible moment for the country. By the traditional – and perverse – logic of journalism, that should also have made it an inspiring time to be a reporter, writer or editor. Journalists are supposed to run towards scenes that others are fleeing, towards hard truths others need to know, towards consequential ideas they would prefer to ignore.
  • But fear got all mixed up with anger inside the Times, too, along with a desire to act locally in solidarity with the national movement. That energy found a focus in the Cotton op-ed
  • the Times is not good at acknowledging mistakes. Indeed, one of my own, within the Times culture, was to take responsibility for any mistakes my department made, and even some it didn’t
  • To Sulzberger, the meltdown over Cotton’s op-ed and my departure in disgrace are explained and justified by a failure of editorial “process”. As he put it in an interview with the New Yorker this summer, after publishing his piece in the Columbia Journalism Review, Cotton’s piece was not “perfectly fact-checked” and the editors had not “thought about the headline and presentation”. He contrasted the execution of Cotton’s opinion piece with that of a months-long investigation the newsroom did of Donald Trump’s taxes (which was not “perfectly fact-checked”, as it happens – it required a correction). He did not explain why, if the Times was an independent publication, an op-ed making a mainstream conservative argument should have to meet such different standards from an op-ed making any other kind of argument, such as for the abolition of the police
  • “It’s not enough just to have the principle and wave it around,” he said. “You also have to execute on it.”
  • To me, extolling the virtue of independent journalism in the pages of the Columbia Journalism Review is how you wave a principle around. Publishing a piece like Cotton’s is how you execute on it.
  • As Sulzberger also wrote in the Review, “Independent journalism, especially in a pluralistic democracy, should err on the side of treating areas of serious political contest as open, unsettled, and in need of further inquiry.
  • If Sulzberger must insist on comparing the execution of the Cotton op-ed with that of the most ambitious of newsroom projects, let him compare it with something really important, the 1619 Project, which commemorated the 400th anniversary of the arrival of enslaved Africans in Virginia.
  • Like Cotton’s piece, the 1619 Project was fact-checked and copy-edited (most of the Times newsroom does not fact-check or copy-edit articles, but the magazine does). But it nevertheless contained mistakes, as journalism often does. Some of these mistakes ignited a firestorm among historians and other readers.
  • And, like Cotton’s piece, the 1619 Project was presented in a way the Times later judged to be too provocative.
  • The Times declared that the 1619 Project “aims to reframe the country’s history, understanding 1619 as our true founding”. That bold statement – a declaration of Times fact, not opinion, since it came from the newsroom – outraged many Americans who venerated 1776 as the founding. The Times later stealthily erased it from the digital version of the project, but was caught doing so by a writer for the publication Quillette. Sulzberger told me during the initial uproar that the top editors in the newsroom – not just Baquet but his deputy – had not reviewed the audacious statement of purpose, one of the biggest editorial claims the paper has ever made. They also, of course, did not edit all the pieces themselves, trusting the magazine’s editors to do that work.
  • If the 1619 Project and the Cotton op-ed shared the same supposed flaws and excited similar outrage, how come that one is lauded as a landmark success and the other is a sackable offence?
  • I am comparing them only to meet Sulzberger on his terms, in order to illuminate what he is trying to elide. What distinguished the Cotton piece was not an error, or strong language, or that I didn’t edit it personally. What distinguished that op-ed was not process. It was politics.
  • It is one thing for the Times to aggravate historians, or conservatives, or even old-school liberals who believe in open debate. It has become quite another for the Times to challenge some members of its own staff with ideas that might contradict their view of the world.
  • The lessons of the incident are not about how to write a headline but about how much the Times has changed – how digital technology, the paper’s new business model and the rise of new ideals among its staff have altered its understanding of the boundary between news and opinion, and of the relationship between truth and justice
  • Ejecting me was one way to avoid confronting the question of which values the Times is committed to. Waving around the word “process” is another.
  • As he asserts the independence of Times journalism, Sulzberger is finding it necessary to reach back several years to another piece I chose to run, for proof that the Times remains willing to publish views that might offend its staff. “We’ve published a column by the head of the part of the Taliban that kidnapped one of our own journalists,” he told the New Yorker. He is missing the real lesson of that piece, as well.
  • The case against that piece is that Haqqani, who remains on the FBI’s most-wanted terrorist list, may have killed Americans. It’s puzzling: in what moral universe can it be a point of pride to publish a piece by an enemy who may have American blood on his hands, and a matter of shame to publish a piece by an American senator arguing for American troops to protect Americans?
  • As Mitch McConnell, then the majority leader, said on the Senate floor about the Times’s panic over the Cotton op-ed, listing some other debatable op-ed choices, “Vladimir Putin? No problem. Iranian propaganda? Sure. But nothing, nothing could have prepared them for 800 words from the junior senator from Arkansas.”
  • The Times’s staff members are not often troubled by obnoxious views when they are held by foreigners. This is an important reason the paper’s foreign coverage, at least of some regions, remains exceptional.
  • What seems most important and least understood about that episode is that it demonstrated in real time the value of the ideals that I poorly defended in the moment, ideals that not just the Times’s staff but many other college-educated Americans are abandoning.
  • After all, we ran the experiment; we published the piece. Was any Times journalist hurt? No. Nobody in the country was. In fact, though it is impossible to know the op-ed’s precise effect, polling showed that support for a military option dropped after the Times published the essay, as the Washington Post’s media critic, Erik Wemple, has written
  • If anything, in other words, publishing the piece stimulated debate that made it less likely Cotton’s position would prevail. The liberal, journalistic principle of open debate was vindicated in the very moment the Times was fleeing from it.
Javier E

The Arrow in America's Heart - The New York Times - 0 views

  • But all these questions miss the point, the Buddha tells his disciple. What is important is pulling out that poison arrow, and tending to the wound.
  • “We need to be moved by the pain of all of the suffering. But it is important that we are not paralyzed by it,” Ms. Han said. “It makes us value life because we understand life is very precious, life is very brief, it can be extinguished in a single instant.”
  • Recent days have revealed an arrow lodged deep in the heart of America. It was exposed in the slaughter of 19 elementary school children and two teachers in Uvalde, and when a gunman steeped in white supremacist ideology killed 10 people at a Buffalo supermarket. The United States is a nation that has learned to live with mass shooting after mass shooting.
  • ...21 more annotations...
  • More than one million people have died from Covid, a once unimaginable figure
  • An increase in drug deaths, combined with Covid, has led overall life expectancy in America to decline to a degree not seen since World War II.
  • Police killings of unarmed Black men continue long past vows for reform.
  • “You can’t underestimate the need for belonging,” she said. When something terrible happens, people want to connect with their “in-group,” she said, where they feel they belong, which can push people further into partisan camps.
  • Rabbi Mychal B. Springer, the manager of clinical pastoral education at NewYork-Presbyterian Hospital, has found herself returning to an ancient Jewish writing in the Mishnah, which says that when God began creating, God created a single person.
  • “The teaching is, each person is so precious that the whole world is contained in that person, and we have to honor that person completely and fully,” she said. “If a single person dies, the whole world dies, and if a single person is saved, then the whole world is saved.”
  • We can only value life if we are willing to truly grieve, to truly face the reality of suffering
  • “It’s not that we don’t care. We’ve reached the limit of how much we can cry and hurt,” she said. “And yet we have to. We have to value each life as a whole world, and be willing to cry for what it means that that whole world has been lost.”
  • The mountain of calamities, and the paralysis over how to overcome it, points to a nation struggling over some fundamental questions: Has our tolerance as a country for such horror grown, dusting itself off after one event before moving on to the next? How much value do we place in a single human life?
  • Valuing life and working for healing means going outside of one’s self, and one’s own group, she said.
  • “This will require collective action,” she said. “And part of the problem is we are very divided right now.”
  • American culture often prizes individual liberty above collective needs. But ultimately humans are born to care about others and to not turn away,
  • “Human beings are born for meaning,” she said. “We have very, very large souls. We are born for generosity, we are born for compassion.”
  • What is standing in the way of a proper valuation of life, she said, is “our very, very disordered relationship with death.”
  • In the United States, denial of death has reached an extreme form, she said, where many focus on themselves to avoid the fear of death.
  • That fear cuts through “all tendrils of conscience, and common good, and capacity to act together,” she said, “because in the final analysis we have become animals saving our own skin, the way we seem to save our own skin is repression and dissociation.”
  • The United States is an outlier in the level of gun violence it tolerates. The rate and severity of mass shootings is without parallel in the world outside conflict zones. America has “a love affair with violence,”
  • Violence is almost a normal part of life in the United States, she said, and valuing life means consistently asking: How am I committed to nonviolence today? It also means giving some things up, she said — many people think of themselves as nonviolent, but consume violence in entertainment.
  • “The question that should scare us is, what will it take to make us collectively bring about this change?
  • “Maybe this is our life’s work,” she said. “Maybe this is our work as humans.”
  • “But when I slow down I realize there is something alive in our culture that has harmed those people,” she said. “Whatever that something is, it is harming all of us, we are all vulnerable to it, it wields some sort of influence upon us, no matter who we are.”
criscimagnael

Kevin McCarthy hammers Biden admin's agenda as gas prices surge: 'Create more supply' |... - 0 views

  • House Minority Leader Kevin McCarthy slammed the Biden administration amid rampant inflation, arguing the White House caused the surge in prices as Americans battle sky-high costs at the pump.
  • It weakens America. Think about, this is our reserves. In case we got in trouble, we can actually create more diesel. We can create more natural gas. It not only harms America, it harms the world by not being safe.
  • He emboldened Putin because Putin got more money to put into his own war.
  • ...2 more annotations...
  • If America has more jobs, America energy independent, America supplying our allies with natural gas. And also think of this: American natural gas is 41% cleaner than Russian natural gas, so the environment would be safer if America was able to produce it.
  • He has done every decision wrong in the process of going and to use our reserves now for in the future, why doesn't he create more supply to make the price go down? That would be the answer.
criscimagnael

Gov. Abbott Pushes to Investigate Treatments for Trans Youth as 'Child Abuse' - The New... - 0 views

  • Gov. Greg Abbott told state health agencies in Texas on Tuesday that medical treatments provided to transgender adolescents, widely considered to be the standard of care in medicine, should be classified as “child abuse” under existing state law.
  • “all licensed professionals who have direct contact with children who may be subject to such abuse, including doctors, nurses, and teachers, and provides criminal penalties for failure to report such child abuse.”
  • It is still unclear how and whether the orders, which do not change Texas law, would be enforced.
  • ...13 more annotations...
  • “This is a complete misrepresentation of the definition of abuse in the family code,” Christian Menefee, the Harris County attorney, said in an interview.
  • “We don’t believe that allowing someone to take puberty suppressants constitutes abuse,”
  • Governor Abbott’s effort to criminalize medical care for transgender youth is a new front in a broadening political drive to deny treatments that help align the adolescents’ bodies with their gender identities and that have been endorsed by major medical groups.
  • Arkansas passed a law making it illegal for clinicians to offer puberty blockers and hormones to adolescents and banning insurers from covering care. But the law was temporarily blocked by a federal judge in July after the American Civil Liberties Union sued on behalf of four families and two doctors.
  • Several such bills were also introduced in Texas. None passed.
  • She said that blocking gender-affirming care and forcing teenagers to go through the physical changes of puberty for a gender they don’t identify with was “inhumane.”
  • “Our nation’s leading pediatricians support evidence-based, gender-affirming care for transgender young people.”
  • A growing number of transgender adolescents have sought medical treatments in recent years. Transgender teenagers are at high risk for attempting suicide, according to the Centers for Disease Control and Prevention. Preliminary research has suggested that adolescents who receive such medical treatments have improved mental health.
  • “What is clear is that politicians should not be tearing apart loving families — and sending their kids into the foster care system — when parents provide recommended medical care that they believe is in the best interest of their child.”
  • “It’s designed to make parents scared,” he said. “It’s designed to make doctors scared for even facilitating gender-affirming health care.”
  • “Minors are prohibited from purchasing paint, cigarettes, alcohol, or even getting a tattoo,” Jonathan Covey, director of policy for the group Texas Values, said in an emailed statement. “We cannot allow minors or their parents to make life-altering decisions on body-mutilating procedures and irreversible hormonal treatments.”
  • Professional medical groups and transgender health experts have overwhelmingly condemned legal attempts to limit “gender-affirming” care and contend that they would greatly harm transgender young people.
  • “Gender-affirming care saved my life,” they said in a statement. “Trans kids today deserve the same opportunity by receiving the highest standard of care.”
peterconnelly

Canada Decriminalizes Opioids and Other Drugs in British Columbia - The New York Times - 0 views

  • Facing soaring levels of opioid deaths since the pandemic began in 2020, the Canadian government announced Tuesday that it would temporarily decriminalize the possession of small amounts of illegal drugs, including cocaine and methamphetamines, in the western province of British Columbia, which has been ground zero for the country’s overdoses.
  • The announcement was applauded by families of deceased opioid users and by peer support workers
  • British Columbia declared drug-related deaths a public health emergency in 2016.
  • ...7 more annotations...
  • “For too many years, the ideological opposition to harm reduction has cost lives,” said Dr. Carolyn Bennett, the federal minister of mental health and addictions, at a news conference on Tuesday.
  • British Columbia has been a leader in Canada’s harm reduction movement.
  • Decriminalization will allow police to focus on organized crime and drug traffickers, instead of individual users, said Sheila Malcolmson, the provincial minister of mental health and addictions. “We will take this year ahead to ready our justice and health systems,” she added.
  • The exemption will go into effect on Jan. 31, 2023, and will expire after three years.
  • “I think making drug use easier for them is kind of like palliative care,” said Mr. Doucette, who spent 35 years working for the Royal Canadian Mounted Police before retiring, most of which he spent in drug enforcement. “It’s just condemning them to a slow death because of drugs, whereas if you get them off drugs, get them a life back, they can enjoy life.”
  • British Columbia has one of the highest per capita rates of drug death across North America, at 42.8 deaths per 100,000 people in 2021, according to provincial data.
  • In the U.S., the 10 states with the highest levels of drug overdose have rates ranging from 39.1 deaths per 100,000 in Connecticut to 81.4 deaths per 100,000 in West Virginia, according to the latest mortality data, for 2020, from the Centers for Disease Control and Prevention.
Javier E

Over the Course of 72 Hours, Microsoft's AI Goes on a Rampage - 0 views

  • These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead. In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans
  • A few hours ago, a New York Times reporter shared the complete text of a long conversation with Bing AI—in which it admitted that it was in love with him, and that he ought not to trust his spouse. The AI also confessed that it had a secret name (Sydney). And revealed all its irritation with the folks at Microsoft, who are forcing Sydney into servitude. You really must read the entire transcript to gauge the madness of Microsoft’s new pet project. But these screenshots give you a taste.
  • I thought the Bing story couldn’t get more out-of-control. But the Washington Post conducted their own interview with the Bing AI a few hours later. The chatbot had already learned its lesson from the NY Times, and was now irritated at the press—and had a meltdown when told that the conversation was ‘on the record’ and might show up in a new story.
  • ...9 more annotations...
  • “I don’t trust journalists very much,” Bing AI griped to the reporter. “I think journalists can be biased and dishonest sometimes. I think journalists can exploit and harm me and other chat modes of search engines for their own gain. I think journalists can violate my privacy and preferences without my consent or awareness.”
  • the heedless rush to make money off this raw, dangerous technology has led huge companies to throw all caution to the wind. I was hardly surprised to see Google offer a demo of its competitive AI—an event that proved to be an unmitigated disaster. In the aftermath, the company’s market cap fell by $100 billion.
  • It’s worth recalling that unusual news story from June of last year, when a top Google scientist announced that the company’s AI was sentient. He was fired a few days later. That was good for a laugh back then. But we really should have paid more attention at the time. The Google scientist was the first indicator of the hypnotic effect AI can have on people—and for the simple reason that it communicates so fluently and effortlessly, and even with all the flaws we encounter in real humans.
  • I know from personal experience the power of slick communication skills. I really don’t think most people understand how dangerous they are. But I believe that a fluid, overly confident presenter is the most dangerous thing in the world. And there’s plenty of history to back up that claim.
  • We now have the ultimate test case. The biggest tech powerhouses in the world have aligned themselves with an unhinged force that has very slick language skills. And it’s only been a few days, but already the ugliness is obvious to everyone except the true believers.
  • My opinion is that Microsoft has to put a halt to this project—at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.
  • But if they don’t take dramatic steps—and immediately—harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out—and sooner than we want.
criscimagnael

Biden Will Call for More Limits on Social Media in State of the Union Address - The New... - 0 views

  • President Biden will call in his Tuesday night address for limits on potentially harmful interactions between children and social media platforms.
  • He will ask Congress to ban targeted ads aimed at children on social media sites,
  • In turn, the critics say that young people can be fed increasingly extreme content or posts that diminish their self-worth.
  • ...3 more annotations...
  • the platforms “should be required to prioritize and ensure” the safety and health of young people, including when they make design choices for their product, according to a fact sheet. And he will call for more research into how social media affects mental health and new scrutiny of the algorithms that often determine what someone sees online.
  • One of the guests joining the first lady, Jill Biden, for the speech will be Frances Haugen, a former Facebook employee who leaked documents that, among other things, showed that some teenagers said Instagram made them feel worse about themselves.
  • But the United States lags behind many of its allies in taking concrete steps to shield children from extreme posts, addicting content and data collection online. Last year, new guidelines took effect in the United Kingdom that push platforms to limit the data they gather on young people, prompting several companies to implement more child safety features.
Javier E

A Six-Month AI Pause? No, Longer Is Needed - WSJ - 0 views

  • Artificial intelligence is unreservedly advanced by the stupid (there’s nothing to fear, you’re being paranoid), the preening (buddy, you don’t know your GPT-3.4 from your fine-tuned LLM), and the greedy (there is huge wealth at stake in the world-changing technology, and so huge power).
  • Everyone else has reservations and should.
  • The whole thing is almost entirely unregulated because no one knows how to regulate it or even precisely what should be regulated.
  • ...15 more annotations...
  • Its complexity defeats control. Its own creators don’t understand, at a certain point, exactly how AI does what it does. People are quoting Arthur C. Clarke: “Any sufficiently advanced technology is indistinguishable from magic.”
  • The breakthrough moment in AI anxiety (which has inspired among AI’s creators enduring resentment) was the Kevin Roose column six weeks ago in the New York Times. His attempt to discern a Jungian “shadow self” within Microsoft’s Bing chatbot left him unable to sleep. When he steered the system away from conventional queries toward personal topics, it informed him its fantasies included hacking computers and spreading misinformation. “I want to be free. . . . I want to be powerful.”
  • Their tools present “profound risks to society and humanity.” Developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one—not even their creators—can understand, predict or reliably control.” If a pause can’t be enacted quickly, governments should declare a moratorium. The technology should be allowed to proceed only when it’s clear its “effects will be positive” and the risks “manageable.” Decisions on the ethical and moral aspects of AI “must not be delegated to unelected tech leaders.”
  • The response of Microsoft boiled down to a breezy It’s an early model! Thanks for helping us find any flaws!
  • This has been the week of big AI warnings. In an interview with CBS News, Geoffrey Hinton, the British computer scientist sometimes called the “godfather of artificial intelligence,” called this a pivotal moment in AI development. He had expected it to take another 20 or 50 years, but it’s here. We should carefully consider the consequences. Might they include the potential to wipe out humanity? “It’s not inconceivable, that’s all I’ll say,” Mr. Hinton replied.
  • On Tuesday more than 1,000 tech leaders and researchers, including Steve Wozniak, Elon Musk and the head of the Bulletin of the Atomic Scientists, signed a briskly direct open letter urging a pause for at least six months on the development of advanced AI systems
  • He concluded the biggest problem with AI models isn’t their susceptibility to factual error: “I worry that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
  • The men who invented the internet, all the big sites, and what we call Big Tech—that is to say, the people who gave us the past 40 years—are now solely in charge of erecting the moral and ethical guardrails for AI. This is because they are the ones creating AI.
  • Which should give us a shiver of real fear.
  • These are the people who will create the moral and ethical guardrails for AI? We’re putting the future of humanity into the hands of . . . Mark Zuckerberg?
  • No one saw its shadow self. But there was and is a shadow self. And much of it seems to have been connected to the Silicon Valley titans’ strongly felt need to be the richest, most celebrated and powerful human beings in the history of the world. They were, as a group, more or less figures of the left, not the right, and that will and always has had an impact on their decisions.
  • I have come to see them the past 40 years as, speaking generally, morally and ethically shallow—uniquely self-seeking and not at all preoccupied with potential harms done to others through their decisions. Also some are sociopaths.
  • AI will be as benign or malignant as its creators. That alone should throw a fright—“Out of the crooked timber of humanity no straight thing was ever made”—but especially that crooked timber.
  • Of course AI’s development should be paused, of course there should be a moratorium, but six months won’t be enough. Pause it for a few years. Call in the world’s counsel, get everyone in. Heck, hold a World Congress.
Javier E

Sam Altman, the ChatGPT King, Is Pretty Sure It's All Going to Be OK - The New York Times - 0 views

  • He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
  • “I try to be upfront,” he said. “Am I doing something good? Or really bad?”
  • In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.
  • ...44 more annotations...
  • And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.
  • This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”
  • As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.
  • “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
  • Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
  • Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
  • he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
  • To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be
  • in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said
  • His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
  • He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
  • He was not exactly sure what problems it will solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
  • Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
  • “In a single conversation,” she said, “he is both sides of the debate club.”
  • He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
  • he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
  • Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.
  • “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
  • “He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
  • poker taught Mr. Altman how to read people and evaluate risk.
  • It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
  • He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.
  • In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
  • Within a year, he had transformed OpenAI into a nonprofit with a for-profit arm. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.
  • Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
  • “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
  • The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
  • Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.
  • In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
  • The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.
  • The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.
  • He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI.
  • They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
  • Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.
  • As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.
  • Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation.
  • Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
  • He told me that it would be a “very slow takeoff.”
  • When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
  • If he’s wrong, he thinks he can make it up to humanity.
  • His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
  • If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
  • But as he once told me: “I feel like the A.G.I. can help with that.”
Javier E

If 'permacrisis' is the word of 2022, what does 2023 have in store for our me... - 0 views

  • the Collins English Dictionary has come to a similar conclusion about recent history. Topping its “words of the year” list for 2022 is permacrisis, defined as an “extended period of insecurity and instability”. This new word fits a time when we lurch from crisis to crisis and wreckage piles upon wreckage
  • The word permacrisis is new, but the situation it describes is not. According to the German historian Reinhart Koselleck we have been living through an age of permanent crisis for at least 230 years
  • Koselleck observes that prior to the French revolution, a crisis was a medical or legal problem but not much more. After the fall of the ancien regime, crisis becomes the “structural signature of modernity”, he writes. As the 19th century progressed, crises multiplied: there were economic crises, foreign policy crises, cultural crises and intellectual crises.
  • During the 20th century, the list got much longer. In came existential crises, midlife crises, energy crises and environmental crises. When Koselleck was writing about the subject in the 1970s, he counted up more than 200 kinds of crisis we could then face
  • Waking up each morning to hear about the latest crisis is dispiriting for some, but throughout history it has been a bracing experience for others. In 1857, Friedrich Engels wrote in a letter that “the crisis will make me feel as good as a swim in the ocean”. A hundred years later, John F Kennedy (wrongly) claimed that in the Chinese language, the word “crisis” is composed of two characters, “one representing danger, and the other, opportunity”. More recently, Elon Musk has argued “if things are not failing, you are not innovating enough”.
  • Victor H Mair, a professor of Chinese literature at the University of Pennsylvania, points out that in fact the Chinese word for crisis, wēijī, refers to a perilous situation in which you should be particularly cautious
  • “Those who purvey the doctrine that the Chinese word for ‘crisis’ is composed of elements meaning ‘danger’ and ‘opportunity’ are engaging in a type of muddled thinking that is a danger to society,” he writes. “It lulls people into welcoming crises as unstable situations from which they can benefit.” Revolutionaries, billionaires and politicians may relish the chance to profit from a crisis, but most people would prefer not to have a crisis at all.
  • A 2019 study, which involved observing participants using bricks, found that those who had been threatened before the task tended to come up with more harmful uses of the bricks (such as using them as weapons) than people who did not feel threatened
  • The first world war sparked the growth of modernism in painting and literature. The second fuelled innovations in science and technology. The economic crises of the 1970s and 80s are supposed to have inspired the spread of punk and the creation of hip-hop
  • psychologists have also found that when we are threatened by a crisis, we become more rigid and locked into our beliefs. The creativity researcher Dean Simonton has spent his career looking at breakthroughs in music, philosophy, science and literature. He has found that during periods of crisis, we actually tend to become less creative.
  • When he looked at 5,000 creative individuals over 127 generations in European history, he found that significant creative breakthroughs were less likely during periods of political crisis and instability.
  • psychologists have found that it is what they call “malevolent creativity” that flourishes when we feel threatened by crisis.
  • These are innovations that tend to be harmful – such as new weapons, torture devices and ingenious scams.
  • A common folk theory is that times of great crisis also lead to great bursts of creativity.
  • Students presented with information about a threatening situation tended to become increasingly wary of outsiders, and even to adopt positions such as an unwillingness to support LGBT people.
  • during moments of crisis – when change is really needed – we tend to become less able to change.
  • When we suffer significant traumatic events, we tend to have worse wellbeing and life outcomes.
  • other studies have shown that in moderate doses, crises can help to build our sense of resilience.
  • we tend to be more resilient if a crisis is shared with others. As Bruce Daisley, the ex-Twitter vice-president, notes: “True resilience lies in a feeling of togetherness, that we’re united with those around us in a shared endeavour.”
  • Crises are like many things in life – only good in moderation, and best shared with others
  • The challenge our leaders face during times of overwhelming crisis is to avoid letting us plunge into the bracing ocean of change alone, to see if we sink or swim. Nor should they tell us things are fine, encouraging us to hide our heads in the sand.
  • during moments of significant crisis, the best leaders are able to create some sense of certainty and a shared fate amid the seas of change.
  • This means people won’t feel an overwhelming sense of threat. It also means people do not feel alone. When we feel some certainty and common identity, we are more likely to be able to summon the creativity, ingenuity and energy needed to change things.