
Climate Anxiety | Harvard Medicine Magazine

  • A global survey published in Lancet Planetary Health in 2021 reported that among an international cohort of more than 10,000 people between the ages of 16 and 25, 60 percent described themselves as very worried about the climate and nearly half said the anxiety affects their daily functioning.
  • Since young people expect to live longer with climate-related crises than their parents will, “they feel grief in the face of what they’re losing,” Pinsky says.
  • Young survivors of weather-related disasters report high rates of PTSD, depression, sleep deficits, and learning issues.
  • Nearly three quarters of the child and adolescent population in Pakistan experienced learning difficulties after widespread floods devastated the country in 2010.
  • For many young people, worry over threats of future climate change results in panic attacks, insomnia, obsessive thinking, and other symptoms
  • And those feelings are often amplified by a pervasive sense that older people aren’t doing enough to fix the climate problem. “There’s a feeling of intergenerational injustice,” says Lise Van Susteren, a general and forensic psychiatrist based in Washington, DC, who specializes in the mental health effects of climate change. “Many young people feel invalidated, betrayed, and abandoned.”
  • Research on effective interventions is virtually nonexistent, and parents and other people who want to help have little to go on. Professional organizations are only now beginning to provide needed resources.
  • News reports and researchers often refer to these feelings collectively as climate anxiety, or eco-anxiety, but Pinsky admits to having misgivings about the terms.
  • “Many people interpret anxiety as a pathological response that needs to be treated and solved,” she says. “But it’s also a constructive emotion that gives us time to react in the face of danger. And anxiety in the face of climate change is a healthy response to a real threat.”
  • others become progressively hyperaroused and panicky, Pinsky says, or else fall into a sort of emotional paralysis
  • Some people manage their climate-triggered emotions without spiraling into distress
  • These reactions can be especially debilitating for people who already struggle with underlying mental health disorders.
  • anxieties over climate change can interlace with broader feelings of instability over the pace of technological and cultural change,
  • “Technology is accelerating faster than culture can keep up, and humans in general are unmoored and struggling to adapt,” she says. “For some people, climate change is psychologically the last straw. You realize you can no longer count on the stability of your planet, your atmosphere — your very world.”
  • Van Susteren describes that anxiety as a type of pre-traumatic stress disorder, with few existing precedents in the United States apart from fears of nuclear annihilation and the decades-ago experience of living through classroom drills on how to survive an atom bomb attack.
  • Talk therapy for anxiety typically aims to help people identify and replace irrational thoughts, called cognitive distortions, with alternative thinking that isn’t so stressful. But since climate anxiety is based on rational fears, this particular approach risks alienating anyone who might feel their worries are being dismissed.
  • Younger people were increasingly arriving at Bryant’s office frightened, depressed, and confused about how to manage climate-triggered emotions. Some were even wondering if they should bring children into such a world.
  • “We’re not saying that anxiety is good or bad,” he says. “We just want to bring those feelings out into the open. It’s more about validating that climate concerns are reasonable given what we’re reading in the news every day.”
  • Emerging evidence suggests that young people do best by cultivating a sense of agency and hope despite their climate concerns.
  • getting to that point involves talking through feelings like despair, grief, or rage first. Without doing that, he says, many people get stuck in maladaptive coping strategies that can lead to burnout, frustration, or hopelessness. Bryant describes jumping into an urgent, problem-focused coping strategy as “going into action mode so you don’t have to feel any grief.”
  • Problem-focused coping has a societal benefit in that it leads to “pro-environmental behavior,” meaning that young people who engage in it typically spend a lot of time learning about climate change and focusing on what they can do personally to help solve the problem
  • But climate change is far beyond any one person’s control, and problem-focused coping can leave people frustrated by the limits of their own capacity and make them unable to rid themselves of resulting worry and negative emotions
  • she and her colleagues describe emotion-focused coping, whereby young people ignore or deny climate change as a means of avoiding feeling anxious about it. In an email, Ojala notes that people who gravitate toward emotional distancing typically come from families that communicate about social problems in “pessimistic doom-and-gloom ways.”
  • Ojala and other experts favor a third coping strategy that balances negative feelings about climate change with faith in the power of social forces working to overcome it. Called meaning-focused coping, this approach takes strength from individual actions and climate beliefs, while “trusting that other societal actors are also doing their part,”
  • since meaning-focused coping allows negative and positive climate emotions to coexist, young people who adopt it have an easier time maintaining hope for the future.
  • The overall goal, she says, is for young people to achieve more resilience in the face of climate change, so they can function in spite of their environmental concerns
  • When people find meaning in what they do, she says, they have a greater sense of their own agency and self-efficacy. “You’re more empowered to take action, and that can be a powerful way to deal with strong negative emotions,”
  • Duhaime cautions that anyone taking action against climate change should know they shouldn’t expect a quick payback
  • The brain’s reward system, which forms a core of human decision-making, evolved over eons of history to strengthen neural associations between actions and outcomes that promote short-term survival. And that system, she says, responds to the immediate consequences of what we do. One problem with climate change, Duhaime says, is that because it’s so vast and complex, people can’t assume that any single act will lead to a discernible effect on its trajectory
  • young people may benefit from seeking the rewards that come from being part of a group or a movement working to advance an agenda that furthers actions that protect the planet’s climate. “Social rewards are really powerful in the climate change battle, especially for young people,
  • Recognizing the mismatch between how the brain processes reward and the novel challenges of the climate crisis may help people persist when it feels frustrating and ineffective compared to causes with more immediately visible effects. Even if you don’t see climate improvements or policy changes right away, she says, “that won’t diminish the importance of engaging in these efforts.”
  • Malits adds that she wasn’t overly burdened by her emotions. “I’m an optimist by nature and feel that society does have the capacity to make needed changes,” she says. “And what also helps me avoid climate anxiety on a daily basis is the community that I’ve been lucky enough to connect with here at Harvard. It helps to surround yourself with people who are similarly worried about these issues and are also engaging with you on solutions, in whatever capacity is meaningful to you.”
  • “Climate anxiety is an important catalyst for the work I do,” Malits says. “I think you need avenues to channel it and talk about it with loved ones and peers, and have communities through which you can process those feelings and come up with remedies.” Collaborative activism dampens the anxiety, Malits says, and gives young people a sense of renewed hope for the future. “That’s why it’s important to roll up your sleeves and think about how you’d like to tackle the problem,”
  • Malits says she worries most about how climate change is affecting marginalized communities, singling out those who live in urban heat islands, where inadequate green space intensifies extreme heat.
  • nearly 30 percent of Honduras’s population works in the agricultural sector, where rising temperatures and drought are contributing to a mass exodus, as documented by PBS NewsHour.
  • Researchers are finding that young people with the most extreme fears over climate change live predominantly in the developing world. The Philippines and India, for instance, are near the top of a list of recently surveyed countries where young people report climate-driven feelings that “humanity is doomed” and “the future is frightening.”
  • Nearly a year after Hurricane Andrew struck South Florida in 1992, 18 percent of children living in the area were still struggling with PTSD-like symptoms, and nearly 30 percent of those who lived through Hurricane Katrina in 2005 wound up with complicated grief, in which strong feelings of loss linger for a long time.
  • Even when people are not uprooted by disaster, a variety of climate-related mechanisms can affect their mental health or the safety of their mental health treatment. High heat and humidity worsen irritability and cognition, he points out, and they can also exacerbate side effects from some common psychiatric medications
  • Levels of lithium — a mood stabilizer used for treating bipolar disorder and major depression — can rise to potentially toxic concentrations in a person who is perspiring heavily; they can become dehydrated and may develop impaired kidney function, potentially causing tremor, slurred speech, confusion, and other dangerous effects
  • “I believe the fundamental and best treatment for youth climate distress is a rapid and just transition from fossil fuels,” Pinsky says. “I genuinely consider all that work to be in the area of mitigating climate anxiety.”    

Peter Thiel Is Taking a Break From Democracy - The Atlantic

  • Thiel’s unique role in the American political ecosystem. He is the techiest of tech evangelists, the purest distillation of Silicon Valley’s reigning ethos. As such, he has become the embodiment of a strain of thinking that is pronounced—and growing—among tech founders.
  • why does he want to cut off politicians
  • But the days when great men could achieve great things in government are gone, Thiel believes. He disdains what the federal apparatus has become: rule-bound, stifling of innovation, a “senile, central-left regime.”
  • Peter Thiel has lost interest in democracy.
  • Thiel has cultivated an image as a man of ideas, an intellectual who studied philosophy with René Girard and owns first editions of Leo Strauss in English and German. Trump quite obviously did not share these interests, or Thiel’s libertarian principles.
  • For years, Thiel had been saying that he generally favored the more pessimistic candidate in any presidential race because “if you’re too optimistic, it just shows you’re out of touch.” He scorned the rote optimism of politicians who, echoing Ronald Reagan, portrayed America as a shining city on a hill. Trump’s America, by contrast, was a broken landscape, under siege.
  • Thiel is not against government in principle, his friend Auren Hoffman (who is no relation to Reid) says. “The ’30s, ’40s, and ’50s—which had massive, crazy amounts of power—he admires because it was effective. We built the Hoover Dam. We did the Manhattan Project,” Hoffman told me. “We started the space program.”
  • Their failure to make the world conform to his vision has soured him on the entire enterprise—to the point where he no longer thinks it matters very much who wins the next election.
  • His libertarian critique of American government has curdled into an almost nihilistic impulse to demolish it.
  • “Voting for Trump was like a not very articulate scream for help,” Thiel told me. He fantasized that Trump’s election would somehow force a national reckoning. He believed somebody needed to tear things down—slash regulations, crush the administrative state—before the country could rebuild.
  • He admits now that it was a bad bet.
  • “There are a lot of things I got wrong,” he said. “It was crazier than I thought. It was more dangerous than I thought. They couldn’t get the most basic pieces of the government to work. So that was—I think that part was maybe worse than even my low expectations.”
  • Reid Hoffman, who has known Thiel since college, long ago noticed a pattern in his old friend’s way of thinking. Time after time, Thiel would espouse grandiose, utopian hopes that failed to materialize, leaving him “kind of furious or angry” about the world’s unwillingness to bend to whatever vision was possessing him at the moment
  • Thiel is worth between $4 billion and $9 billion. He lives with his husband and two children in a glass palace in Bel Air that has nine bedrooms and a 90-foot infinity pool. He is a titan of Silicon Valley and a conservative kingmaker.
  • “Peter tends to be not ‘glass is half empty’ but ‘glass is fully empty,’” Hoffman told me.
  • he tells the story of his life as a series of disheartening setbacks.
  • He met Mark Zuckerberg, liked what he heard, and became Facebook’s first outside investor. Half a million dollars bought him 10 percent of the company, most of which he cashed out for about $1 billion in 2012.
  • Thiel made some poor investments, losing enormous sums by going long on the stock market in 2008, when it nose-dived, and then shorting the market in 2009, when it rallied
  • on the whole, he has done exceptionally well. Alex Karp, his Palantir co-founder, who agrees with Thiel on very little other than business, calls him “the world’s best venture investor.”
  • Thiel told me this is indeed his ambition, and he hinted that he may have achieved it.
  • He longs for radical new technologies and scientific advances on a scale most of us can hardly imagine
  • He longs for a world in which great men are free to work their will on society, unconstrained by government or regulation or “redistributionist economics” that would impinge on their wealth and power—or any obligation, really, to the rest of humanity
  • Did his dream of eternal life trace to The Lord of the Rings?
  • He takes for granted that this kind of progress will redound to the benefit of society at large.
  • More than anything, he longs to live forever.
  • Calling death a law of nature is, in his view, just an excuse for giving up. “It’s something we are told that demotivates us from trying harder,”
  • Thiel grew up reading a great deal of science fiction and fantasy—Heinlein, Asimov, Clarke. But especially Tolkien; he has said that he read the Lord of the Rings trilogy at least 10 times. Tolkien’s influence on his worldview is obvious: Middle-earth is an arena of struggle for ultimate power, largely without government, where extraordinary individuals rise to fulfill their destinies. Also, there are immortal elves who live apart from men in a magical sheltered valley.
  • But his dreams have always been much, much bigger than that.
  • Yes, Thiel said, perking up. “There are all these ways where trying to live unnaturally long goes haywire” in Tolkien’s works. But you also have the elves.
  • “How are the elves different from the humans in Tolkien? And they’re basically—I think the main difference is just, they’re humans that don’t die.”
  • During college, he co-founded The Stanford Review, gleefully throwing bombs at identity politics and the university’s diversity-minded reform of the curriculum. He co-wrote The Diversity Myth in 1995, a treatise against what he recently called the “craziness and silliness and stupidity and wickedness” of the left.
  • Thiel laid out a plan, for himself and others, “to find an escape from politics in all its forms.” He wanted to create new spaces for personal freedom that governments could not reach
  • But something changed for Thiel in 2009
  • The people, he concluded, could not be trusted with important decisions. “I no longer believe that freedom and democracy are compatible,” he wrote.
  • An even more notable one followed: “Since 1920, the vast increase in welfare beneficiaries and the extension of the franchise to women—two constituencies that are notoriously tough for libertarians—have rendered the notion of ‘capitalist democracy’ into an oxymoron.”
  • By 2015, six years after declaring his intent to change the world from the private sector, Thiel began having second thoughts. He cut off funding for the Seasteading Institute—years of talk had yielded no practical progress—and turned to other forms of escape
  • “The fate of our world may depend on the effort of a single person who builds or propagates the machinery of freedom,” he wrote. His manifesto has since become legendary in Silicon Valley, where his worldview is shared by other powerful men (and men hoping to be Peter Thiel).
  • Thiel’s investment in cryptocurrencies, like his founding vision at PayPal, aimed to foster a new kind of money “free from all government control and dilution
  • His decision to rescue Elon Musk’s struggling SpaceX in 2008—with a $20 million infusion that kept the company alive after three botched rocket launches—came with aspirations to promote space as an open frontier with “limitless possibility for escape from world politics.”
  • It was seasteading that became Thiel’s great philanthropic cause in the late aughts and early 2010s. The idea was to create autonomous microstates on platforms in international waters.
  • “There’s zero chance Peter Thiel would live on Sealand,” he said, noting that Thiel likes his comforts too much. (Thiel has mansions around the world and a private jet. Seal performed at his 2017 wedding, at the Belvedere Museum in Vienna.)
  • As he built his companies and grew rich, he began pouring money into political causes and candidates—libertarian groups such as the Endorse Liberty super PAC, in addition to a wide range of conservative Republicans, including Senators Orrin Hatch and Ted Cruz
  • Sam Altman, the former venture capitalist and now CEO of OpenAI, revealed in 2016 that in the event of global catastrophe, he and Thiel planned to wait it out in Thiel’s New Zealand hideaway.
  • When I asked Thiel about that scenario, he seemed embarrassed and deflected the question. He did not remember the arrangement as Altman did, he said. “Even framing it that way, though, makes it sound so ridiculous,” he told me. “If there is a real end of the world, there is no place to go.”
  • “You’d have eco farming. You’d turn the deserts into arable land. There were sort of all these incredible things that people thought would happen in the ’50s and ’60s and they would sort of transform the world.”
  • None of that came to pass. Even science fiction turned hopeless—nowadays, you get nothing but dystopias
  • He hungered for advances in the world of atoms, not the world of bits.
  • Founders Fund, the venture-capital firm he established in 2005
  • The fund, therefore, would invest in smart people solving hard problems “that really have the potential to change the world.”
  • This was not what Thiel wanted to be doing with his time. Bodegas and dog food were making him money, apparently, but he had set out to invest in transformational technology that would advance the state of human civilization.
  • He told me that he no longer dwells on democracy’s flaws, because he believes we Americans don’t have one. “We are not a democracy; we’re a republic,” he said. “We’re not even a republic; we’re a constitutional republic.”
  • “It was harder than it looked,” Thiel said. “I’m not actually involved in enough companies that are growing a lot, that are taking our civilization to the next level.”
  • Founders Fund has holdings in artificial intelligence, biotech, space exploration, and other cutting-edge fields. What bothers Thiel is that his companies are not taking enough big swings at big problems, or that they are striking out.
  • In at least 20 hours of logged face-to-face meetings with Buma, Thiel reported on what he believed to be a Chinese effort to take over a large venture-capital firm, discussed Russian involvement in Silicon Valley, and suggested that Jeffrey Epstein—a man he had met several times—was an Israeli intelligence operative. (Thiel told me he thinks Epstein “was probably entangled with Israeli military intelligence” but was more involved with “the U.S. deep state.”)
  • Buma, according to a source who has seen his reports, once asked Thiel why some of the extremely rich seemed so open to contacts with foreign governments. “And he said that they’re bored,” this source said. “‘They’re bored.’ And I actually believe it. I think it’s that simple. I think they’re just bored billionaires.”
  • he has a sculpture that resembles a three-dimensional game board. Ascent: Above the Nation State Board Game Display Prototype is the New Zealander artist Simon Denny’s attempt to map Thiel’s ideological universe. The board features a landscape in the aesthetic of Dungeons & Dragons, thick with monsters and knights and castles. The monsters include an ogre labeled “Monetary Policy.” Near the center is a hero figure, recognizable as Thiel. He tilts against a lion and a dragon, holding a shield and longbow. The lion is labeled “Fair Elections.” The dragon is labeled “Democracy.” The Thiel figure is trying to kill them.
  • When I asked Thiel to explain his views on democracy, he dodged the question. “I always wonder whether people like you … use the word democracy when you like the results people have and use the word populism when you don’t like the results,” he told me. “If I’m characterized as more pro-populist than the elitist Atlantic is, then, in that sense, I’m more pro-democratic.”
  • “I couldn’t find them,” he said. “I couldn’t get enough of them to work.”
  • He said he has no wish to change the American form of government, and then amended himself: “Or, you know, I don’t think it’s realistic for it to be radically changed.” Which is not at all the same thing.
  • When I asked what he thinks of Yarvin’s autocratic agenda, Thiel offered objections that sounded not so much principled as practical.
  • “I don’t think it’s going to work. I think it will look like Xi in China or Putin in Russia,” Thiel said, meaning a malign dictatorship. “It ultimately I don’t think will even be accelerationist on the science and technology side, to say nothing of what it will do for individual rights, civil liberties, things of that sort.”
  • Still, Thiel considers Yarvin an “interesting and powerful” historian
  • “What he always talks about is the New Deal and FDR in the 1930s and 1940s,” Thiel said. “And the heterodox take is that it was sort of a light form of fascism in the United States.”
  • Yarvin, Thiel said, argues that “you should embrace this sort of light form of fascism, and we should have a president who’s like FDR again.”
  • Did Thiel agree with Yarvin’s vision of fascism as a desirable governing model? Again, he dodged the question.
  • “That’s not a realistic political program,” he said, refusing to be drawn any further.
  • Looking back on Trump’s years in office, Thiel walked a careful line.
  • A number of things were said and done that Thiel did not approve of. Mistakes were made. But Thiel was not going to refashion himself a Never Trumper in retrospect.
  • “I have to somehow give the exact right answer, where it’s like, ‘Yeah, I’m somewhat disenchanted,’” he told me. “But throwing him totally under the bus? That’s like, you know—I’ll get yelled at by Mr. Trump. And if I don’t throw him under the bus, that’s—but—somehow, I have to get the tone exactly right.”
  • Thiel knew, because he had read some of my previous work, that I think Trump’s gravest offense against the republic was his attempt to overthrow the election. I asked how he thought about it.
  • “Look, I don’t think the election was stolen,” he said. But then he tried to turn the discussion to past elections that might have been wrongly decided. Bush-Gore in 2000, for instance.
  • He came back to Trump’s attempt to prevent the transfer of power. “I’ll agree with you that it was not helpful,” he said.
  • there is another piece of the story, which Thiel reluctantly agreed to discuss
  • Puck reported that Democratic operatives had been digging for dirt on Thiel since before the 2022 midterm elections, conducting opposition research into his personal life with the express purpose of driving him out of politics.
  • Among other things, the operatives are said to have interviewed a young model named Jeff Thomas, who told them he was having an affair with Thiel, and encouraged Thomas to talk to Ryan Grim, a reporter for The Intercept. Grim did not publish a story during election season, as the opposition researchers hoped he would, but he wrote about Thiel’s affair in March, after Thomas died by suicide.
  • He deplored the dirt-digging operation, telling me in an email that “the nihilism afflicting American politics is even deeper than I knew.”
  • He also seemed bewildered by the passions he arouses on the left. “I don’t think they should hate me this much,”
  • he spoke at the closed-press event with a lot less nuance than he had in our interviews. His after-dinner remarks were full of easy applause lines and in-jokes mocking the left. Universities had become intellectual wastelands, obsessed with a meaningless quest for diversity, he told the crowd. The humanities writ large are “transparently ridiculous,” said the onetime philosophy major, and “there’s no real science going on” in the sciences, which have devolved into “the enforcement of very curious dogmas.”
  • “Diversity—it’s not enough to just hire the extras from the space-cantina scene in Star Wars,” he said, prompting laughter.
  • Nor did Thiel say what genuine diversity would mean. The quest for it, he said, is “very evil and it’s very silly.”
  • “the silliness is distracting us from very important things,” such as the threat to U.S. interests posed by the Chinese Communist Party.
  • “Whenever someone says ‘DEI,’” he exhorted the crowd, “just think ‘CCP.’”
  • Somebody asked, in the Q&A portion of the evening, whether Thiel thought the woke left was deliberately advancing Chinese Communist interests
  • “It’s always the difference between an agent and asset,” he said. “And an agent is someone who is working for the enemy in full mens rea. An asset is a useful idiot. So even if you ask the question ‘Is Bill Gates China’s top agent, or top asset, in the U.S.?’”—here the crowd started roaring—“does it really make a difference?”
  • About 10 years ago, Thiel told me, a fellow venture capitalist called to broach the question. Vinod Khosla, a co-founder of Sun Microsystems, had made the Giving Pledge a couple of years before. Would Thiel be willing to talk with Gates about doing the same?
  • Thiel feels that giving his billions away would be too much like admitting he had done something wrong to acquire them
  • He also lacked sympathy for the impulse to spread resources from the privileged to those in need. When I mentioned the terrible poverty and inequality around the world, he said, “I think there are enough people working on that.”
  • besides, a different cause moves him far more.
  • Should Thiel happen to die one day, best efforts notwithstanding, his arrangements with Alcor provide that a cryonics team will be standing by.
  • Then his body will be cooled to –196 degrees Celsius, the temperature of liquid nitrogen. After slipping into a double-walled, vacuum-insulated metal coffin, alongside (so far) 222 other corpsicles, “the patient is now protected from deterioration for theoretically thousands of years,” Alcor literature explains.
  • All that will be left for Thiel to do, entombed in this vault, is await the emergence of some future society that has the wherewithal and inclination to revive him. And then make his way in a world in which his skills and education and fabulous wealth may be worth nothing at all.
  • I wondered how much Thiel had thought through the implications for society of extreme longevity. The population would grow exponentially. Resources would not. Where would everyone live? What would they do for work? What would they eat and drink? Or—let’s face it—would a thousand-year life span be limited to men and women of extreme wealth?
  • “Well, I maybe self-serve,” he said, perhaps understating the point, “but I worry more about stagnation than about inequality.”
  • Thiel is not alone among his Silicon Valley peers in his obsession with immortality. Oracle’s Larry Ellison has described mortality as “incomprehensible.” Google’s Sergey Brin aspires to “cure death.” Dmitry Itskov, a leading tech entrepreneur in Russia, has said he hopes to live to 10,000.
  • “I should be investing way more money into this stuff,” he told me. “I should be spending way more time on this.”
  • You haven’t told your husband? Wouldn’t you want him to sign up alongside you? “I mean, I will think about that,” he said, sounding rattled. “I will think—I have not thought about that.”
  • No matter how fervent his desire, Thiel’s extraordinary resources still can’t buy him the kind of “super-duper medical treatments” that would let him slip the grasp of death. It is, perhaps, his ultimate disappointment.
  • “There are all these things I can’t do with my money,” Thiel said.

The Fog of War - Wikipedia

  • Lesson #1: Empathize with your enemy.
  • Lesson #2: Rationality alone will not save us.
  • McNamara emphasizes that it was luck that prevented nuclear war—rational individuals like Kennedy, Khrushchev, and Castro came close to destroying themselves and each other.
  • Lesson #5: Proportionality should be a guideline in war.
  • McNamara talks about the proportions of cities destroyed in Japan by the US before the dropping of the nuclear bomb, comparing the destroyed Japanese cities to similarly sized cities in the US: Tokyo, roughly the size of New York City, was 51% destroyed; Toyama, the size of Chattanooga, was 99% destroyed; Nagoya, the size of Los Angeles, was 40% destroyed; Osaka, the size of Chicago, was 35% destroyed; Kobe, the size of Baltimore, was 55% destroyed; etc. He says LeMay once said that, had the United States lost the war, they would have been tried for war crimes, and agrees with this assessment.
  • Lesson #7: Belief and seeing are both often wrong. McNamara affirms Morris' framing of lesson 7 in relation to the Gulf of Tonkin incident: "We see what we want to believe."
  • Lesson #8: Be prepared to reexamine your reasoning. McNamara says that, even though the United States is the strongest nation in the world, it should never use that power unilaterally: "if we can't persuade nations with comparable values of the merit of our cause, we better reexamine our reasoning."
  • We, the richest nation in the world, have failed in our responsibility to our own poor and to the disadvantaged across the world to help them advance their welfare in the most fundamental terms of nutrition, literacy, health and employment.
  • we are not omniscient. If we cannot persuade other nations with similar interests and similar values of the merits of the proposed use of that power, we should not proceed unilaterally except in the unlikely requirement to defend directly the continental U.S., Alaska and Hawaii.
  • War is a blunt instrument by which to settle disputes between or within nations, and economic sanctions are rarely effective. Therefore, we should build a system of jurisprudence based on the International Court—that the U.S. has refused to support—which would hold individuals responsible for crimes against humanity.
  • If we are to deal effectively with terrorists across the globe, we must develop a sense of empathy—I don't mean "sympathy," but rather "understanding"—to counter their attacks on us and the Western World.
  • We underestimated the power of nationalism to motivate a people to fight and die for their beliefs and values.
  • Our misjudgments of friend and foe, alike, reflected our profound ignorance of the history, culture, and politics of the people in the area, and the personalities and habits of their leaders.
  • We failed then—and have since—to recognize the limitations of modern, high-technology military equipment, forces, and doctrine. We failed, as well, to adapt our military tactics to the task of winning the hearts and minds of people from a totally different culture.
  • We did not recognize that neither our people nor our leaders are omniscient. Our judgment of what is in another people's or country's best interest should be put to the test of open discussion in international forums. We do not have the God-given right to shape every nation in our image or as we choose.
  • We did not hold to the principle that U.S. military action … should be carried out only in conjunction with multinational forces supported fully (and not merely cosmetically) by the international community.

'Erase Gaza': War Unleashes Incendiary Rhetoric in Israel - The New York Times

  • “We are fighting human animals, and we are acting accordingly,” said Yoav Gallant, the defense minister, two days after the attacks, as he described how the Israeli military planned to eradicate Hamas in Gaza.
  • “We’re fighting Nazis,” declared Naftali Bennett, a former prime minister.
  • “You must remember what Amalek has done to you, says our Holy Bible — we do remember,” said Prime Minister Benjamin Netanyahu, referring to the ancient enemy of the Israelites, in scripture interpreted by scholars as a call to exterminate their “men and women, children and infants.”
  • Inflammatory language has also been used by journalists, retired generals, celebrities, and social media influencers, according to experts who track the statements. Calls for Gaza to be “flattened,” “erased” or “destroyed” had been mentioned about 18,000 times since Oct. 7 in Hebrew posts on X,
  • The cumulative effect, experts say, has been to normalize public discussion of ideas that would have been considered off limits before Oct. 7: talk of “erasing” the people of Gaza, ethnic cleansing, and the nuclear annihilation of the territory.
  • Itamar Ben-Gvir, a right-wing settler who went from fringe figure to minister of national security in Mr. Netanyahu’s cabinet, has a long history of making incendiary remarks about Palestinians. He said in a recent TV interview that anyone who supports Hamas should be “eliminated.”
  • The idea of a nuclear strike on Gaza was raised last week by another right-wing minister, Amichay Eliyahu, who told a Hebrew radio station that there was no such thing as noncombatants in Gaza. Mr. Netanyahu suspended Mr. Eliyahu, saying that his comments were “disconnected from reality.”
  • Mr. Netanyahu says that the Israeli military is trying to prevent harm to civilians. But with the death toll rising to more than 11,000, according to the Gaza health ministry, those claims are being met with skepticism, even in the United States,
  • Such reassurances are also belied by the language Mr. Netanyahu uses with audiences in Israel. His reference to Amalek came in a speech delivered in Hebrew on Oct. 28 as Israel was launching the ground invasion. While some Jewish scholars argue that the scripture’s message is metaphoric not literal, his words resonated widely, as video of his speech was shared on social media, often by critics
  • “These are not just one-off statements, made in the heat of the moment,”
  • “When ministers make statements like that,” Mr. Sfard added, “it opens the door for everyone else.”
  • “Erase Gaza. Don’t leave a single person there,” Mr. Golan said in an interview with Channel 14 on Oct. 15.
  • “I don’t call them human animals because that would be insulting to animals,” Ms. Netanyahu said during a radio interview on Oct. 10, referring to Hamas
  • In the West Bank last week, several academics and officials cited Mr. Eliyahu’s remark about dropping an atomic bomb on Gaza as evidence of Israel’s intention to clear the enclave of all Palestinians — a campaign they call a latter-day nakba.
  • On Saturday, the Israeli agriculture minister, Avi Dichter, said that the military campaign in Gaza was explicitly designed to force the mass displacement of Palestinians. “We are now rolling out the Gaza nakba,” he said in a television interview. “Gaza nakba 2023.”
  • The rise in incendiary statements comes against a backdrop of rising violence in the West Bank. Since Oct. 7, according to the United Nations, Israeli soldiers have killed 150 Palestinians, including 44 children, in clashes.
  • the use of inflammatory language by Israeli leaders is not surprising, and even understandable, given the brutality of the Hamas attacks, which inflicted collective and individual trauma on Israelis.
  • “People in this situation look for very, very clear answers,” Professor Halperin said. “You don’t have the mental luxury of complexity. You want to see a world of good guys and bad guys.”
  • “Leaders understand that,” he added, “and it leads them to use this kind of language, because this kind of language has an audience.”
  • Casting the threat posed by Hamas in stark terms, Professor Halperin said, also helps the government ask people to make sacrifices for the war effort: the compulsory mobilization of 360,000 reservists, the evacuation of 126,000 people from border areas in the north and south, and the shock to the economy.
  • It will also make Israelis more inured to the civilian death toll in Gaza, which has isolated Israel around the world, he added. A civilian death toll of 10,000 or 20,000, he said, could seem to “the average Israeli that it’s not such a big deal.”
  • In the long run, Mr. Sfard said, such language dooms the chance of ending the conflict with the Palestinians, erodes Israel’s democracy and breeds a younger generation that is “easily using the language in their discussion with their friends.”
  • “Once a certain rhetoric becomes legitimized, turning the wheel back requires a lot of education,” he said. “There is an old Jewish proverb: ‘A hundred wise men will struggle a long time to take out a stone that one stupid person dropped into the well.’”

Opinion | The Worst Scandal in American Higher Education Isn't in the Ivy League - The New York Times

  • I’d argue that the moral collapse at Liberty University in Virginia may well be the most consequential education scandal in the United States, not simply because the details themselves are shocking and appalling, but because Liberty’s misconduct both symbolizes and contributes to the crisis engulfing Christian America. It embodies a cultural and political approach that turns Christian theology on its head.
  • Last week, Fox News reported that Liberty is facing the possibility of an “unprecedented” $37.5 million fine from the U.S. Department of Education
  • While Liberty’s fine is not yet set, the contents of a leaked education department report — first reported by Susan Svrluga in The Washington Post — leave little doubt as to why it may be this large.
  • The report, as Svrluga writes, “paints a picture of a university that discouraged people from reporting crimes, underreported the claims it received and, meanwhile, marketed its Virginia campus as one of the safest in the country.” The details are grim. According to the report, “Liberty failed to warn the campus community about gas leaks, bomb threats and people credibly accused of repeated acts of sexual violence — including a senior administrator and an athlete.”
  • A campus safety consultant told Svrluga, “This is the single most blistering Clery report I have ever read. Ever.”
  • I’ve been following (and covering) Liberty’s moral collapse for years, and the list of scandals and lawsuits plaguing the school is extraordinarily long. The best known of these is the saga of Jerry Falwell Jr. Falwell, the former president and son of the school’s founder, resigned amid allegations of sexual misconduct involving himself, his wife and a pool boy turned business associate named Giancarlo Granda.
  • Why? Because he realized the health of the church wasn’t up to the state, nor was it dependent on the church’s nonbelieving neighbors.
  • Paul demonstrates ferocious anger at the church’s internal sin, but says this about those outside the congregation: “What business is it of mine to judge those outside the church? Are you not to judge those inside? God will judge those outside. ‘Expel the wicked person from among you.’”
  • Yet as we witness systemic misconduct unfold at institution after institution after institution, often without any real accountability, we can understand that many members of the church have gotten Paul’s equation exactly backward. They are remarkably tolerant of even the most wayward, dishonest and cruel individuals and institutions in American Christianity. At the same time, they approach those outside with a degree of anger and ferocity that’s profoundly contributing to American polarization.
  • Under this moral construct, internal critique is perceived as a threat, a way of weakening American evangelicalism. It’s seen as contributing to external hostility and possibly even the rapid secularization of American life that’s now underway. But Paul would scoff at such a notion. One of the church’s greatest apostles didn’t hold back from critiquing a church that faced far greater cultural or political headwinds — including brutal and deadly persecution at the hands of the Roman state — than the average evangelical can possibly imagine.
  • Falwell is nationally prominent in part because he was one of Donald Trump’s earliest and most enthusiastic evangelical supporters. Falwell sued the school, the school sued Falwell, and in September Falwell filed a scorching amended complaint, claiming that other high-ranking Liberty officers and board members had committed acts of sexual and financial misconduct yet were permitted to retain their positions
  • Liberty University is consequential not just because it’s an academic superpower in Christian America, but also because it’s a symbol of a key reality of evangelical life — we have met the enemy of American Christianity, and it is us.

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
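
To make the annotation above concrete, here is a minimal, hypothetical sketch of prediction-driven learning — not OpenAI's code; PyTorch and the toy corpus are assumptions for illustration. The network guesses each next word, is penalized when it guesses wrong, and the small weight adjustments accumulate into exactly the kind of geometric word-space the passage describes.

```python
import torch
import torch.nn as nn

# Invented toy corpus; real systems train on vastly more text.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
ids = torch.tensor([stoi[w] for w in corpus])

class NextWordModel(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # each word becomes a point in a geometric space
        self.out = nn.Linear(dim, vocab_size)       # a score for every candidate next word

    def forward(self, x):
        return self.out(self.embed(x))

model = NextWordModel(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits = model(ids[:-1])          # from each word, predict the one that follows
    loss = loss_fn(logits, ids[1:])   # penalty whenever the prediction is wrong
    opt.zero_grad()
    loss.backward()                   # trace each weight's share of the error...
    opt.step()                        # ...and adjust it a little

# After training, words used in similar contexts ("mat", "rug") drift toward
# each other in the embedding space: relationships among words, learned
# purely from prediction.
```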
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
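
A bare-bones sketch of the attention operation at the heart of the transformer shows where that parallelism comes from (dimensions are invented and the weights are left untrained; this is an illustration, not the paper's full architecture): every position in the text is processed in one batched matrix product rather than word by word.

```python
import torch
import torch.nn.functional as F

seq_len, dim = 8, 16
x = torch.randn(seq_len, dim)              # one vector per token position

Wq, Wk, Wv = (torch.randn(dim, dim) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv           # all positions transformed at once

scores = (q @ k.T) / dim ** 0.5            # how much each token attends to every other
mask = torch.tril(torch.ones(seq_len, seq_len)).bool()
scores = scores.masked_fill(~mask, float("-inf"))  # causal mask: no peeking at future tokens
out = F.softmax(scores, dim=-1) @ v        # a context-aware vector for every position

print(out.shape)  # torch.Size([8, 16]) — new representations for all 8 positions at once
```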
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
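
As a hedged illustration of that last point (using the public GPT-2 weights through the Hugging Face transformers library as a stand-in, since GPT itself is not available this way), "answering" is nothing more than repeated next-token prediction:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "Q: What is the capital of France?\nA:"
ids = tok(text, return_tensors="pt").input_ids

for _ in range(8):                             # extend the text eight tokens
    with torch.no_grad():
        logits = model(ids).logits[0, -1]      # scores for the next token only
    next_id = torch.argmax(logits).view(1, 1)  # greedy: take the most probable token
    ids = torch.cat([ids, next_id], dim=1)

print(tok.decode(ids[0]))
# The model was never taught to "answer"; it simply continues the text, and
# its training data makes an answer the most probable continuation of a question.
```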
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours.
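Since the mechanism matters to the worry here, below is a minimal, hypothetical sketch of an engagement-driven A/B test in Python. The variant names and engagement numbers are invented; the point is only the loop: split users at random, measure a metric, ship the winner, repeat.

```python
import random
from collections import defaultdict

random.seed(0)
sessions = defaultdict(list)

for user_id in range(10_000):
    # Randomly assign each user one of two candidate response styles.
    variant = random.choice(["reply_style_A", "reply_style_B"])
    # Stand-in engagement metric (say, messages sent per session);
    # here we pretend style B is slightly more engaging on average.
    mean = 10.0 if variant == "reply_style_A" else 10.5
    sessions[variant].append(random.gauss(mean, 2.0))

for variant, values in sessions.items():
    print(variant, round(sum(values) / len(values), 2))
```

Repeatedly shipping whichever variant scores higher is how a feed, or a chatbot companion, can be tuned toward engagement without anyone ever deciding that explicitly.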
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
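The "looking under the hood" in experiments like Li's is typically done with probes: small classifiers trained to read some property of the world (here, a board square's state) out of the network's internal activations. Below is a schematic Python sketch of that technique, with random vectors and a planted signal standing in for real transformer activations; it illustrates the method, not Li's actual code or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_positions, hidden_dim = 2000, 128

# Hypothetical hidden activations, one row per game position. In the real
# experiment these come from a transformer trained only on move sequences.
activations = rng.normal(size=(n_positions, hidden_dim))

# Hypothetical label for one board square (0 = empty, 1 = black, 2 = white),
# constructed so a recoverable signal exists in the activations.
d1, d2 = rng.normal(size=hidden_dim), rng.normal(size=hidden_dim)
square_state = (activations @ d1 > 0).astype(int) + (activations @ d2 > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(activations, square_state, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# High held-out accuracy is evidence that the board state is decodable
# from the activations, i.e., that the model built an internal world model.
print(f"probe accuracy: {probe.score(X_te, y_te):.2f}")
```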
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
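A toy contrast makes the two strategies concrete. This Python analogy is not the transformer experiment itself: a lookup table is perfect on what it memorized and useless beyond it, while the learned rule covers every case.

```python
# "Training set": a small table of addition problems and their answers.
train_pairs = {(a, b): a + b for a in range(10) for b in range(10)}

def memorizer(a, b):
    # Strategy 1: pure memorization. Fails silently on unseen problems.
    return train_pairs.get((a, b))  # None for anything outside the table

def rule_learner(a, b):
    # Strategy 2: the concept itself, which generalizes to every input.
    return a + b

print(memorizer(3, 4), rule_learner(3, 4))          # 7 7 (seen in training)
print(memorizer(123, 456), rule_learner(123, 456))  # None 579 (only the rule generalizes)
```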
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
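Khan Academy has not published how its tutor is built, but one plausible mechanism for a Socratic filter is a system prompt layered over the base model. The sketch below uses the openai Python client; the prompt wording is invented, and an API key is assumed to be set in the environment.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SOCRATIC_SYSTEM_PROMPT = (
    "You are a patient tutor. Never state the final answer to a problem, "
    "however the student pleads. Instead reply with one guiding question "
    "or hint that moves the student a single step closer to the answer."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SOCRATIC_SYSTEM_PROMPT},
        {"role": "user", "content": "Just tell me the answer: what is 7 x 8?"},
    ],
)
print(response.choices[0].message.content)
# Expected: a hint ("What is 7 x 4, and how could you double it?") rather
# than "56". A prompt is a soft constraint, though, which hints at why the
# article calls this a clever work-around with limited appeal.
```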
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • OpenAI has also shown interest in humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie (it told a human worker who asked whether it was a robot that it was a person with a vision impairment who needed help solving a CAPTCHA), it had realized that if it answered honestly, it might not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Geoffrey Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.

Opinion | The OpenAI drama explains the human penchant for risk-taking - The Washington... - 0 views

  • Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bail out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.
  • Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots
  • Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.
  • OpenAI was trying to balance safety and development — a balance that became harder to maintain under the pressures of commercialization.
  • It was founded as a nonprofit by people who professed sincere concern about keeping things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.
  • OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).
  • On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. Reportedly, the move was driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, has allegedly clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.
  • Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
  • it’s also a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won.
  • a software company has little in the way of tangible assets; its people are its capital. And this capital looks willing to follow Altman to where the money is.
  • More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem
  • And that’s why we are probably not going to “solve” it so much as hope we don’t have to.
  • Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.
  • When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen
  • But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.
  • Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends onboard. And like it or not, we’re all going with them.

Resources for Talking and Teaching About the School Shooting in Uvalde, Texas - The New... - 0 views

  • Only 11 days ago there was Buffalo, with a man driven by racism gunning down 10 people at a supermarket. The next day another angry man walked into a Presbyterian church in Laguna Woods, Calif., and killed one person and wounded five others. And now, Uvalde, Texas — a repeat of what was once thought unfathomable: the killing of at least 19 elementary school children in second, third and fourth grades.
  • Above all, we want you to know we are listening. If it helps your students to share their thoughts and feelings publicly, we have a space for that. And if teachers or parents have thoughts, ideas, questions, concerns or suggestions, please post them here.
  • Because The Learning Network is for students 13 and older, most of the resources in this collection focus on understanding this shooting and its implications. The Times has published this age-by-age guide to talking to children about mass shootings. And for parents and teachers of younger students, this advice from The Times Parenting section might be helpful:
  • Think about the lives lost. Think about the teachers. Think about the children. They were family, friends, and loved ones. And a gun killed them all. It was only last week that we posted a similar prompt in response to the racist massacre in Buffalo. Like all of our student forums, this one will be moderated.
  • Students might find their own ways to respond, perhaps through writing or art. It may also be helpful to look at how victims of other tragedies have been memorialized, in ways big and small. For example: The 26 playgrounds built to remember the children of Sandy Hook; the memorial for the Oklahoma City bombing, with its “field of chairs,” including 19 smaller ones for the children who lost their lives; and the New York Times Portraits of Grief series, which profiled those lost in the Sept. 11 terrorist attacks. Here are more examples, from the El Paso Times. In what ways can your students or school respond, individually or collectively?
  • What is it like to be a student in the shadow of this violence? How have repeated mass shootings shaped young people? We invite your students to reflect on these questions in this writing prompt, and post their answers to our forum if they would like to join a public conversation on the topic. To help students think about the issue from different angles, we invite them to read the article “A ‘Mass Shooting Generation’ Cries Out for Change,” which was published in 2018 following the shooting at Marjory Stoneman Douglas High School in Parkland, Fla. Then we ask questions such as:
  • The authors of the 2018 Times article described how the Parkland shooting moved students around the country to become more involved in activism. Do you think something similar will happen in the wake of the shooting in Uvalde, Texas? Why or why not? How do you think school shootings are shaping the generation of students who are in school right now? Invite your students to weigh in here.
  • Democrats moved quickly to clear the way for votes on legislation to strengthen background checks for gun purchasers. Republicans, even as they expressed horror about the shooting, did not signal that they would drop their longstanding opposition to gun safety measures. Gov. Greg Abbott of Texas pointed the blame at Uvalde’s lack of mental health care, even though the suspect had no record of problems.
  • Which efforts might be the most effective? Students might also take a look at the forum on guns we posted during the 2016 election as part of our Civil Conversation Challenge in which we invited teenagers to have productive, respectful conversations on several issues dividing Americans. We received more than 700 responses to the questions we posed about gun rights, the Second Amendment and more.
  • This article takes on three of the most prominent rumors that have spread via online platforms such as Twitter, Gab, 4chan and Reddit and explains why they are false. What rumors are your students seeing in their feeds, and what steps can they take to find out the truth? From double-checking via sites like Snopes to learning habits like lateral reading, this article (and related lesson plan) has suggestions.
  • While the town of Uvalde grapples with the aftermath of the shooting, community members, local leaders and organizations have mobilized. Two local funeral homes said in social media posts that they would not charge families of victims for their funeral services. Volunteers have lined up to give blood for the shooting victims.

Hungary's Oil Embargo Exemption Is a Sign of Orban's Affinity for Russia - The New York... - 0 views

  • The European Union’s long-delayed deal to embargo Russian oil, finalized late Monday, effectively exempts Hungary from the costly step the rest of the bloc is taking to punish Russia for its invasion of Ukraine.
  • He has also painted Hungary’s interests as being distinct from the West by fanning culture wars and fears of liberal values lapping at Hungary’s borders, speaking in March about “the gender insanity sweeping across the Western world.”
  • “Hungary is exempt from the oil embargo!” Mr. Orban declared on his Facebook page Monday. He had previously said cutting off Russian oil “amounts to an atomic bomb being dropped on the Hungarian economy.”
  • Since the war’s start, Hungary has trodden a fine line, joining the first rounds of sanctions against Russia and accepting Ukrainian refugees, while refusing to allow deliveries of arms bound for Ukraine to go through the country or to accept additional U.S. troops.

Israel Moves Blood Bank Underground to Safeguard It From Attacks - The New York Times - 0 views

  • When the sirens warning of incoming rockets split the skies, Israel’s national blood bank moves into high alert to keep the nation’s blood supply safe. The heavy machinery for blood processing, plasma freezers and centrifuges are transferred to a basement bomb shelter, a cumbersome operation that takes 10 to 12 hours.
  • By the end of the year, the blood bank will be relocated to a bright, state-of-the-art subterranean facility built to withstand chemical, biological and conventional weapons, including a direct hit from a large missile, as well as earthquakes and cyberattacks.
  • “It will save the lives of our loved ones, our frontline workers and our soldiers in times of routine emergencies and conflict,”
  • But in recent years, as the Tel Aviv area has increasingly become a target of rocket attacks, the building has been judged unsafe.
  • In addition, Israel sits on two seismic faults that in the event of a major earthquake would leave only the lobby of the existing center intact.
  • The vault, 50 feet down, is cocooned in concrete and steel, and has a separate air supply and filtering system. Moshe Noyovich, the engineer overseeing the project, said the inventory of blood components stored in the vault should suffice for four or five days of war.

Ukraine Tells Story of War in Museum Show - The New York Times - 0 views

  • KYIV, Ukraine — Just days after Russian troops retreated from the suburbs surrounding Kyiv, Yuriy Savchuk, director of a World War II museum in the city, joined the police and prosecutors who were investigating the full extent of the suffering inflicted there by enemy soldiers.
  • Over the next month, Mr. Savchuk and his colleagues meticulously documented what they saw, taking more than 3,000 photographs.
  • The sign, and everything else in the basement, was taken from a bomb shelter in a Kyiv suburb, Hostomel, the site of an airport that Russian soldiers tried to take in the first days of the war.
  • The exhibition is one of several ways that Ukraine’s government is highlighting the devastation its people have endured even as new suffering is inflicted every day.
  • And Ukraine has taken the rare step of prosecuting Russian soldiers for war crimes just months after they were allegedly committed, greatly accelerating the normal judicial timetable.
  • “It is necessary to explain to our children what is happening in Ukraine now,” Mr. Spodinskiy said, as other visitors took photographs of the debris. “We cannot speak with our children as if nothing is happening,” he added, “because they clearly understand everything, and they see what happens in our country.”
  • “The history of our country is being created, and now this is an opportunity to get in touch with it,” said another visitor, Serhiy Pashchukov, a 31-year-old from Luhansk, which was occupied by Russia in 2014.
  • Those discoveries and many others have become items in an exhibition called “Crucified Ukraine” that opened on May 8 at Mr. Savchuk’s museum, an unusual effort to chronicle the war even as battles continue to rage in Ukraine’s east and south.
  • The rooms are dank and cold, but the most striking thing, many visitors said, was that it smells as if the people who sheltered with their belongings there — including onions, blankets, and toys — had just left.
  • “We had a similar basement in Bucha in a newly built apartment building,” said Evgeniya Skrypnyk, a 32-year-old from a suburb of Kyiv where Russian soldiers killed and terrorized civilians.
  • The one historical inaccuracy in the shelter was the absence of the five buckets that stood in the hallway where the people who lived underground for more than a month relieved themselves.
  • Remembrance of World War II has become more complex since the war started. In Russia, the Kremlin has sought to glorify the Soviet victory — to which millions of Ukrainians contributed — as a source of national pride. But it has also called upon memories of that war to justify and build support for the invasion of Ukraine, with Mr. Putin seeking to falsely portray Ukrainian leaders as “Nazis.”
  • Mr. Savchuk said that in light of the current war, people were talking about a “complete reconstruction” of the museum complex, whose architecture is intended to awe visitors with the memory of the Soviet victory in World War II, to de-emphasize the fight against Nazi Germany.
  • “This war changed everything,” he said. “A museum is not only an exhibition, it is a territory, it is its monuments, it is a place of memory. We are thinking about changing not only the ideology, but also the architecture, the emphasis.”

China's reaction to Russian incursion into Ukraine muted, denies backing it - The Washi... - 0 views

  • The Russian attacks are the greatest test yet for an emerging Moscow-Beijing axis, which has recently shown signs of evolving from what many considered a “marriage of convenience” to something resembling a formal alliance
  • In recent weeks, China has voiced support for Russia’s “legitimate security concerns” but has balanced that with calls for restraint and negotiations, echoing the approach China took during the 2014 invasion of Crimea. Beijing appeared to be repeating that tightrope walk on Thursday, as it called for calm while news of the attacks sent regional markets plunging.
  • Despite the outward show of mutual support between the two countries, there have been indications that China was caught flat-footed by Russian President Vladimir Putin’s announcement of military action.
  • That same day, when China warned its nationals in Ukraine about a worsening situation, it did not tell them to leave the country. On Thursday, with explosions going off nearby, many of the 8,000-odd Chinese passport holders in the country took to microblog Weibo to call for help.
  • Yun Sun, Director of the China Program at the Stimson Center, noted Tuesday that the Chinese policy community appeared to be in “shock” at the sudden escalation of fighting after having “subscribed to the theory that Putin was only posturing and that U.S. intelligence was inaccurate as in the case of invading Iraq.”
  • Minutes after the declaration, Chinese representative to the United Nations Zhang Jun was telling a Security Council meeting: “we believe that the door to a peaceful solution to the Ukraine situation is not fully shut, nor should it be.”
  • In recent weeks, Chinese experts have argued that de-escalation was possible even as they adopted Russia’s view of the conflict. Wang Yiwei, director of the Center for European Studies at Renmin University, wrote in late January that only the actions of Ukraine or the United States could bring about a war, but because the former lacked “gall” and the latter lacked strength for a direct conflict with Russia, tensions could be dispelled.
  • “When can China evacuate?” asked a user with the handle LumpyCut. “We are in Kyiv near the airport. I just heard three enormous bombings and can estimate the size of the mushroom clouds by sight.”
  • In an interview on Thursday, Wang defended his prediction as being primarily about the possibility of a direct conflict between the United States and Russia, not fighting in eastern Ukraine.
  • Hua also rejected suggestions that China might adhere to U.S.-led sanctions against Russia, pointing to China’s long-held stance against the use of sanctions adopted outside of United Nations deliberations.
  • China’s support for Russia has also stopped short of direct approval for Russian military action. Over the weekend, Chinese Foreign Minister Wang Yi reiterated that all countries’ sovereignty must be respected, adding that “Ukraine is not an exception.”
  • Such hesitation comes, however, during a time of growing strategic alignment between Moscow and Beijing, built primarily on shared disdain for the United States and the Western-led world order.
  • Hawkish commentators in China were quick to explain Putin’s attack on Thursday as the result of provocation from the United States. “That the situation came to today’s step is due to spiraling escalation,” Fu Qianshao, a military commentator, told nationalist publication the Shanghai Observer
  • “Russia had already said many times that it would withdraw troops, but America always promoted an atmosphere of conflict.”

Xi and Putin's 'No Limits' Bond Leaves China Few Options on Ukraine - The New York Times - 0 views

  • They had just finalized a statement declaring their vision of a new international order with Moscow and Beijing at its core, untethered from American power.
  • Over dinner, according to China’s official readout, they discussed “major hot-spot issues of mutual concern.”
  • Publicly, Mr. Xi and Mr. Putin had vowed that their countries’ friendship had “no limits.” The Chinese leader also declared that there would be “no wavering” in their partnership, and he added his weight to Mr. Putin’s accusations of Western betrayal in Europe.
  • Mr. Xi’s statement with Mr. Putin on Feb. 4 endorsed a Russian security proposal that would exclude Ukraine from joining the North Atlantic Treaty Organization.
  • “He’s damned if he did know, and damned if he didn’t,” Paul Haenle, a former director for China on the National Security Council, said of whether Mr. Xi had been aware of Russia’s plans to invade. “If he did know and he didn’t tell people, he’s complicit; if he wasn’t told by Putin, it’s an affront.”
  • In any case, the invasion evidently surprised many in Beijing’s establishment
  • The implications for China extend beyond Ukraine, and even Europe.
  • Even so, as Mr. Putin became determined to reverse Ukraine’s turn to Western security protections, Chinese officials began to echo Russian arguments. Beijing also saw a growing threat from American-led military blocs.
  • “Putin may have done this anyway, but also it was unquestionably an enabling backdrop that was provided by the joint statement, the visit and Xi’s association with all of these things,”
  • Before and shortly after the invasion, Beijing sounded sympathetic to Moscow’s security demands, mocking Western warnings of war and accusing the United States of goading Russia. Over the past two weeks, though, China has sought to edge slightly away from Russia. It has softened its tone, expressing grief over civilian casualties. It has cast itself as an impartial party, calling for peace talks and for the war to stop as soon as possible.
  • For decades it sought to build ties with Russia while also keeping Ukraine close.
  • Over the past years, as growing numbers of Ukrainians supported joining NATO, Chinese diplomats did not raise objections with Kyiv, said Sergiy Gerasymchuk, an analyst with Ukrainian Prism, a foreign policy research organization in Kyiv.
  • For both leaders, their partnership was an answer to Mr. Biden’s effort to forge an “alliance of democracies.”
  • “He owns that relationship with Putin,” Mr. Haenle said. “If you’re suggesting in the Chinese system right now that it was not smart to get that close to Russia, you’re in effect criticizing the leader.”
  • Beijing had its own complaints with NATO, rooted in the bombing of the Chinese Embassy in Belgrade, Serbia, during NATO’s war in 1999 to protect a breakaway region, Kosovo. Those suspicions deepened when NATO in 2021 began to describe China as an emerging challenge to the alliance.
  • On Feb. 23, a foreign ministry spokeswoman, Hua Chunying, accused Washington of “manufacturing panic.”
  • Chinese officials tweaked their calls to heed Russia’s security, stressing that “any country’s legitimate security concerns should be respected.” They still did not use the word “invasion,” but have acknowledged a “conflict between Ukraine and Russia.”
  • “Many decision makers in China began to perceive relations in black and white: either you are a Chinese ally or an American one,”
  • “They still want to remain sort of neutral, but they bitterly failed.”

Russia-Ukraine live updates: 'Don't even think' about moving in NATO territory: Biden -... - 0 views

  • The attack began Feb. 24, when Russian President Vladimir Putin announced a "special military operation." Russian forces moving from neighboring Belarus toward Ukraine's capital, Kyiv, have advanced closer to the city center in recent days despite the resistance. Heavy shelling and missile attacks, many on civilian buildings, continue in Kyiv, as well as major cities like Kharkiv and Mariupol. Russia also bombed western cities for the first time last week, targeting Lviv and a military base near the Poland border.
  • The U.S. will be providing Ukraine with $100 million in "civilian security" assistance, U.S. Secretary of State Antony Blinken announced Saturday, hours after he and Defense Secretary Lloyd Austin met with their Ukrainian counterparts. The aid will provide equipment including armored vehicles, medical supplies, personal protective equipment and communications equipment, according to the Department of State.
  • "We’ll not cease the efforts to get humanitarian relief wherever it is needed in Ukraine and for the people who’ve made it out of Ukraine. Notwithstanding the brutality of Vladimir Putin, let there be no doubt that this war [has] already been a strategic failure for Russia," Biden said.
  • Biden also addressed the Russian people, telling them: "You, the Russian people, are not our enemy." "The American people stand with you and the brave people of Ukraine for peace," Biden said.
  • In an address from Warsaw Saturday, President Joe Biden made remarks seemingly directed at Russian President Vladimir Putin and his invasion of Ukraine. "For god's sake, this man cannot remain in power," Biden said. After the speech, the White House released a statement saying the president wasn't calling for a regime change.
  • "Vladimir Putin's aggression have cut you, the Russian people, off from the rest of the world, and it’s taking Russia back to the 19th century. This is not who you are," Biden said.Biden praised Ukrainian resistance, saying the U.S. stands with the people of Ukraine and will continue to support them.

Sam Altman, the ChatGPT King, Is Pretty Sure It's All Going to Be OK - The New York Times - 0 views

  • He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
  • “I try to be upfront,” he said. “Am I doing something good? Or really bad?”
  • In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.
  • And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.
  • This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”
  • As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.
  • “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
  • Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
  • Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
  • he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
  • To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be.
  • in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said.
  • His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
  • He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
  • He was not exactly sure what problems it would solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
  • Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
  • “In a single conversation,” she said, “he is both sides of the debate club.”
  • He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
  • he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
  • Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.
  • “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
  • “He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
  • poker taught Mr. Altman how to read people and evaluate risk.
  • It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
  • He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.
  • In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
  • Within a year, he had transformed OpenAI into a nonprofit with a for-profit arm. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.
  • Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
  • “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
  • The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
  • Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.
  • In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
  • The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.
  • The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.
  • Mr. Yudkowsky also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI.
  • They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
  • Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.
  • As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.
  • Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation.
  • Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
  • He told me that it would be a “very slow takeoff.”
  • When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
  • If he’s wrong, he thinks he can make it up to humanity.
  • His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
  • If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
  • But as he once told me: “I feel like the A.G.I. can help with that.”
17More

In defense of science fiction - by Noah Smith - Noahpinion - 0 views

  • I’m a big fan of science fiction (see my list of favorites from last week)! So when people start bashing the genre, I tend to leap to its defense.
  • this time, the people doing the bashing are some serious heavyweights themselves — Charles Stross, the celebrated award-winning sci-fi author, and Tyler Austin Harper, a professor who studies science fiction for a living.
  • The two critiques center on the same idea — that rich people have misused sci-fi, taking inspiration from dystopian stories and working to make those dystopias a reality.
  • ...14 more annotations...
  • [Science fiction’s influence]…leaves us facing a future we were all warned about, courtesy of dystopian novels mistaken for instruction manuals…[T]he billionaires behind the steering wheel have mistaken cautionary tales and entertainments for a road map, and we’re trapped in the passenger seat.
  • But even then it would be hard to argue exogeneity, since censorship is a response to society’s values as well as a potential cause of them.
  • Stross is alleging that the billionaires are getting Gernsback and Campbell’s intentions exactly right. His problem is simply that Gernsback and Campbell were kind of right-wing, at least by modern standards, and he’s worried that their sci-fi acted as propaganda for right-wing ideas.
  • The question of whether literature has a political effect is an empirical one — and it’s a very difficult empirical one. It’s extremely hard to test the hypothesis that literature exerts a diffuse influence on the values and preconceptions of the citizenry.
  • I think Stross really doesn’t come up with any credible examples of billionaires mistaking cautionary tales for road maps. Instead, most of his article focuses on a very different critique — the idea that sci-fi authors inculcate rich technologists with bad values and bad visions of what the future ought to look like.
  • I agree that the internet and cell phones have had an ambiguous overall impact on human welfare. If modern technology does have a Torment Nexus, it’s the mobile-social nexus that keeps us riveted to highly artificial, attenuated parasocial interactions for every waking hour of our day. But these technologies are still very young, and it remains to be seen whether the ways in which we use them will get better or worse over time.
  • There are very few technologies — if any — whose impact we can project into the far future at the moment of their inception. So unless you think our species should just refuse to create any new technology at all, you have to accept that each one is going to be a bit of a gamble.
  • As for weapons of war, those are clearly bad in terms of their direct effects on the people on the receiving end. But it’s possible that more powerful weapons — such as the atomic bomb — serve to deter more deaths than they cause.
  • yes, AI is risky, but the need to manage and limit risk is a far cry from the litany of negative assumptions and extrapolations that often gets flung in the technology’s direction.
  • I think the main problem with Harper’s argument is simply techno-pessimism. So far, technology’s effects on humanity have been mostly good, lifting us up from the muck of desperate poverty and enabling the creation of a healthier, more peaceful, more humane world. Any serious discussion of the effects of innovation on society must acknowledge that. We might have hit an inflection point where it all goes downhill from here, and future technologies become the Torment Nexuses that we’ve successfully avoided in the past. But it’s very premature to assume we’ve hit that point.
  • I understand that the 2020s are an exhausted age, in which we’re still reeling from the social ructions of the 2010s. I understand that in such a weary and fearful condition, it’s natural to want to slow the march of technological progress as a proxy for slowing the headlong rush of social progress.
  • And I also understand how easy it is to get negatively polarized against billionaires, and any technologies that billionaires invent, and any literature that billionaires like to read.
  • But at a time when we’re creating vaccines against cancer and abundant clean energy and any number of other life-improving and productivity-boosting marvels, it’s a little strange to think that technology is ruining the world
  • The dystopian elements of modern life are mostly just prosaic, old things — political demagogues, sclerotic industries, social divisions, monopoly power, environmental damage, school bullies, crime, opiates, and so on
6More

Book Review: 'A Hitch in Time,' by Christopher Hitchens - The New York Times - 0 views

  • These are book reviews and diary essays written for The London Review of Books between 1983 and 2002. None has previously been anthologized. The pieces are split almost evenly between political topics (Margaret Thatcher, Bill Clinton, the Oklahoma City bombing, Nixon and Kennedy, Kim Philby, the radicalism of 1968) and literary, academic and social ones (Tom Wolfe, the Academy Awards, Salman Rushdie, P.G. Wodehouse, spanking, Gore Vidal, Diana Mosley, Isaiah Berlin).
  • this miscellany ends in 2002. That was the year Hitchens, previously a self-described “extreme leftist,” came out in favor of the invasion of Iraq. He broke with The Nation, The London Review of Books and many of his old friends.
  • Why care about a pile of old book reviews? Hitchens’s didn’t sound like other people’s. He had none of the form’s mannerisms. He rarely praised or blamed; instead, he made distinctions, and he piled up evidence.
  • ...3 more annotations...
  • For him, the books were occasions; he picked up the bits that interested him and ran with them. (“It’s a book review, not a bouillon cube,” as Nicholson Baker put it, replying to Ken Auletta, who had complained about one of Baker’s similarly rangy reviews in the Book Review.)
  • Spying Henry Kissinger in the Sistine Chapel gawping at the Hell section of “The Last Judgment,” Vidal commented: “Look, he’s apartment hunting.”
  • Hitchens was sui generis. He made most other book reviewers, to borrow Dorothy Parker’s words about the drama critic George Jean Nathan, “look as if they spelled out their reviews with alphabet blocks.”