Group items matching "doomsday" in title, tags, annotations or url

Javier E

What Oppenheimer really knew about an atomic bomb ending the world - The Washington Post

  • In a chilling, existential, bizarrely comic moment, the new movie “Oppenheimer” revives an old question: Did Manhattan Project scientists think there was even a minute possibility that detonating the first atomic bomb on the remote plains of New Mexico could destroy the world?
  • physicists knew it wouldn’t, long before the Trinity test on July 16, 1945, at the Alamogordo Bombing Range, about 210 miles south of the secret Los Alamos, N.M., laboratory.
  • “This thing has been blown out of proportion over the years,” said Richard Rhodes, author of the Pulitzer Prize-winning book “The Making of the Atomic Bomb.” The question on the scientists’ minds before the test, he said, “wasn’t, ‘Is it going to blow up the world?’ It was, ‘Is it going to work at all?’”
  • In the movie, one scene has J. Robert Oppenheimer, director of the laboratory, seeking to reassure his boss, Gen. Leslie Groves, on the eve of the test. Upon investigation, Oppenheimer tells him, physicists have concluded that the chances the test detonation will destroy the world are “near zero.” Realizing the news has alarmed, not reassured, the general, Oppenheimer asks, “What do you want from theory alone?” “Zero would be nice,” the general replies.
  • no physicists or historians interviewed for this story recalled coming across any mention of such a conversation between Oppenheimer and the general in the historical record.
  • “Did the actual exchange happen at that moment? No, I don’t think so,” said Alex Wellerstein, an associate professor at Stevens Institute of Technology in Hoboken, N.J., and author of the 2021 book, “Restricted Data: The History of Nuclear Secrecy in the United States.” “But were there discussions like that? I believe so,” he added.
  • At a conference in the summer of 1942, almost a full year before Los Alamos opened, physicist Edward Teller raised the possibility of atomic bombs igniting Earth’s oceans or atmosphere. According to Rhodes’s account, Hans Bethe, who headed the theoretical division at Los Alamos, “didn’t believe it from the first minute” but nonetheless performed the calculations convincing the other physicists that such a disaster was not a reasonable possibility.
  • “I don’t think any physicists seriously worried about it,” said John Preskill, a professor of theoretical physics at California Institute of Technology.
  • Still, the discussions and calculations persisted long after the Trinity test. In 1946, three Manhattan Project scientists, including Teller, who would later become known as the father of the hydrogen bomb, wrote a report concluding that the explosive force of the first atomic bomb wasn’t even close to what would be required to trigger a planet-destroying chain reaction in air. The report was not declassified until 1973.
  • A 1979 study by scientists at the University of California’s Lawrence Livermore Laboratory examined the question of whether a nuclear explosion might trigger a runaway reaction in the atmosphere or oceans. In page after page of mathematical equations, the scientists described a complex set of factors that made atmospheric ignition effectively impossible.
  • Probably the easiest to grasp is the fact that, even under the harshest scenarios, far more energy would be lost in the explosion than gained, wiping out any chance to sustain a chain reaction.
  • Dudley’s essay also recounted a story that on the day of the test, “as zero hour approached” Gen. Groves was annoyed to find Manhattan Project physicist and Nobel Prize winner Enrico Fermi making bets with colleagues about whether the bomb would ignite the atmosphere, “and, if so, whether it would destroy only New Mexico ― or the entire world.” (Some experts have suggested Fermi’s actions may have been more of a joke, or an example of gallows humor.)
  • Fascination with this doomsday scenario may stem, at least in part, from a misunderstanding of what physicists mean when they say “near zero.” The branch of physics known as quantum mechanics, which deals with matter and light at the atomic and subatomic scale, does not rule out any possibilities.
  • For example, if a boy tosses a rubber ball at a brick wall, there is an exceedingly remote — but still valid — possibility that instead of watching the ball bounce back, he could see it pass through the wall.
  • Aditi Verma, an assistant professor of nuclear engineering and radiological sciences at the University of Michigan, put it this way: “What a physicist means by ‘near zero’ would be zero to an engineer.”
  • In the 2000s, scientists encountered a similar problem of terminology as they prepared to generate high-speed particle collisions at the Large Hadron Collider in Geneva. Talk surfaced that the activity might generate a black hole that would devour Earth.
  • As outlandish as the notion was to many scientists, the nuclear research organization CERN felt obliged to deal with the fear, noting on its website that “some theories suggest that the formation of tiny ‘quantum’ black holes may be possible. The observation of such an event would be thrilling in terms of our understanding of the Universe; and would be perfectly safe.”
  • In other words, any black hole created by the collider would be far too small to pose any risk to the planet. Scientists say such disaster scenarios are sometimes the price of crossing new thresholds of discovery.
  • “You don’t often talk in certainties,” he said. “You talk in probabilities. If you haven’t done the experiment, you are hesitant to say ‘This is impossible. It will never happen.’ … It was good to think it through.”
  • Rhodes added that he hopes the “Oppenheimer” movie will not lead people to doubt the scientists on the Manhattan Project. “They knew what they were doing,” he said. “They were not feeling around in the dark.”
Javier E

Thinking About the Unthinkable in Ukraine: What Happens If Putin Goes Nuclear?

  • Planning for the potential that Russia would use nuclear weapons is imperative; the danger would be greatest if the war were to turn decisively in Ukraine’s favor.
  • There are three general options within which U.S. policymakers would find a variation to respond to a Russian nuclear attack against Ukraine
  • The United States could opt to rhetorically decry a nuclear detonation but do nothing militarily. It could unleash nuclear weapons of its own. Or it could refrain from a nuclear counterattack but enter the war directly with large-scale conventional airstrikes and the mobilization of ground forces.
  • A conventional war response is the least bad of the three because it avoids the higher risks of either the weaker or the stronger options.
  • Today, with the balance of forces reversed since the Cold War, the current Russian doctrine of “escalate to deescalate” mimics NATO’s Cold War “flexible response” concept.
  • NATO promoted the policy of flexible response rhetorically, but the idea was always shaky strategically. The actual contingency plans it generated never commanded consensus simply because initiating the use of nuclear weapons risked tit-for-tat exchanges that could culminate in an apocalyptic unlimited war.
  • the group could not reach agreement on specific follow-on options beyond an initial symbolic “demonstration shot” for psychological effect, for fear that Moscow could always match them or up the ante.
  • NATO policymakers should not bank on Moscow’s restraint. Putin has more at stake in the war than Ukraine’s nuclear-armed supporters outside the country do, and he could bet that in a pinch, Washington would be less willing to play Russian roulette than he is
  • As NATO confronts the possibility of Russia using nuclear weapons, the first question it needs to answer is whether that eventuality should constitute a real redline for the West
  • As dishonorable as submission sounds to hawks in advance, if the time actually comes, it will have strong appeal to Americans because it would avoid the ultimate risk of national suicide.
  • That immediate appeal has to be balanced by the longer-term risks that would balloon from setting the epochal precedent that initiating a nuclear attack pays off
  • This dilemma underlines the obvious imperative of maximizing Moscow’s disincentives to go nuclear in the first place.
  • If NATO wants to deter Putin from the nuclear gambit in the first place, governments need to indicate as credibly as possible that Russian nuclear use would provoke NATO, not cow it.
  • If NATO decides it would strike back on Ukraine’s behalf, then more questions arise: whether to also fire nuclear weapons and, if so, how. The most prevalent notion is an eye-for-an-eye nuclear counterattack destroying Russian targets comparable to the ones the original Russian attack had hit.
  • it invites slow-motion exchanges in which neither side gives up and both ultimately end up devastated.
  • both the tit-for-tat and the disproportionate retaliatory options pose dauntingly high risks.
  • A less dangerous option would be to respond to a nuclear attack by launching an air campaign with conventional munitions alone against Russian military targets and mobilizing ground forces for potential deployment into the battle in Ukraine. This would be coupled with two strong public declarations. First, to dampen views of this low-level option as weak, NATO policymakers would emphasize that modern precision technology makes tactical nuclear weapons unnecessary for effectively striking targets that used to be considered vulnerable only to undiscriminating weapons of mass destruction
  • That would frame Russia’s resort to nuclear strikes as further evidence not only of its barbarism but of its military backwardness.
  • The second important message to emphasize would be that any subsequent Russian nuclear use would trigger American nuclear retaliation.
  • Such a strategy would appear weaker than retaliation in kind and would worsen the Russians’ desperation about losing rather than relieve it, thus leaving their original motive for escalation in place along with the possibility that they would double down and use even more nuclear weapons.
  • The main virtue of the conventional option is simply that it would not be as risky as either the weaker do-nothing or the stronger nuclear options.
  • If the challenge that is now only hypothetical actually arrives, entering a nuclearized war could easily strike Americans as an experiment they do not want to run. For that reason, there is a very real possibility that policymakers would wind up with the weakest option: rant about the unthinkable barbarity of the Russian action and implement whatever unused economic sanctions are still available but do nothing militarily.
  • So far, Moscow has been buoyed by the refusal of China, India, and other countries to fully join the economic sanctions campaign imposed by the West. These fence sitters, however, have a stake in maintaining the nuclear taboo. They might be persuaded to declare that their continued economic collaboration with Russia is contingent on it refraining from the use of nuclear weapons.
Javier E

Apocalypse When? Global Warming’s Endless Scroll - The New York Times

  • the climate crisis is outpacing our emotional capacity to describe it
  • I can’t say precisely when the end began, just that in the past several years, “the end of the world” stopped referring to a future cataclysmic event and started to describe our present situation
  • Across the ironized hellscape of the internet, we began “tweeting through the apocalypse” and blogging the Golden Globes ceremony “during the end times” and streaming “Emily in Paris” “at the end of the world.”
  • global warming represents the collapse of such complex systems at such an extreme scale that it overrides our emotional capacity
  • it is darkly inverted on the Instagram account @afffirmations, where new-age positive thinking buckles under the weight of generational despair, and serene stock photography collides with mantras like “I am not climate change psychosis” and “Humanity is not doomed.”
  • Often the features of our dystopia are itemized, as if we are briskly touring the concentric circles of hell — rising inequality, declining democracy, unending pandemic, the financial system optimistically described as “late” capitalism — until we have reached the inferno’s toasty center, which is the destruction of the Earth through man-made global warming.
  • This creates its own perverse flavor of climate denial: We acknowledge the science but do not truly accept it, at least not enough to urgently act.
  • This paralysis itself is almost too horrible to contemplate. As global warming cooks the Earth, it melts our brains, fries our nerves and explodes the narratives that we like to tell about humankind — even the apocalyptic ones.
  • This “end of the world” does not resemble the ends of religious prophecies or disaster films, in which the human experiment culminates in dramatic final spectacles
  • Instead we persist in an oxymoronic state, inhabiting an end that has already begun but may never actually end.
Javier E

How 2020 Forced Facebook and Twitter to Step In - The Atlantic

  • mainstream platforms learned their lesson, accepting that they should intervene aggressively in more and more cases when users post content that might cause social harm.
  • During the wildfires in the American West in September, Facebook and Twitter took down false claims about their cause, even though the platforms had not done the same when large parts of Australia were engulfed in flames at the start of the year
  • Twitter, Facebook, and YouTube cracked down on QAnon, a sprawling, incoherent, and constantly evolving conspiracy theory, even though its borders are hard to delineate.
  • Facebook tweaked its algorithm to boost authoritative sources in the news feed and turned off recommendations to join groups based around political or social issues. It is reversing some of these steps now, but in the future it cannot make people forget that this toolbox exists.
  • Nothing symbolizes this shift as neatly as Facebook’s decision in October (and Twitter’s shortly after) to start banning Holocaust denial. Almost exactly a year earlier, Zuckerberg had proudly tied himself to the First Amendment in a widely publicized “stand for free expression” at Georgetown University.
  • The evolution continues. Facebook announced earlier this month that it will join platforms such as YouTube and TikTok in removing, not merely labeling or down-ranking, false claims about COVID-19 vaccines.
  • the pandemic also showed that complete neutrality is impossible. Even though it’s not clear that removing content outright is the best way to correct misperceptions, Facebook and other platforms plainly want to signal that, at least in the current crisis, they don’t want to be seen as feeding people information that might kill them.
  • As platforms grow more comfortable with their power, they are recognizing that they have options beyond taking posts down or leaving them up. In addition to warning labels, Facebook implemented other “break glass” measures to stem misinformation as the election approached.
  • Down-ranking, labeling, or deleting content on an internet platform does not address the social or political circumstances that caused it to be posted in the first place
  • Content moderation comes to every content platform eventually, and platforms are starting to realize this faster than ever.
  • Platforms don’t deserve praise for belatedly noticing dumpster fires that they helped create and affixing unobtrusive labels to them
  • Warning labels for misinformation might make some commentators feel a little better, but whether labels actually do much to contain the spread of false information is still unknown.
  • News reporting suggests that insiders at Facebook knew they could and should do more about misinformation, but higher-ups vetoed their ideas. YouTube barely acted to stem the flood of misinformation about election results on its platform.
  • When internet platforms announce new policies, assessing whether they can and will enforce them consistently has always been difficult. In essence, the companies are grading their own work. But too often what can be gleaned from the outside suggests that they’re failing.
  • And if 2020 finally made clear to platforms the need for greater content moderation, it also exposed the inevitable limits of content moderation.
  • Even before the pandemic, YouTube had begun adjusting its recommendation algorithm to reduce the spread of borderline and harmful content, and is introducing pop-up nudges to encourage users
  • even the most powerful platform will never be able to fully compensate for the failures of other governing institutions or be able to stop the leader of the free world from constructing an alternative reality when a whole media ecosystem is ready and willing to enable him. As Renée DiResta wrote in The Atlantic last month, “reducing the supply of misinformation doesn’t eliminate the demand.”
  • Even so, this year’s events showed that nothing is innate, inevitable, or immutable about platforms as they currently exist. The possibilities for what they might become—and what role they will play in society—are limited more by imagination than any fixed technological constraint, and the companies appear more willing to experiment than ever.
Javier E

Sam Altman, the ChatGPT King, Is Pretty Sure It’s All Going to Be OK - The New York Times

  • He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
  • “I try to be upfront,” he said. “Am I doing something good? Or really bad?”
  • In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.
  • And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.
  • This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”
  • As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.
  • “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
  • Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
  • Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
  • he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
  • To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be
  • in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said
  • His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
  • He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
  • He was not exactly sure what problems it will solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
  • Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
  • “In a single conversation,” she said, “he is both sides of the debate club.”
  • He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
  • he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
  • Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.
  • “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
  • “He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
  • poker taught Mr. Altman how to read people and evaluate risk.
  • It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
  • He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.
  • In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
  • Within a year, he had transformed OpenAI into a nonprofit with a for-profit arm. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.
  • Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
  • “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
  • The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
  • Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.
  • In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
  • The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.
  • The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.
  • He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI.
  • They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
  • Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.
  • As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.
  • Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation.
  • Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
  • He told me that it would be a “very slow takeoff.”
  • When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
  • If he’s wrong, he thinks he can make it up to humanity.
  • His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
  • If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
  • But as he once told me: “I feel like the A.G.I. can help with that.”