Home / History Readings / Group items tagged atomization


Javier E

Trump, Taxes and Citizenship - The New York Times

  • You can be a taxpayer or you can be a citizen. If you’re a taxpayer your role in the country is defined by your economic and legal status. Your primary identity is individual. You’re perfectly within your rights to do everything you legally can to look after your self-interest.
  • Within this logic, it’s perfectly fine for Donald Trump to have potentially paid no income taxes, even over a long period of time
  • As Trump and his advisers have argued, it is normal practice in our society to pay as little in taxes as possible.
  • The problem with the taxpayer mentality is that you end up serving your individual interest short term but soiling the nest you need to be happy in over the long term.
  • A healthy nation isn’t just an atomized mass of individual economic and legal units. A nation is a web of giving and getting. You give to your job, and your employer gives to you. You give to your neighborhood, and your neighborhood gives to you. You give to your government, and your government gives to you.
  • It starts with the warm glow of love of country.
  • this is exactly the atomized mentality that is corroding America. Years ago, David Foster Wallace put it gently: “It may sound reactionary, I know. But we can all feel it. We’ve changed the way we think of ourselves as citizens. We don’t think of ourselves as citizens in the old sense of being small parts of something larger and infinitely more important to which we have serious responsibilities. We do still think of ourselves as citizens in the sense of being beneficiaries — we’re actually conscious of our rights as American citizens and the nation’s responsibilities to us and ensuring we get our share of the American pie.”
  • The older citizenship mentality is a different mentality.
  • If you orient everything around individual self-interest, you end up ripping the web of giving and receiving. Neighbors can’t trust neighbors. Individuals can’t trust their institutions, and they certainly can’t trust their government. Everything that is not explicitly prohibited is permissible. Everybody winds up suspicious and defensive and competitive.
  • It continues with a sense of sweet gratitude that the founders of the country, for all their flaws, were able to craft a structure of government that is suppler and more lasting than anything we seem to be able to craft today.
  • The citizen enjoys a sweet reverence for all the gifts that have been handed down over time, and a generous piety about country that is the opposite of arrogance.
  • Out of this sweet parfait of emotions comes a sense of a common beauty that transcends individual beauty. There’s a sense of how a lovely society is supposed to be. This means that the economic desire to save money on taxes competes with a larger desire to be part of a lovely world.
  • In a lovely society everybody practices a kind of social hygiene. There are some things that are legal but distasteful and corrupt. In a lovely society people shun these corrupt and corrupting things.
  • In a lovely society everyone feels privilege, but the rich feel a special privilege. They know that they have already been given more than they deserve, and that it is actually not going to hurt all that much to try to be worthy of what they’ve received.
  • You can say that a billionaire paying no taxes is fine and legal. But you have to adopt an overall mentality that shuts down a piece of your heart, and most of your moral sentiments.
  • That mentality is entirely divorced from the mentality of commonality and citizenship. It has side effects: it may lead toward riches, but it leads away from happiness.
bodycot

Obama on Trump: 'Don't underestimate the guy, because he's going to be 45th president o...

    • bodycot
       
      Obama on the end of his presidency.
  • Thousands of people showed up in freezing temperatures on Sunday in Michigan to hear Sen. Bernie Sanders denounce Republican efforts to repeal President Barack Obama's health care law, one of dozens of rallies Democrats staged across the country to highlight opposition.
  • "I'm going to get really sick and my life will be at risk," said Bible, an online antique dealer.
  • "This is the wealthiest country in the history of the world. It is time we got our national priorities right," Sanders told the Michigan rally.
  • Britt Waligorski, 31, a health care administrator for a dental practice, said she didn't get health insurance through work but has been covered through the health law for three years. While the premiums have gone up, she said she is concerned that services for women will be taken away if it is repealed.
  • About 2,000 people cheered and held rainbow and American flags and signs that read "Don't Make America Sick Again" and "Health Care For All" at the rally.
  • Republicans want to end the fines that enforce the requirement that many individuals buy coverage and that larger companies provide it to workers.
    • bodycot
       
      Pro-ACA rally.
  • With eager anticipation, the Kremlin is counting the days to Donald Trump's inauguration and venting its anger at Barack Obama's outgoing administration, no holds barred.
  • At the same time, Russian officials are blasting the outgoing U.S. administration in distinctly undiplomatic language, dropping all decorum after Obama hit Moscow with more sanctions in his final weeks in office.
  • On Sunday, Vice President-elect Mike Pence insisted the Trump presidential campaign had no contacts with Russia and denied that the incoming national security adviser spoke with Russian officials in December about sanctions. He added that such questions were part of an effort to cast doubt on Trump's victory.
  • In an interview Friday with The Wall Street Journal, Trump said he might do away with Obama's sanctions if Russia works with the U.S. on battling terrorists and achieving other goals.
    • bodycot
       
      Kremlin
  • "We and many analysts believe that the (agreement) is consolidated. The new U.S. administration will not be able to abandon it," Araqchi told a news conference in Tehran, held a year after the deal took effect.
  • Trump, who will take office on Friday, has threatened to either scrap the agreement, which curbs Iran's nuclear programme and lifts sanctions against it, or seek a better deal.
  • "It's quite likely that the U.S. Congress or the next administration will act against Iran and impose new sanctions."
  • But Iran is still subject to a U.N. arms embargo and other restrictions, which are not technically part of the nuclear agreement.
    • bodycot
       
      Iran Nuclear Deal.
  • The event was marked by tense exchanges as Trump repeated his refusal to release his tax returns and denounced media outlets that published stories based on unverified allegations about his ties to the Kremlin
  • Trump began his remarks on Tuesday by blaming “inaccurate news” for his decision not to take questions from the press more often.
  • Trump went on to address a pair of reports published Tuesday night that touched on unverified accusations about his relationship with Russia. The first report, which came from CNN, said intelligence officials had presented information to Trump alleging that the Russian government had an ongoing relationship with members of his campaign — and, more sensationally, possessed compromising information about him that could be used for blackmail.
  • “I want to thank a lot of the news organizations … some of whom have not treated me very well over the years. …
  • “It’s all fake news. It’s phony stuff. It didn’t happen, and it was gotten by opponents of ours, as you know, because you reported it and so did many of the other people.
    • bodycot
       
      Trump press conference.
  • “No, no, no,” Jones said with a sly grin that barely disguised his evident hostility. Sitting back in his barber chair, he shook his head and narrowed his eyes. “That’s not why you are here. You’re here because of the billboards, because of the KKK. That’s why you are here.”
  • When the controversial billboards were ripped down and defaced, they were replaced almost immediately.
  • “While Trump wants to make America great again, we have to ask ourselves, ‘What made America great in the first place?
  • The Trump campaign quickly disavowed the endorsement
    • bodycot
       
      KKK
maddieireland334

Hiroshima bombing was Japan's fault, says Chinese state media - The Washington Post

  • President Obama became the first sitting U.S. president to visit the Japanese city of Hiroshima, site of the first use of a nuclear bomb in warfare more than seven decades ago. He did not apologize for his nation's act — which led to the deaths of an estimated 140,000 people — but made a somber speech about the need for disarmament.
  • The decision by President Harry Truman to deploy this terrifying weapon was, according to the China Daily, "a bid to bring an early end to the war and prevent protracted warfare from claiming even more lives."
  • The editorial reminded all that Japan's imperialist regime had brought on the onslaught after a decade of expansionist war and brutal occupation elsewhere in Asia.
  • "It was the war of aggression the Japanese militarist government launched against its neighbors and its refusal to accept its failure that had led to U.S. dropping the atomic bombs," it concluded.
  • To this day, governments in Beijing and Seoul both complain about Japan's perceived failures to fully atone for and properly remember the violence unleashed by its military.
  • The Chinese rhetoric was similar to what was aired in April following U.S. Secretary of State John F. Kerry's own visit to Hiroshima.
  • An editorial by the state-run Xinhua news agency said "it is Tokyo's lasting moral obligation to let that notorious chapter known by every citizen of the country and make compensations and apologies fair and square to the affected individuals and facilities, not just in Japan but also in other stricken nations."
aqconces

Barack Obama's Hiroshima trip stirs debate on Harry Truman's fateful choice - ABC News ...

  • Barack Obama's visit to Hiroshima next week has reignited an emotive debate over former US president Harry Truman's epoch-making decision to drop the first atomic bomb
  • Within four months, the atomic bomb had been successfully tested, targets had been selected, "Little Boy" and "Fat Man" had been dropped on Hiroshima and Nagasaki killing an estimated 214,000 people, and Japan's Emperor Hirohito had surrendered.
  • The speed, circumstances and repercussions of Truman's decision remain contentious.
  • "When Mr Obama visits Hiroshima on May 27 he should place no distance between himself and Harry Truman," wrote Wilson Miscamble, a Notre Dame University history professor.
  • "Rather he should pay tribute to the president whose actions brought a terrible war to an end."
  • "I was one of those who felt that there were a number of cogent reasons to question the wisdom of such an act," he later wrote.
  • According to historian and biographer David McCullough, at that point not a single Japanese unit had surrendered during the war.
  • Meeting with Joseph Stalin and Winston Churchill at Potsdam, the three leaders called for Tokyo to "surrender unconditionally" or face "prompt and utter destruction".
  • Within Truman's inner circle there were voices against using the bomb, including Dwight Eisenhower, the future president who was then a wartime general.
  • Japan showed no signs of surrender, despite heavy losses and a seemingly inevitable defeat.
  • Asked whether Mr Obama would make the same decision as Truman, aide and spokesman Josh Earnest said: "I think what the president would say is that it's hard to put yourself in that position from the outside."
  • "I think it's hard to look back and second-guess it too much."
proudsa

Obama's Hiroshima Visit Is a Reminder that Atomic Bombs Weren't What Won the War

  • opportunity to reconsider some of the myths surrounding the historic decision to use the atomic bomb
  • loss of 135,000 people made little impact on the Japanese military
  • Japan had been willing to sacrifice city after city to American conventional bombing in the months leading up to Hiroshima
  • The historical record also makes clear that American leaders fully understood this.
  • Japan was likely to surrender with the sole proviso that Japan be allowed to keep its emperor in some figurehead role.
  • The U.S. military had long planned to keep the emperor in such a role to help control Japan during the postwar occupation.
  • they too judged that this would end the war.
  • the United States allowed Japan to keep its emperor as a way to help control Japan during the occupation
  • the use of this barbarous weapon at Hiroshima and Nagasaki was of no material assistance in our war against Japan
  • What can be proved is that the president was advised that the assurances were, in fact, likely to end the war without the bombs
  • So there was plenty of time to use the bombs if Japan did not surrender once assurances for the emperor were given.
  • Close attention to some key dates is also instructive.
  • the Red Army attack on or around Aug. 8. Hiroshima was destroyed on Aug. 6 and Nagasaki on Aug. 9.
  • What really happened in the days leading up to the decision to destroy Hiroshima and Nagasaki may never be known.
  • The only serious answer to the threat of nuclear weapons is an all-out effort to abolish them from arsenals throughout the world
horowitzza

Obama at Hiroshima: What to watch for - CNNPolitics.com

  • President Barack Obama will become the first sitting president to visit the city Friday, traveling there to offer a reconciliatory balm for the still-painful knowledge of the devastation countries can inflict upon one another.
  • He also hopes to remind the world that nuclear weapons remain a global threat when placed in the wrong hands.
  • Officials say Obama won't apologize for the decision to use an atomic bomb, which many historians insist was necessary to hasten the end of World War II and save lives.
  • Such an apology would ripple to the U.S., likely igniting a political firestorm on the home front in the midst of the presidential race.
  • Obama said Thursday he hoped to mark Hiroshima as a history-altering moment -- the U.S. is the only country to have ever used a nuclear bomb -- that humanity must avoid repeating.
  • "The dropping of the atomic bomb, the ushering in of nuclear weapons, was an inflection point in modern history," Obama said during a news conference at the G-7 Summit in Japan.
  • Obama further hopes his appearance at the site will serve to reinforce his bid to reduce global stockpiles of nuclear weapons, an effort that has made only moderate strides after seven years in office.
horowitzza

Japan doesn't want the U.S. to apologize for bombing Hiroshima. Here's why - LA Times

  • For years, the question has lingered: Should the U.S. apologize for dropping the atomic bomb on Hiroshima
  • No sitting U.S. president has visited the city since it was largely destroyed in an atomic blast during World War II
  • Secretary of State John Kerry may have foreshadowed what’s to come when he visited Hiroshima this month and called the experience “gut-wrenching.”
  • Does Japan even want an apology?
  • Likely not. A secret 2009 State Department cable published by WikiLeaks in 2011 indicated Japan was cool to the idea and worried that it would only serve to energize anti-nuclear activists in the country.
  • the government’s official stance was that it would be more meaningful for the U.S. and Japan to “aim for a peaceful and safe world without nuclear weapons.”
  • An apology also could harden the opposition to using nuclear power in Japan, a sentiment that blossomed after the meltdown at Fukushima. The administration has made nuclear power a major part of its energy policy.
  • “Why doesn’t the Japanese government want Mr. Obama to apologize? Because it tears the scab off a much bigger wound that Japan wants healed,”
  • “If Obama apologizes at Hiroshima, it draws attention to Japanese behavior elsewhere in Asia during the ’30s and ’40s.
  • I think the American people should know that not only the bombing of Hiroshima and Nagasaki, but the firebombing of Tokyo in which thousands died, were illegal acts against humanity. They were civilian massacres.”
  • A 2015 opinion poll by a Russian news agency found that 60% of the Japanese public wanted an apology for the bombing.
  • But what the Japanese government and the public want aren’t always the same.
    Examines whether or not the U.S. should apologize for the Hiroshima bombings.
grayton downing

Iran Says It Agrees to 'Road Map' With U.N. on Nuclear Inspections - NYTimes.com

  • The International Atomic Energy Agency said on Monday that Iran had agreed to resolve all outstanding issues with the agency, and would permit “managed access” by international inspectors to two key nuclear facilities that have not been regularly viewed.
  • the promise of wider scrutiny did not extend to one of the most contentious locations: the Parchin military site southwest of Tehran. Inspectors from the agency, the United Nations’ nuclear watchdog,
  • “This is an important step forward to start with, but much more needs to be done,
  • Secretary of State John Kerry said at a news conference in the United Arab Emirates that the Obama administration was not in a “race” to strike a deal.
  • In the past, the agency has questioned whether the Gachin mine, which produces yellowcake uranium for conversion to nuclear fuel, is linked to Iran’s military. The heavy-water plant at Arak could produce plutonium, which can be used in a weapon, and a key concern is that once the plant is operational, it would be all but impossible to destroy it without running the risk of spreading deadly plutonium. Western officials noted that the agreement gave the atomic agency access to the heavy-water production plant but not the nuclear reactor, which is under construction there.
  • lack of in-depth information and inspections of the heavy-water plant have been a particular worry to the West. French officials went further and indicated that they wanted construction halted altogether at the facility, and that Iran’s failure to agree to that was one reason the French were reluctant to endorse a broader deal with Iran this weekend.
  • The agreement on Monday comprised a four-paragraph statement and six bullet points in an annex of issues to be tackled within the next three months.
  • “Managed access” is a term used by the United Nations agency to denote the ground rules for inspections that permit host countries to protect information they consider to be proprietary or secret, such as military technology, while still allowing inspectors to garner data they require, officials said.
grayton downing

A Step, if Modest, Toward Slowing Iran's Weapons Capability - NYTimes.com

  • At the beginning of Mr. Obama’s presidency, Iran had roughly 2,000 kilograms of low-enriched uranium, barely enough for a bomb. It now has about 9,000 kilograms, by the estimates of the International Atomic Energy Agency.
  • True rollback would mean dismantling many of those centrifuges, shipping much of the fuel out of the country
  • There is also the problem of forcing Iran to reveal what kind of progress it has made toward designing a weapon. For years, its leaders have refused to answer questions about documents, slipped out of the country by a renegade scientist nearly eight years ago, that strongly suggest work on a nuclear warhead
  • Even then, a single weapon would do Iran little good next to Israel’s 100 or more and the United States’ thousands, as Mr. Zarif, the foreign minister, often points out.
  • After all, his stated goal has always been to prevent Iran from getting a bomb, not to prevent it from getting the capability to do so. He knows he cannot destroy, by bombs or deals, whatever knowledge Iran has gained of how to build a weapon. It is too late for that.
maddieireland334

Iran Complies With Nuclear Deal; Sanctions Are Lifted - The New York Times

  • The United States and European nations lifted oil and financial sanctions on Iran and released roughly $100 billion of its assets after international inspectors concluded that the country had followed through on promises to dismantle large sections of its nuclear program.
  • Five Americans, including a Washington Post reporter, Jason Rezaian, were released by Iran hours before the nuclear accord was implemented.
  • Early on Sunday, a senior United States official confirmed that “our detained U.S. citizens have been released and that those who wished to depart Iran have left.”
  • “Iran has undertaken significant steps that many people — and I do mean many — doubted would ever come to pass,” Secretary of State John Kerry said Saturday evening at the headquarters of the International Atomic Energy Agency, which earlier issued a report detailing how Iran had shipped 98 percent of its fuel to Russia, dismantled more than 12,000 centrifuges so they could not enrich uranium, and poured cement into the core of a reactor designed to produce plutonium.
  • The release of the “unjustly detained” Americans, as Mr. Kerry put it, came at some cost: Seven Iranians, either convicted or charged with breaking American embargoes, were released in the prisoner swap, and 14 others were removed from international wanted lists.
  • They particularly object to the release of about $100 billion in frozen assets — mostly from past oil sales — that Iran will now control, and the end of American and European restrictions on trade that had been imposed as part of the American-led effort to stop the program.
  • In Tehran and Washington, political battles are still being fought over the merits and dangers of moving toward normal interchanges between two countries that have been avowed adversaries for more than three decades.
  • But Mr. Kerry suggested that the nuclear deal had broken the cycle of hostility, enabling the secret negotiations that led up to the hostage swap.
  • “Critics will continue to attack the deal for giving away too much to Tehran,” said R. Nicholas Burns, who started the sanctions against Iran that were lifted Saturday as the No. 3 official in the State Department during the George W. Bush administration.
  • A copy of the proposed sanction leaked three weeks ago, and the Obama administration pulled it back — perhaps to avoid torpedoing the prisoner swap and the completion of the nuclear deal. Negotiations to win the release of Mr. Rezaian, who had covered the nuclear talks before he was imprisoned on vague charges, were an open secret: Mr. Kerry often alluded to the fact that he was working on the issue behind the scenes.
  • Then, several weeks ago the Iranians leaked news that they were interested in a swap of their own citizens, which American officials said was an outrageous demand, because they had been indicted or convicted in a truly independent court system.
  • The result was two parallel races underway — one involving implementing the nuclear deal, the other to get the prisoner swap done while the moment was ripe.
  • For example, the United States and Iran were struggling late Saturday to define details of what kind of “advanced centrifuges” Iran will be able to develop nearly a decade from now — the kind of definitional difference that can undermine an accord.
  • The result was that Mr. Kerry and Mr. Zarif veered from the monumental significance of what they were accomplishing — an end to a decade of open hostility — to the minutiae of uranium enrichment.
  • But Iran has something it desperately needs: Billions in cash, at a time oil shipments have been cut by more than half because of the sanctions, and below $30-a-barrel prices mean huge cuts in national revenue.
  • A senior American official said Saturday that Iran will be able to access about $50 billion of a reported $100 billion in holdings abroad, though others have used higher estimates. The official said Iran will likely need to keep much of those assets abroad to facilitate international trade.
  • The Obama administration on Saturday also removed 400 Iranians and others from its sanctions list and took other steps to lift selected restrictions on interactions with Iran
  • Under the new rules put in place, the United States will no longer sanction foreign individuals or firms for buying oil and gas from Iran. The American trade embargo remains in place, but the government will permit certain limited business activities with Iran, such as selling or purchasing Iranian food and carpets and American commercial aircraft and parts.
  • It is unclear what will happen after the passing of Iran’s Supreme Leader, Ayatollah Ali Khamenei, who has protected and often fueled the hardliners — but permitted these talks to go ahead.
Javier E

Have Dark Forces Been Messing With the Cosmos? - The New York Times

  • Long, long ago, when the universe was only about 100,000 years old — a buzzing, expanding mass of particles and radiation — a strange new energy field switched on. That energy suffused space with a kind of cosmic antigravity, delivering a not-so-gentle boost to the expansion of the universe. Then, after another 100,000 years or so, the new field simply winked off, leaving no trace other than a speeded-up universe.
  • astronomers from Johns Hopkins University. In a bold and speculative leap into the past, the team has posited the existence of this field to explain an astronomical puzzle: the universe seems to be expanding faster than it should be.
  • The cosmos is expanding only about 9 percent more quickly than theory prescribes. But this slight-sounding discrepancy has intrigued astronomers, who think it might be revealing something new about the universe.
  • Adding to the confusion, there already is a force field — called dark energy — making the universe expand faster. And a new, controversial report suggests that this dark energy might be getting stronger and denser, leading to a future in which atoms are ripped apart and time ends.
  • Or it could all be a mistake. Astronomers have rigorous methods to estimate the effects of statistical noise and other random errors on their results; not so for the unexamined biases called systematic errors.
  • “The unknown systematic is what gets you in the end.
  • As space expands, it carries galaxies away from each other like the raisins in a rising cake. The farther apart two galaxies are, the faster they will fly away from each other. The Hubble constant simply says by how much
  • But to calibrate the Hubble constant, astronomers depend on so-called standard candles: objects, such as supernova explosions and certain variable stars, whose distances can be estimated by luminosity or some other feature. This is where the arguing begins
  • in 2001, a team using the Hubble Space Telescope, and led by Dr. Freedman, reported a value of 72. For every megaparsec farther away from us that a galaxy is, it is moving 72 kilometers per second faster.
  • Astronomers now say they have narrowed the uncertainty in the Hubble constant to just 2.4 percent.
  • These results are so good that they now disagree with results from the European Planck spacecraft, which predict a Hubble constant of 67.
  • Planck is considered the gold standard of cosmology. It spent four years studying the cosmic bath of microwaves left over from the end of the Big Bang, when the universe was just 380,000 years old. But it did not measure the Hubble constant directly
  • Rather, the Planck group derived the value of the constant, and other cosmic parameters, from a mathematical model largely based on those microwaves
  • In short, Planck’s Hubble constant is based on a cosmic baby picture. In contrast, the classical astronomical value is derived from what cosmologists modestly call “local measurements,” a few billion light-years deep into a middle-aged universe
  • What if that baby picture left out or obscured some important feature of the universe
  • String theory suggests that space could be laced with exotic energy fields associated with lightweight particles or forces yet undiscovered. Those fields, collectively called quintessence, could act in opposition to gravity, and could change over time — popping up, decaying or altering their effect, switching from repulsive to attractive.
  • The team focused in particular on the effects of fields associated with hypothetical particles called axions. Had one such field arisen when the universe was about 100,000 years old, it could have produced just the right amount of energy to fix the Hubble discrepancy, the team reported in a paper late last year. They refer to this theoretical force as “early dark energy.”
  • The jury is still out. Dr. Riess said that the idea seems to work, which is not to say that he agrees with it, or that it is right. Nature, manifest in future observations, will have the final say.
  • So far, the smart money is still on cosmic confusion. Michael Turner, a veteran cosmologist at the University of Chicago and the organizer of a recent airing of the Hubble tensions, said, “Indeed, all of this is going over all of our heads. We are confused and hoping that the confusion will lead to something good!”
  • Early dark energy appeals to some cosmologists because it hints at a link to, or between, two mysterious episodes in the history of the universe.
  • The first episode occurred when the universe was less than a trillionth of a trillionth of a second old. At that moment, cosmologists surmise, a violent ballooning propelled the Big Bang; in a fraction of a trillionth of a second, this event — named “inflation” by the cosmologist Alan Guth, of M.I.T. — smoothed and flattened the initial chaos into the more orderly universe observed today. Nobody knows what drove inflation.
  • The second episode is unfolding today: cosmic expansion is speeding up.
  • The issue came to light in 1998, when two competing teams of astronomers asked whether the collective gravity of the galaxies might be slowing the expansion enough to one day drag everything together into a Big Crunch
  • To great surprise, they discovered the opposite: the expansion was accelerating under the influence of an anti-gravitational force later called dark energy
  • Dark energy comprises 70 percent of the mass-energy of the universe. And, spookily, it behaves very much like a fudge factor known as the cosmological constant, a cosmic repulsive force that Einstein inserted in his equations a century ago thinking it would keep the universe from collapsing under its own weight.
  • Under the influence of dark energy, the cosmos is now doubling in size every 10 billion years — to what end, nobody knows
  • Early dark energy, the force invoked by the Johns Hopkins group, might represent a third episode of antigravity taking over the universe and speeding it up
  • “Maybe the universe does this from time-to-time?”
  • If dark energy remains constant, everything outside our galaxy eventually will be moving away from us faster than the speed of light, and will no longer be visible. The universe will become lifeless and utterly dark. But if dark energy is temporary — if one day it switches off — cosmologists and metaphysicians can all go back to contemplating a sensible tomorrow.
  • As standard candles, quasars aren’t ideal because their masses vary widely. Nevertheless, the researchers identified some regularities in the emissions from quasars, allowing the history of the cosmos to be traced back nearly 12 billion years. The team found that the rate of cosmic expansion deviated from expectations over that time span.
  • One interpretation of the results is that dark energy is not constant after all, but is changing, growing denser and thus stronger over cosmic time. It so happens that this increase in dark energy also would be just enough to resolve the discrepancy in measurements of the Hubble constant.
  • The bad news is that, if this model is right, dark energy may be in a particularly virulent and — most physicists say — implausible form called phantom energy. Its existence would imply that things can lose energy by speeding up
  • As the universe expands, the push from phantom energy would grow without bounds, eventually overcoming gravity and tearing apart first Earth, then atoms
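The “doubling in size every 10 billion years” figure quoted above can be checked against the measured Hubble constant. A quick back-of-the-envelope derivation (taking H₀ ≈ 70 km/s/Mpc, a standard value that is an assumption here, not quoted in the article): under constant dark energy the scale factor grows exponentially,

```latex
a(t) = a_0 \, e^{H_0 t}, \qquad
H_0 \approx 70~\mathrm{km\,s^{-1}\,Mpc^{-1}} \approx 2.27 \times 10^{-18}~\mathrm{s^{-1}},
```

so the doubling time is

```latex
t_{\text{double}} = \frac{\ln 2}{H_0} \approx \frac{0.693}{2.27 \times 10^{-18}~\mathrm{s^{-1}}}
\approx 3.1 \times 10^{17}~\mathrm{s} \approx 9.7 \times 10^{9}~\mathrm{yr},
```

which is indeed close to 10 billion years.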
Javier E

The U.S. and Japan have very different memories of World War II. - 0 views

  • The Japanese national narrative is that the bomb gave Japan a mission for peace in the world. The bomb doesn’t end the war: It starts the postwar mission for peace. The American narrative is that the bomb ended the war and saved American lives. That’s the story.
  • the critique some people make is that Japan’s understanding of the war hasn’t changed at all, on any front, and that the country still sees itself as a victim rather than an aggressor.
  • It has a victim narrative, but that is true with every country, including Germany, which saw itself as a victim of its leaders. But Japanese victims’ narratives lasted a lot longer than others. There are several reasons for that, but probably the most important was the United States, which conspired in creating that narrative in the first few months after the American occupation. To achieve the goals of the American occupation, it was important to see the Japanese aggression and atrocities as something that was brought about by bad leaders, so that these leaders—but not the people—were held responsible. That was a good grounding for reforms. This narrative sat well with the Japanese but it was a co-created narrative.
  • The bomb story hasn’t changed but the country has changed since 1989. When Hirohito died in 1989, the same year the Cold War ended, the United States stopped being the only country that mattered to Japan. The country was [suddenly] facing Asia, and so you got the rise of issues like the comfort women and biological war crimes.
  • These things, according to Japanese opinion polls, have had a tremendous impact on the Japanese public. That is why there is a conservative backlash. If you look at polls about the comfort women, the Japanese people think the comfort women should be compensated.
  • Here, I don’t think the story has changed but the attitude is changing. The people who fought in WWII will not change their narrative. They tried to put it on a postage stamp saying, “Atomic Bomb Hastened War’s End.” But then you have future generations that are not all the same.
  • The Japanese ignore everything before Hiroshima and the Americans ignore everything after Nagasaki. Both of the stories are truncated.
  • There is one other point. The atomic bombings were a continuation of civilian bombing, area bombing, carpet bombing, that every country did in World War II. It was universal. So if we are talking about the lessons of Hiroshima, we need to talk about the lessons of civilian bombings generally.
  • I think the main thing of the visit—like most things involving the politics of memory—is symbolic. It is a symbolic gesture. It says, “We don’t believe nuclear war is right and we don’t want to see it ever again.” That’s what the banner in Hiroshima says: “We shall not repeat this evil.”
  • The New York Times asked what everyone else does: Does this refer to the bomb or the war? Yes, there is an ambiguity there. Actually it means both. And so that’s what Obama is saying with his visit. We are saying that this sort of suffering is terrible, and that’s good. Instead of having huge military parades, which have gotten bigger and bigger in Moscow and Beijing, this is another way of talking about the war.
brookegoodman

Joseph Stalin - Biography, World War II & Facts - HISTORY - 0 views

  • Joseph Stalin (1878-1953) was the dictator of the Union of Soviet Socialist Republics (USSR) from 1929 to 1953. Under Stalin, the Soviet Union was transformed from a peasant society into an industrial and military superpower. However, he ruled by terror, and millions of his own citizens died during his brutal reign. 
  • Joseph Stalin was born Josef Vissarionovich Djugashvili on December 18, 1878, or December 6, 1878, according to the Old Style Julian calendar (although he later invented a new birth date for himself: December 21, 1879), in the small town of Gori, Georgia, then part of the Russian empire. When he was in his 30s, he took the name Stalin, from the Russian for “man of steel.”
  • Starting in the late 1920s, Joseph Stalin launched a series of five-year plans intended to transform the Soviet Union from a peasant society into an industrial superpower. His development plan was centered on government control of the economy and included the forced collectivization of Soviet agriculture, in which the government took control of farms. Millions of farmers refused to cooperate with Stalin’s orders and were shot or exiled as punishment. The forced collectivization also led to widespread famine across the Soviet Union that killed millions.
  • In 1939, on the eve of World War II, Joseph Stalin and German dictator Adolf Hitler (1889-1945) signed the German-Soviet Nonaggression Pact. Stalin then proceeded to annex parts of Poland and Romania, as well as the Baltic states of Estonia, Latvia and Lithuania. He also launched an invasion of Finland. Then, in June 1941, Germany broke the Nazi-Soviet pact and invaded the USSR, making significant early inroads. (Stalin had ignored warnings from the Americans and the British, as well as his own intelligence agents, about a potential invasion, and the Soviets were not prepared for war.) 
  • Joseph Stalin did not mellow with age: He prosecuted a reign of terror, purges, executions, exiles to labor camps and persecution in the postwar USSR, suppressing all dissent and anything that smacked of foreign–especially Western–influence. He established communist governments throughout Eastern Europe, and in 1949 led the Soviets into the nuclear age by exploding an atomic bomb. In 1950, he gave North Korea’s communist leader Kim Il Sung (1912-1994) permission to invade United States-supported South Korea, an event that triggered the Korean War.
  • Stalin, who grew increasingly paranoid in his later years, died on March 5, 1953, at age 74, after suffering a stroke. His body was embalmed and preserved in Lenin’s mausoleum in Moscow’s Red Square until 1961, when it was removed and buried near the Kremlin walls as part of the de-Stalinization process initiated by Stalin’s successor Nikita Khrushchev (1894-1971).
Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
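The prediction-driven learning described above can be caricatured in a few lines without any neural network: even a toy model that merely counts which word follows which picks up crude “linguistic instincts” from its corpus, and its guesses improve as it is fed more sentences. This is a sketch of my own (the function names and corpus are illustrative, not from the article), standing in for the gradient adjustments a real network makes.

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it and how often.

    A real language model adjusts millions of weights instead of
    keeping counts, but the objective is the same: predict the
    next token from the ones before it.
    """
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequently observed next word, or None if unseen."""
    following = counts.get(word.lower())
    return following.most_common(1)[0][0] if following else None

corpus = [
    "the cat sat on the mat",
    "the cat sat by the door",
    "the cat chased the mouse",
]
model = train_bigram(corpus)
print(predict_next(model, "cat"))  # -> "sat" (seen twice, vs "chased" once)
print(predict_next(model, "the"))  # -> "cat" (the most common successor)
```

Feeding the model more sentences sharpens these statistics, which is the toy analogue of the claim that more training data yields a more sophisticated model.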
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
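The parallelism the excerpt credits to the transformer comes from its core operation, scaled dot-product attention: every position in a sequence attends to every other position in a single matrix multiply, rather than being processed token by token. A minimal NumPy sketch of that operation (the toy dimensions and random embeddings are assumptions for illustration; the formula itself is the one from the 2017 paper):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.

    The (seq, seq) score matrix is computed for all positions at once,
    which is what lets transformers absorb long sequences in parallel.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise affinities between positions
    weights = softmax(scores, axis=-1)   # each row is a distribution over positions
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))  # toy token embeddings
out, w = attention(X, X, X)              # self-attention: Q = K = V = X
print(out.shape)                         # (5, 8): one mixed vector per position
```

A production transformer adds learned projection matrices for Q, K, and V, multiple attention heads, and feed-forward layers on top, but this single operation is the reason the architecture “could train much faster.”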
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
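The “stochastic parrot” debate in the notes above — whether next-word prediction amounts to “just statistical correlations” — is easier to see against a deliberately dumb baseline. The toy bigram predictor below (an illustration only; nothing like GPT-4’s actual architecture) really is nothing but surface co-occurrence counts:

```python
from collections import defaultdict, Counter

class BigramModel:
    """Predicts the next word purely from bigram frequency counts."""

    def __init__(self):
        # Maps each word to a Counter of the words that followed it.
        self.counts = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        # Return the most frequent follower of `word`, or None if unseen.
        followers = self.counts[word.lower()]
        return followers.most_common(1)[0][0] if followers else None

model = BigramModel()
model.train("the cat sat on the mat and the cat slept")
print(model.predict("the"))  # -> cat ("cat" follows "the" twice, "mat" once)
```

The gap between this and a transformer is precisely what is contested: critics like Bender argue that scale only refines such correlations, while results like Li’s Othello-board experiment suggest larger models build internal world models that go beyond them.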
Javier E

A Tantalizing 'Hint' That Astronomers Got Dark Energy All Wrong - The New York Times - 0 views

  • Dark energy was assumed to be a constant force in the universe, both currently and throughout cosmic history. But the new data suggest that it may be more changeable, growing stronger or weaker over time, reversing or even fading away.
  • If the work of dark energy were constant over time, it would eventually push all the stars and galaxies so far apart that even atoms could be torn asunder, sapping the universe of all life, light, energy and thought, and condemning it to an everlasting case of the cosmic blahs. Instead, it seems, dark energy is capable of changing course and pointing the cosmos toward a richer future.
  • a large international collaboration called the Dark Energy Spectroscopic Instrument, or DESI. The group has just begun a five-year effort to create a three-dimensional map of the positions and velocities of 40 million galaxies across 11 billion years of cosmic time. Its initial map, based on the first year of observations, includes just six million galaxies.
  • “So far we’re seeing basic agreement with our best model of the universe, but we’re also seeing some potentially interesting differences that could indicate that dark energy is evolving with time,”
  • When the scientists combined their map with other cosmological data, they were surprised to find that it did not quite agree with the otherwise reliable standard model of the universe, which assumes that dark energy is constant and unchanging. A varying dark energy fit the data points better.
  • “It’s certainly more than a curiosity,” Dr. Palanque-Delabrouille said. “I would call it a hint. Yeah, it’s not yet evidence, but it’s interesting.”
  • But this version of dark energy is merely the simplest one. “With DESI we now have achieved a precision that allows us to go beyond that simple model,” Dr. Palanque-Delabrouille said, “to see if the density of dark energy is constant over time, or if it has some fluctuations and evolution with time.”
  • “While combining data sets is tricky, and these are early results from DESI, the possible evidence that dark energy is not constant is the best news I have heard since cosmic acceleration was firmly established 20-plus years ago.”
  • praised the new survey as “superb data.” The results, she said, “open the potential for a new window into understanding dark energy, the dominant component of the universe, which remains the biggest mystery in cosmology. Pretty exciting.”
  • what if dark energy were not constant as the cosmological model assumed?
  • At issue is a parameter called w, which is a measure of the density, or vehemence, of the dark energy. In Einstein’s version of dark energy, this number remains constant, with a value of –1, throughout the life of the universe. Cosmologists have been using this value in their models for the past 25 years.
  • Dark energy took its place in the standard model of the universe known as L.C.D.M., composed of 70 percent dark energy (Lambda), 25 percent cold dark matter (an assortment of slow-moving exotic particles) and 5 percent atomic matter. So far that model has been bruised but not broken by the new James Webb Space Telescope
  • As a measure of distance, the researchers used bumps in the cosmic distribution of galaxies, known as baryon acoustic oscillations. These bumps were imprinted on the cosmos by sound waves in the hot plasma that filled the universe when it was just 380,000 years old. Back then, the bumps were a half-million light-years across. Now, 13.5 billion years later, the universe has expanded a thousandfold, and the bumps — which are now 500 million light-years across — serve as convenient cosmic measuring sticks.
  • The DESI scientists divided the past 11 billion years of cosmic history into seven spans of time. (The universe is 13.8 billion years old.) For each, they measured the size of these bumps and how fast the galaxies in them were speeding away from us and from each other.
  • When the researchers put it all together, they found that the usual assumption — a constant dark energy — didn’t work to describe the expansion of the universe. Galaxies in the three most recent epochs appeared closer than they should have been, suggesting that dark energy could be evolving with time.
  • Dr. Riess of Johns Hopkins, who had an early look at the DESI results, noted that the “hint,” if validated, could pull the rug out from other cosmological measurements, such as the age or size of the universe. “This result is very interesting and we should take it seriously,” he wrote in his email. “Otherwise why else do we do these experiments?”
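The bullets above compress some arithmetic. A small sanity check, using only the round figures quoted in the excerpts plus the standard textbook scaling of dark-energy density with the scale factor (which the article does not spell out):

```python
# Standard rulers: the sound-horizon "bumps" grow with the universe, so the
# ratio of their size now to their size at recombination gives the total
# expansion factor -- the "thousandfold" figure in the article.
size_then_ly = 0.5e6   # ~half a million light-years, ~380,000 yr after the Big Bang
size_now_ly = 500e6    # ~500 million light-years today

expansion = size_now_ly / size_then_ly
print(expansion)  # 1000.0

# Textbook scaling (not from the article): a dark-energy component with a
# constant equation-of-state parameter w has density proportional to
# a**(-3 * (1 + w)), where a is the cosmic scale factor. Einstein's
# cosmological constant, w = -1, gives a density that never changes;
# any other value of w makes the density evolve over cosmic time.
def density_ratio(a, w):
    """Dark-energy density at scale factor a, relative to today (a = 1)."""
    return a ** (-3 * (1 + w))

print(density_ratio(0.5, -1.0))  # 1.0 -- constant, Einstein's case
print(density_ratio(0.5, -0.9))  # > 1 -- denser in the past when w > -1
```

This is why the DESI measurement is framed around w: any statistically solid departure from w = −1, at any of the seven epochs, would mean dark energy is not a cosmological constant.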
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times - 0 views

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

Planck Satellite Shows Image of Infant Universe - NYTimes.com - 0 views

  • Recorded by the European Space Agency’s Planck satellite, the image is a heat map of the cosmos as it appeared only 370,000 years after the Big Bang, showing space speckled with faint spots from which galaxies would grow over billions of years.
  • is in stunning agreement with the general view of the universe that has emerged over the past 20 years, of a cosmos dominated by mysterious dark energy that seems to be pushing space apart and the almost-as-mysterious dark matter that is pulling galaxies together. It also shows a universe that seems to have endured an explosive burp known as inflation, which was the dynamite in the Big Bang.
  • “The extraordinary quality of Planck’s portrait of the infant universe allows us to peel back its layers to the very foundations, revealing that our blueprint of the cosmos is far from complete.”
  • ...10 more annotations...
  • Analyzing the relative sizes and frequencies of spots and ripples over the years has allowed astronomers to describe the birth of the universe to a precision that would make the philosophers weep. The new data have allowed astronomers to tweak their model a bit. It now seems the universe is 13.8 billion years old, instead of 13.7 billion, and consists by mass of 4.9 percent ordinary matter like atoms, 27 percent dark matter and 68 percent dark energy.
  • “Our ultimate goal would be to construct a new model that predicts the anomalies and links them together. But these are early days; so far, we don’t know whether this is possible and what type of new physics might be needed. And that’s exciting.”
  • The microwaves detected by the Planck satellite date from 370,000 years after the Big Bang, which is as far back as optical or radio telescopes will ever be able to see, cosmologists say. But the patterns within them date from less than a trillionth of a second after the Big Bang, when the universe is said to have undergone a violent burst of expansion known as inflation that set cosmic history on the course it has followed ever since. Those patterns are Planck’s prize.
  • Within the standard cosmological framework, however, the new satellite data underscored the existence of puzzling anomalies that may yet lead theorists back to the drawing board. The universe appears to be slightly lumpier, with bigger and more hot and cold spots in the northern half of the sky as seen from Earth than toward the south, for example. And there is a large, unexplained cool spot in the northern hemisphere.
  • The biggest surprise here, astronomers said, is that the universe is expanding slightly more slowly than previous measurements had indicated. The Hubble constant, which characterizes the expansion rate, is 67 kilometers per second per megaparsec — in the units astronomers use — according to Planck. Recent ground-based measurements combined with the WMAP data gave a value of 69, offering enough of a discrepancy to make cosmologists rerun their computer simulations of cosmic history.
  • a Planck team member from the University of California, Berkeley, said it represents a mismatch between measurements made of the beginning of time and those made more recently, and that it could mean that dark energy, which is speeding up the expansion of the universe, is more complicated than cosmologists thought. He termed the possibility “pretty radical,” adding, “That would be pretty exciting.”
  • The data also offered striking support for the notion of inflation, which has been the backbone of Big Bang theorizing for 30 years. Under the influence of a mysterious force field during the first trillionth of a fraction of a second, what would become the observable universe ballooned by 100 trillion trillion times in size from a subatomic pinprick to a grapefruit in less than a violent eye-blink, so the story first enunciated by Alan Guth of M.I.T. goes.
  • Submicroscopic quantum fluctuations in this force field are what would produce the hot spots in the cosmic microwaves, which in turn would grow into galaxies. According to Planck’s measurements, those fluctuations so far fit the predictions of the simplest model of inflation, invented by Andrei Linde of Stanford, to a T. Dr. Tegmark of M.I.T. said, “We’re homing in on the simplest model.”
  • Cosmologists still do not know what might have caused inflation, but the recent discovery of the Higgs boson has provided evidence that the kinds of fields that can provoke such behavior really exist.
  • another clue to the nature of inflation could come from the anomalies in the microwave data — the lopsided bumpiness, for example — that tend to happen on the largest scales in the universe. By the logic of quantum cosmology, they were the first patterns to be laid down on the emerging cosmos; that is to say, when inflation was just starting.
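The excerpts above quote two slightly different values of the Hubble constant (67 vs. 69 km/s/Mpc) and a 13.8-billion-year age for the universe. A quick way to see why a couple of km/s/Mpc matters is the "Hubble time," 1/H0, which sets the characteristic expansion timescale. This sketch is a rough unit conversion, not the precise age calculation (which requires integrating the full matter/dark-energy expansion history):

```python
# Rough "Hubble time" sketch: 1/H0 gives the characteristic expansion
# timescale of the universe. It is NOT the precise 13.8 Gyr age, which
# depends on the full cosmological model, but it shows how sensitive
# cosmic timescales are to a few km/s/Mpc of disagreement in H0.

KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def hubble_time_gyr(h0_km_s_mpc: float) -> float:
    """Convert H0 (km/s/Mpc) to the Hubble time 1/H0, in billions of years."""
    h0_per_second = h0_km_s_mpc / KM_PER_MPC  # H0 in units of 1/s
    return 1.0 / h0_per_second / SECONDS_PER_GYR

for h0 in (67.0, 69.0):  # the Planck vs. ground-based values quoted above
    print(f"H0 = {h0} km/s/Mpc  ->  1/H0 ~ {hubble_time_gyr(h0):.1f} Gyr")
```

The roughly 0.4 Gyr gap between the two Hubble times illustrates why the quoted discrepancy was enough to make cosmologists rerun their simulations of cosmic history.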