History Readings: group items matching "revolt" in title, tags, annotations, or URL

Javier E

Opinion | With Covid, Is It Really Possible to Say We Went Too Far? - The New York Times

  • In 2020, many Americans told themselves that all it would take to halt the pandemic was replacing the president and hitting the “science button.”
  • In 2023, it looks like we’re telling ourselves the opposite: that if we were given the chance to run the pandemic again, it would have been better just to hit “abort” and give up.
  • you can see it in Bethany McLean and Joe Nocera’s book “The Big Fail: What the Pandemic Revealed About Who America Protects and Who It Leaves Behind,” excerpted last month in New York magazine under the headline “Covid Lockdowns Were a Giant Experiment. It Was a Failure.”
  • ...68 more annotations...
  • we can’t simply replace one simplistic narrative, about the super power of mitigation policy, for another, focused only on the burdens it imposed and not at all on the costs of doing much less — or nothing at all.
  • Let’s start with the title. What is the big failure, as you see it?
  • McLean: I think it gets at things that had happened in America even before the pandemic hit. And among those things were, I think, a failure to recognize the limits of capitalism, a failure of government to set the right rules for it, particularly when it comes to our health care system; a focus on profits that may have led to an increase in the bottom line but created fragility in ways people didn’t understand; and then our growing polarization that made us incapable of talking to each other
  • How big is the failure? When I look at The Economist’s excess mortality data, I see the U.S. had the 53rd-worst outcome in the world — worse than all of Western Europe, but better than all of Eastern Europe.
  • McLean: I think one way to quantify it is to take all those numbers and then put them in the context of our spending on health care. Given the amount we spend on health care relative to other countries, the scale of the failure becomes more apparent.
  • To me, the most glaring example is the schools. They were closed without people thinking through the potential consequences of closing down public schools, especially for disadvantaged kids.
  • to compound it, in my view, public health never made the distinction that needed to be made between the vulnerabilities of somebody 70 years old and the vulnerabilities of somebody 10 years old.
  • In the beginning of the book you write, in what almost feels like a thesis statement for the book: “A central tenet of this book is that we could not have done better, and pretending differently is a dangerous fiction, one that prevents us from taking a much needed look in the mirror.”
  • This claim, that the U.S. could not have done any better, runs against your other claim, that what we observed was an American failure. It is also a pretty extreme claim, I think, and I wanted to press you on it in part because it is, in my view, undermined by quite a lot of the work you do in the book itself.
  • Would the U.S. not have done better if it had recognized earlier that the disease spread through the air rather than in droplets? Would it not have done better if it hadn’t bungled the rollout of a Covid test in the early months?
  • McLean: Everything that you mentioned — the point of the book is that those were set by the time the pandemic hit.
  • in retrospect, what we were doing was to try to delay as much spread as we could until people got vaccinated. All the things that we did in 2020 were functionally serving or trying to serve that purpose. Now, given that, how can you say that none of that work saved lives?
  • McLean: I think that the test failure was baked into the way that the C.D.C. had come to operate
  • But the big question I really want to ask is this one: According to the C.D.C., we’ve had almost 1.2 million deaths from Covid. Excess mortality is nearly 1.4 million. Is it really your contention that there was nothing we might’ve done that brought that total down to 1.1 million, for instance, or even 900,000?
  • McLean: It’s very — you’re right. If you went through each and every thing and had a crystal ball and you could say, this could have been done, this could have been moved up by a month, we could have gotten PPE …
  • When I came to that sentence, I thought of it in terms of human behavior: What will humans put up with? What will humans stand for? How do Americans act? And you’ve written about Sweden being sort of average, and you’ve written about China and the Chinese example. They lock people up for two years and suddenly the society just revolts. They will not take it anymore. They can’t stand it. And as a result, a million and a half people die in a month and a half.
  • Well, I would tell that story very differently. For me, the problem is that when China opened up, they had fully vaccinated just under two-thirds of their population over 80. So to me, it’s not a failure of lockdowns. It’s a failure of vaccinations. If the Chinese had only achieved the same elderly vaccination rate as we achieved — which by global standards was pretty poor — that death toll when they opened up would have been dramatically lower.
  • What do you mean by “lockdown,” though? You use the word throughout the book and suggest that China was the playbook for all countries. But you also acknowledge that what China did is not anything like what America did.
  • Disparities in health care access — is it a dangerous fiction to think we might address that? You guys are big champions of Operation Warp Speed — would it not have been better if those vaccines had been rolled out to the public in nine months, rather than 12?
  • But this isn’t “lockdown” like there were lockdowns in China or even Peru. It’s how we tried to make it safer to go out and interact during a pandemic that ultimately killed a million Americans.
  • McLean: I think that you’re absolutely right to focus on the definition of what a lockdown is and how we implemented them here in this country. And I think part of the problem is that we implemented them in a way that allowed people who were well off and could work from home via Zoom to be able to maintain very much of their lives while other people couldn’t
  • And I think it depends on who you were, whether you would define this as a lockdown or not. If you were a small-business owner who saw your business closed because of this, you’re going to define it as a lockdown.
  • In the book you’re pretty definitive. You write, “maybe the social and economic disasters that lockdowns created would have been worth it if they had saved lives, but they hadn’t.” How can you say that so flatly?
  • I think there are still open questions about what worked and how much. But the way that I think about all of this is that the most important intervention that anybody did anywhere in the world was vaccination. And the thing that determined outcomes most was whether your first exposure came before or after vaccination.
  • Here, the shelter-in-place guidelines lasted, on average, five to seven weeks. Thirty-nine of the 40 states that had issued them lifted them by the end of June, three months in. By the summer, according to Google mobility data, retail and grocery activity was down about 10 percent. By the fall, grocery activity was only down about 5 percent across the country.
  • Nocera: Well, on some level, I feel like you’re trying to have it both ways. On the one hand, you’re saying that lockdowns saved lives. On the other hand, you said they weren’t real lockdowns because everybody was out and about.
  • I don’t think that’s having it both ways. I’m trying to think about these issues on a spectrum rather than in binaries. I think we did interrupt our lives — everybody knows that. And I think they did have an effect on spread, and that limiting spread had an effect by delaying infections until after vaccination.
  • Nocera: Most of the studies that say lockdowns didn’t work are really less about Covid deaths than about excess mortality deaths. I wound up being persuaded that the people who could not get to the hospital, because they were all working, because all the doctors were working on Covid and the surgical rooms were shut down, the people who caught some disease that was not Covid and died as a result — I wound up being persuaded about that.
  • We’re in a pandemic. People are going to die. And then the question becomes, can we protect the most vulnerable? And the answer is, we didn’t protect the most vulnerable. Nursing homes were a complete disaster.
  • There was a lot of worry early on about delayed health care, and about cancer in particular — missed screenings, missed treatments. But in 2019, we had an estimated 599,600 Americans die of cancer. In 2020, it was 602,000. In 2021, it was 608,000. In 2022, it was 609,000.
  • Nocera: See, it went up! But by a couple of thousand people, in years in which hundreds of thousands of Americans were dying of Covid.
  • Nocera: I think you can’t dispute the excess mortality numbers. I’m not. But in nearly every country in the world the excess mortality curves track so precisely with Covid waves that it doesn’t make sense to talk about a massive public health problem beyond Covid. And when you add all of these numbers up, they are nowhere near the size of the footfall of Covid. How can you look back on this and say the costs were too high?
  • Nocera: I think the costs were too high because you had school costs, you had economic costs, you had social costs, and you had death.
  • McLean: I think you’re raising a really good point. We’re making an argument for a policy that might not have been doable given the preconditions that had been set. I’m arguing that there were these things that had been put in place in our country for decades leading up to the pandemic that made it really difficult for us to plan in an effective way, from the outsourcing of our PPE to the distrust in our health care system that had been created by people’s lack of access to health care and the disparities in our hospital system.
  • How would you have liked to see things handled differently? Nocera: Well, the great example of doing it right is San Francisco.
  • I find the San Francisco experience impressive, too. But it was also a city that engaged in quite protracted and aggressive pandemic restrictions, well beyond just protecting the elderly and vulnerable.
  • McLean: But are we going to go for stay-at-home orders plus protecting vulnerable communities like San Francisco did? Or simply letting everybody live their lives, but with a real focus on the communities and places like nursing homes that were going to be affected? My argument is that we probably would’ve been better off really focusing on protecting those communities which were likely to be the most severely affected.
  • I agree that the public certainly didn’t appreciate the age skew, and our policy didn’t reflect it either. But I also wonder what it would mean to better protect the vulnerable than we did. We had testing shortages at first. Then we had resistance to rapid testing. We had staff shortages in nursing homes.
  • Nocera: This gets exactly to one of our core points. We had spent 30 years allowing nursing homes to be owned by private equity firms that cut the staff, that sold the land underneath and added all this debt on
  • I hear you saying both that we could have done a much better job of protecting these people and that the systems we inherited at the outset of the pandemic would’ve made those measures very difficult, if not impossible, to implement.
  • But actually, I want to stop you there, because I actually think that that data tells the opposite story.
  • And then I’m trying to say at the same time, but couldn’t we have done something to have protected people despite all of that?
  • I want to talk about the number of lives at stake. In the book, you write about the work of British epidemiologist Neil Ferguson. In the winter of 2020, he says that in the absence of mitigation measures and vaccination, 80 percent of the country is going to get infected and 2.2 million Americans are going to die. He says that 80 percent of the U.K. would get infected, and 510,000 Brits would die — again, in the absence of mitigation measures and vaccination.
  • In the end, by the time we got to 80 percent of the country infected, we had more than a million Americans die. We had more than 200,000 Brits die. And in each case most of the infections happened after vaccination, which suggests that if those infections had all happened in a world without vaccines, we almost certainly would have surpassed two million deaths in the U.S. and almost certainly would’ve hit 500,000 deaths in the U.K.
  • In the book, you write about this estimate, and you endorse Jay Bhattacharya’s criticism of Ferguson’s model. You write, “Bhattacharya got his first taste of the blowback reserved for scientists who strayed from the establishment position early. He co-wrote an article for The Wall Street Journal questioning the validity of the scary 2 to 4 percent fatality rate that the early models like Neil Ferguson’s were estimating and that were causing governments to panic. He believed, correctly as it turns out, that the true fatality rate was much lower.”
  • Nocera: I know where you’re going with this, because I read your story about the nine pandemic narratives we’re getting wrong. In there, you said that Bhattacharya estimated the fatality rate at 0.01 percent. But if you actually read The Wall Street Journal article, what he’s really saying is I think it’s much lower. I’ve looked at two or three different possibilities, and we really need some major testing to figure out what it actually is, because I think 2 percent to 4 percent is really high.
  • He says, “if our surmise of 6 million cases is accurate, that’s a mortality rate of 0.01%. That is ⅒th the flu mortality rate of 0.1%.” An I.F.R. of 0.01 percent, spread fully through the American population, yields a total American death toll of 33,000 people. We have had 1.2 million deaths. And you are adjudicating this dispute, in 2023, and saying that Neil was wrong and Jay was right.
  • Third, in the Imperial College report — the one projecting two million American deaths — Ferguson gives an I.F.R. estimate of 0.9 percent.
  • Bhattacharya’s? Yes, there is some uncertainty around the estimate he offers. But the estimate he does offer — 0.01 percent — is one hundred times lower than the I.F.R. you yourselves cite as the proper benchmark.
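A quick back-of-the-envelope check of the fatality-rate arithmetic in this exchange, as a minimal Python sketch. The 0.01 percent and 0.9 percent figures are the ones quoted above; the U.S. population of roughly 330 million and the 80 percent attack rate are assumptions taken from the surrounding discussion.

```python
# Back-of-the-envelope check of the IFR arithmetic discussed above.
# Assumes a U.S. population of roughly 330 million.
US_POPULATION = 330_000_000

def implied_deaths(ifr_percent: float, attack_rate: float = 1.0) -> int:
    """Deaths implied by an infection fatality rate at a given share of the population infected."""
    return round(US_POPULATION * attack_rate * ifr_percent / 100)

print(implied_deaths(0.01))                   # ~33,000 deaths: an IFR of 0.01% spread through everyone
print(implied_deaths(0.9, attack_rate=0.8))   # ~2.4 million: a 0.9% IFR at 80% infected, in line with
                                              # the "more than two million without vaccines" counterfactual
```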
  • Nocera: In The Wall Street Journal he does not say it’s 0.01. He says, we need to test to find out what it is, but it is definitely lower than 2 to 4 percent.
  • Well, first of all, the 2 percent to 4 percent fatality rate is not from Neil Ferguson. It’s from the W.H.O.
  • But I think that fundamentally, at the outset of the pandemic, the most important question orienting all of our thinking was, how bad could this get? And it turns out that almost all of the people who were saying back then that we shouldn’t do much to intervene were extremely wrong about how bad it would be
  • The argument then was, more or less, “We don’t need to do anything too drastic, because it’s not going to be that big a deal.” Now, in 2023, it’s the opposite argument: “We shouldn’t have bothered with restrictions, because they didn’t have an impact; we would have had this same death toll anyway.” But the death toll turned out to be enormous.
  • Now, if we had supplied all these skeptics with the actual numbers at the outset of the pandemic, what kind of audience would they have had? If instead of making the argument against universal mitigation efforts on the basis of a death toll of 40,000 they had made the argument on the basis of a death toll of more than a million, do you think the country would’ve said, they’re right, we’re doing too much, let’s back off?
  • McLean: I think that if you had gone to the American people and said, this many people are going to die, that would’ve been one thing. But if you had gone to the American people and said, this many people are going to die and a large percentage of them are going to be over 80, you might’ve gotten a different answer.
  • I’m not arguing we shouldn’t have been trying to get a clearer sense of the true fatality rate, or that we shouldn’t have been clearer about the age skew. But Bhattacharya was also offering an estimate of fatality rate that turned out to be off by a factor of a hundred from the I.F.R. that you yourselves cite as correct. And then you say that Bhattacharya was right and Ferguson was wrong.
  • And you, too, Joe, you wrote an article in April expressing sympathy for Covid skeptics and you said —— Nocera: This April? No, 2020. Nocera: Oh, oh. That’s the one where I praised Alex Berenson. You also cited some Amherst modeling which said that we were going to have 67,000 to 120,000 American deaths. We already had, at that point, 60,000. So you were suggesting, in making an argument against pandemic restrictions, that the country as a whole was going to experience between 7,000 and 60,000 additional deaths from that point.
  • when I think about the combination of the economic effects of mitigation policies and just of the pandemic itself and the big fiscal response, I look back and I think the U.S. managed this storm relatively well. How about each of you?
  • in this case, Congress did get it together and did come to the rescue. And I agree that made a ton of difference in the short term, but the long-term effects of the fiscal rescue package were to help create inflation. And once again, inflation hits those at the bottom of the socioeconomic distribution much harder than it does those at the top. So I would argue that some of what we did in the pandemic is papering over these long-term issues.
  • I think as with a lot of the stuff we’ve talked about today, I agree with you about the underlying problems. But if we take for granted for a moment that the pandemic was going to hit us, when it did, under the economic conditions it did, and then think about the more narrow context of whether, given all that, we handled the pandemic well. We returned quickly to prepandemic G.D.P. trends, boosted the wealth of the bottom half of the country, cut child poverty in half, pushed unemployment to historical lows.
  • What sense do you make of the other countries of the world and their various mitigation policies? Putting aside China, there’s New Zealand, Australia, South Korea — these are all places that were much more aggressive than the U.S. and indeed more than Europe. And had much, much better outcomes.
  • Nocera: To be perfectly honest, we didn’t really look, we didn’t really spend a lot of time looking at that.
  • McLean: But one reason that we didn’t is I don’t think it tells us anything. When you look at who Covid killed, then you have to look at what the pre-existing conditions in a country were, what percentage of its people are elderly. How sick are people with pre-existing conditions?
  • I just don’t think there’s a comparison. There are just too many factors that influence it; to be able to compare America to any other country, you’d have to adjust for all of those factors.
  • But you do spend a bit of time in the book talking about Sweden. And though it isn’t precisely like-for-like, one way you can control for some of those factors is grouping countries with their neighbors and other countries with similar profiles. And Sweden’s fatality rate in 2020 was 10 times that of Norway, Finland and Iceland. Five times that of Denmark. In the vaccination era, those gaps have narrowed, but by most metrics Sweden has still done worse, overall, than all of those countries.
  • On the matter of omniscience. Let’s say that we can send you back in time. Let’s put you both in charge of American pandemic response, or at least American communication about the pandemic, in early 2020. What would you want to tell the country? How would you have advised us to respond?
  • McLean: What I would want is honesty and communication. I think we’re in a world that is awash in information, and the previous methods of communication — giving a blanket statement to people that may or may not be true, when you know there’s nuance underneath it — simply don’t work anymore
  • So I would’ve been much more clear — we think masks might help, we don’t know, but it’s not that big of an ask, let’s do it. We think the early data coming out of Italy shows that these are the people who are really, really at risk from Covid, but it’s not entirely clear yet. Maybe there is spread in schools, but we don’t know. Let’s look at this and keep an open mind and look at the data as it comes in.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
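A toy illustration of that prediction-driven learning, as a minimal PyTorch sketch: the model is only ever asked to predict the next word, and the small adjustments from each wrong guess accumulate in an embedding space that encodes relationships among words. The corpus, model size, and hyperparameters are invented for illustration and are not anything OpenAI uses.

```python
import torch
import torch.nn as nn

# Tiny stand-in corpus; a real model would see billions of sentences.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}
data = torch.tensor([stoi[w] for w in corpus])

class NextWordPredictor(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # the geometric model of words
        self.out = nn.Linear(dim, vocab_size)

    def forward(self, idx):
        return self.out(self.embed(idx))             # logits over possible next words

model = NextWordPredictor(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
xs, ys = data[:-1], data[1:]                          # predict each word from the one before it

for step in range(200):
    loss = nn.functional.cross_entropy(model(xs), ys)  # how wrong were the predictions?
    opt.zero_grad(); loss.backward(); opt.step()        # a little adjustment after every pass
```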
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
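The transformer detail mentioned a few items above, being able to absorb "huge sums of data in parallel," corresponds roughly to scoring the next-word prediction at every position of a whole batch of text in a single pass, rather than one word at a time. A minimal PyTorch sketch, with toy dimensions assumed purely for illustration:

```python
import torch
import torch.nn as nn

vocab_size, seq_len, batch = 1000, 128, 32

embed = nn.Embedding(vocab_size, 64)
encoder = nn.TransformerEncoder(  # used decoder-style here, via a causal attention mask
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(64, vocab_size)

tokens = torch.randint(0, vocab_size, (batch, seq_len))           # stand-in for tokenized text
causal = nn.Transformer.generate_square_subsequent_mask(seq_len)  # each position sees only its past

hidden = encoder(embed(tokens), mask=causal)
logits = head(hidden)                                             # next-token guesses at every position
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),                       # predictions for positions 1..T-1
    tokens[:, 1:].reshape(-1),                                    # the words that actually came next
)
loss.backward()                                                   # one update covers the entire batch
```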
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
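The "looked under the AI's hood" step in work like Li's is typically done by fitting a simple probe on the network's hidden activations: if a linear classifier can read the board state out of those activations, the model has formed an internal representation of the board. A minimal sketch of that idea, using random stand-in arrays rather than activations from an actual Othello model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical stand-ins: one activation vector per observed game move, and the true
# contents of a single board square (empty / black / white) at that moment.
n_moves, hidden_dim = 2000, 128
hidden_states = np.random.randn(n_moves, hidden_dim)
square_state = np.random.randint(0, 3, size=n_moves)

# The real setup fits one probe per board square; a single square is shown here.
probe = LogisticRegression(max_iter=1000).fit(hidden_states[:1500], square_state[:1500])
accuracy = probe.score(hidden_states[1500:], square_state[1500:])
print(f"held-out probe accuracy: {accuracy:.2f}")  # ~chance on random data; far above chance in Li's model
```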
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
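A rough way to see the memorize-then-learn dynamic Millière describes is to train a small network on part of an addition table and track training accuracy and held-out accuracy separately: pure memorization produces a large gap between the two, and a later pivot to actually learning how to add would close it. The task, architecture, and hyperparameters below are assumptions for illustration, not the setup from the study mentioned above.

```python
import torch
import torch.nn as nn

P = 37  # addition modulo a small prime, so the full table is enumerable
pairs = torch.tensor([(a, b) for a in range(P) for b in range(P)])
targets = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[:600], perm[600:]   # train on part of the table, hold out the rest

model = nn.Sequential(
    nn.Embedding(P, 32), nn.Flatten(),          # embed both operands, then concatenate
    nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, P),
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)

for epoch in range(2001):
    loss = nn.functional.cross_entropy(model(pairs[train_idx]), targets[train_idx])
    opt.zero_grad(); loss.backward(); opt.step()
    if epoch % 200 == 0:
        with torch.no_grad():
            train_acc = (model(pairs[train_idx]).argmax(-1) == targets[train_idx]).float().mean()
            test_acc = (model(pairs[test_idx]).argmax(-1) == targets[test_idx]).float().mean()
        # a persistent train/held-out gap = memorization; the gap closing = learning the rule
        print(f"epoch {epoch}: train {train_acc:.2f}, held-out {test_acc:.2f}")
```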
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • . “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Opinion | One Year In and ChatGPT Already Has Us Doing Its Bidding - The New York Times - 0 views

  • haven’t we been adapting to new technologies for most of human history? If we’re going to use them, shouldn’t the onus be on us to be smart about it
  • This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?
  • A.I.’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be
  • ...7 more annotations...
  • We got headlines about A.I. instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, to spread political disinformation).
  • Focusing on those benefits, however, while blaming ourselves for the many ways that A.I. technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.
  • Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit to allow it to maximize the public interest rather than just maximize profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.
  • It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.
  • The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again.
  • the power imbalance between A.I.’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.
  • I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of A.I. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about A.I. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of A.I. into our lives could be a beautiful and collaborative composition if conducted with care.”
Javier E

Opinion | The Left's Fever Is Breaking - The New York Times - 0 views

  • In June the Intercept’s Ryan Grim wrote about the toll that staff revolts and ideologically inflected psychodramas were taking on the work: “It’s hard to find a Washington-based progressive organization that hasn’t been in tumult, or isn’t currently in tumult.”
  • That’s why the decision by Maurice Mitchell, the national director of the progressive Working Families Party, to speak out about the left’s self-sabotaging impulse is so significant. Mitchell, who has roots in the Black Lives Matter movement, has a great deal of credibility; he can’t be dismissed as a dinosaur threatened by identity politics
  • But as the head of an organization with a very practical devotion to building electoral power, he has a sharp critique of the way some on the left deploy identity as a trump card. “Identity and position are misused to create a doom loop that can lead to unnecessary ruptures of our political vehicles and the shuttering of vital movement spaces,
  • ...12 more annotations...
  • Among many progressive leaders, though, it’s been received eagerly and gratefully. It “helped to put language to tensions and trends facing our movement organizations,” Christopher Torres, an executive director of the Leadership for Democracy and Social Justice institute, said at a Tuesday webinar devoted to the article.
  • Mitchell’s piece systematically lays out some of the assertions and assumptions that have paralyzed progressive outfits.
  • Among them are maximalism, or “considering anything less than the most idealistic position” a betrayal; a refusal to distinguish between discomfort and oppression; and reflexive hostility to hierarchy.
  • He criticizes the insistence “that change on an interpersonal or organizational level must occur before it is sought or practiced on a larger scale,” an approach that keeps activists turned inward, along with the idea that progressive organizations should be places of therapeutic healing.
  • All the problems Mitchell elucidates have been endemic to the left for a long time. Destructive left-wing purity spirals are at least as old as the French Revolution.
  • It’s not surprising that such counterproductive tendencies became particularly acute during the pandemic, when people were terrified, isolated and, crucially, very online
  • “On balance, I think social media has been bad for democracy,” Mitchell told me.
  • as Mitchell wrote in his essay, social media platforms reward shallow polemics, “self-aggrandizement, competition and conflict.” These platforms can give power to the powerless, but they also bestow it on the most disruptive and self-interested people in any group, those likely to take their complaints to Twitter rather than to their supervisors or colleagues.
  • The gamification of discourse through likes and retweets, he said, “flies in the face of building solidarity, of being serious about difference, of engaging in meaningful debate and struggle around complex ideas.”
  • The publication of “Building Resilient Organizations” and the conversation around it are signs that the fever Mitchell describes is beginning to break.
  • that doesn’t mean the dysfunctions Mitchell identified will go away on their own once people start spending more time together. He puts much of the onus on leaders to be clear with employees about the missions of their organizations and their decision-making processes and to take emotional maturity into account in hiring decisions.
  • the ultimate aim of social justice work should not be the refinement of one’s own environment. “Building resilient and strong organizations is not the end goal,” said Mitchell. “It’s a means to building power so we can defeat an authoritarian movement that wants to take away democracy.” Here’s to remembering that in 2023.
Javier E

Why Didn't the Government Stop the Crypto Scam? - 1 views

  • Securities and Exchange Commission Chair Gary Gensler, who took office in April of 2021 with a deep background in Wall Street, regulatory policy, and crypto, which he had taught at MIT years before joining the SEC. Gensler came in with the goal of implementing the rule of law in the crypto space, which he knew was full of scams and based on unproven technology. Yesterday, on CNBC, he was again confronted with Andrew Ross Sorkin essentially asking, “Why were you going after minor players when this Ponzi scheme was so flagrant?”
  • Cryptocurrencies are securities, and should fit under securities law, which would have imposed rules that would foster a de facto ban of the entire space. But since regulators had not actually treated them as securities for the last ten years, a whole new gray area of fake law had emerged
  • Almost as soon as he took office, Gensler sought to fix this situation, and treat them as securities. He began investigating important players
  • ...22 more annotations...
  • But the legal wrangling to just get the courts to treat crypto as a set of speculative instruments regulated under securities law made the law moot
  • In May of 2022, a year after Gensler began trying to do something about Terra/Luna, Kwon’s scheme blew up. In a comically-too-late-to-matter gesture, an appeals court then said that the SEC had the right to compel information from Kwon’s now-bankrupt scheme. It is absolute lunacy that well-settled law, like the ability for the SEC to investigate those in the securities business, is now being re-litigated.
  • many crypto ‘enthusiasts’ watching Gensler discuss regulation with his predecessor “called for their incarceration or worse.”
  • it wasn’t just the courts who were an impediment. Gensler wasn’t the only cop on the beat. Other regulators, like those at the Commodities Futures Trading Commission, the Federal Reserve, or the Office of Comptroller of the Currency, not only refused to take action, but actively defended their regulatory turf against an attempt from the SEC to stop the scams.
  • Behind this was the fist of political power. Everyone saw the incentives the Senate laid down when every single Republican, plus a smattering of Democrats, defeated the nomination of crypto-skeptic Saule Omarova to become the powerful bank regulator at the Office of the Comptroller of the Currency
  • Instead of strong figures like Omarova, we had a weakling acting Comptroller Michael Hsu at the OCC, put there by the excessively cautious Treasury Secretary Janet Yellen. Hsu refused to stop bank interactions with crypto or fintech because, as he told Congress in 2021, “These trends cannot be stopped.”
  • It’s not just these regulators; everyone wanted a piece of the bureaucratic pie. In March of 2022, before it all unraveled, the Biden administration issued an executive order on crypto. In it, Biden said that virtually every single government agency would have a hand in the space.
  • That’s… insane. If everyone’s in charge, no one is.
  • And behind all of these fights was the money and political prestige of some most powerful people in Silicon Valley, who were funding a large political fight to write the rules for crypto, with everyone from former Treasury Secretary Larry Summers to former SEC Chair Mary Jo White on the payroll.
  • (Even now, even after it was all revealed as a Ponzi scheme, Congress is still trying to write rules favorable to the industry. It’s like, guys, stop it. There’s no more bribe money!)
  • Moreover, the institution Gensler took over was deeply weakened. Since the Reagan administration, wave after wave of political leader at the SEC has gutted the place and dumbed down the enforcers. Courts have tied up the commission in knots, and Congress has defanged it
  • Under Trump crypto exploded, because his SEC chair Jay Clayton had no real policy on crypto (and then immediately went into the industry after leaving.) The SEC was so dormant that when Gensler came into office, some senior lawyers actually revolted over his attempt to make them do work.
  • In other words, the regulators were tied up in the courts, they were against an immensely powerful set of venture capitalists who have poured money into Congress and D.C., they had feeble legal levers, and they had to deal with ‘crypto enthusiasts' who thought they should be jailed or harmed for trying to impose basic rules around market manipulation.
  • The bottom line is, Gensler is just one regulator, up against a lot of massed power, money, and bad institutional habits. And we as a society simply made the choice through our elected leaders to have little meaningful law enforcement in financial markets, which first became blindingly obvious in 2008 during the financial crisis, and then became comical ten years later when a sector whose only real use cases were money laundering, Ponzi scheming or buying drugs on the internet, managed to rack up enough political power to bring Tony Blair and Bill Clinton to a conference held in a tax haven billed as 'the future.'
  • It took a few years, but New Dealers finally implemented a workable set of securities rules, with the courts agreeing on basic definitions of what was a security. By the 1950s, SEC investigators could raise an eyebrow and change market behavior, and the amount of cheating in finance had dropped dramatically.
  • By 1935, the New Dealers had set up a new agency, the Securities and Exchange Commission, and cleaned out the FTC. Yet there was still immense concern that Roosevelt had not been able to tame Wall Street. The Supreme Court didn’t really ratify the SEC as a constitutional body until 1938, and nearly struck it down in 1935 when a conservative Supreme Court made it harder for the SEC to investigate cases.
  • Institutional change, in other words, takes time.
  • It’s a lesson to remember as we watch the crypto space melt down, with ex-billionaire Sam Bankman-Fried
  • It’s not like perfidy in crypto was some hidden secret. At the top of the market, back in December 2021, I wrote a piece very explicitly saying that crypto was a set of Ponzi schemes. It went viral, and I got a huge amount of hate mail from crypto types
  • one of the more bizarre aspects of the crypto meltdown is the deep anger not just at those who perpetrated it, but at those who were trying to stop the scam from going on. For instance, here’s crypto exchange Coinbase CEO Brian Armstrong, who just a year ago was fighting regulators vehemently, blaming the cops for allowing gambling in the casino he helps run.
  • FTX.com was an offshore exchange not regulated by the SEC. The problem is that the SEC failed to create regulatory clarity here in the US, so many American investors (and 95% of trading activity) went offshore. Punishing US companies for this makes no sense.