History Readings: Group items tagged genius

Javier E

Opinion | The Reactionary Futurism of Marc Andreessen - The New York Times

  • “I consider Mark and Elon to be role models to children in their embrace of fighting,” Andreessen writes.
  • Modern American society, at least in the big cities, is turning on law enforcement and tolerating crime, so you need combat skills to protect your loved ones. We are also fat and depressed, and learning to fight might help on both counts. In conclusion, “if it was good enough for Heracles and Theseus, it’s good enough for us.”
  • what caught my eye was the veneration of the virile aggression of the Greeks, the call to rediscover the ways of the ancients. A list of things that were good enough for the Greeks but not good enough for us would run long: Slavery, pederasty and bloodletting come to mind
  • This is what connects figures as disparate as Jordan Peterson and J.D. Vance and Peter Thiel and Donald Trump. These are the ideas that unite both the mainstream and the weirder figures of the so-called postliberal right, from Patrick Deneen to the writer Bronze Age Pervert.
  • I think the Republican Party’s collapse into incoherence reflects the fact that much of the modern right is reactionary, not conservative
  • As Paul Valéry, the French poet, once said, “Ancient Greece is the most beautiful invention of the modern age.” To treat Andreessen’s essay as an argument misses the point. It’s a vibe. And the vibe is reactionary.
  • It’s a coalition obsessed with where we went wrong: the weakness, the political correctness, the liberalism, the trigger warnings, the smug elites. It’s a coalition that believes we were once hard and have become soft; worse, we have come to lionize softness and punish hardness.
  • The story of the reactionary follows a template across time and place. It “begins with a happy, well-ordered state where people who know their place live in harmony and submit to tradition and their God,” Mark Lilla writes in his 2016 book, “The Shipwrecked Mind: On Political Reaction.”
  • He continues: Then alien ideas promoted by intellectuals — writers, journalists, professors — challenge this harmony, and the will to maintain order weakens at the top. (The betrayal of elites is the linchpin of every reactionary story.) A false consciousness soon descends on the society as a whole as it willingly, even joyfully, heads for destruction. Only those who have preserved memories of the old ways see what is happening. Whether the society reverses direction or rushes to its doom depends entirely on their resistance.
  • The Silicon Valley cohort Andreessen belongs to has added a bit to this formula. In their story, the old way that is being lost is the appetite for risk and inequality and dominance that drives technology forward and betters human life. What the muscled ancients knew and what today’s flabby whingers have forgotten is that man must cultivate the strength and will to master nature, and other men, for the technological frontier to give way
  • Now Andreessen has distilled the whole ideology to a procession of stark bullet points in his latest missive, the buzzy, bizarre “Techno-Optimist Manifesto.”
  • it’s the pairing of the reactionary’s sodden take on modern society with the futurist’s starry imagining of the bright tomorrow. So call it what it is: reactionary futurism
  • Andreessen’s argument is simple: Technology is good. Very good. Those who stand in its way are bad.
  • “The Enemy.” The list is long, ranging from “anti-greatness” to “statism” to “corruption” to “the ivory tower” to “cartels” to “bureaucracy” to “socialism” to “abstract theories” to anyone “disconnected from the real world … playing God with everyone else’s lives”
  • So who is it, exactly, who extinguishes the dancing star within the human soul?
  • Our present society has been subjected to a mass demoralization campaign for six decades — against technology and against life — under varying names like “existential risk,” “sustainability,” “E.S.G.,” “sustainable development goals,” “social responsibility,” “stakeholder capitalism,” “precautionary principle,” “trust and safety,” “tech ethics,” “risk management,” “degrowth,” “the limits of growth.”
  • The enemy, in other words, is anything or anyone who might seek to yoke technology to social goals or structures
  • For years, I’ve been arguing for politics to take technology more seriously, to see new inventions as no less necessary than social insurance and tax policy in bringing about a worthier world. Too often, we debate only how to divvy up what we already have. We have lost the habit of imagining what we could have; we are too timid in deploying the coordinated genius and muscle of society
  • I’ve been digging into the history of where and when we lost faith in technology and, more broadly, growth. At the core of that story is an inability to manage, admit or even see when technologies or policies go awry
  • The turn toward a less-is-more politics came in the 1970s, when the consequences of reckless growth became unignorable
  • Did we, in some cases, overcorrect? Absolutely. But the only reason we can even debate whether we overcorrected is because we corrected: The Clean Air Act and the Clean Water Act and a slew of other bills and regulations did exactly what they promised.
  • It is telling that Andreessen groups sustainability and degrowth into the same bucket of antagonists
  • Degrowth is largely, though not wholly, skeptical of technological solutions to our problems
  • But the politics of sustainability — as evidenced in legislation like the Inflation Reduction Act — have settled into another place entirely: a commitment to solving our hardest environmental problems by driving technology forward, by investing and deploying clean energy infrastructure at a scale unlike anything the government has done since the 1950s.
  • Andreessen focuses at some length on the nuclear future he believes we’ve been denied —
  • but curiously ignores the stunning advances in solar and wind and battery power that public policy has delivered.
  • He yearns for a kind of person, not just a kind of technology. “We believe in ambition, aggression, persistence, relentlessness — strength,” he writes, italics included. “We believe in merit and achievement. We believe in bravery, in courage.”
  • There are ways in which these virtues have become undervalued, in which the left, in particular, has a dysfunctional relationship with individual achievement and entrepreneurial élan.
  • Andreessen’s ideas trace to an odd, meme-based philosophy, known as effective accelerationism, that has flourished in some corners of the internet
  • “Effective accelerationism aims to follow the ‘will of the universe’: leaning into the thermodynamic bias towards futures with greater and smarter civilizations that are more effective at finding/extracting free energy from the universe,”
  • “E/acc has no particular allegiance to the biological substrate for intelligence and life, in contrast to transhumanism.” OK!
  • Take Andreessen’s naming of trust and safety teams as among his enemies.
  • That, in a way, is my core disagreement with Andreessen. Reactionary futurism is accelerationist in affect but decelerationist in practice
  • How has that worked out? A new analysis by Similarweb found that traffic to twitter.com fell in the United States by 19 percent from September 2022 to September 2023 and traffic on mobile devices fell by almost 18 percent. Indications are that advertising revenue on the platform is collapsing.
  • Andreessen spends much of his manifesto venerating the version of markets that you hear in the first few weeks of Econ 101, before the professor begins complicating the picture with all those annoying market failures
  • Throughout his essay, Andreessen is at pains to attack those who might slow the development of artificial intelligence in the name of safety, but nothing would do more to freeze progress in A.I. than a disaster caused by its reckless deployment
  • It is hard to read Andreessen’s manifesto, with its chopped-up paragraphs and its blunt jabs of thought delivered for maximum engagement and polarization, and not feel that Andreessen now reflects the medium in which he has made his home: X. He doesn’t just write in the way the medium rewards. He increasingly seems to think in its house style, too.
  • One reason I left Twitter long ago is that I noticed that it was a kind of machine for destroying trust. It binds you to the like-minded but cuts you from those with whom you have even modest disagreements
  • There is a reason that Twitter’s rise was conducive to politics of revolution and reaction rather than of liberalism and conservatism. If you are there too often, seeing the side of humanity it serves up, it is easy to come to think that everything must be burned down.
  • Musk purchased Twitter (in an acquisition that Andreessen Horowitz helped finance) and gutted its trust and safety teams. The result has been a profusion of chaos, disinformation and division on his platform
  • Treating so much of society with such withering contempt will not speed up a better future. It will turn people against the politics and policies of growth, just as it did before. Trust is the most essential technology of all.
Javier E

The Arab Oil Embargo and Bad Energy Policy's 50th Birthday - WSJ

  • The “second wave” of electric-vehicle buyers isn’t materializing, the Journal reported this week
  • To lure the first wave took thousands of dollars in taxpayer handouts to each buyer and thousands more in subsidies to encourage companies to build the EVs in the first place. And these buyers were the enthusiasts. How much more will have to be piled on the table to lure those customers who aren’t bewitched by EV cultural and technological appeal and care about having a useful car at an affordable price?
  • But this was always understood. In the fantasy life of greens, the next step would be to ban the sale of new gasoline cars altogether. Except Americans vote: Politicians who don’t get the votes of Americans don’t get to make policy, including the policy of denying them the choice to buy gasoline-powered vehicles
  • At some point, too, the public might look up and notice that subsidizing EVs is having no effect on climate or CO2.
  • the 50th anniversary of the 1973 Arab oil embargo in the latest edition of New Atlantis: “The worst effect was on U.S. energy policy. Whereas the embargo lasted about five months, the toll on U.S. policy has lasted five decades and counting.”
  • the 50-year-old fuel-economy regime devolved into a convoluted set of political trade-offs serving—as the Biden administration recently admitted—no legitimate cost-benefit goal. Boondoggles from synfuels to corn ethanol were launched in the 1970s to honor the false god of energy independence, though thanks to the still-functioning genius of the free-market system the U.S. nevertheless blundered into true energy security with the help of fracking.
  • The words “energy transition” are redundant. The energy economy is always transitioning. The transitions are additive. Wind, hydro and biomass all existed before fossil fuels arrived
  • Energy’s uses are unlimited. This is why, unless the world improbably adopts a carbon tax, the effect of green-energy subsidies (aside from enriching their backers) is largely to stimulate increased energy consumption rather than reduce CO2. This effect is already apparent in the numbers.
  • another ’70s legacy: our least-useful professors invoking big-oil stereotypes in pursuit of political goals.
  • Witness a New York Times op-ed this week combining adventurous antitrust reasoning with tired anti-Exxon tropes, claiming a proposed oil merger represents a “direct threat to democracy” by somehow blocking a solution to climate change that voters apparently crave even though it doesn’t exist.
  • Exxon controls less than 3% of the world’s oil and gas, most of which are in the hands of governments. The U.S. is responsible for less than 15% of global CO2 emissions.
  • What older Americans remember as the oil crisis was a product of domestic price controls, imposed by people in the Nixon administration who knew better.
  • Along the way, the country did manage to remove lead from gasoline and mandate catalytic converters, which improved air quality, showing that rational, economical policy outcomes are still possible amid the vast politicized waste that “energy policy” has otherwise become in the last 50 years.
Javier E

The Resilience Of Republican Christianism

  • I tried to sketch out the essence of an actual conservative sensibility and politics: one of skepticism, limited government and an acceptance of human imperfection.
  • My point was that this conservative tradition had been lost in America, in so far as it had ever been found, because it had been hijacked by religious and political fundamentalism
  • I saw the fundamentalist psyche — rigid, abstract, authoritarian — as integral to the GOP in the Bush years and beyond, a phenomenon that, if sustained, would render liberal democracy practically moribund. It was less about the policy details, which change over time, than an entire worldview.
  • the intellectual right effectively dismissed the book
  • Here is David Brooks, echoing the conservative consensus in 2006:
  • As any number of historians, sociologists and pollsters can tell you, the evangelical Protestants who now exercise a major influence on the Republican Party are an infinitely diverse and contradictory group, and their relationship to these hyperpartisans is extremely ambivalent.
  • The idea that members of the religious right form an “infinitely diverse and contradictory group” and were in no way “hyperpartisan” is now clearly absurd. Christianism, in fact, turned out to be the central pillar of Trump’s success, with white evangelicals giving unprecedented and near-universal support — 84 percent — to a shameless, disgusting pagan, because and only because he swore to smite their enemies.
  • The fusion of Trump and Christianism is an unveiling of a sort — proof of principle that, in its core, Christianism is not religious but political, a reactionary cult susceptible to authoritarian preacher
  • Christianism is to the American right what critical theory is to the American left: a reductionist, totalizing creed that “others” half the country, and deeply misreads the genius of the American project.
  • Christianism starts, as critical theory does, by attacking the core of the Founding: in particular, its Enlightenment defense of universal reason, and its revolutionary removal of religion from the state.
  • Mike Johnson’s guru, pseudo-historian David Barton, claims that the Founders were just like evangelicals today, and intended the government at all levels to enforce “Christian values” — primarily, it seems, with respect to the private lives of others. As Pete Wehner notes, “If you listen to Johnson speak on the ‘so-called separation of Church and state’ and claim that ‘the Founders wanted to protect the church from an encroaching state, not the other way around,’ you will hear echoes of Barton.”
  • Christianism is a way to think about politics without actually thinking. Johnson expressed this beautifully last week: “I am a Bible-believing Christian. Someone asked me today in the media, they said, ‘It’s curious, people are curious: What does Mike Johnson think about any issue under the sun?’ I said, ‘Well, go pick up a Bible off your shelf and read it. That’s my worldview.’”
  • this tells us nothing, of course. The Bible demands interpretation in almost every sentence and almost every word; it contains universes of moral thought and thesauri of ambiguous words in a different ancient language; it has no clear blueprint for contemporary American politics, period
  • Yet Johnson uses it as an absolute authority to back up any policy he might support
  • The submission to (male) authority is often integral to fundamentalism
  • Trump was an authority figure, period. He was a patriarch. He was the patriarch of their tribe. And he was in power, which meant that God put him there. After which nothing needs to be said. So of course if the patriarch says the election is rigged, you believe him.
  • And of course you do what you can to make sure that God’s will be done — by attempting to overturn the election results if necessary.
  • Christianism is a just-so story, with no deep moral conflicts. Material wealth does not pose a moral challenge, for example, as it has done for Christians for millennia. For Christianists, it’s merely proof that God has blessed you and you deserve it.
  • “I believe that scripture, the Bible is very clear: that God is the one that raises up those in authority. And I believe that God has ordained and allowed each one of us to be brought here for this specific moment.” That means that Trump was blessed by God, and not just by the Electoral College in 2016. And because he was blessed by God, it was impossible that Biden beat him fairly in 2020.
  • More than three-quarters of those representing the most evangelical districts are election deniers, compared to just half of those in the remaining districts. Fully three-quarters of the deniers in the caucus hail from evangelical districts.
  • since the Tea Party, the turnover in primary challenges in these evangelical districts has been historic — a RINO-shredding machine. No wonder there were crosses being carried on Capitol Hill on January 6, 2021. The insurrectionists were merely following God’s will. And Trump’s legal team was filled with the faithful.
  • Tom Edsall shows the skew that has turned American politics into something of a religious war: “When House districts are ranked by the percentage of voters who are white evangelicals, the top quintile is represented by 81 Republicans and 6 Democrats and the second quintile by 68 Republicans and 19 Democrats.”
  • the overwhelming majority of the Republican House Caucus (70%) represents the Most Evangelical districts (top two quintiles). Thus, we can see that a group that represents less than 15% of the US population commands 70% of the districts comprising the majority party in the House of Representatives.
  • And almost all those districts are safe as houses. When you add Christianism to gerrymandering, you get a caucus that has no incentive to do anything but perform for the cable shows.
  • This is not a caucus interested in actually doing anything.
  • I don’t know how we best break the grip of the fundamentalist psyche on the right. It’s a deep human tendency — to give over control to a patriarch or a holy book rather than engage in the difficult process of democratic interaction with others, compromise, and common ground.
  • The phenomenon has been given new life by a charismatic con man in Donald Trump, preternaturally able to corral the cultural fears and anxieties of those with brittle, politicized faith.
  • What I do know is that, unchecked, this kind of fundamentalism is a recipe not for civil peace but for civil conflict
  • It’s a mindset, a worldview, as deep in the human psyche as the racial tribalism now endemic on the left. It controls one of our two major parties. And in so far as it has assigned all decisions to one man, Donald Trump, it is capable of supporting the overturning of an election — or anything else, for that matter, that the patriarch wants. Johnson is a reminder of that.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
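
To make that mechanism concrete, here is a minimal sketch (an editor's toy example, not Hinton's or OpenAI's code) of prediction-driven learning: a tiny next-word model whose only objective is to guess what comes next, and whose embedding layer nonetheless drifts toward a geometry in which words used in similar contexts sit near one another. The corpus and all names are invented for illustration.

```python
import torch
import torch.nn as nn

# Toy corpus; "cat" and "dog" appear in interchangeable contexts.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
stoi = {w: i for i, w in enumerate(vocab)}

# (current word -> next word) training pairs
xs = torch.tensor([stoi[a] for a in corpus[:-1]])
ys = torch.tensor([stoi[b] for b in corpus[1:]])

class BigramLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)  # the learned "geometry"
        self.out = nn.Linear(dim, vocab_size)
    def forward(self, x):
        return self.out(self.emb(x))              # logits over the next word

model = BigramLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):       # each step: predict, compare, adjust slightly
    opt.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()           # errors flow backward through the network
    opt.step()                # many tiny weight adjustments accumulate

# Words that predict similar continuations tend to drift near each other.
e = model.emb.weight
print(torch.cosine_similarity(e[stoi["cat"]], e[stoi["dog"]], dim=0).item())
```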
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
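
A hedged miniature of the result described above (a reconstruction of the idea, not Radford's code or data): the model below is trained purely on next-character prediction over a few invented reviews; the sentiment labels are never used in training, only afterward, to ask whether any hidden unit has come to track sentiment on its own.

```python
import torch
import torch.nn as nn

reviews = ["i loved it great fun", "awful boring i hated it",
           "great great loved it", "boring i hated it"]
labels = [1, 0, 1, 0]  # held out from training entirely

chars = sorted(set("".join(reviews)))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    def __init__(self, n_chars, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(n_chars, 16)
        self.rnn = nn.LSTM(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_chars)
    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(300):  # the objective: predict the next character, nothing else
    for r in reviews:
        ids = torch.tensor([[stoi[c] for c in r]])
        logits, _ = model(ids[:, :-1])
        loss = loss_fn(logits.squeeze(0), ids[0, 1:])
        opt.zero_grad(); loss.backward(); opt.step()

# Inspection only: does any single hidden unit separate the two sentiments?
with torch.no_grad():
    feats = torch.stack([model(torch.tensor([[stoi[c] for c in r]]))[1][0, -1]
                         for r in reviews])
    y = torch.tensor(labels, dtype=torch.float)
    z = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)
    corr = (z * (y - y.mean()).unsqueeze(1)).mean(0)
    print("most sentiment-aligned hidden unit:", corr.abs().argmax().item())
```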
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
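
The property that made the transformer "the thing" can be shown in a few lines. Below is a generic sketch of scaled dot-product self-attention (textbook form, not the Google Brain implementation): every position in a sequence is scored against every other position in one batched matrix multiply, which is what lets training absorb a whole sequence in parallel rather than token by token.

```python
import torch
import torch.nn.functional as F

seq_len, d = 6, 8                     # a toy sequence of 6 token vectors
x = torch.randn(seq_len, d)

Wq, Wk, Wv = (torch.randn(d, d) for _ in range(3))
Q, K, V = x @ Wq, x @ Wk, x @ Wv      # queries, keys, values

scores = Q @ K.T / d ** 0.5           # all position pairs scored at once
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
scores = scores.masked_fill(mask, float("-inf"))  # causal: no peeking ahead
attn = F.softmax(scores, dim=-1)
out = attn @ V                        # every position updated in parallel
print(out.shape)                      # torch.Size([6, 8])
```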
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
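
Li's method generalizes into a standard technique called linear probing: freeze a trained model, harvest its hidden activations, and fit a deliberately simple classifier to see whether some feature of the world is linearly decodable from them. The sketch below uses a synthetic stand-in (random activations with a planted feature) purely to show the shape of the technique; it is not the Othello experiment itself.

```python
import torch
import torch.nn as nn

# Stand-ins: pretend these are hidden states from a frozen game-playing
# model, and that one aspect of the board state is encoded in them.
hidden_states = torch.randn(500, 64)
board_feature = (hidden_states[:, 3] > 0).long()  # planted, for illustration

probe = nn.Linear(64, 2)   # the probe is kept simple on purpose:
                           # if it succeeds, the feature was already "in there"
opt = torch.optim.Adam(probe.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):
    opt.zero_grad()
    loss_fn(probe(hidden_states), board_feature).backward()
    opt.step()

with torch.no_grad():
    acc = (probe(hidden_states).argmax(1) == board_feature).float().mean()
print(f"probe accuracy: {acc:.2f}")   # high accuracy => linearly decodable
```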
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
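
That arithmetic example has since become a standard small experiment, often called "grokking." The sketch below reproduces only the setup, with hyperparameters chosen as plausible assumptions rather than values from the work Millière describes: train a small network on modular addition and watch the train/test gap. Memorization appears as perfect training accuracy alongside near-chance test accuracy; the pivot to actually learning addition appears, sometimes much later, as a jump in test accuracy.

```python
import torch
import torch.nn as nn

P = 23                                  # arithmetic mod 23
pairs = torch.tensor([(a, b) for a in range(P) for b in range(P)])
targets = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train, test = perm[:300], perm[300:]    # small train split invites memorizing

model = nn.Sequential(nn.Embedding(P, 32), nn.Flatten(),
                      nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, P))
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def acc(idx):
    with torch.no_grad():
        return (model(pairs[idx]).argmax(1) == targets[idx]).float().mean()

for step in range(2001):
    opt.zero_grad()
    loss_fn(model(pairs[train]), targets[train]).backward()
    opt.step()
    if step % 500 == 0:                 # a persistent gap signals memorization
        print(f"step {step}: train {acc(train):.2f}  test {acc(test):.2f}")
```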
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,” he said.
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain.
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • One of alignment research’s principal challenges will be making sure that the objectives we give to AIs stick.
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,” he said.
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes.
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • We don’t yet know how to do that; indeed, part of Sutskever’s current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly.
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI.
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary.
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance.
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • But he acknowledges that if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • Still, he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast.
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his.
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Jack Bogle: The Undisputed Champion of the Long Run - WSJ - 0 views

  • Jack Bogle is ready to declare victory. Four decades ago, a mutual-fund industry graybeard warned him that he would “destroy the industry.” Mr. Bogle’s plan was to create a new mutual-fund company owned not by the founding entrepreneur and his partners but by the shareholders of the funds themselves. This would keep overhead low for investors, as would a second part of his plan: an index fund that would mimic the performance of the overall stock market rather than pay genius managers to guess which stocks might go up or down.
  • Not even Warren Buffett has minted more millionaires than Jack Bogle has—and he did so not by helping them get lucky, but by teaching them how to earn the market’s long-run average return without paying big fees to Wall Street.
  • “When the climate really gets bad, I’m not some statue out there. But when I get knots in my stomach, I say to myself, ‘Reread your books,’ ” he says. Mr. Bogle has written numerous advice books on investing, including 2007’s “The Little Book of Common Sense Investing,” which remains a perennial Amazon best seller—and all of them emphasize not trying to outguess the markets.
  • ...11 more annotations...
  • Mr. Bogle has some hard news for investors. The basic appeal of index funds—their ability to deliver the market return without paying an arm and a leg to Wall Street’s army of helpers—will only become more important given the decade of depressed returns he sees ahead.
  • Don’t expect a rerun of the ’80s or ’90s, when stocks returned 18% a year and investors, after the industry’s rake-off, imagined they “had the greatest manager in the world” because they got 14%. Those planning on a comfy retirement or putting a kid through college will have to save more, work to keep costs low, and—above all—stick to the plan.
  • The mutual-fund industry is slowly liquidating itself—except for Vanguard. Mr. Bogle happily supplies the numbers: During the 12 months that ended May 31, “the fund industry took in $87 billion . . . of which $224 billion came into Vanguard.” In other words, “in the aggregate, our competitors experienced capital outflows of $137 billion.”
  • That said, Mr. Bogle finds today’s stock scene puzzling. Shares are highly priced in historical terms, and he expects earnings and economic growth to disappoint for at least the next decade (he sees no point in trying to forecast further). And yet he advises investors to stay invested and weather the storm: “If we’re going to have lower returns, well, the worst thing you can do is reach for more yield. You just have to save more.”
  • He also knows the heartache of having just about everything he has saved tied up in volatile, sometimes irrational markets, especially now. “We’re in a difficult place,” he says. “We live in an extremely risky world—probably more risky than I can recall.”
  • Then why invest at all? Maybe it would be better to sell and stick the cash in a bank or a mattress. “I know of no better way to guarantee you’ll have nothing at the end of the trail,” he responds. “So we know we have to invest. And there’s no better way to invest than a diversified list of stocks and bonds at very low cost.”
  • Mr. Bogle’s own portfolio consists of 50% stocks and 50% bonds, the latter tilted toward short- and medium-term. Keep an eagle eye on costs, he says, in a world where pre-cost returns may be as low as 3% or 4%. Inattentive investors can expect to lose as much as 70% of their profits to “hidden” fund management costs in addition to the “expense ratios” touted in mutual-fund prospectuses. (These hidden costs include things like sales load, transaction costs, idle cash and inefficient taxes; a rough sketch of how such a drag compounds follows at the end of this list.)
  • Mr. Bogle relies on a forecasting model he published 25 years ago, which tells him that investors over the next decade, thanks largely to a reversion to the mean in valuations, will be lucky to clear 2% annually after costs. Yuck.
  • Investing, he says, always is “an act of trust—in the ability of civilization and the U.S. to continue to flourish; in the ability of corporations to continue, through efficiency and entrepreneurship and innovation, to provide substantial returns.” But nothing, not even American greatness, is guaranteed, he adds.
  • The fund business still attracts what he calls the financial buccaneer type, an entrepreneur more interested in milking what’s left of the active-management-fee gravy train than in providing low-cost competition for Vanguard—which means Vanguard’s best days as guardian of America’s nest egg may still lie ahead.
  • The growth of indexing is obviously unwelcome writing on the wall for Wall Street professionals and Vanguard’s profit-making competitors like Fidelity, which have never been able to give heart and soul to low-churn indexing because indexing doesn’t generate large fees for executives and shareholders of management companies.
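The cost arithmetic in the items above is easy to sanity-check with simple compounding. What follows is a minimal illustrative sketch, not anything from Mr. Bogle’s model: the 4% gross return and 2% all-in annual cost are hypothetical figures chosen to sit at the low end of his forecast, and the function names are invented for the example.

```python
# Illustrative compounding sketch (hypothetical figures, not Bogle's model):
# how a fixed annual cost consumes a growing share of cumulative profit.

def terminal_wealth(initial: float, annual_return: float, years: int) -> float:
    """Wealth after compounding `initial` at `annual_return` for `years`."""
    return initial * (1 + annual_return) ** years

def share_of_profit_lost(gross_return: float, annual_cost: float,
                         years: int, initial: float = 10_000.0) -> float:
    """Fraction of the gross profit consumed by an annual cost drag."""
    gross_profit = terminal_wealth(initial, gross_return, years) - initial
    net_profit = terminal_wealth(initial, gross_return - annual_cost, years) - initial
    return 1.0 - net_profit / gross_profit

if __name__ == "__main__":
    for years in (10, 20, 30):
        lost = share_of_profit_lost(gross_return=0.04, annual_cost=0.02, years=years)
        print(f"{years} years: {lost:.0%} of profit lost to costs")
    # Prints roughly 54%, 59%, and 64%; with slightly higher costs or a
    # longer horizon the figure approaches the 70% cited above.
```

The direction of the result is the point: the lower the gross return, the larger the share of profit the same fee consumes, which is why a low-return decade makes cost discipline more important, not less.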
Javier E

The new tech worldview | The Economist - 0 views

  • Sam Altman is almost supine
  • The 37-year-old entrepreneur looks about as laid-back as someone with a galloping mind ever could. Yet the CEO of OpenAI, a startup reportedly valued at nearly $20bn whose mission is to make artificial intelligence a force for good, is not one for light conversation.
  • Joe Lonsdale, 40, is nothing like Mr Altman. He’s sitting in the heart of Silicon Valley, dressed in linen with his hair slicked back. The tech investor and entrepreneur, who has helped create four unicorns plus Palantir, a data-analytics firm worth around $15bn that works with soldiers and spooks
  • ...25 more annotations...
  • A “builder class”—a brains trust of youngish idealists, which includes Patrick Collison, co-founder of Stripe, a payments firm valued at $74bn, and other (mostly white and male) techies, who are posing questions that go far beyond the usual interests of Silicon Valley’s titans. They include the future of man and machine, the constraints on economic growth, and the nature of government.
  • They share other similarities. Business provided them with their clout, but doesn’t seem to satisfy their ambition
  • The number of techno-billionaires in America (Mr Collison included) has more than doubled in a decade.
  • Some of them, like the Medicis in Renaissance Florence, are keen to use their money to bankroll the intellectual ferment.
  • The other is Paul Graham, co-founder of Y Combinator, a startup accelerator, whose essays on everything from cities to politics are considered required reading on tech campuses.
  • Mr Altman puts it more optimistically: “The iPhone and cloud computing enabled a Cambrian explosion of new technology. Some things went right and some went wrong. But one thing that went weirdly right is a lot of people got rich and said ‘OK, now what?’”
  • A belief that with money and brains they can reboot social progress is the essence of this new mindset, making it resolutely upbeat.
  • The question is: are the rest of them further evidence of the tech industry’s hubristic decadence? Or do they reflect the start of a welcome capacity for renewal?
  • Two well-known entrepreneurs from that era provided the intellectual seed capital for some of today’s techno nerds.
  • Mr Thiel, a would-be libertarian philosopher and investor
  • This cohort of eggheads starts from common ground: frustration with what they see as sluggish progress in the world around them.
  • Yet the impact could ultimately be positive. Frustrations with a sluggish society have encouraged them to put their money and brains to work on problems from science funding and the redistribution of wealth to entirely new universities. Their exaltation of science may encourage a greater focus on hard tech.
  • The rationalist movement has hit the mainstream. The result is a fascination with big ideas that its advocates believe goes beyond simply rose-tinted tech utopianism.
  • A burgeoning example of this is “progress studies”, a movement that Mr Collison and Tyler Cowen, an economist and seer of the tech set, advocated for in an article in the Atlantic in 2019
  • Progress, they think, is a combination of economic, technological and cultural advancement—and deserves its own field of study.
  • There are other examples of this expansive worldview. In an essay in 2021 Mr Altman set out a vision that he called “Moore’s Law for Everything”, based on similar logic to the semiconductor revolution. In it, he predicted that smart machines, building ever smarter replacements, would in the coming decades outcompete humans for work. This would create phenomenal wealth for some, obliterate wages for others, and require a vast overhaul of taxation and redistribution
  • His two bets, on OpenAI and nuclear fusion, have become fashionable of late—the former’s chatbot, ChatGPT, is all the rage. He has invested $375m in Helion, a company that aims to build a fusion reactor.
  • Mr Lonsdale, who shares a libertarian streak with Mr Thiel, has focused attention on trying to fix the shortcomings of society and government. In an essay this year called “In Defence of Us”, he argues against “historical nihilism”, or an excessive focus on the failures of the West.
  • With a soft spot for Roman philosophy, he has created the Cicero Institute in Austin, which aims to inject free-market principles such as competition and transparency into public policy.
  • He is also bringing the startup culture to academia, backing a new place of learning called the University of Austin, which emphasises free speech.
  • All three have business ties to their mentors. As a teen, Mr Altman was part of the first cohort of founders in Mr Graham’s Y Combinator, which went on to back successes such as Airbnb and Dropbox. In 2014 he replaced him as its president, and for a while counted Mr Thiel as a partner (Mr Altman keeps an original manuscript of Mr Thiel’s book “Zero to One” in his library). Mr Thiel was also an early backer of Stripe, founded by Mr Collison and his brother, John. Mr Graham saw promise in Patrick Collison while the latter was still at school. He was soon invited to join Y Combinator. Mr Graham remains a fan: “If you dropped Patrick on a desert island, he would figure out how to reproduce the Industrial Revolution.”
  • While at university, Mr Lonsdale edited the Stanford Review, a contrarian publication co-founded by Mr Thiel. He went on to work for his mentor and the two men eventually helped found Palantir. He still calls Mr Thiel “a genius”—though he claims these days to be less “cynical” than his guru.
  • “The tech industry has always told these grand stories about itself,” says Adrian Daub of Stanford University and author of the book, “What Tech Calls Thinking”. Mr Daub sees it as a way of convincing recruits and investors to bet on their risky projects. “It’s incredibly convenient for their business models.”
  • In the 2000s Mr Thiel supported the emergence of a small community of online bloggers, self-named the “rationalists”, who were focused on removing cognitive biases from thinking (Mr Thiel has since distanced himself). That intellectual heritage dates even further back, to “cypherpunks”, who noodled about cryptography, as well as “extropians”, who believed in improving the human condition through life extensions.
  • Silicon Valley has shown an uncanny ability to reinvent itself in the past.
Javier E

Elon Musk's Text Messages Explain Everything - The Atlantic - 0 views

  • I’ve begun to think of Exhibit H as a skeleton key for the final, halcyon days of the tech boom—unlocking an understanding of the cultural brain worms and low-interest-rate hubris that defined the industry in 2022. What we see in Exhibit H is only a tiny snapshot of a very important inbox, but it’s enough to make this one of the most revealing documents in a year that’s been absolutely overflowing with tech disclosures.
  • The Musk texts demonstrate a decadence, an unearned confidence, and a boy’s-club mentality that coincide with the cultural disillusionment regarding the genius-innovator narrative.
  • I snarkily coined the Elon Musk School of Management to describe the petulant way that some tech founders, such as Musk and Coinbase’s Brian Armstrong, seemed to use confrontational, culture-warring, Twitter-addled thought leadership as a business tactic. The Musk School revolves around two principles: running a company in an authoritarian manner, and ensuring that every management decision is optimized to make news and hijack the attention of those following along on social media.
  • ...8 more annotations...
  • The Musk messages also reveal how some of the richest and most powerful men in the world treat actual billions of dollars with a level of care more appropriate for a 3-year-old tossing around Monopoly cash.
  • Oracle’s founder, Larry Ellison, essentially writes Musk a blank check over text, pledging, “A billion … or whatever you recommend.” The venture capitalist Marc Andreessen unsolicitedly offers Musk “$250M with no additional work required.” And Michael Grimes, a top investment banker at Morgan Stanley, proposes a meeting with Bankman-Fried as a way to “get us $5bn equity in an hour.”
  • The blitheness is the point. It is a total power move to talk about getting “$5bn in equity in an hour” the same way we mere mortals talk about Venmo-ing a friend $15 for lunch. The texts make it clear that these men are fundamentally alienated from the rest of the world by their wealth.
  • “These are absolutely not normal people with a normal understanding of the world.”
  • The men in Musk’s phone also appear wildly confident in their own abilities and those of their peers. Mathias Döpfner, the CEO of the media conglomerate Axel Springer, infamously texted Musk his bullet-pointed plan for Twitter, which began with the line item “1.) Solve Free Speech.”
  • They teach us what happens when a small group of people with too much money come to view that money not just as a reward for success, but as its own form of merit—a specious achievement that totally alienates them from reality.
  • Ultimately, Exhibit H documents the loneliness and isolation of being the world’s richest man. As told via the texts, the seed of Musk’s Twitter purchase was planted by sycophants deferential to the billionaire who will never give him hard, truthful advice, because they wish to stay close to him.
  • The one time he receives actual, honest feedback from Agrawal, Musk behaves aggressively and impulsively, sealing his fate.
Javier E

In Silicon Valley, You Can Be Worth Billions and It's Not Enough - The New York Times - 0 views

  • He got a phone call about the imminent sale of a tech company and allegedly traded on the confidential information, according to charges filed by the Securities and Exchange Commission. The profit for a few minutes of work: $415,726.
  • Rarely has anyone traded his reputation for seemingly so little reward. For Mr. Bechtolsheim, $415,726 was equivalent to a quarter rolling behind the couch. He was ranked No. 124 on the Bloomberg Billionaires Index last week, with an estimated fortune of $16 billion.
  • Last month, Mr. Bechtolsheim, 68, settled the insider trading charges without admitting wrongdoing. He agreed to pay a fine of more than $900,000 and will not serve as an officer or director of a public company for five years.
  • ...16 more annotations...
  • Nothing in his background seems to have brought him to this troubling point. Mr. Bechtolsheim was one of those who gave Silicon Valley its reputation as an engineer’s paradise, a place where getting rich was just something that happened by accident.
  • “He cared so much about making great technology that he would buy a house, not furnish it and sleep on a futon,” said Scott McNealy, who joined with Mr. Bechtolsheim four decades ago to create Sun Microsystems, a maker of computer workstations and servers that was a longtime tech powerhouse. “Money was not how he measured himself.”
  • Researchers who analyze trading data say corporate executives broadly profit from confidential information. These executives try to avoid traditional insider trading restrictions by buying shares in economically linked firms, a phenomenon called “shadow trading.”
  • “There appears to be significant profits being made from shadow trading,” said Mihir N. Mehta, an assistant professor of accounting at the University of Michigan and an author of a 2021 study in The Accounting Review that found “robust evidence” of the behavior. “The people doing it have a sense of entitlement or maybe just think, ‘I’m invincible.’”
  • He went to Stanford as a Ph.D. student in the mid-1970s and got to know the then-small programming community around the university. In the early 1980s, he, along with Mr. McNealy, Vinod Khosla and Bill Joy, started Sun Microsystems as an outgrowth of a Stanford project. When Sun initially raised money, Mr. Bechtolsheim put his entire life savings — about $100,000 — into the company.
  • “You could end up losing all your money,” he was warned by the venture capitalists financing Sun. His response: “I see zero risk here.”
  • An impromptu demonstration was hastily arranged for 8 a.m., which Mr. Bechtolsheim cut short. He had seen enough, and besides, he had to get to the office. He gave them a check, and the deal was sealed, Mr. Levy wrote, “with as little fanfare as if he were grabbing a latte on the way to work.”
  • Mr. Page and Mr. Brin couldn’t deposit Mr. Bechtolsheim’s check for a month because Google did not have a bank account. When Google went public in 2004, that $100,000 investment was worth at least $1 billion.
  • It wasn’t the money that made the story famous, however. It was the way it confirmed one of Silicon Valley’s most cherished beliefs about itself: that its genius is so blindingly obvious, questions are superfluous.
  • The dot-com boom was a disorienting period for longtime Valley leaders whose interest in money was muted. Mr. Bechtolsheim’s Sun colleague Mr. Joy left Silicon Valley.
  • “There’s so much money around, it’s clouding a lot of people’s ethics,” Mr. Joy said in a 1999 oral history.
  • Mr. Bechtolsheim didn’t leave. In 2008, he co-founded Arista, a Silicon Valley computer networking company that went public and now has 4,000 employees and a stock market value of $100 billion.
  • Mr. Bechtolsheim was chair of Arista’s board when an executive from another company called in 2019, according to the S.E.C. Arista and the other company, which was not named in court documents, had a history of sharing confidential information under nondisclosure agreements.
  • Immediately after hanging up, the government said, he bought Acacia option contracts in the accounts of a close relative and a colleague. The next day, the deal was announced. Acacia shares jumped 35 percent.
  • Arista’s code of conduct states that “employees who possess material, nonpublic information gained through their work at Arista may not trade in Arista securities or the securities of another company to which the information pertains.”
  • Mr. Levy, the “In the Plex” author, said there were plenty of legal ways to make money in Silicon Valley. “Someone who is regarded as an influential funder and is very well connected gets nearly unlimited opportunities to make very desirable early investments,”
Javier E

The Authoritarian Grip on Working-Class Men - 0 views

  • As Rachel Kleinfeld of the Carnegie Endowment points out, working-class American men “are much more likely to be politically apathetic” than they are to be active authoritarians. “They look for belonging, purpose, and advice, and find a mix of grifters, political hacks and violent extremists who lead them down an ugly road.” That’s the manhood problem.
  • Donald Trump has a special genius for intuiting the dark, unspoken things that people want and need.
  • He understands that the era when American men looked to Gary Cooper or Jimmy Stewart—men who protected, sacrificed, stood tall in the saddle—is over.
  • ...6 more annotations...
  • Ask, as both Kleinfeld and Reeves do, how we can reduce the demand for illiberalism and violence by helping working-class men find a place in an increasingly feminized society.
  • We often wonder whether the cause of our illiberal tilt is economic or cultural—a loss of financial security and hope for the future, or a wrenching change in identity, demography and values. The problem of manhood lies at the intersection of these two domains, for working-class men have lost both economic standing and social status.
  • Identity is not simply a dependent variable of economic standing. Men want to matter—as men. If that need is not satisfied by work, it has to be satisfied elsewhere. There must be alternatives to the manosphere.
  • Everyone, of course, needs virtuous purpose in order to lead a full life, but American working-class men have lost so many traditional sources of selfless action that they have become especially vulnerable to the call of the selfish jerk. Where, then, do you find virtuous purpose? In volunteer work, for example, or in programs of national service. Organizations like Big Brothers can do every bit as much for the big brother as for the little one. 
  • Volunteerism no doubt sounds like a naive prescription in a world hellbent on self-aggrandizement. But the idea of “service,” and its emotional satisfaction, pervaded American life so long as the mainline Protestant churches flourished, which is to say until a generation or two ago.
  • People need to feel needed; and helping those who need you is a source of great joy. Is it really impossible to restore the idea that a man is not only a strong, stoical creature who can throw a football through a tire but one who seeks opportunities to serve others?