Yuval Noah Harari's Apocalyptic Vision - The Atlantic
-
He shares with Jared Diamond, Steven Pinker, and Slavoj Žižek a zeal for theorizing widely, though he surpasses them in his taste for provocative simplifications.
-
In medieval Europe, he explains, “Knowledge = Scriptures x Logic,” whereas after the scientific revolution, “Knowledge = Empirical Data x Mathematics.”
-
Silicon Valley’s recent inventions invite galaxy-brain cogitation of the sort Harari is known for. The larger you feel the disruptions around you to be, the further back you reach for fitting analogies.
-
-
Have such technological leaps been good? Harari has doubts. Humans have “produced little that we can be proud of,” he complained in Sapiens. His next books, Homo Deus: A Brief History of Tomorrow (2015) and 21 Lessons for the 21st Century (2018), gazed into the future with apprehension.
-
Harari has written another since-the-dawn-of-time overview, Nexus: A Brief History of Information Networks From the Stone Age to AI. It’s his grimmest work yet.
-
Harari rejects the notion that more information leads automatically to truth or wisdom. But it has led to artificial intelligence, whose advent Harari describes apocalyptically. “If we mishandle it,” he warns, “AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness.”
-
Those seeking a precedent for AI often bring up the movable-type printing press, which inundated Europe with books and led, they say, to the scientific revolution. Harari rolls his eyes at this story. Nothing guaranteed that printing would be used for science, he notes.
-
Copernicus’s On the Revolutions of the Heavenly Spheres failed to sell its puny initial print run of about 500 copies in 1543. It was, the writer Arthur Koestler joked, an “all-time worst seller.”
-
The book that did sell was Heinrich Kramer’s The Hammer of the Witches (1486), which ranted about a supposed satanic conspiracy of sexually voracious women who copulated with demons and cursed men’s penises. The historian Tamar Herzig describes Kramer’s treatise as “arguably the most misogynistic text to appear in print in premodern times.” Yet it was “a bestseller by early modern standards.”
-
Kramer’s book encouraged the witch hunts that killed tens of thousands. These murderous sprees, Harari observes, were “made worse” by the printing press.
-
Ampler information flows made surveillance and tyranny worse too, Harari argues. The Soviet Union was, among other things, “one of the most formidable information networks in history.”
-
Information has always carried this destructive potential, Harari believes. Yet up until now, he argues, even such hellish episodes have been only that: episodes.
-
States ruled by top-down terror have a durability problem too, Harari explains. Even if they could somehow intercept every letter and plant informants in every household, they’d still need to intelligently analyze all of the incoming reports. No regime has come close to managing this.
-
For the 20th-century states that got nearest to total control, persistent problems managing information made basic governance difficult.
-
Some people worry that the government will implant a chip in their brain, but they should “instead worry about the smartphones on which they read these conspiracy theories,” Harari writes. Phones can already track our eye movements, record our speech, and deliver our private communications to nameless strangers. They are listening devices that, astonishingly, people are willing to leave by the bedside while having sex.
-
Harari’s biggest worry is what happens when AI enters the chat. Currently, massive data collection is offset, as it has always been, by the difficulties of data analysis.
-
What defense could there be against an entity that recognized every face, knew every mood, and weaponized that information?
-
Today’s political deliriums are stoked by click-maximizing algorithms that steer people toward “engaging” content, which is often whatever feeds their righteous rage.
-
Imagine what will happen, Harari writes, when bots generate that content themselves, personalizing and continually adjusting it to flood the dopamine receptors of each user.
-
Kramer’s Hammer of the Witches will seem like a mild sugar high compared with the heroin rush of content the algorithms will concoct. If AI seizes command, it could make serfs or psychopaths of us all.
-
Although we know how to make AI models, we don’t understand them. We’ve blithely summoned an “alien intelligence,” Harari writes, with no idea what it will do.
-
Last year, Harari signed an open letter warning of the “profound risks to society and humanity” posed by unleashing “powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for a pause of at least six months on training advanced AI systems.
-
Cynics saw the letter as self-serving. It fed the hype by insisting that artificial intelligence, rather than being a buggy product with limited use, was an epochal development. It showcased tech leaders’ Oppenheimer-style moral seriousness.
-
It cost them nothing, as there was no chance their research would actually stop. Four months after signing, Musk publicly launched an AI company.
-
The economics of the Information Age have been treacherous. They’ve made content cheaper to consume but less profitable to produce. Consider the effect of the free-content and targeted-advertising models on journalism.
-
Since 2005, the United States has lost nearly a third of its newspapers and more than two-thirds of its newspaper jobs, to the point where nearly 7 percent of newspaper employees now work for a single organization, The New York Times.
-
AI threatens to exacerbate this. With better chatbots, platforms won’t need to link to external content, because they’ll reproduce it synthetically. Instead of a Google search that sends users to outside sites, a chatbot query will summarize those sites, keeping users within Google’s walled garden.
-
A Truman Show–style bubble: personally generated content, read by voices that sound real but aren’t, plus product placement.
-
This would cut off writers and publishers—the ones actually generating ideas—from readers. Our intellectual institutions would wither, and the internet would devolve into a closed loop of “five giant websites, each filled with screenshots of the other four,” as the software engineer Tom Eastman puts it.
-
Harari is Silicon Valley’s ideal of what a chatbot should be. He raids libraries, detects the patterns, and boils all of history down to bullet points. (Modernity, he writes, “can be summarised in a single phrase: humans agree to give up meaning in exchange for power.”)
-
Individual AI models cost billions of dollars. In 2023, about a fifth of venture capital in North America and Europe went to AI. Such sums make sense only if tech firms can earn enormous revenues off their product, by monopolizing it or marketing it. And at that scale, the most obvious buyers are other large companies or governments. How confident are we that giving more power to corporations and states will turn out well?
-
In Harari’s view, “power always stems from cooperation between large numbers of humans”; it is the product of society.
-
Like a chatbot, he has a quasi-antagonistic relationship with his sources, an I’ll read them so you don’t have to attitude. He mines other writers for material—a neat quip, a telling anecdote—but rarely seems taken with anyone else’s view.
-
Hand-wringing about the possibility that AI developers will lose control of their creation, like the sorcerer’s apprentice, distracts from the more plausible scenario that they won’t lose control, and that they’ll use or sell it as planned. A better German fable might be Richard Wagner’s The Ring of the Nibelung: A power-hungry incel forges a ring that will let its owner rule the world—and the gods wage war over it.
-
In Nexus, he proposes four principles. The first is “benevolence,” explained thus: “When a computer network collects information on me, that information should be used to help me rather than manipulate me.”
-
Harari’s other three values are decentralization of informational channels, accountability from those who collect our data, and some respite from algorithmic surveillance.
-
These are fine, but they are quick, unsurprising, and—especially when expressed in the abstract, as things that “we” should all strive for—not very helpful.
-
His persistent first-person pluralizing (“decisions we all make”) softly suggests that AI is humanity’s collective creation rather than the product of certain corporations and the individuals who run them. This obscures the most important actors in the drama—ironically, just as those actors are sapping our intellectual life, hampering the robust, informed debates we’d need in order to make the decisions Harari envisions.
-
Harari slots easily into the dominant worldview of Silicon Valley. Despite his oft-noted digital abstemiousness, he exemplifies its style of gathering and presenting information. And, like many in that world, he combines technological dystopianism with political passivity.
-
Although he thinks tech giants, in further developing AI, might end humankind, he does not treat thwarting them as an urgent priority. His epic narratives, told as stories of humanity as a whole, do not make much room for such us-versus-them clashes.