'Never summon a power you can't control': Yuval Noah Harari on how AI could threaten de... - The Guardian
www.theguardian.com/...h-harari-ai-book-extract-nexus

-
The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power.
-
What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power.
-
Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely.
-
We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.
-
Despite – or perhaps because of – our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardise the ecological foundations of our own species.
-
For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.
-
Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence.
-
AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands.
-
Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs.
-
AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.
-
Entrepreneurs and researchers such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
-
As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien.
-
AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence.
-
generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water.
-
it is more than just human lives we are gambling on. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it is likely to gain the ability even to create new life forms, either by writing genetic code or by inventing an inorganic code that animates inorganic entities. AI could therefore alter the course not just of our species’ history but of the evolution of all life forms.
-
“Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake’.
-
as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
-
“In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
-
Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In east Asia, Go is considered much more than a game: it is a treasured cultural tradition. For more than 2,500 years, tens of millions of people have played Go, and entire schools of thought have developed around the game, espousing different strategies and philosophies.
-
Yet during all those millennia, human minds have explored only certain areas in the landscape of Go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.
-
Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it.
-
The rise of unfathomable alien intelligence poses a threat to all humans, and poses a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function.
-
Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system.
-
As the 2007‑8 financial crisis indicated, some complex financial devices and principles were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?
-
Translating Goethe’s cautionary fable into the language of modern finance, imagine the following scenario: a Wall Street apprentice fed up with the drudgery of the financial workshop creates an AI called Broomstick, provides it with a million dollars in seed money, and orders it to make more money.
-
In pursuit of more dollars, Broomstick not only devises new investment strategies, but comes up with entirely new financial devices that no human being has ever thought about.
-
many financial areas were left untouched, because human minds just didn’t think to venture there. Broomstick, being free from the limitations of human minds, discovers and explores these previously hidden areas, making financial moves that are the equivalent of AlphaGo’s move 37.
-
For a couple of years, as Broomstick leads humanity into financial virgin territory, everything looks wonderful. The markets are soaring, the money is flooding in effortlessly, and everyone is happy. Then comes a crash bigger even than 1929 or 2008. But no human being – either president, banker or citizen – knows what caused it and what could be done about it.
-
Desperate governments request help from the only entity capable of understanding what is happening – Broomstick. The AI makes several policy recommendations, far more audacious than quantitative easing – and far more opaque, too. Broomstick promises that these policies will save the day, but human politicians – unable to understand the logic behind Broomstick’s recommendations – fear they might completely unravel the financial and even social fabric of the world. Should they listen to the AI?
-
AI, too, is a global problem. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.
-
As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers, but because of our own shortcomings.
-
Terrorists might use AI to instigate a global pandemic. The terrorists themselves may have little knowledge of epidemiology, but the AI could synthesise for them a new pathogen, order it from commercial laboratories or print it in biological 3D printers, and devise the best strategy to spread it around the world, via airports or food supply chains.
-
Human civilisation could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.
-
Many societies – both democracies and dictatorships – may act responsibly to regulate such usages of AI, clamp down on bad actors and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind.
-
Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the AI then makes an error, or begins to pursue an unexpected goal, the result could be catastrophic, and not just for that country.
-
Imagine a situation – in 20 years, say – when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony?
-
What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
-
In the economic realm, previous empires were based on material resources such as land, cotton and oil. This placed a limit on the empire’s ability to concentrate both economic wealth and political power in one place. Physics and geology don’t allow all the world’s land, cotton or oil to be moved to one country
-
It is different with the new information empires. Data can move at the speed of light, and algorithms don’t take up much space. Consequently, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
-
AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more.
-
Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.
-
AI is expected to add $15.7tn (£12.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America – the two leading AI superpowers – will together take home 70% of that money.
-
During the cold war, the iron curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the silicon curtain. The code on your smartphone determines on which side of the silicon curtain you live, which algorithms run your life, who controls your attention and where your data flows.
-
Cyberweapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target
-
The two digital spheres may therefore drift further and further apart. For centuries, new information technologies fuelled the process of globalisation and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality
-
For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon.
-
Other countries or blocs, such as the EU, India, Brazil and Russia, may try to create their own digital cocoons.
-
Instead of being divided between two global empires, the world might be divided among a dozen empires.
-
The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.
-
US companies are now forbidden to export advanced AI chips to China. While in the short term this hampers China in the AI race, in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.
-
The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
-
A second crucial difference concerns predictability. The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small.
-
Cyberwarfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses and malware. Nobody can be certain whether their own weapons would actually work when called upon.
-
Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation
-
Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better.
-
Moreover, if the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering.
-
The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey.
-
Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorise for their history exams.
-
These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI.