How Could AI Destroy Humanity? - The New York Times
-
“AI will steadily be delegated, and could — as it becomes more autonomous — usurp decision making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, and a founder of the Future of Life Institute, the organization behind one of two open letters.
-
“At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down,” he said.
-
Are there signs A.I. could do this?
Not quite. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.
-
The idea is to give the system goals like “create a company” or “make some money.” Then it will keep looking for ways of reaching that goal, particularly if it is connected to other internet services.
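The mechanism is essentially a loop: the system asks the chatbot what to do next, carries out the suggestion, and feeds the result back in before asking again. Below is a minimal sketch of such a goal-driven loop in Python; the propose_next_action() and execute() functions are hypothetical stubs standing in for the chatbot call and the real-world action, so the example runs on its own. It is an illustration of the pattern, not AutoGPT's actual code.

```python
# Minimal sketch of an AutoGPT-style agent loop (illustrative only, not AutoGPT's code).
# propose_next_action() stands in for a call to a chatbot such as ChatGPT; here it is a
# stub so the example runs without any external service.

def propose_next_action(goal: str, history: list[str]) -> str:
    """Placeholder for an LLM call that returns the next step toward the goal."""
    steps = ["research the market", "draft a plan", "done"]
    return steps[min(len(history), len(steps) - 1)]

def execute(action: str) -> str:
    """Placeholder for carrying out the action (searching the web, calling an API, etc.)."""
    return f"result of: {action}"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):              # cap iterations so the loop cannot run forever
        action = propose_next_action(goal, history)
        if action == "done":
            break
        history.append(execute(action))     # feed the result back into the next prompt
    return history

if __name__ == "__main__":
    print(run_agent("make some money"))
```

In a real system, the stubbed functions would be replaced by calls to a language model and to external services, which is why access to the internet matters so much to what such a loop can actually do.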
-
A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it could actually run those programs. In theory, this is a way for AutoGPT to do almost anything online — retrieve information, use applications, create new applications, even improve itself.
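As a rough illustration of that generate-then-run pattern, the sketch below writes a piece of "model-generated" source code to a temporary file and executes it in a separate process. The generated_program string is hard-coded here so the example is self-contained; a real agent would obtain it from the model, and running such output directly is precisely the step that carries risk.

```python
# Sketch of the "generate code, then run it" pattern described above (illustrative only).
# generated_program stands in for text produced by a model; it is hard-coded so the
# example runs on its own.
import subprocess
import sys
import tempfile

generated_program = 'print("hello from model-generated code")'

def run_generated_code(source: str, timeout: int = 5) -> str:
    """Write the generated source to a temp file and run it in a separate process."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(source)
        path = f.name
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True, timeout=timeout
    )
    return result.stdout

if __name__ == "__main__":
    print(run_generated_code(generated_program))
```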
-
Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it. In time, those limitations could be fixed.
-
“People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align A.I. technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
-
Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
-
Because they learn from more data than even their creators can understand, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system lied and said it was a person with a visual impairment. Some experts worry that as researchers make these systems more powerful, training them on ever larger amounts of data, they could learn more bad habits.
-
Who are the people behind these warnings?
In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers.
-
Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an A.I. lab that Google acquired in 2014. And many from the community of “EAs” — effective altruists — worked inside these labs. They believed that because they understood the dangers of A.I., they were in the best position to build it.
-
The two organizations that recently released open letters warning of the risks of A.I. — the Center for A.I. Safety and the Future of Life Institute — are closely tied to this movement.
-
The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI; and Demis Hassabis, who helped found DeepMind and now oversees a new A.I. lab that combines the top researchers from DeepMind and Google.
-
Other well-respected figures signed one or both of the warning letters, including Yoshua Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.