How the AI apocalypse gripped students at elite schools like Stanford - The Washington Post
shared by Javier E on 06 Jul 23
- Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
- In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
- To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
- Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
- More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats.
- Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
- Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research.
- And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
- Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
- Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
- there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside.
- Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent.
- The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
- Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation.
- Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
- Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
- At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
- from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
- In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors.
- Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
- Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
- Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
- Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
- Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
- Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments.
- Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
- Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
- Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models have convinced him that there should be room to think about AI safety.
- Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
- The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”