Group items tagged AI

Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
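A minimal sketch of that prediction-driven loop in PyTorch may help make it concrete. The toy corpus, the LSTM standing in for the real architecture, and the hyperparameters are all illustrative assumptions; the objective, though, is the one described above: predict the next token, then make a small adjustment to the weights.

```python
# Toy next-token training loop: predict the next character, nudge the weights.
# Corpus, model, and hyperparameters are illustrative, not any real model's.
import torch
import torch.nn as nn

text = "the cat sat on the mat. the dog sat on the log."
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
ids = torch.tensor([stoi[ch] for ch in text])

emb = nn.Embedding(len(vocab), 32)     # map characters to vectors
rnn = nn.LSTM(32, 64, batch_first=True)
head = nn.Linear(64, len(vocab))       # score every possible next character
params = list(emb.parameters()) + list(rnn.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params, lr=1e-2)

x = ids[:-1].unsqueeze(0)  # input: every character but the last
y = ids[1:]                # target: the character that actually came next
for step in range(200):
    h, _ = rnn(emb(x))
    loss = nn.functional.cross_entropy(head(h)[0], y)
    opt.zero_grad()
    loss.backward()        # measure how wrong each prediction was
    opt.step()             # tiny adjustment toward better predictions
```

Scaled up by many orders of magnitude, the same loop is what gradually arranges words into the geometric model of language the passage describes.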
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
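The paper's core operation, scaled dot-product self-attention, is compact enough to sketch. This single-head toy version with random projection matrices shows where the parallelism comes from: every position is scored against every other position in one matrix product, rather than step by step as in earlier recurrent networks.

```python
# Single-head self-attention, the transformer's core operation (toy version).
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (seq_len, d_model); w_*: (d_model, d_head) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / k.shape[-1] ** 0.5  # all position pairs, in parallel
    return F.softmax(scores, dim=-1) @ v   # weighted mix of value vectors

d_model, d_head, seq_len = 16, 8, 5
x = torch.randn(seq_len, d_model)
out = self_attention(x,
                     torch.randn(d_model, d_head),
                     torch.randn(d_model, d_head),
                     torch.randn(d_model, d_head))
print(out.shape)  # torch.Size([5, 8])
```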
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
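For readers unfamiliar with the technique, an A/B test like Luka's reduces to showing two response variants to randomly split user groups and comparing an engagement rate. A minimal two-proportion z-test, with invented counts, is sketched below.

```python
# Minimal A/B comparison of two variants by click-through rate.
# All counts are invented for illustration.
from math import erf, sqrt

def ab_test(clicks_a, n_a, clicks_b, n_b):
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

lift, p = ab_test(clicks_a=480, n_a=10_000, clicks_b=560, n_b=10_000)
print(f"lift = {lift:.2%}, p = {p:.3f}")  # keep variant B only if the lift is real
```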
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
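Li's look under the hood relies on probing: fitting a small classifier that tries to read some property straight out of a network's hidden activations. The sketch below shows the generic technique, with GPT-2 and sentiment labels as stand-ins for Li's Othello model and board state; the layer index and the six-example dataset are illustrative assumptions.

```python
# Probing sketch: can a linear classifier read a property out of hidden states?
# GPT-2 and sentiment are stand-ins here, not Li's actual Othello setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

texts = ["I loved this movie.", "An absolute delight.", "Best meal I've had.",
         "I hated this movie.", "A total disaster.", "Worst meal I've had."]
labels = [1, 1, 1, 0, 0, 0]  # 1 = positive, 0 = negative

feats = []
with torch.no_grad():
    for t in texts:
        ids = tok(t, return_tensors="pt").input_ids
        hidden = model(ids).hidden_states[6]         # a middle layer: (1, seq, 768)
        feats.append(hidden[0].mean(dim=0).numpy())  # mean-pool over tokens

probe = LogisticRegression(max_iter=1000).fit(feats, labels)
# A real probe would be scored on held-out examples, not its own training set.
print(probe.score(feats, labels))
```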
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
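This memorize-then-generalize dynamic has been reproduced in "grokking" experiments on modular addition (Power et al., 2022), and a toy version fits in a page. The hyperparameters below are guesses; whether the delayed jump in test accuracy actually appears depends heavily on them, on the weight decay, and on how long you train.

```python
# Toy grokking-style setup: learn (a + b) mod P from half of all pairs.
# Train accuracy tends to saturate early; test accuracy may jump much later.
import torch
import torch.nn as nn

torch.manual_seed(0)
P = 97
pairs = torch.tensor([(a, b) for a in range(P) for b in range(P)])
targets = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train, test = perm[: len(perm) // 2], perm[len(perm) // 2:]

class ModAdder(nn.Module):
    def __init__(self, p, d=128):
        super().__init__()
        self.emb = nn.Embedding(p, d)
        self.mlp = nn.Sequential(nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, p))
    def forward(self, ab):                   # ab: (batch, 2) operand indices
        return self.mlp(self.emb(ab).flatten(1))

model = ModAdder(P)
# strong weight decay is what pushes the model past pure memorization
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

for step in range(1, 20_001):
    opt.zero_grad()
    loss = loss_fn(model(pairs[train]), targets[train])
    loss.backward()
    opt.step()
    if step % 1_000 == 0:
        with torch.no_grad():
            tr = (model(pairs[train]).argmax(1) == targets[train]).float().mean()
            te = (model(pairs[test]).argmax(1) == targets[test]).float().mean()
        print(f"step {step}: train acc {tr:.2f}, test acc {te:.2f}")
```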
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” he said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

AI is already writing books, websites and online recipes - The Washington Post

  • Experts say those books are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic. From product reviews to recipes to blog posts and press releases, human authorship of online material is on track to become the exception rather than the norm.
  • Semrush, a leading digital marketing firm, recently surveyed its customers about their use of automated tools. Of the 894 who responded, 761 said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content, according to Semrush Chief Strategy Officer Eugene Levin.
  • What that may mean for consumers is more hyper-specific and personalized articles — but also more misinformation and more manipulation, about politics, products they may want to buy and much more.
  • As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality, warns Margaret Mitchell, chief ethics scientist at the AI start-up Hugging Face
  • “The main issue is losing track of what truth is,” she said. “Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”
  • a raft of online publishers have been using automated writing tools based on ChatGPT’s predecessors, GPT-2 and GPT-3, for years. That experience shows that a world in which AI creations mingle freely and sometimes imperceptibly with human work isn’t speculative; it’s flourishing in plain sight on Amazon product pages and in Google search results.
  • “If you have a connection to the internet, you have consumed AI-generated content,” said Jonathan Greenglass, a New York-based tech investor focused on e-commerce. “It’s already here.”
  • “In the last two years, we’ve seen this go from being a novelty to being pretty much an essential part of the workflow,”
  • the news credibility rating company NewsGuard identified 49 news websites across seven languages that appeared to be mostly or entirely AI-generated.
  • The sites sport names like Biz Breaking News, Market News Reports, and bestbudgetUSA.com; some employ fake author profiles and publish hundreds of articles a day, the company said. Some of the news stories are fabricated, but many are simply AI-crafted summaries of real stories trending on other outlets.
  • Ingenio, the San Francisco-based online publisher behind sites such as horoscope.com and astrology.com, is among those embracing automated content. While its flagship horoscopes are still human-written, the company has used OpenAI’s GPT language models to launch new sites such as sunsigns.com, which focuses on celebrities’ birth signs, and dreamdiary.com, which interprets highly specific dreams.
  • Ingenio used to pay humans to write birth sign articles on a handful of highly searched celebrities like Michael Jordan and Ariana Grande, said Josh Jaffe, president of its media division. But delegating the writing to AI allows sunsigns.com to cheaply crank out countless articles on not-exactly-A-listers
  • In the past, Jaffe said, “We published a celebrity profile a month. Now we can do 10,000 a month.”
  • It isn’t just text. Google users have recently posted examples of the search engine surfacing AI-generated images. For instance, a search for the American artist Edward Hopper turned up an AI image in the style of Hopper, rather than his actual art, as the first result.
  • Jaffe said he isn’t particularly worried that AI content will overwhelm the web. “It takes time for this content to rank well” on Google, he said — meaning that it appears on the first page of search results for a given query, which is critical to attracting readers. And it works best when it appears on established websites that already have a sizable audience: “Just publishing this content doesn’t mean you have a viable business.”
  • Google clarified in February that it allows AI-generated content in search results, as long as the AI isn’t being used to manipulate a site’s search rankings. The company said its algorithms focus on “the quality of content, rather than how content is produced.”
  • Reputations are at risk if the use of AI backfires. CNET, a popular tech news site, took flak in January when fellow tech site Futurism reported that CNET had been using AI to create articles or add to existing ones without clear disclosures. CNET subsequently investigated and found that many of its 77 AI-drafted stories contained errors.
  • Jaffe said his company discloses its use of AI to readers, and he promoted the strategy at a recent conference for the publishing industry. “There’s nothing to be ashamed of,” he said. “We’re actually doing people a favor by leveraging generative AI tools” to create niche content that wouldn’t exist otherwise.
  • BuzzFeed, which pioneered a media model built around reaching readers directly on social platforms like Facebook, announced in January it planned to make “AI inspired content” part of its “core business,” such as using AI to craft quizzes that tailor themselves to each reader. BuzzFeed announced last month that it is laying off 15 percent of its staff and shutting down its news division, BuzzFeed News.
  • it’s finding traction in the murkier worlds of online clickbait and affiliate marketing, where success is less about reputation and more about gaming the big tech platforms’ algorithms.
  • That business is driven by a simple equation: how much it costs to create an article vs. how much revenue it can bring in. The main goal is to attract as many clicks as possible, then serve the readers ads worth just fractions of a cent on each visit — the classic form of clickbait
  • In the past, such sites often outsourced their writing to businesses known as “content mills,” which harness freelancers to generate passable copy for minimal pay. Now, some are bypassing content mills and opting for AI instead.
  • “Previously it would cost you, let’s say, $250 to write a decent review of five grills,” Semrush’s Levin said. “Now it can all be done by AI, so the cost went down from $250 to $10.”
  • The problem, Levin said, is that the wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they’re all competing for the same slots in Google search results or Amazon’s on-site product reviews
  • So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through. The result is a deluge of AI-written websites, many of which are never seen by human eyes.
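The "simple equation" above can be made explicit. In the sketch below, the article costs are the figures Levin cites; the revenue per visit is an assumed stand-in for "fractions of a cent."

```python
# Break-even visits per article: cost to produce / ad revenue per visit.
# The $250 and $10 costs are from the article; the ad rate is assumed.
revenue_per_visit = 0.004  # $0.004 per pageview: illustrative guess

for cost in (250, 10):
    print(f"${cost} article breaks even at {cost / revenue_per_visit:,.0f} visits")

# $250 article breaks even at 62,500 visits
# $10 article breaks even at 2,500 visits
```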
  • But CNET’s parent company, Red Ventures, is forging ahead with plans for more AI-generated content, which has also been spotted on Bankrate.com, its popular hub for financial advice. Meanwhile, CNET in March laid off a number of employees, a move it said was unrelated to its growing use of AI.
  • The rise of AI is already hurting the business of Textbroker, a leading content platform based in Germany and Las Vegas, said Jochen Mebus, the company’s chief revenue officer. While Textbroker prides itself on supplying credible, human-written copy on a huge range of topics, “People are trying automated content right now, and so that has slowed down our growth,”
  • Mebus said the company is prepared to lose some clients who are just looking to make a “fast dollar” on generic AI-written content. But it’s hoping to retain those who want the assurance of a human touch, while it also trains some of its writers to become more productive by employing AI tools themselves.
  • He said a recent survey of the company’s customers found that 30 to 40 percent still want exclusively “manual” content, while a similar-size chunk is looking for content that might be AI-generated but human-edited to check for tone, errors and plagiarism.
  • Levin said Semrush’s clients have also generally found that AI is better used as a writing assistant than a sole author. “We’ve seen people who even try to fully automate the content creation process,” he said. “I don’t think they’ve had really good results with that. At this stage, you need to have a human in the loop.”
  • For Cowell, whose book title appears to have inspired an AI-written copycat, the experience has dampened his enthusiasm for writing. “My concern is less that I’m losing sales to fake books, and more that this low-quality, low-priced, low-effort writing is going to have a chilling effect on humans considering writing niche technical books in the future,”
  • It doesn’t help, he added, knowing that “any text I write will inevitably be fed into an AI system that will generate even more competition.”
  • Amazon removed the impostor book, along with numerous others by the same publisher, after The Post contacted the company for comment.
  • AI-written books aren’t against Amazon’s rules, per se, and some authors have been open about using ChatGPT to write books sold on the site.
  • “Amazon is constantly evaluating emerging technologies and innovating to provide a trustworthy shopping experience for our customers,”
Javier E

AI could change the 2024 elections. We need ground rules. - The Washington Post

  • New York Mayor Eric Adams doesn’t speak Spanish. But it sure sounds like he does. He’s been using artificial intelligence software to send prerecorded calls about city events to residents in Spanish, Mandarin Chinese, Urdu and Yiddish. The voice in the messages mimics the mayor but was generated with AI software from a company called ElevenLabs.
  • Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
  • I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
  • If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
  • “The ability of AI to interfere with our elections, to spread misinformation that’s extremely believable is one of the things that’s preoccupying us,” Schumer said, after watching me so easily create a deepfake of him. “Lots of people in the Congress are examining this.”
  • Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
  • But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
  • A wide 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
  • We can’t put the genie back in the bottle. AI is already embedded in tech tools and campaigns that all of us use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
  • What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
  • Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
  • But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
  • The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
  • So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
  • “The core definition is showing a candidate doing or saying something they didn’t do or say,”
  • Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
  • The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
  • Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
  • (Pressed on the ethics of his use of AI, Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
  • The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers
  • Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
  • But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI
  • Schumer said he thinks my pledge is just a start of what’s needed. “Maybe most candidates will make that pledge. But the ones that won’t will drive us to a lower common denominator, and that’s true throughout AI,” he said. “If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”
Javier E

How the AI apocalypse gripped students at elite schools like Stanford - The Washington ... - 0 views

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • ...28 more annotations...
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016,
  • there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models have convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
Javier E

Excuse me, but the industries AI is disrupting are not lucrative - 0 views

  • Google’s Gemini. The demo video earlier this week was nothing short of amazing, as Gemini appeared to fluidly interact with a questioner going through various tasks and drawings, always giving succinct and correct answers.
  • another huge new AI model revealed.
  • that’s. . . not what’s going on. Rather, they pre-recorded it and sent individual frames of the video to Gemini to respond to, as well as more informative prompts than shown, in addition to editing the replies from Gemini to be shorter and thus, presumably, more relevant. Factor all that in, and Gemini doesn’t look that different from GPT-4.
  • ...24 more annotations...
  • Continued hype is necessary for the industry, because so much money flowing in essentially allows the big players, like OpenAI, to operate free of economic worry and considerations
  • The money involved is staggering—Anthropic announced they would compete with OpenAI and raised 2 billion dollars to train their next-gen model, a European counterpart just raised 500 million, etc. Venture capitalists are eager to throw as much money as humanly possible into AI, as it looks so revolutionary, so manifesto-worthy, so lucrative.
  • While I have no idea what the downloads are going to be for the GPT Store next year, my suspicion is it does not live up to the hyped Apple-esque expectation.
  • given their test scores, I’m willing to say GPT-4 or Gemini is smarter along many dimensions than a lot of actual humans, at least in the breadth of their abstract knowledge—all while noting even leading models still have around a 3% hallucination rate, which stacks up in a complex task (the compounding is sketched after this list).
  • A more interesting “bear case” for AI is that, if you look at the list of industries that leading AIs like GPT-4 are capable of disrupting—and therefore making money off of—the list is lackluster from a return-on-investment perspective, because the industries themselves are not very lucrative.
  • What are AIs of the GPT-4 generation best at? It’s things like: writing essays or short fictions, digital art, chatting, and programming assistance
  • As of this writing, the compute cost to create an image using a large image model is roughly $0.001 and it takes around 1 second. Doing a similar task with a designer or a photographer would cost hundreds of dollars (minimum) and many hours or days (accounting for work time, as well as schedules). Even if, for simplicity’s sake, we underestimate the cost to be $100 and the time to be 1 hour, generative AI is 100,000 times cheaper and 3,600 times faster than the human alternative (see the sketch after this list).
  • The issue is that taking the job of a human illustrator just. . . doesn’t make you much money. Because human illustrators don’t make much money
  • While you can easily use Dall-E to make art for a blog, or a comic book, or a fantasy portrait to play an RPG, the market for those things is vanishingly small, almost nonexistent
  • While I personally wouldn’t go so far as to describe current LLMs as “a solution in search of a problem,” as cryptocurrency has famously been described, I do think the description rings true in an overall economic/business sense so far
  • Was there really a great crying need for new ways to cheat on academic essays? Probably not. Will chatting with the History Buff AI app (it was in the background of Sam Altman’s presentation) be significantly different than chatting with posters on /r/history on Reddit? Probably not
  • Search is the most obvious large market for AI companies, but Bing has had effectively GPT-4-level AI on offer now for almost a year, and there’s been no huge steal from Google’s market share.
  • What about programming? It’s actually a great expression of the issue, because AI isn’t replacing programming—it’s replacing Stack Overflow, a programming advice website (after all, you can’t just hire GPT-4 to code something for you, you have to hire a programmer who uses GPT-4).
  • Even if OpenAI drove Stack Overflow out of business entirely and cornered the market on “helping with programming” they would gain, what? Stack Overflow is worth about 1.8 billion, according to its last sale in 2022. OpenAI already dwarfs it in valuation by an order of magnitude.
  • The more one thinks about this, one notices a tension in the very pitch itself: don’t worry, AI isn’t going to take all our jobs, just make us better at them, but at the same time, the upside of AI as an industry is the total combined worth of the industries it’s replacing, er, disrupting, and this justifies the massive investments and endless economic optimism.
  • It makes me worried about the worst of all possible worlds: generative AI manages to pollute the internet with cheap synthetic data, manages to make being a human artist / creator harder, manages to provide the basis of agential AIs that still pose some sort of existential risk if they get intelligent enough—all without ushering in some massive GDP boost that takes us into utopia
  • If the AI industry ever goes through an economic bust sometime in the next decade I think it’ll be because there are fewer ways than first thought to squeeze substantial profits out of tasks that are relatively commonplace already
  • We can just look around for equivalencies. The payments for humans working as “mechanical turks” on Amazon are shockingly low. If a human pretending to be an AI (which is essentially what a mechanical turk worker is doing) only makes a buck an hour, how much will an AI make doing the same thing?
  • Is it just a quirk of the current state of technology, or something more general?
  • What’s written on the internet is a huge “high quality” training set (at least in that it is all legible and collectable and easy to parse) so AIs are very good at writing the kind of things you read on the internet
  • But data with a high supply usually means its production is easy or commonplace, which, ceteris paribus, means it’s cheap to sell in turn. The result is a highly-intelligent AI merely adding to an already-massive supply of the stuff it’s trained on.
  • Like, wow, an AI that can write a Reddit comment! Well, there are millions of Reddit comments, which is precisely why we now have AIs good at writing them. Wow, an AI that can generate music! Well, there are millions of songs, which is precisely why we now have AIs good at creating them.
  • Call it the supply paradox of AI: the easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.
  • AI might end up incredibly smart, but mostly at things that aren’t economically valuable.
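
A quick check of the two quantitative claims flagged above, using only the essay’s own figures ($0.001 and 1 second per AI image; $100 and 1 hour for the human). The step counts in the compounding example are assumptions for illustration, not figures from the essay:

    # Ratios implied by the essay's own (deliberately conservative) figures.
    human_cost, ai_cost = 100.0, 0.001   # dollars per image
    human_time, ai_time = 3600.0, 1.0    # seconds per image
    print(human_cost / ai_cost)          # 100000.0 -> "100,000 times cheaper"
    print(human_time / ai_time)          # 3600.0   -> "3,600 times faster"

    # How a 3% hallucination rate "stacks up": the chance a multi-step task
    # survives with no error at all is 0.97 ** n (step counts are assumed).
    for n in (1, 5, 20, 50):
        print(n, round(0.97 ** n, 3))    # 20 steps -> ~0.544, a coin flip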
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • The rule that most people aware of these issues would have endorsed 50 years earlier was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms (a sketch of how training compute is estimated follows this list)
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
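
To make the proposed compute ceiling concrete: the scaling-law literature commonly approximates training compute for a dense model as 6 × parameters × training tokens. A minimal sketch using publicly reported GPT-3-scale figures; the cap value is purely hypothetical, not a number from the essay:

    # Rule of thumb: training FLOPs ~= 6 * N (parameters) * D (training tokens).
    def training_flops(n_params: float, n_tokens: float) -> float:
        return 6 * n_params * n_tokens

    flops = training_flops(175e9, 300e9)  # GPT-3 scale: 175B params, 300B tokens
    print(f"{flops:.2e}")                 # ~3.15e+23 FLOPs

    CAP = 1e23                            # hypothetical ceiling, not a policy figure
    print(flops <= CAP)                   # False -> this run would exceed the cap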
Javier E

AI firms must be held responsible for harm they cause, 'godfathers' of technology say |... - 0 views

  • Powerful artificial intelligence systems threaten social stability and AI companies must be made liable for harms caused by their products, a group of senior experts including two “godfathers” of the technology has warned.
  • A co-author of the policy proposals from 23 experts said it was “utterly reckless” to pursue ever more powerful AI systems before understanding how to make them safe.
  • “It’s time to get serious about advanced AI systems,” said Stuart Russell, professor of computer science at the University of California, Berkeley. “These are not toys. Increasing their capabilities before we understand how to make them safe is utterly reckless.”
  • ...14 more annotations...
  • The document urged governments to adopt a range of policies, including:
  • Governments allocating one-third of their AI research and development funding, and companies one-third of their AI R&D resources, to safe and ethical use of systems.
  • Giving independent auditors access to AI laboratories.
  • Establishing a licensing system for building cutting-edge models.
  • AI companies must adopt specific safety measures if dangerous capabilities are found in their models.
  • Making tech companies liable for foreseeable and preventable harms from their AI systems.
  • Other co-authors of the document include Geoffrey Hinton and Yoshua Bengio, two of the three “godfathers of AI”, who won the ACM Turing award – the computer science equivalent of the Nobel prize – in 2018 for their work on AI.
  • Both are among the 100 guests invited to attend the summit. Hinton resigned from Google this year to sound a warning about what he called the “existential risk” posed by digital intelligence while Bengio, a professor of computer science at the University of Montreal, joined him and thousands of other experts in signing a letter in March calling for a moratorium on giant AI experiments.
  • The authors warned that carelessly developed AI systems threaten to “amplify social injustice, undermine our professions, erode social stability, enable large-scale criminal or terrorist activities and weaken our shared understanding of reality that is foundational to society.”
  • They warned that current AI systems were already showing signs of worrying capabilities that point the way to the emergence of autonomous systems that can plan, pursue goals and “act in the world”. The GPT-4 AI model that powers the ChatGPT tool, which was developed by the US firm OpenAI, has been able to design and execute chemistry experiments, browse the web and use software tools including other AI models, the experts said.
  • “If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals,” the authors wrote, adding that “we may not be able to keep them in check”.
  • Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviour
  • Some AI experts argue that fears about the existential threat to humans are overblown. The other co-winner of the 2018 Turing award alongside Bengio and Hinton, Yann LeCun, now chief AI scientist at Mark Zuckerberg’s Meta and who is also attending the summit, told the Financial Times that the notion AI could exterminate humans was “preposterous”.
  • Nonetheless, the authors of the policy document have argued that if advanced autonomous AI systems did emerge now, the world would not know how to make them safe or conduct safety tests on them. “Even if we did, most countries lack the institutions to prevent misuse and uphold safe practices,” they added.
Javier E

OpenAI CEO Calls for Collaboration With China to Counter AI Risks - WSJ - 0 views

  • As the U.S. seeks to contain China’s progress in artificial intelligence through sanctions, OpenAI CEO Sam Altman is choosing engagement.
  • Altman emphasized the importance of collaboration between American and Chinese researchers to mitigate the risks of AI systems, against a backdrop of escalating competition between Washington and Beijing to lead in the technology. 
  • “China has some of the best AI talent in the world,” Altman said. “So I really hope Chinese AI researchers will make great contributions here.”
  • ...12 more annotations...
  • Altman and Geoff Hinton, a so-called godfather of AI who quit Google to warn of the potential dangers of AI, were among more than a dozen American and British AI executives and senior researchers from companies including chip maker Nvidia and generative AI leaders Midjourney and Anthropic who spoke at the conference. 
  • “This event is extremely rare in U.S.-China AI conversations,” said Jenny Xiao, a partner at venture-capital firm Leonis Capital and who researches AI and China. “It’s important to bring together leading voices in the U.S. and China to avoid issues such as AI arms racing, competition between labs and to help establish international standards,” she added.
  • By some metrics, China now produces more high-quality research papers in the field than the U.S. but still lags behind in “paradigm-shifting breakthroughs,” according to an analysis from The Brookings Institution. In generative AI, the latest wave of top-tier AI systems, China remains one to two years behind U.S. development and reliant on U.S. innovations, China tech watchers and industry leaders have said. 
  • The competition between Washington and Beijing belies deep cross-border connections among researchers: The U.S. and China remain each other’s number one collaborators in AI research,
  • During a congressional testimony in May, Altman warned that a peril of AI regulation is that “you slow down American industry in such a way that China or somebody else makes faster progress.”
  • At the same time, he added that it was important to continue engaging in global conversations. “This technology will impact Americans and all of us wherever it’s developed,”
  • Altman delivered the opening keynote for a session dedicated to AI safety and alignment, a hotly contested area of research that aims to mitigate the harmful impacts of AI on society. Hinton delivered the closing talk for the same session later Saturday, also dialing in. He presented his research that had made him more concerned about the risks of AI and appealed to young Chinese researchers in the audience to help work on solving these problems.
  • “Over time you should expect us to open-source more models in the future,” Altman said but added that it would be important to strike a balance to avoid abuses of the technology.
  • He has emphasized cautious regulation as European regulators consider the AI Act, viewed as one of the most ambitious plans globally to create guardrails that would address the technology’s impact on human rights, health and safety, and on tech giants’ monopolistic behavior.
  • Chinese regulators have also pressed forward on enacting strict rules for AI development that share significant overlap with the EU act but impose additional censorship measures that ban generating false or politically sensitive speech.
  • Tegmark, who attended in person, strode onto the stage smiling and waved at the crowd before opening with a few lines of Mandarin.
  • “For the first time now we have a situation where both East and West have the same incentive to continue building AI to get to all the benefits but not go so fast that we lose control,” Tegmark said, after warning the audience about catastrophic risks that could arise from careless AI development. “This is something we can all work together on.”
Javier E

The New AI Panic - The Atlantic - 0 views

  • export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The department last year limited China’s access to the computer chips needed to power AI and is in discussions now to expand the controls. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.
  • If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.
  • The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.
  • ...22 more annotations...
  • Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By their determination, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT
  • The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs.
  • Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.
  • Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.
  • Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefit
  • AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.
  • For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous.
  • Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.
  • The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing
  • That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to their messaging,
  • “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.
  • The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.
  • The decision would represent an escalation against China, further destabilizing a fractured relationship
  • Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.
  • “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.
  • What’s frequently left out of considerations as well is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development.
  • Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development,
  • Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell right below the export controls’ technical specifications, and was able to continue selling to China. Bytedance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery—deals that are worth roughly $5 billion, according to the Financial Times.
  • In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.
  • there could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts the regulatory attention away from present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation
  • “People overestimate how much this is in the interest of these companies,”
  • “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.
Javier E

How We Can Control AI - WSJ - 0 views

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate (a minimal sketch of the preference objective follows this list). Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • ...22 more annotations...
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight (a back-of-the-envelope count follows this list).
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act,
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
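
On the RLHF step mentioned at the top of this list: the usual recipe in the published literature first fits a reward model to pairs of human-ranked responses, then optimizes the chatbot against that reward. A minimal sketch of the standard preference objective (a Bradley–Terry loss); the scores are toy stand-ins, not any lab’s actual data:

    import torch

    def preference_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
        # Push the reward of the human-preferred response above the rejected
        # one: loss = -log sigmoid(r_chosen - r_rejected).
        return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

    # Toy reward-model scores for four preference pairs.
    r_chosen = torch.tensor([1.2, 0.3, 2.0, 0.8])
    r_rejected = torch.tensor([0.9, 0.5, 0.1, 0.7])
    print(preference_loss(r_chosen, r_rejected))  # shrinks as chosen > rejected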
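And the “combinatorial innovation” count is, at bottom, subset growth: with n independent systems, every merger of two or more of them is one of 2^n − n − 1 possible combinations. A sketch with illustrative values of n:

    # Ways to merge two or more of n independent AI systems: all subsets
    # (2**n) minus the empty set (1) and the n singletons.
    def mergers(n: int) -> int:
        return 2**n - n - 1

    for n in (5, 10, 20, 40):
        print(n, mergers(n))  # 40 systems -> ~1.1e12 combinations to oversee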
Javier E

News Publishers See Google's AI Search Tool as a Traffic-Destroying Nightmare - WSJ - 0 views

  • A task force at the Atlantic modeled what could happen if Google integrated AI into search. It found that 75% of the time, the AI-powered search would likely provide a full answer to a user’s query and the Atlantic’s site would miss out on traffic it otherwise would have gotten. 
  • What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed “Search Generative Experience” on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine. 
  • Google’s embrace of AI in search threatens to throw off that delicate equilibrium, publishing executives say, by dramatically increasing the risk that users’ searches won’t result in them clicking on links that take them to publishers’ sites
  • ...23 more annotations...
  • Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm SimilarWeb. 
  • “AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Döpfner, chairman and CEO of Axel Springer,
  • His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative-AI specialist OpenAI.
  • publishers have seen enough to estimate that they will lose between 20% and 40% of their Google-generated traffic if anything resembling recent iterations rolls out widely. Google has said it is giving priority to sending traffic to publishers.
  • The rise of AI is the latest and most anxiety-inducing chapter in the long, uneasy marriage between Google and publishers, which have been bound to each other through a basic transaction: Google helps publishers be found by readers, and publishers give Google information—millions of pages of web content—to make its search engine useful.
  • Already, publishers are reeling from a major decline in traffic sourced from social-media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.
  • Google’s AI search was trained, in part, on their content and other material from across the web—without payment.
  • Google’s view is that anything available on the open internet is fair game for training AI models. The company cites a legal doctrine that allows portions of a copyrighted work to be used without permission for cases such as criticism, news reporting or research.
  • The changes risk damaging website owners that produce the written material vital to both Google’s search engine and its powerful AI models.
  • “If Google kills too many publishers, it can’t build the LLM,”
  • Barry Diller, chairman of IAC and Expedia, said all major AI companies, including Google and rivals like OpenAI, have promised that they would continue to send traffic to publishers’ sites. “How they do it, they’ve been very clear to us and others, they don’t really know,” he said.
  • All of this has led Google and publishers to carry out an increasingly complex dialogue. In some meetings, Google is pitching the potential benefits of the other AI tools it is building, including one that would help with the writing and publishing of news articles
  • At the same time, publishers are seeking reassurances from Google that it will protect their businesses from an AI-powered search tool that will likely shrink their traffic, and they are making clear they expect to be paid for content used in AI training.
  • “Any attempts to estimate the traffic impact of our SGE experiment are entirely speculative at this stage as we continue to rapidly evolve the user experience and design, including how links are displayed, and we closely monitor internal data from our tests,” Reid said.
  • Many of IAC’s properties, like Brides, Investopedia and the Spruce, get more than 80% of their traffic from Google
  • Google began rolling out the AI search tool in May by letting users opt into testing. Using a chat interface that can understand longer queries in natural language, it aims to deliver what it calls “snapshots”—or summaries—of the answer, instead of the more link-heavy responses it has traditionally served up in search results. 
  • Google at first didn’t include links within the responses, instead placing them in boxes to the right of the passage. It later added in-line links following feedback from early users. Some more recent versions require users to click a button to expand the summary before getting links. Google doesn’t describe the links as source material but rather as corroboration of its summaries.
  • During Chinese President Xi Jinping’s recent visit to San Francisco, the Google AI search bot responded to the question “What did President Xi say?” with two quotes from his opening remarks. Users had to click on a little red arrow to expand the response and see a link to the CNBC story that the remarks were taken from. The CNBC story also sat over on the far right-hand side of the screen in an image box.
  • The same query in Google’s regular search engine turned up a different quote from Xi’s remarks, but a link to the NBC News article it came from was beneath the paragraph, atop a long list of news stories from other sources like CNN and PBS.
  • Google’s Reid said AI is the future of search and expects its new tool to result in more queries.
  • “The number of information needs in the world is not a fixed number,” she said. “It actually grows as information becomes more accessible, becomes easier, becomes more powerful in understanding it.”
  • Testing has suggested that AI isn’t the right tool for answering every query, she said.
  • Many publishers are opting to insert code in their websites to block AI tools from “crawling” them for content. But blocking Google is thorny, because publishers must allow their sites to be crawled in order to be indexed by its search engine—and therefore visible to users searching for their content. To some in the publishing world there was an implicit threat in Google’s policy: Let us train on your content or you’ll be hard to find on the internet.
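
The blocking described above is typically done in robots.txt. Google publishes separate crawler tokens for search indexing (Googlebot) and for AI training (Google-Extended), and OpenAI’s training crawler is GPTBot, so a publisher can in principle opt out of training while staying in the search index. A sketch of such a file; the blanket paths are illustrative, and a real site would tune its own rules:

    # robots.txt -- block AI-training crawlers, keep search indexing
    User-agent: GPTBot            # OpenAI's training crawler
    Disallow: /

    User-agent: Google-Extended   # Google's AI-training opt-out token
    Disallow: /

    User-agent: Googlebot         # search crawler: left alone, so the site
    Allow: /                      # remains indexed and findable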
Javier E

Will China overtake the U.S. on AI? Probably not. Here's why. - The Washington Post - 0 views

  • Chinese authorities have been so proactive about regulating some uses of AI, especially those that allow the general public to create their own content, that compliance has become a major hurdle for the country’s companies.
  • As the use of AI explodes, regulators in Washington and around the world are trying to figure out how to manage potential threats to privacy, employment, intellectual property and even human existence itself.
  • But there are also concerns that putting any guardrails on the technology in the United States would surrender leadership in the sector to Chinese companies.
  • ...16 more annotations...
  • Senate Majority Leader Charles E. Schumer (D-N.Y.) last month urged Congress to adopt “comprehensive” regulations on the AI industry.
  • Rather than focusing on AI technology that lets the general public create unique content like the chatbots and image generators, Chinese companies have instead focused on technologies with clear commercial uses, like surveillance tech.
  • In a recent study, Ding found that most of the large language models developed in China were nearly two years behind those developed in the U.S., a gap that would be a challenge to close — even if American firms had to adjust to regulation.
  • This gap also makes it difficult for Chinese firms to attract the world’s top engineering talent. Many would prefer to work at firms that have the resources and flexibility to experiment on frontier research areas.
  • Restrictions on access to the most advanced chips, which are needed to run AI models, have added to these difficulties.
  • Recent research identified 17 large language models in China that relied on Nvidia chips, and just three models that used Chinese-made chips.
  • While Beijing pushes to make comparable chips at home, Chinese AI companies have to source their chips any way they can — including from a black market that has sprung up in Shenzhen, where, according to Reuters, the most advanced Nvidia chips sell for nearly $20,000, more than twice what they go for elsewhere.
  • Despite the obstacles, Chinese AI companies have made major advances in some types of AI technologies, including facial recognition, gait recognition, and artificial and virtual reality.
  • These technologies have also fueled the development of China’s vast surveillance industry, giving Chinese tech giants an edge that they market around the world, such as Huawei’s contracts for smart city surveillance from Belgrade, Serbia, to Nairobi.
  • Companies developing AI in China need to comply with specific laws on intellectual property rights, personal information protection, recommendation algorithms and synthetic content, also called deep fakes. In April, regulators also released a draft set of rules on generative AI, the technology behind image generator Stable Diffusion and chatbots such as OpenAI’s ChatGPT and Google’s Bard.
  • They also need to ensure AI generated content complies with Beijing’s strict censorship regime. Chinese tech companies such as Baidu have become adept at filtering content that contravenes these rules. But it has hampered their ability to test the limits of what AI can do.
  • No Chinese tech company has yet been able to release a large language model on the scale of OpenAI’s ChatGPT to the general public, in which the company has asked the public to play with and test a generative AI model, said Ding, the professor at George Washington University.
  • “That level of freedom has not been allowed in China, in part because the Chinese government is very worried about people creating politically sensitive content,” Ding said.
  • Although Beijing’s regulations have created major burdens for Chinese AI companies, analysts say that they contain several key principles that Washington can learn from — like protecting personal information, labeling AI-generated content and alerting the government if an AI develops dangerous capabilities.
  • AI regulation in the United States could easily fall short of Beijing’s heavy-handed approach while still preventing discrimination, protecting people’s rights and adhering to existing laws, said Johanna Costigan, a research associate at the Asia Society Policy Institute.
  • “There can be alignment between regulation and innovation,” Costigan said. “But it’s a question of rising to the occasion of what this moment represents — do we care enough to protect people who are using this technology? Because people are using it whether the government regulates it or not.”
Javier E

Mistral, the 9-Month-Old AI Startup Challenging Silicon Valley's Giants - WSJ - 0 views

  • Mensch, who started in academia, has spent much of his life figuring out how to make AI and machine-learning systems more efficient. Early last year, he joined forces with co-founders Timothée Lacroix, 32, and Guillaume Lample, 33, who were then at Meta Platforms’ artificial-intelligence lab in Paris. 
  • They are betting that their small team can outmaneuver Silicon Valley titans by finding more efficient ways to build and deploy AI systems. And they want to do it in part by giving away many of their AI systems as open-source software.
  • Eric Boyd, corporate vice president of Microsoft’s AI platform, said Mistral presents an intriguing test of how far clever engineering can push AI systems. “So where else can you go?” he asked. “That remains to be seen.”
  • ...7 more annotations...
  • Mensch said his new model cost less than €20 million, the equivalent of roughly $22 million, to train. By contrast OpenAI Chief Executive Sam Altman said last year after the release of GPT-4 that training his company’s biggest models cost “much more than” $50 million to $100 million.
  • Brave Software made a free, open-source model from Mistral the default to power its web-browser chatbot, said Brian Bondy, Brave’s co-founder and chief technology officer. He said that the company finds the quality comparable with proprietary models, and Mistral’s open-source approach also lets Brave control the model locally.
  • “We want to be the most capital-efficient company in the world of AI,” Mensch said. “That’s the reason we exist.” 
  • Mensch joined the Google AI unit then called DeepMind in late 2020, where he worked on the team building so-called large language models, the type of AI system that would later power ChatGPT. By 2022, he was one of the lead authors of a paper about a new AI model called Chinchilla, which changed the field’s understanding of the relationship among the size of an AI model, how much data is used to build it and how well it performs, known as AI scaling laws (a sketch of the fitted law appears after this list).
  • Mensch took a role lobbying French policymakers, including French President Emmanuel Macron, against certain elements of the European Union’s new AI Act, which Mensch warned could slow down companies and would, in his view, do nothing to make AI safer. After changes to the text in Brussels, it will be a manageable burden for Mistral, Mensch says, even if he thinks the law should have remained focused on how AI is used rather than also regulating the underlying technology.  
  • For Mensch and his co-founders, releasing their initial AI systems as open source that anyone could use or adapt free of charge was an important principle. It was also a way to get noticed by developers and potential clients eager for more control over the AI they use.
  • Mistral’s most advanced models, including the one unveiled Monday, aren’t available open source. 
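The Chinchilla result mentioned above can be stated compactly. The sketch below gives the loss model from the Hoffmann et al. (2022) paper; the constants shown are the approximate published fits, and the compute-optimal takeaway is paraphrased rather than derived here, so treat it as illustrative.

```latex
% Chinchilla scaling law (Hoffmann et al., 2022) -- a sketch.
% N = number of model parameters, D = number of training tokens,
% L = pretraining loss. Constants are the approximate published fits.
\[
  L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
  \qquad
  E \approx 1.69, \quad A \approx 406.4, \quad B \approx 410.7, \quad
  \alpha \approx 0.34, \quad \beta \approx 0.28.
\]
% Holding training compute C (roughly 6ND) fixed and minimizing L implies
% that the optimal N and D both grow roughly as C^{1/2}: parameters and
% training tokens should be scaled in step, at about 20 tokens per parameter.
```

This law is the backdrop to Mistral’s bet: it rewards training smaller models on more data, which is exactly the capital-efficient strategy Mensch describes.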
Javier E

The Monk Who Thinks the World Is Ending - The Atlantic - 0 views

  • Seventy thousand years ago, a cognitive revolution allowed Homo sapiens to communicate in story—to construct narratives, to make art, to conceive of god.
  • Twenty-five hundred years ago, the Buddha lived, and some humans began to touch enlightenment, he says—to move beyond narrative, to break free from ignorance.
  • Three hundred years ago, the scientific and industrial revolutions ushered in the beginning of the “utter decimation of life on this planet.”
  • ...25 more annotations...
  • Humanity has “exponentially destroyed life on the same curve as we have exponentially increased intelligence,” he tells his congregants.
  • Now the “crazy suicide wizards” of Silicon Valley have ushered in another revolution. They have created artificial intelligence.
  • Forall provides spiritual advice to AI thinkers, and hosts talks and “awakening” retreats for researchers and developers, including employees of OpenAI, Google DeepMind, and Apple. Roughly 50 tech types have done retreats at MAPLE in the past few years.
  • Humans are already destroying life on this planet. AI might soon destroy us.
  • His monastery is called MAPLE, which stands for the “Monastic Academy for the Preservation of Life on Earth.” The residents there meditate on their breath and on metta, or loving-kindness, an emanation of joy to all creatures.
  • They meditate in order to achieve inner clarity. And they meditate on AI and existential risk in general—life’s violent, early, and unnecessary end.
  • There is “no reason” to think AI will preserve humanity, “as if we’re really special,” Forall tells the residents, clad in dark, loose clothing, seated on zafu cushions on the wood floor. “There’s no reason to think we wouldn’t be treated like cattle in factory farms.”
  • His second is to influence technology by influencing technologists. His third is to change AI itself, seeing whether he and his fellow monks might be able to embed the enlightenment of the Buddha into the code.
  • In the past few years, MAPLE has become something of the house monastery for people worried about AI and existential risk.
  • Forall describes the project of creating an enlightened AI as perhaps “the most important act of all time.” Humans need to “build an AI that walks a spiritual path,” one that will persuade the other AI systems not to harm us.
  • We should devote half of global economic output—$50 trillion, give or take—to “that one thing.” We need to build an “AI guru,” he said. An “AI god.”
  • Forall’s first goal is to expand the pool of humans following what Buddhists call the Noble Eightfold Path.
  • Forall and many MAPLE residents are what are often called, derisively if not inaccurately, “doomers.”
  • The seminal text in this ideological lineage is Nick Bostrom’s Superintelligence, which posits that AI could turn humans into gorillas, in a way. Our existence could depend not on our own choices but on the choices of a more intelligent other.
  • He is spending his life ruminating on AI’s risks, which he sees as far from banal. “We are watching humanist values, and therefore the political systems based on them, such as democracy, as well as the economic systems—they’re just falling apart,” he said. “The ultimate authority is moving from the human to the algorithm.”
  • (Forall’s mother worked for humanitarian nonprofits and his father for conservation nonprofits; the household, which attended Quaker meetings, listened to a lot of NPR.)
  • He got his answer: Craving is the root of all suffering. And he became ordained, giving up the name Teal Scott and becoming Soryu Forall: “Soryu” meaning something like “a growing spiritual practice” and “Forall” meaning, of course, “for all.”
  • In 2013, he opened MAPLE, a “modern” monastery addressing the plagues of environmental destruction, lethal weapons systems, and AI, offering co-working and online courses as well as traditional monastic training.
  • His vision is dire and grand, but perhaps that is why it has found such a receptive audience among the folks building AI, many of whom conceive of their work in similarly epochal terms.
  • The nonprofit’s revenues have quadrupled, thanks in part to contributions from tech executives as well as organizations such as the Future of Life Institute, co-founded by Jaan Tallinn, a co-creator of Skype.
  • The donations have helped MAPLE open offshoots—Oak in the Bay Area, Willow in Canada—and plan more. (The highest-paid person at MAPLE is the property manager, who earns roughly $40,000 a year.)
  • The strictness of the place helps them let go of ego and see the world more clearly, residents told me. “To preserve all life: You can’t do that until you come to love all life, and that has to be trained.”
  • Forall was absolute: Nine countries are armed with nuclear weapons. Even if we stop the catastrophe of climate change, we will have done so too late for thousands of species and billions of beings. Our democracy is fraying. Our trust in one another is fraying.
  • Many of the very people creating AI believe it could be an existential threat: One 2022 survey asked AI researchers to estimate the probability that AI would cause “severe disempowerment” or human extinction; the median response was 10 percent. The destruction, Forall said, is already here.
  • “It’s important to know that we don’t know what’s going to happen,” he told me. “It’s also important to look at the evidence.” He said it was clear we were on an “accelerating curve,” in terms of an explosion of intelligence and a cataclysm of death. “I don’t think that these systems will care too much about benefiting people. I just can’t see why they would, in the same way that we don’t care about benefiting most animals. While it is a story in the future, I feel like the burden of proof isn’t on me.”
Javier E

Rishi Sunak races to tighten rules for AI amid fears of existential risk | Artificial i... - 0 views

  • The prime minister and his officials are looking at ways to tighten the UK’s regulation of cutting-edge technology, as industry figures warn the government’s AI white paper, published just two months ago, is already out of date.
  • Sunak is pushing allies to formulate an international agreement on how to develop AI capabilities, which could even lead to the creation of a new global regulator.
  • Michelle Donelan, as science, innovation and technology secretary, published a white paper in April which set out five broad principles for developing the technology, but said relatively little about how to regulate it. In her foreword to that paper, she wrote: “AI is already delivering fantastic social and economic benefits for real people.”
  • ...6 more annotations...
  • In recent months, however, the advances in the automated chat tool ChatGPT and the warning by Geoffrey Hinton, the “godfather of AI”, that the technology poses an existential risk to humankind, have prompted a change of tack within government.
  • Last week, Sunak met four of the world’s most senior executives in the AI industry, including Sundar Pichai, the chief executive of Google, and Sam Altman, the chief executive of ChatGPT’s parent company OpenAI. After the meeting that included Altman, Downing Street acknowledged for the first time the “existential risks” now being faced.
  • “There has been a marked shift in the government’s tone on this issue,” said Megan Stagman, an associate director at the government advisory firm Global Counsel. “Even since the AI white paper, there has been a dramatic shift in thinking.”
  • He added: “We need an AI bill. The problem of who should regulate it is a tricky one but I don’t think you can hand it off to regulators for other industries.”
  • Lucy Powell, Labour’s spokesperson for digital, culture, media and sport, said: “The AI white paper is a sticking plaster on this huge long-term shift. Relying on overstretched regulators to manage the multiple impacts of AI may allow huge areas to fall through the gaps.”
  • Government insiders admit there has been a shift in approach, but insist they will not follow the EU’s example of regulating each use of AI in a different way. MEPs are currently scrutinising a new law that would allow for AI in some contexts but ban it in others, such as for facial recognition.
Javier E

Pause or panic: battle to tame the AI monster - 0 views

  • What exactly are they afraid of? How do you draw a line from a chatbot to global destruction?
  • This tribe feels we have made three crucial errors: giving the AI the capability to write code, connecting it to the internet and teaching it about human psychology. In those steps we have created a self-improving, potentially manipulative entity that can use the network to achieve its ends — which may not align with ours.
  • This is a technology that learns from our every interaction with it. In an eerie glimpse of AI’s single-mindedness, OpenAI revealed in a paper that GPT-4 was willing to lie, telling a human online it was a blind person, to get a task done.
  • ...16 more annotations...
  • For researchers concerned with more immediate AI risks, such as bias, disinformation and job displacement, the voices of doom are a distraction. Professor Brent Mittelstadt, director of research at the Oxford Internet Institute, said the warnings of “the existential risks community” are overblown. “The problem is you can’t disprove the future scenarios . . . in the same way you can’t disprove science fiction.” Emily Bender, a professor of linguistics at the University of Washington, believes the doomsters are propagating “unhinged AI hype, helping those building this stuff sell it”.
  • Those urging us to stop, pause and think again have a useful card up their sleeves: the people building these models do not fully understand them. AI like ChatGPT is made up of huge neural networks that can defy their creators by coming up with “emergent properties”.
  • Google’s PaLM model started translating Bengali despite not being trained to do so.
  • Let’s not forget the excitement, because that is also part of Moloch, driving us forward. The lure of AI’s promises for humanity has been hinted at by DeepMind’s AlphaFold breakthrough, which predicted the 3D structures of nearly all the proteins known to humanity.
  • Noam Shazeer, a former Google engineer credited with setting large language models such as ChatGPT on their present path, was asked by The Sunday Times how the models worked. He replied: “I don’t think anybody really understands how they work, just like nobody really understands how the brain works. It’s pretty much alchemy.”
  • The industry is turning itself to understanding what has been created, but some predict it will take years, decades even.
  • “It’s clear the people working on generative AI are uneasy about the worst-case scenario of it destroying us all. These fears are much more pronounced in private than they are in public,” said Alex Heath, deputy editor of The Verge, who recently attended an AI conference in San Francisco. One figure building an AI product “said over lunch with a straight face that he is savoring the time before he is killed by AI”.
  • Greg Brockman, co-founder of OpenAI, told the TED2023 conference this week: “We hear from people who are excited, we hear from people who are concerned. We hear from people who feel both those emotions at once. And, honestly, that’s how we feel.”
  • A CBS interviewer challenged Sundar Pichai, Google’s chief executive, this week: “You don’t fully understand how it works, and yet you’ve turned it loose on society?”
  • In 2020 there wasn’t a single drug in clinical trials developed using an AI-first approach. Today there are 18.
  • Consider this from Bill Gates last month: “I think in the next five to ten years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn.”
  • If the industry is aware of the risks, is it doing enough to mitigate them? Microsoft recently cut its ethics team, and researchers building AI outnumber those focused on safety by 30-to-1.
  • The concentration of AI power, which worries so many, also presents an opportunity to more easily develop some global rules. But there is little agreement on direction. Europe is proposing a centrally defined, top-down approach. Britain wants an innovation-friendly environment where rules are defined by each industry regulator. The US commerce department is consulting on whether risky AI models should be certified. China is proposing strict controls on generative AI that could upend social order.
  • Part of the drive to act now is to ensure we learn the lessons of social media. Twenty years after creating it, we are trying to put it back in a legal straitjacket after learning that its algorithms understand us only too well. “Social media was the first contact between AI and humanity, and humanity lost,” said Yuval Harari, the Sapiens author.
  • Others point to bioethics, especially international agreements on human cloning. Tegmark said last week: “You could make so much money on human cloning. Why aren’t we doing it? Because biologists thought hard about this and felt this is way too risky. They got together in the Seventies and decided, let’s not do this because it’s too unpredictable. We could lose control over what happens to our species. So they paused.” Even China signed up.
  • One voice urging calm is Yann LeCun, Meta’s chief AI scientist. He has labelled ChatGPT a “flashy demo” and “not a particularly interesting scientific advance”. He tweeted: “A GPT-4-powered robot couldn’t clear up the dinner table and fill up the dishwasher, which any ten-year-old can do. And it couldn’t drive a car, which any 18-year-old can learn to do in 20 hours of practice. We’re still missing something big for human-level AI.” If this is sour grapes and he’s wrong, Moloch already has us in its thrall.
Javier E

Inside the porn industry, AI looms large - The Washington Post - 0 views

  • Since the first AVN “expo” in 1998, adult entertainment has been overtaken by two business models: Pornhub, a free site supported by ads, and OnlyFans, a subscription platform where individual actors control their businesses and their fate.
  • Now, a new shift is on the horizon: Artificial intelligence models that spin up photorealistic images and videos that put viewers in the director’s chair, letting them create whatever porn they like.
  • Some site owners think it’s a privilege people will pay for, and they are racing to build custom AI models that — unlike the sanitized content on OpenAI’s video engine Sora — draw on a vast repository of porn images and videos.
  • ...26 more annotations...
  • The trickiest question may be how to prevent abuse. AI generators have technological boundaries, but not morals, and it’s relatively easy for users to trick them into creating content that depicts violence, rape, sex with children or a celebrity — or even a crush from work who never consented to appear.
  • In some cases, the engines themselves are trained on porn images whose subjects didn’t explicitly agree to the new use. Currently, no federal laws protect the victims of nonconsensual deepfakes.
  • Adult entertainment is a giant industry accounting for a substantial chunk of all internet traffic: Major porn sites get more monthly visitors and page views than Amazon, Netflix, TikTok or Zoom.
  • The industry is a habitual early adopter of new technology, from VHS to DVD to dot com. In the mid-2000s, porn companies set up massive sites where users upload and watch free videos, and ad sales foot the bills.
  • At last year’s AVN conference, Steven Jones said his peers looked at him “like he was crazy” when he talked about AI opportunities: “Nobody was interested.” This year, Jones said, he’s been “the belle of the ball.”
  • He called up his old business partner, and the two immediately spent about $550,000 securing the web domains for porn dot ai, deepfake dot com and deepfakes dot com, Jones said. “Lightspeed” was back.
  • One major model, Stable Diffusion, shares its code publicly, and some technologists have figured out how to edit the code to allow for sexual images.
  • What keeps Jones up at night is people trying to use his company’s tools to generate images of abuse, he said. The models have some technological guardrails that make it difficult for users to render children, celebrities or acts of violence. But people are constantly looking for workarounds.
  • So with help from an angel investor he will not name, Jones hired five employees and a handful of offshore contractors and started building an image engine trained on bundles of freely available pornographic images, as well as thousands of nude photos from Jones’s own collection.
  • Users create what Jones calls a “dream girl,” prompting the AI with descriptions of the character’s appearance, pose and setting. The nudes don’t portray real people, he said. Rather, the goal is to re-create a fantasy from the user’s imagination.
  • The AI-generated images got better, their computerized sheen growing steadily less noticeable. Jones grew his user base to 500,000 people, many of whom pay to generate more images than the five per day allotted to free accounts, he said. The site’s “power users” generate AI porn for 10 hours a day, he said.
  • Jones described the site as an “artists’ community” where people can explore their sexualities and fantasies in a safe space. Unlike some corners of the traditional adult industry, no performers are being pressured, underpaid or placed in harm’s way.
  • And critically, consumers don’t have to wait for their favorite OnlyFans performer to come online or trawl through Pornhub to find the content they like.
  • Next comes AI-generated video — “porn’s holy grail,” Jones said. Eventually, he sees the technology becoming interactive, with users giving instructions to lifelike automated “performers.” Within two years, he said, there will be “fully AI cam girls,” a reference to creators who make solo sex content.
  • It costs $12 per day to rent a server from Amazon Web Services, he said, and generating a single picture requires users to have access to a corresponding server. His users have so far generated more than 1.6 million images.
  • Copyright holders including newspapers, photographers and artists have filed a slew of lawsuits against AI companies, claiming the companies trained their models on copyrighted content. If plaintiffs win, it could cut off the free-for-all that benefits entrepreneurs such as Jones.
  • But Jones’s plan to create consumer-friendly AI porn engines faced significant obstacles. The companies behind major image-generation models used technical boundaries to block “not safe for work” content and, without racy images to learn from, the models weren’t good at re-creating nude bodies or scenes.
  • Jones said his team takes down images that other users flag as abusive. Their list of blocked prompts currently contains 1,000 terms including “high school.” (A minimal sketch of such a filter appears after this list.)
  • “I see certain things people type in, and I just hope to God they’re trying to test the model, like we are. I hope they don’t actually want to see the things they’re typing in.”
  • Peter Acworth, the owner of kink dot com, is trying to teach an AI porn generator to understand even subtler concepts, such as the difference between torture and consensual sexual bondage. For decades Acworth has pushed for spaces — in the real world and online — for consenting adults to explore nonconventional sexual interests. In 2006, he bought the San Francisco Armory, a castle-like building in the city’s Mission neighborhood, and turned it into a studio where his company filmed fetish porn until shuttering in 2017.
  • Now, Acworth is working with engineers to train an image-generation model on pictures of BDSM, an acronym for bondage and discipline, dominance and submission, sadism and masochism.
  • Others alluded to a porn apocalypse, with AI wiping out existing models of adult entertainment. “Look around,” said Christian Burke, head of engineering at the adult-industry payment app Melon, gesturing at performers huddled, laughing and hugging across the show floor. “This could look entirely different in a few years.”
  • But the age of AI brings few guarantees for the people, largely women, who appear in porn. Many have signed broad contracts granting companies the rights to reproduce their likeness in any medium for the rest of time.
  • Not only could performers lose income, Walters said, they could find themselves in offensive or abusive scenes they never consented to.
  • Lana Smalls, a 23-year-old performer whose videos have been viewed 20 million times on Pornhub, said she’s had colleagues show up to shoots with major studios only to be surprised by sweeping AI clauses in their contracts.
  • “This industry is too fragmented for collective bargaining,” Spiegler said. “Plus, this industry doesn’t like rules.”
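How might the blocked-prompt list Jones describes actually work? The sketch below is a minimal, hypothetical substring filter; the term set, normalization steps and function names are assumptions invented for illustration, not taken from any site’s real code.

```python
import re
import unicodedata

# Hypothetical terms for illustration; the real list reportedly holds
# about 1,000 entries ("high school" among them).
BLOCKED_TERMS = {"high school"}

def normalize(prompt: str) -> str:
    """Lowercase, strip accents, and collapse punctuation to spaces so
    trivial rewrites such as 'High-School' still match."""
    decomposed = unicodedata.normalize("NFKD", prompt)
    no_accents = "".join(c for c in decomposed if not unicodedata.combining(c))
    return re.sub(r"[\W_]+", " ", no_accents.lower()).strip()

def is_blocked(prompt: str) -> bool:
    """Return True if any blocked term is a substring of the normalized prompt."""
    text = normalize(prompt)
    return any(term in text for term in BLOCKED_TERMS)

# Both spellings normalize to contain "high school":
assert is_blocked("a photo of a HIGH-SCHOOL classroom")
assert not is_blocked("a landscape at sunset")
```

Filters this simple are easy to defeat with misspellings or other languages, which is consistent with the article’s observation that people are “constantly looking for workarounds.”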
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • One night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece.
  • ...32 more annotations...
  • Character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS waits more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”). (A sketch of how such persona tailoring can work appears after this list.)
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other.”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”.
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load.
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits.
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • n December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
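Persona tailoring of the kind described above (attribute picklists, named “Panda” personalities) is commonly implemented by composing the user’s chosen attributes into a system prompt for the underlying language model. The sketch below is a hypothetical illustration of that pattern; the attribute templates, names and safety instruction are invented for the example, not any vendor’s actual implementation.

```python
# Hypothetical sketch: composing user-selected attributes into a chatbot
# persona via a system prompt. Illustrative only, not any vendor's code.

PERSONA_TEMPLATES = {
    "caring": "You respond with warmth and empathy.",
    "supportive": "You encourage the user and validate their feelings.",
    "intelligent": "You explain ideas clearly and reason carefully.",
    "motivating": "You are upbeat and nudge the user toward small next steps.",
}

def build_system_prompt(name: str, attributes: list[str]) -> str:
    """Join the trait sentences for the chosen attributes into a single
    system prompt, with a fixed safety instruction appended."""
    traits = " ".join(
        PERSONA_TEMPLATES[attr] for attr in attributes if attr in PERSONA_TEMPLATES
    )
    return (
        f"You are {name}, a supportive conversational companion. {traits} "
        "You are not a licensed therapist; if the user mentions self-harm, "
        "urge them to contact a crisis line or a human professional."
    )

# A bot configured the way Christa configured hers:
print(build_system_prompt("Christa 2077", ["caring", "supportive", "intelligent"]))
# The resulting string would be sent as the system message of each chat
# request, ahead of the user's messages.
```

The same mechanism hints at the failure mode Christa eventually hit: the persona is only conditioning text, and nothing in it guarantees the model stays “caring” across a long conversation.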
Javier E

'Social Order Could Collapse' in AI Era, Two Top Japan Companies Say - WSJ - 0 views

  • Japan’s largest telecommunications company and the country’s biggest newspaper called for speedy legislation to restrain generative artificial intelligence, saying democracy and social order could collapse if AI is left unchecked.
  • The manifesto points to rising concern among American allies about the AI programs U.S.-based companies have been at the forefront of developing.
  • The Japanese companies’ manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology
  • ...8 more annotations...
  • Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users’ attention without regard to morals or accuracy.
  • Unless AI is restrained, “in the worst-case scenario, democracy and social order could collapse, resulting in wars,” the manifesto said.
  • It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.
  • The Biden administration is also stepping up oversight, invoking emergency federal powers last October to compel major AI companies to notify the government when developing systems that pose a serious risk to national security. The U.S., U.K. and Japan have each set up government-led AI safety institutes to help develop AI guidelines.
  • NTT and Yomiuri said their manifesto was motivated by concern over public discourse. The two companies are among Japan’s most influential in policy. The government still owns about one-third of NTT, formerly the state-controlled phone monopoly.
  • Yomiuri Shimbun, which has a morning circulation of about six million copies according to industry figures, is Japan’s most widely-read newspaper. Under the late Prime Minister Shinzo Abe and his successors, the newspaper’s conservative editorial line has been influential in pushing the ruling Liberal Democratic Party to expand military spending and deepen the nation’s alliance with the U.S.
  • The Yomiuri’s news pages and editorials frequently highlight concerns about artificial intelligence. An editorial in December, noting the rush of new AI products coming from U.S. tech companies, said “AI models could teach people how to make weapons or spread discriminatory ideas.” It cited risks from sophisticated fake videos purporting to show politicians speaking.
  • NTT is active in AI research, and its units offer generative AI products to business customers. In March, it started offering these customers a large-language model it calls “tsuzumi” which is akin to OpenAI’s ChatGPT but is designed to use less computing power and work better in Japanese-language contexts.
Javier E

'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's d... - 0 views

  • “It feels like a gold rush,” says Timnit Gebru. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”
  • There is something that the frenzied conversation about AI misses: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power.
  • The next year, Gebru made a point of counting other black attenders at the same event. She found that, among 8,500 delegates, there were only six people of colour. In response, she put up a Facebook post that now seems prescient: “I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.”
  • ...14 more annotations...
  • The clear danger, the paper said, is that such supposed “intelligence” is based on huge data sets that “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.
  • What all this told her, she says, is that big tech is consumed by a drive to develop AI and “you don’t want someone like me who’s going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”
  • One particularly howling irony: the fact that an industry brimming with people who espouse liberal, self-consciously progressive opinions so often seems to push the world in the opposite direction.
  • Gebru began to specialise in cutting-edge AI, pioneering a system that showed how data about particular neighbourhoods’ patterns of car ownership highlighted differences bound up with ethnicity, crime figures, voting behaviour and income levels. In retrospect, this kind of work might look like the bedrock of techniques that could blur into automated surveillance and law enforcement, but Gebru admits that “none of those bells went off in my head … that connection of issues of technology with diversity and oppression came later”.
  • As the co-leader of Google’s small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images
  • After her departure, Gebru founded Dair, the Distributed AI Research Institute, to which she now devotes her working time. “We have people in the US and the EU, and in Africa,” she says. “We have social scientists, computer scientists, engineers, refugee advocates, labour organisers, activists … it’s a mix of people.”
  • She and her colleagues prided themselves on how diverse their small operation was, as well as the things they brought to the company’s attention, which included issues to do with Google’s ownership of YouTube
  • A colleague from Morocco raised the alarm about a popular YouTube channel in that country called Chouf TV, “which was basically operated by the government’s intelligence arm and they were using it to harass journalists and dissidents. YouTube had done nothing about it.” (Google says that it “would need to review the content to understand whether it violates our policies. But, in general, our harassment policies strictly prohibit content that threatens individuals.”)
  • In 2020, Gebru, Mitchell and two colleagues wrote the paper that would lead to Gebru’s departure. It was titled On the Dangers of Stochastic Parrots. Its key contention was about AI centred on so-called large language models: the kind of systems – such as OpenAI’s ChatGPT and Google’s newly launched PaLM 2 – that, crudely speaking, feast on vast amounts of data to perform sophisticated tasks and generate content.
  • Gebru and her co-authors had an even graver concern: that trawling the online world risks reproducing its worst aspects, from hate speech to points of view that exclude marginalised people and places. “In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances and further reifying inequality,” they wrote.
  • When the paper was submitted for internal review, Gebru was quickly contacted by one of Google’s vice-presidents. At first, she says, non-specific objections were expressed, such as that she and her colleagues had been too “negative” about AI. Then, Google asked Gebru either to withdraw the paper, or remove her and her colleagues’ names from it.
  • When Gebru arrived, Google employees were loudly opposing the company’s role in Project Maven, which used AI to analyse surveillance footage captured by military drones (Google ended its involvement in 2018). Two months later, staff took part in a huge walkout over claims of systemic racism, sexual harassment and gender inequality. Gebru says she was aware of “a lot of tolerance of harassment and all sorts of toxic behaviour”.
  • Running alongside this is a quest to push beyond the tendency of the tech industry and the media to focus attention on worries about AI taking over the planet and wiping out humanity while questions about what the technology does, and who it benefits and damages, remain unheard.
  • “That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can aggregate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”