History Readings: group items tagged “artificial intelligence”

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
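
To make that "geometric model" concrete, here is a minimal sketch (nothing like OpenAI's actual training; the corpus, window size, and vector dimension are all illustrative): even raw co-occurrence counts, factored into vectors, place words that appear in similar contexts near one another.

```python
import numpy as np

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Count how often each pair of words co-occurs within a 2-word window.
counts = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            counts[idx[w], idx[corpus[j]]] += 1

# Factor the counts; each row of `vectors` becomes a word's coordinates.
U, S, _ = np.linalg.svd(counts, full_matrices=False)
vectors = U[:, :4] * S[:4]

def similarity(a, b):
    va, vb = vectors[idx[a]], vectors[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# "cat" and "dog" occur in similar contexts, so their vectors should
# point in similar directions; "cat" and "on" play different roles.
print(similarity("cat", "dog"), similarity("cat", "on"))
```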
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
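
The kind of inspection described above can be sketched in a few lines. Radford's model and data are not reproduced here; the "activations" below are synthetic stand-ins in which one unit (chosen arbitrarily as unit 7) carries sentiment, the way his network's real sentiment neuron did. The sketch only shows how one might scan hidden units for such a neuron.

```python
import numpy as np

rng = np.random.default_rng(0)
n_reviews, n_units = 200, 16
labels = rng.integers(0, 2, n_reviews)        # 1 = positive review
acts = rng.normal(size=(n_reviews, n_units))  # hidden-state stand-ins
acts[:, 7] += 3.0 * (labels - 0.5)            # one unit tracks sentiment

# Score each unit by how well thresholding its activation at zero
# predicts the review's sentiment label.
accuracy = ((acts > 0).astype(int) == labels[:, None]).mean(axis=0)
best = int(accuracy.argmax())
print(f"unit {best} predicts sentiment with accuracy {accuracy[best]:.2f}")
```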
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
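
As a toy version of that mechanic (a bigram counter, nowhere near GPT's transformer, trained on a made-up corpus): count which word tends to follow which, then generate a continuation by repeated next-word prediction. Because "?" is usually followed by answer words in the training text, a question prompt draws an answer-like continuation.

```python
import random
from collections import Counter, defaultdict

corpus = ("what is rain ? rain is water falling from clouds . "
          "what is snow ? snow is frozen rain . "
          "what is fog ? fog is a cloud near the ground .").split()

# Count next-word frequencies for each word (a bigram model).
following = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    following[a][b] += 1

def continue_text(prompt, n_words=8):
    words = prompt.split()
    for _ in range(n_words):
        options = following.get(words[-1])
        if not options:
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

# The continuation varies run to run, but a "?" tends to be followed
# by answer-like words, because that is the pattern in the corpus.
print(continue_text("what is snow ?"))
```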
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
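
For readers unfamiliar with the mechanics, a generic A/B comparison looks something like the sketch below; this says nothing about Luka's actual pipeline, and the numbers are hypothetical. Given engagement counts for two response variants, a two-proportion z-test checks whether the difference is statistically real.

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on an engagement rate (e.g. reply rate)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Hypothetical numbers: variant B keeps users chatting slightly more.
p_a, p_b, z = ab_significance(420, 5000, 496, 5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}")  # |z| > 1.96: significant
```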
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
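
The probing technique itself is simple to sketch. Li's Othello model is not reproduced here; the hidden states below are synthetic stand-ins in which one board square's state is linearly mixed into the activations, roughly as probing found it to be in the real network. The probe is just a linear map fit by least squares and tested on held-out positions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_positions, hidden_dim = 500, 64
square_state = rng.integers(0, 2, n_positions)  # one square: empty/filled

# Stand-in hidden states: board info mixed linearly into random features.
direction = rng.normal(size=hidden_dim)
hidden = rng.normal(size=(n_positions, hidden_dim))
hidden += np.outer(square_state - 0.5, direction)

# Fit a linear probe on 400 positions, test it on the remaining 100.
train, test = slice(0, 400), slice(400, 500)
w, *_ = np.linalg.lstsq(hidden[train], square_state[train] - 0.5, rcond=None)
pred = (hidden[test] @ w > 0).astype(int)
print("probe accuracy:", (pred == square_state[test]).mean())
```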
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
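
The distinction is easy to state in code (an illustration of the idea, not the transformer experiment itself): memorization answers only what it has already seen, while the learned rule generalizes.

```python
# Training set: a partial addition table, the kind a model can memorize.
train = {(a, b): a + b for a in range(10) for b in range(10) if a + b < 10}

def memorizer(a, b):
    return train.get((a, b))  # lookup: fails off the training set

def rule(a, b):
    return a + b              # the learned concept generalizes

print(memorizer(2, 2), rule(2, 2))      # both correct on seen data: 4 4
print(memorizer(73, 46), rule(73, 46))  # only the rule works: None 119
```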
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,” he said.
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
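
Mechanically, such a "Socratic disposition" can be imposed with a system message. A minimal sketch using OpenAI's chat API follows; Khan Academy's actual prompts are more elaborate, and the model name and wording here are illustrative.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative system prompt: constrain the model to guide, not answer.
SOCRATIC = ("You are a tutor. Never state the final answer. "
            "Respond only with questions and hints that lead the "
            "student to work it out themselves.")

reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": SOCRATIC},
        {"role": "user", "content": "What's the derivative of x^2?"},
    ],
)
print(reply.choices[0].message.content)
```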
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman said. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast.
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.

Sam Altman, the ChatGPT King, Is Pretty Sure It's All Going to Be OK - The New York Times

  • He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
  • “I try to be upfront,” he said. “Am I doing something good? Or really bad?”
  • In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.
  • And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.
  • This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”
  • As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.
  • “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
  • Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
  • Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
  • he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
  • To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be
  • in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said
  • His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
  • He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
  • He was not exactly sure what problems it will solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
  • Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
  • “In a single conversation,” she said, “he is both sides of the debate club.”
  • He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
  • he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
  • Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.
  • “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
  • “He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’ ”
  • poker taught Mr. Altman how to read people and evaluate risk.
  • It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
  • He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.
  • In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
  • Within a year, he had restructured OpenAI, creating a for-profit arm inside the nonprofit. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.
  • Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
  • “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
  • The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
  • Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.
  • In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
  • The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.
  • The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.
  • He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI.
  • They don’t see this as hypocrisy: Many of them believe that because they understand the dangers more clearly than anyone else, they are in the best position to build this technology.
  • Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.
  • As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.
  • Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation.
  • Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
  • He told me that it would be a “very slow takeoff.”
  • When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
  • If he’s wrong, he thinks he can make it up to humanity.
  • His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
  • If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
  • But as he once told me: “I feel like the A.G.I. can help with that.”

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' ...

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer, and his predictions no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed…I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • Why write this book? The Singularity Is Near talked about the future, but 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power you can buy for the same price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation. (A back-of-envelope sketch of this compounding follows this list.)
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness.
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had, and US average personal income per hour worked is 10 times what it was 100 years ago, adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
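  • A back-of-envelope check of the compounding behind the two claims above – a 15-month price-performance doubling and a millionfold expansion by 2045. This is only a sketch of the arithmetic, not Kurzweil’s actual model: the 2024 baseline year and the equation of “intelligence” with the hardware curve are assumptions.

        # Sketch of the growth implied by a 15-month price-performance doubling.
        def gain(years: float, months_per_doubling: float = 15.0) -> float:
            """Multiplicative growth in compute per constant dollar after `years`."""
            return 2.0 ** (years * 12.0 / months_per_doubling)

        print(f"{gain(21):,.0f}x")  # 2024 interview to 2045: ~114,000x
        print(f"{gain(25):,.0f}x")  # 25 years of doublings: 2**20 = 1,048,576x

  • On the hardware curve alone, 2024 to 2045 yields roughly a hundred-thousandfold gain; a full millionfold needs about 25 years of doublings, so Kurzweil’s figure presumably also counts algorithmic and architectural improvements on top of raw price-performance.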
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ - 0 views

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • ...44 more annotations...
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing.
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application program interfaces, or APIs. 
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said. 
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
Javier E

'Never summon a power you can't control': Yuval Noah Harari on how AI could threaten de... - 0 views

  • The Phaethon myth and Goethe’s poem fail to provide useful advice because they misconstrue the way humans gain power. In both fables, a single human acquires enormous power, but is then corrupted by hubris and greed. The conclusion is that our flawed individual psychology makes us abuse power.
  • What this crude analysis misses is that human power is never the outcome of individual initiative. Power always stems from cooperation between large numbers of humans. Accordingly, it isn’t our individual psychology that causes us to abuse power.
  • Our tendency to summon powers we cannot control stems not from individual psychology but from the unique way our species cooperates in large numbers. Humankind gains enormous power by building large networks of cooperation, but the way our networks are built predisposes us to use power unwisely
  • ...57 more annotations...
  • We are also producing ever more powerful weapons of mass destruction, from thermonuclear bombs to doomsday viruses. Our leaders don’t lack information about these dangers, yet instead of collaborating to find solutions, they are edging closer to a global war.
  • Despite – or perhaps because of – our hoard of data, we are continuing to spew greenhouse gases into the atmosphere, pollute rivers and oceans, cut down forests, destroy entire habitats, drive countless species to extinction, and jeopardise the ecological foundations of our own species
  • For most of our networks have been built and maintained by spreading fictions, fantasies and mass delusions – ranging from enchanted broomsticks to financial systems. Our problem, then, is a network problem. Specifically, it is an information problem. For information is the glue that holds networks together, and when people are fed bad information they are likely to make bad decisions, no matter how wise and kind they personally are.
  • Traditionally, the term “AI” has been used as an acronym for artificial intelligence. But it is perhaps better to think of it as an acronym for alien intelligence
  • AI is an unprecedented threat to humanity because it is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands
  • Nuclear bombs do not themselves decide whom to kill, nor can they improve themselves or invent even more powerful bombs. In contrast, autonomous drones can decide by themselves who to kill, and AIs can create novel bomb designs, unprecedented military strategies and better AIs.
  • AI isn’t a tool – it’s an agent. The biggest threat of AI is that we are summoning to Earth countless new powerful agents that are potentially more intelligent and imaginative than us, and that we don’t fully understand or control.
  • Entrepreneurs such as Yoshua Bengio, Geoffrey Hinton, Sam Altman, Elon Musk and Mustafa Suleyman have warned that AI could destroy our civilisation. In a 2023 survey of 2,778 AI researchers, more than a third gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
  • As AI evolves, it becomes less artificial (in the sense of depending on human designs) and more alien
  • AI isn’t progressing towards human-level intelligence. It is evolving an alien type of intelligence.
  • generative AIs like GPT-4 already create new poems, stories and images. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water
  • it is more than just human lives we are gambling on. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will be likely to gain the ability even to create new life forms, either by writing genetic code or by inventing an inorganic code animating inorganic entities. AI could therefore alter the course not just of our species’ history but of the evolution of all life forms.
  • “Then … came move number 37,” writes Suleyman. “It made no sense. AlphaGo had apparently blown it, blindly following an apparently losing strategy no professional player would ever pursue. The live match commentators, both professionals of the highest ranking, said it was a ‘very strange move’ and thought it was ‘a mistake’.
  • as the endgame approached, that ‘mistaken’ move proved pivotal. AlphaGo won again. Go strategy was being rewritten before our eyes. Our AI had uncovered ideas that hadn’t occurred to the most brilliant players in thousands of years.”
  • “In AI, the neural networks moving toward autonomy are, at present, not explainable. You can’t walk someone through the decision-making process to explain precisely why an algorithm produced a specific prediction. Engineers can’t peer beneath the hood and easily explain in granular detail what caused something to happen. GPT‑4, AlphaGo and the rest are black boxes, their outputs and decisions based on opaque and impossibly intricate chains of minute signals.”
  • Yet during all those millennia, human minds have explored only certain areas in the landscape of Go. Other areas were left untouched, because human minds just didn’t think to venture there. AI, being free from the limitations of human minds, discovered and explored these previously hidden areas.
  • Second, move 37 demonstrated the unfathomability of AI. Even after AlphaGo played it to achieve victory, Suleyman and his team couldn’t explain how AlphaGo decided to play it.
  • Move 37 is an emblem of the AI revolution for two reasons. First, it demonstrated the alien nature of AI. In east Asia, Go is considered much more than a game: it is a treasured cultural tradition. For more than 2,500 years, tens of millions of people have played Go, and entire schools of thought have developed around the game, espousing different strategies and philosophies
  • The rise of unfathomable alien intelligence poses a threat to all humans, and poses a particular threat to democracy. If more and more decisions about people’s lives are made in a black box, so voters cannot understand and challenge them, democracy ceases to function.
  • Human voters may keep choosing a human president, but wouldn’t this be just an empty ceremony? Even today, only a small fraction of humanity truly understands the financial system
  • As the 2007‑8 financial crisis indicated, some complex financial devices and principles were intelligible to only a few financial wizards. What happens to democracy when AIs create even more complex financial devices and when the number of humans who understand the financial system drops to zero?
  • Translating Goethe’s cautionary fable into the language of modern finance, imagine the following scenario: a Wall Street apprentice fed up with the drudgery of the financial workshop creates an AI called Broomstick, provides it with a million dollars in seed money, and orders it to make more money.
  • In pursuit of more dollars, Broomstick not only devises new investment strategies, but comes up with entirely new financial devices that no human being has ever thought about.
  • many financial areas were left untouched, because human minds just didn’t think to venture there. Broomstick, being free from the limitations of human minds, discovers and explores these previously hidden areas, making financial moves that are the equivalent of AlphaGo’s move 37.
  • For a couple of years, as Broomstick leads humanity into financial virgin territory, everything looks wonderful. The markets are soaring, the money is flooding in effortlessly, and everyone is happy. Then comes a crash bigger even than 1929 or 2008. But no human being – either president, banker or citizen – knows what caused it and what could be done about it
  • AI, too, is a global problem. Accordingly, to understand the new computer politics, it is not enough to examine how discrete societies might react to AI. We also need to consider how AI might change relations between societies on a global level.
  • As long as humanity stands united, we can build institutions that will regulate AI, whether in the field of finance or war. Unfortunately, humanity has never been united. We have always been plagued by bad actors, as well as by disagreements between good actors. The rise of AI poses an existential danger to humankind, not because of the malevolence of computers, but because of our own shortcomings.
  • Terrorists might use AI to instigate a global pandemic. The terrorists themselves may have little knowledge of epidemiology, but the AI could synthesise for them a new pathogen, order it from commercial laboratories or print it in biological 3D printers, and devise the best strategy to spread it around the world, via airports or food supply chains.
  • desperate governments request help from the only entity capable of understanding what is happening – Broomstick. The AI makes several policy recommendations, far more audacious than quantitative easing – and far more opaque, too. Broomstick promises that these policies will save the day, but human politicians – unable to understand the logic behind Broomstick’s recommendations – fear they might completely unravel the financial and even social fabric of the world. Should they listen to the AI?
  • Human civilisation could also be devastated by weapons of social mass destruction, such as stories that undermine our social bonds. An AI developed in one country could be used to unleash a deluge of fake news, fake money and fake humans so that people in numerous other countries lose the ability to trust anything or anyone.
  • Many societies – both democracies and dictatorships – may act responsibly to regulate such usages of AI, clamp down on bad actors and restrain the dangerous ambitions of their own rulers and fanatics. But if even a handful of societies fail to do so, this could be enough to endanger the whole of humankind
  • Thus, a paranoid dictator might hand unlimited power to a fallible AI, including even the power to launch nuclear strikes. If the AI then makes an error, or begins to pursue an unexpected goal, the result could be catastrophic, and not just for that country
  • Imagine a situation – in 20 years, say – when somebody in Beijing or San Francisco possesses the entire personal history of every politician, journalist, colonel and CEO in your country: every text they ever sent, every web search they ever made, every illness they suffered, every sexual encounter they enjoyed, every joke they told, every bribe they took. Would you still be living in an independent country, or would you now be living in a data colony?
  • What happens when your country finds itself utterly dependent on digital infrastructures and AI-powered systems over which it has no effective control?
  • In the economic realm, previous empires were based on material resources such as land, cotton and oil. This placed a limit on the empire’s ability to concentrate both economic wealth and political power in one place. Physics and geology don’t allow all the world’s land, cotton or oil to be moved to one country
  • It is different with the new information empires. Data can move at the speed of light, and algorithms don’t take up much space. Consequently, the world’s algorithmic power can be concentrated in a single hub. Engineers in a single country might write the code and control the keys for all the crucial algorithms that run the entire world.
  • AI and automation therefore pose a particular challenge to poorer developing countries. In an AI-driven global economy, the digital leaders claim the bulk of the gains and could use their wealth to retrain their workforce and profit even more
  • Meanwhile, the value of unskilled labourers in left-behind countries will decline, causing them to fall even further behind. The result might be lots of new jobs and immense wealth in San Francisco and Shanghai, while many other parts of the world face economic ruin.
  • AI is expected to add $15.7tn (£12.3tn) to the global economy by 2030. But if current trends continue, it is projected that China and North America – the two leading AI superpowers – will together take home 70% of that money.
  • During the cold war, the iron curtain was in many places literally made of metal: barbed wire separated one country from another. Now the world is increasingly divided by the silicon curtain. The code on your smartphone determines on which side of the silicon curtain you live, which algorithms run your life, who controls your attention and where your data flows.
  • Cyberweapons can bring down a country’s electric grid, but they can also be used to destroy a secret research facility, jam an enemy sensor, inflame a political scandal, manipulate elections or hack a single smartphone. And they can do all that stealthily. They don’t announce their presence with a mushroom cloud and a storm of fire, nor do they leave a visible trail from launchpad to target
  • The two digital spheres may therefore drift further and further apart. For centuries, new information technologies fuelled the process of globalisation and brought people all over the world into closer contact. Paradoxically, information technology today is so powerful it can potentially split humanity by enclosing different people in separate information cocoons, ending the idea of a single shared human reality
  • For decades, the world’s master metaphor was the web. The master metaphor of the coming decades might be the cocoon.
  • Other countries or blocs, such as the EU, India, Brazil and Russia, may try to create their own digital cocoons.
  • Instead of being divided between two global empires, the world might be divided among a dozen empires.
  • The more the new empires compete against one another, the greater the danger of armed conflict.
  • The cold war between the US and the USSR never escalated into a direct military confrontation, largely thanks to the doctrine of mutually assured destruction. But the danger of escalation in the age of AI is bigger, because cyber warfare is inherently different from nuclear warfare.
  • US companies are now forbidden to export such chips to China. While in the short term this hampers China in the AI race, in the long term it pushes China to develop a completely separate digital sphere that will be distinct from the American digital sphere even in its smallest building blocks.
  • The temptation to start a limited cyberwar is therefore big, and so is the temptation to escalate it.
  • A second crucial difference concerns predictability. The cold war was like a hyper-rational chess game, and the certainty of destruction in the event of nuclear conflict was so great that the desire to start a war was correspondingly small
  • Cyberwarfare lacks this certainty. Nobody knows for sure where each side has planted its logic bombs, Trojan horses and malware. Nobody can be certain whether their own weapons would actually work when called upon
  • Such uncertainty undermines the doctrine of mutually assured destruction. One side might convince itself – rightly or wrongly – that it can launch a successful first strike and avoid massive retaliation
  • Even if humanity avoids the worst-case scenario of global war, the rise of new digital empires could still endanger the freedom and prosperity of billions of people. The industrial empires of the 19th and 20th centuries exploited and repressed their colonies, and it would be foolhardy to expect new digital empires to behave much better
  • Moreover, if the world is divided into rival empires, humanity is unlikely to cooperate to overcome the ecological crisis or to regulate AI and other disruptive technologies such as bioengineering.
  • The division of the world into rival digital empires dovetails with the political vision of many leaders who believe that the world is a jungle, that the relative peace of recent decades has been an illusion, and that the only real choice is whether to play the part of predator or prey.
  • Given such a choice, most leaders would prefer to go down in history as predators and add their names to the grim list of conquerors that unfortunate pupils are condemned to memorise for their history exams.
  • These leaders should be reminded, however, that there is a new alpha predator in the jungle. If humanity doesn’t find a way to cooperate and protect our shared interests, we will all be easy prey to AI.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had spent three years gathering input about A.I. from thousands of experts, when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • ...45 more annotations...
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
Javier E

Our Machine Masters - NYTimes.com - 0 views

  • In the current issue of Wired, the technology writer Kevin Kelly says that we had all better get used to this level of predictive prowess. Kelly argues that the age of artificial intelligence is finally at hand.
  • the smart machines of the future won’t be humanlike geniuses like HAL 9000 in the movie “2001: A Space Odyssey.” They will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses. “Everything that we formerly electrified we will now cognitize,” Kelly writes. Even more than today, we’ll lead our lives enmeshed with machines that do some of our thinking tasks for us.
  • Two big implications flow from this. The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power.
  • ...14 more annotations...
  • This artificial intelligence breakthrough, he argues, is being driven by cheap parallel computation technologies, big data collection and better algorithms. The upshot is clear: “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add A.I.”
  • Advances in artificial intelligence will accelerate this centralizing trend. That’s because A.I. companies will be able to reap the rewards of network effects. The bigger their network and the more data they collect, the more effective and attractive they become.
  • The Internet has created a long tail, but almost all the revenue and power is among the small elite at the head.
  • in 2001, the top 10 websites accounted for 31 percent of all U.S. page views, but, by 2010, they accounted for 75 percent of them.
  • As a result, our A.I. future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.
  • In the age of smart machines, we’re not human because we have big brains. We’re human because we have social skills, emotional capacities and moral intuitions.
  • The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do
  • For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at Jeopardy, and doing math.
  • On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments.
  • engineers at a few gigantic companies will have vast-though-hidden power to shape how data are collected and framed, to harvest huge amounts of information, to build the frameworks through which the rest of us make decisions and to steer our choices. If you think this power will be used for entirely benign ends, then you have not read enough history.
  • I could paint two divergent A.I. futures, one deeply humanistic, and one soullessly utilitarian.
  • In the humanistic one, machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones so having a great memory or the ability to calculate with big numbers doesn’t help as much.
  • In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it
  • In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • The rule that most people aware of these issues would have endorsed 50 years earlier was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms.
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Javier E

'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets | Israel-G... - 0 views

  • All six said that Lavender had played a central role in the war, processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.
  • The health ministry in the Hamas-run territory says 32,000 Palestinians have been killed in the conflict in the past six months. UN data shows that in the first month of the war alone, 1,340 families suffered multiple losses, with 312 families losing more than 10 members.
  • Several of the sources described how, for certain categories of targets, the IDF applied pre-authorised allowances for the estimated number of civilians who could be killed before a strike was authorised.
  • ...32 more annotations...
  • Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.
  • “You don’t want to waste expensive bombs on unimportant people – it’s very expensive for the country and there’s a shortage [of those bombs],” one intelligence officer said. Another said the principal question they were faced with was whether the “collateral damage” to civilians allowed for an attack.
  • “Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
  • According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.
  • Details about the specific kinds of data used to train Lavender’s algorithm, or how the programme reached its conclusions, are not included in the accounts published by +972 or Local Call. However, the sources said that during the first few weeks of the war, Unit 8200 refined Lavender’s algorithm and tweaked its search parameters.
  • Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.
  • “The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it added. “Information systems are merely tools for analysts in the target identification process.”
  • In earlier military operations conducted by the IDF, producing human targets was often a more labour-intensive process. Multiple sources who described target development in previous wars to the Guardian said the decision to “incriminate” an individual, or identify them as a legitimate target, would be discussed and then signed off by a legal adviser.
  • In the weeks and months after 7 October, this model for approving strikes on human targets was dramatically accelerated, according to the sources. As the IDF’s bombardment of Gaza intensified, they said, commanders demanded a continuous pipeline of targets.
  • “We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,” said one intelligence officer. “We were told: now we have to fuck up Hamas, no matter what the cost. Whatever you can, you bomb.”
  • Lavender was developed by the Israel Defense Forces’ elite intelligence division, Unit 8200, which is comparable to the US’s National Security Agency or GCHQ in the UK.
  • After randomly sampling and cross-checking its predictions, the unit concluded Lavender had achieved a 90% accuracy rate, the sources said, leading the IDF to approve its sweeping use as a target recommendation tool.
  • Lavender created a database of tens of thousands of individuals who were marked as predominantly low-ranking members of Hamas’s military wing, they added. This was used alongside another AI-based decision support system, called the Gospel, which recommended buildings and structures as targets rather than individuals.
  • The accounts include first-hand testimony of how intelligence officers worked with Lavender and how the reach of its dragnet could be adjusted. “At its peak, the system managed to generate 37,000 people as potential human targets,” one of the sources said. “But the numbers changed all the time, because it depends on where you set the bar of what a Hamas operative is.”
  • “… broadly, and then the machine started bringing us all kinds of civil defence personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t really endanger soldiers.”
  • Before the war, the US and Israel estimated membership of Hamas’s military wing at approximately 25-30,000 people.
  • there was a decision to treat Palestinian men linked to Hamas’s military wing as potential targets, regardless of their rank or importance.
  • According to +972 and Local Call, the IDF judged it permissible to kill more than 100 civilians in attacks on top-ranking Hamas officials. “We had a calculation for how many [civilians could be killed] for the brigade commander, how many [civilians] for a battalion commander, and so on,” one source said.
  • Another source, who justified the use of Lavender to help identify low-ranking targets, said that “when it comes to a junior militant, you don’t want to invest manpower and time in it”. They said that in wartime there was insufficient time to carefully “incriminate every target”.
  • “So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it,” they added.
  • When it came to targeting low-ranking Hamas and PIJ suspects, they said, the preference was to attack when they were believed to be at home. “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” one said. “It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
  • Such a strategy risked higher numbers of civilian casualties, and the sources said the IDF imposed pre-authorised limits on the number of civilians it deemed acceptable to kill in a strike aimed at a single Hamas militant. The ratio was said to have changed over time, and varied according to the seniority of the target.
  • The IDF’s targeting processes in the most intensive phase of the bombardment were also relaxed, they said. “There was a completely permissive policy regarding the casualties of [bombing] operations,” one source said. “A policy so permissive that in my opinion it had an element of revenge.”
  • “There were regulations, but they were just very lenient,” another added. “We’ve killed people with collateral damage in the high double digits, if not low triple digits. These are things that haven’t happened before.” There appear to have been significant fluctuations in the figure that military commanders would tolerate at different stages of the war.
  • One source said that the limit on permitted civilian casualties “went up and down” over time, and at one point was as low as five. During the first week of the conflict, the source said, permission was given to kill 15 non-combatants to take out junior militants in Gaza.
  • At one stage earlier in the war, they were authorised to kill up to “20 uninvolved civilians” for a single operative, regardless of their rank, military importance, or age.
  • “It’s not just that you can kill any person who is a Hamas soldier, which is clearly permitted and legitimate in terms of international law,” they said. “But they directly tell you: ‘You are allowed to kill them along with many civilians.’ … In practice, the proportionality criterion did not exist.”
  • Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants. They said militaries must assess proportionality for each individual strike.
  • An international law expert at the US state department said they had “never remotely heard of a one to 15 ratio being deemed acceptable, especially for lower-level combatants. There’s a lot of leeway, but that strikes me as extreme”.
  • Sarah Harrison, a former lawyer at the US Department of Defense, now an analyst at Crisis Group, said: “While there may be certain occasions where 15 collateral civilian deaths could be proportionate, there are other times where it definitely wouldn’t be. You can’t just set a tolerable number for a category of targets and say that it’ll be lawfully proportionate in each case.”
  • Whatever the legal or moral justification for Israel’s bombing strategy, some of its intelligence officers appear now to be questioning the approach set by their commanders. “No one thought about what to do afterward, when the war is over, or how it will be possible to live in Gaza,” one said.
  • Another said that after the 7 October attacks by Hamas, the atmosphere in the IDF was “painful and vindictive”. “There was a dissonance: on the one hand, people here were frustrated that we were not attacking enough. On the other hand, you see at the end of the day that another thousand Gazans have died, most of them civilians.”
Javier E

Scientists See Advances in Deep Learning, a Part of Artificial Intelligence - NYTimes.com - 0 views

  • Using an artificial intelligence technique inspired by theories about how the brain recognizes patterns, technology companies are reporting startling gains in fields as diverse as computer vision, speech recognition and the identification of promising new molecules for designing drugs.
  • They offer the promise of machines that converse with humans and perform tasks like driving cars and working in factories, raising the specter of automated robots that could replace human workers.
  • what is new in recent months is the growing speed and accuracy of deep-learning programs, often called artificial neural networks or just “neural nets” for their resemblance to the neural connections in the brain.
  • ...2 more annotations...
  • With greater accuracy, for example, marketers can comb large databases of consumer behavior to get more precise information on buying habits. And improvements in facial recognition are likely to make surveillance technology cheaper and more commonplace.
  • Modern artificial neural networks are composed of an array of software components, divided into inputs, hidden layers and outputs. The arrays can be “trained” by repeated exposures to recognize patterns like images or sounds.
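Below is a minimal sketch of the structure described above: an input layer, one hidden layer, and an output layer, trained by repeated exposure to a pattern (here, the XOR function). The network size, learning rate, and task are arbitrary illustrative choices, not anything from the article.

```python
# A toy feedforward neural network: inputs -> hidden layer -> outputs,
# "trained" by repeatedly showing it the same four input/output patterns.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden weights
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20_000):                    # repeated exposures
    hidden = sigmoid(X @ W1 + b1)          # hidden-layer activations
    out = sigmoid(hidden @ W2 + b2)        # network output
    grad_out = (out - y) * out * (1 - out)            # output error signal
    grad_hidden = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_hidden;   b1 -= 0.5 * grad_hidden.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```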
Javier E

See How Real AI-Generated Images Have Become - The New York Times - 0 views

  • The rapid advent of artificial intelligence has set off alarms that the technology used to trick people is advancing far faster than the technology that can identify the tricks. Tech companies, researchers, photo agencies and news organizations are scrambling to catch up, trying to establish standards for content provenance and ownership.
  • The advancements are already fueling disinformation and being used to stoke political divisions
  • Last month, some people fell for images showing Pope Francis donning a puffy Balenciaga jacket and an earthquake devastating the Pacific Northwest, even though neither of those events had occurred. The images had been created using Midjourney, a popular image generator.
  • ...16 more annotations...
  • Authoritarian governments have created seemingly realistic news broadcasters to advance their political goals.
  • Experts fear the technology could hasten an erosion of trust in media, in government and in society. If any image can be manufactured — and manipulated — how can we believe anything we see?
  • “The tools are going to get better, they’re going to get cheaper, and there will come a day when nothing you see on the internet can be believed,” said Wasim Khaled, chief executive of Blackbird.AI, a company that helps clients fight disinformation.
  • Artificial intelligence allows virtually anyone to create complex artworks, like those now on exhibit at the Gagosian art gallery in New York, or lifelike images that blur the line between what is real and what is fiction. Plug in a text description, and the technology can produce a related image — no special skills required.
  • Midjourney’s images, he said, were able to pass muster in facial-recognition programs that Bellingcat uses to verify identities, typically of Russians who have committed crimes or other abuses. It’s not hard to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies.
  • In February, Getty accused Stability AI of illegally copying more than 12 million Getty photos, along with captions and metadata, to train the software behind its Stable Diffusion tool. In its lawsuit, Getty argued that Stable Diffusion diluted the value of the Getty watermark by incorporating it into images that ranged “from the bizarre to the grotesque.”
  • Getty’s lawsuit reflects concerns raised by many individual artists — that A.I. companies are becoming a competitive threat by copying content they do not have permission to use.
  • Trademark violations have also become a concern: Artificially generated images have replicated NBC’s peacock logo, though with unintelligible letters, and shown Coca-Cola’s familiar curvy logo with extra O’s looped into the name.
  • The threat to photographers is fast outpacing the development of legal protections, said Mickey H. Osterreicher, general counsel for the National Press Photographers Association.
  • Newsrooms will increasingly struggle to authenticate content.
  • Social media users are ignoring labels that clearly identify images as artificially generated, choosing to believe they are real photographs, he said.
  • The video explained that the deepfake had been created, with Ms. Schick’s consent, by the Dutch company Revel.ai and Truepic, a California company that is exploring broader digital content verification
  • The companies described their video, which features a stamp identifying it as computer-generated, as the “first digitally transparent deepfake.” The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software. (A toy sketch of this sealing scheme appears after this list.)
  • The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving A.I. images.
  • “The scale of this problem is going to accelerate so rapidly that it’s going to drive consumer education very quickly,” said Jeff McGregor, chief executive of Truepic.
  • Adobe unveiled its own image-generating product, Firefly, which will be trained using only images that were licensed or from its own stock or no longer under copyright. Dana Rao, the company’s chief trust officer, said on its website that the tool would automatically add content credentials — “like a nutrition label for imaging” — that identified how an image had been made. Adobe said it also planned to compensate contributors.
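To make the sealing idea above concrete, here is a toy sketch of tamper-evident signing: sign the image bytes with a private key, and any later change to the bytes invalidates the signature. The keys, byte string, and helper function are invented for illustration; real provenance schemes like the Revel.ai/Truepic credential embed signed metadata inside the file itself. This sketch uses the third-party `cryptography` package.

```python
# Toy tamper-evidence demo: a valid signature means the bytes are unchanged.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held by the content creator
public_key = private_key.public_key()       # shipped with trusted software

image_bytes = b"...pixel data of the published image..."  # placeholder bytes
signature = private_key.sign(image_bytes)   # the cryptographic "seal"

def credentials_visible(data: bytes, sig: bytes) -> bool:
    """Show the provenance badge only if the signature still verifies."""
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

print(credentials_visible(image_bytes, signature))                 # True
print(credentials_visible(image_bytes + b" tampered", signature))  # False
```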
Javier E

Stephen Hawking just gave humanity a due date for finding another planet - The Washingt... - 0 views

  • Hawking told the audience that Earth's cataclysmic end may be hastened by humankind, which will continue to devour the planet’s resources at unsustainable rates.
  • “Although the chance of a disaster to planet Earth in a given year may be quite low, it adds up over time, and becomes a near certainty in the next thousand or ten thousand years. By that time we should have spread out into space, and to other stars, so a disaster on Earth would not mean the end of the human race.”
  • “I think the development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC in a 2014 interview that touched upon everything from online privacy to his affinity for his robotic-sounding voice.
  • ...1 more annotation...
  • “Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate,” Hawking warned in recent months. “Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.”
Javier E

Opinion | Warning! Everything Is Going Deep: 'The Age of Surveillance Capitalism' - The... - 0 views

  • recent advances in the speed and scope of digitization, connectivity, big data and artificial intelligence are now taking us “deep” into places and into powers that we’ve never experienced before — and that governments have never had to regulate before.
  • deep learning, deep insights, deep surveillance, deep facial recognition, deep voice recognition, deep automation and deep artificial minds.
  • how did we get so deep down where the sharks live?
  • ...11 more annotations...
  • The short answer: Technology moves up in steps, and each step, each new platform, is usually biased toward a new set of capabilities. Around the year 2000 we took a huge step up that was biased toward connectivity, because of the explosion of fiber-optic cable, wireless and satellites.
  • Around 2007, we took another big step up. The iPhone, sensors, digitization, big data, the internet of things, artificial intelligence and cloud computing melded together and created a new platform that was biased toward abstracting complexity at a speed, scope and scale we’d never experienced before.
  • Over the last decade, these advances in the speed of connectivity and the elimination of complexity have grown exponentially
  • It means machines can answer so many more questions than nonmachines, also known as “humans.” The percentage of calls a chatbot, or virtual agent, is able to handle without turning the caller over to a person is called its “containment rate,” and these rates are steadily soaring. Soon, automated systems will be so humanlike that they will have to self-identify as machines. (A worked example of the containment-rate calculation appears after this list.)
  • “People are looking to achieve very big numbers. Earlier they had incremental, 5 to 10 percent goals in reducing their work force. Now they’re saying, ‘Why can’t we do it with 1 percent of the people we have?’”
  • But bad guys, who are always early adopters, also see the same potential to go deep in wholly new ways.
  • “Surveillance capitalism,” Zuboff wrote, “unilaterally claims human experience as free raw material for translation into behavioral data. Although some of these data are applied to service improvement, the rest are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence,’ and fabricated into prediction products that anticipate what you will do now, soon and later. Finally, these prediction products are traded in a new kind of marketplace that I call behavioral futures markets. Surveillance capitalists have grown immensely wealthy from these trading operations, for many companies are willing to lay bets on our future behavior.”
  • Unfortunately, we have not developed the regulations or governance, or scaled the ethics, to manage a world of such deep powers, deep interactions and deep potential abuses.
  • I wish I thought that catch-up was around the corner. I don’t. Our national discussion has never been more shallow — reduced to 280 characters.
  • This has created an opening and burgeoning demand for political, social and religious leaders, government institutions and businesses that can go deep — that can validate what is real and offer the public deep truths, deep privacy protections and deep trust.
  • But deep trust and deep loyalty cannot be forged overnight. They take time.
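As a quick illustration of the containment-rate metric mentioned in the annotations above, with made-up call counts:

```python
# Hypothetical numbers; the metric is simply bot-resolved calls / all calls.
calls_resolved_by_bot = 8_400     # caller never handed to a person
calls_escalated_to_human = 3_600
total_calls = calls_resolved_by_bot + calls_escalated_to_human

containment_rate = calls_resolved_by_bot / total_calls
print(f"Containment rate: {containment_rate:.0%}")  # 70%
```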
millerco

Tech Giants Are Paying Huge Salaries for Scarce A.I. Talent - The New York Times - 0 views

  • Silicon Valley’s start-ups have always had a recruiting advantage over the industry’s giants: Take a chance on us and we’ll give you an ownership stake that could make you rich if the company is successful.
  • Now the tech industry’s race to embrace artificial intelligence may render that advantage moot — at least for the few prospective employees who know a lot about A.I.
  • Tech’s biggest companies are placing huge bets on artificial intelligence, banking on things ranging from face-scanning smartphones and conversational coffee-table gadgets to computerized health care and autonomous vehicles.
  • ...6 more annotations...
  • As they chase this future, they are doling out salaries that are startling even in an industry that has never been shy about lavishing a fortune on its top talent.
  • Typical A.I. specialists, including both Ph.D.s fresh out of school and people with less education and just a few years of experience, can be paid from $300,000 to $500,000 a year or more in salary and company stock, according to nine people who work for major tech companies or have entertained job offers from them. All of them requested anonymity because they did not want to damage their professional prospects.
  • Well-known names in the A.I. field have received compensation in salary and shares in a company’s stock that total single- or double-digit millions over a four- or five-year period. And at some point they renew or negotiate a new contract, much like a professional athlete.
  • At the top end are executives with experience managing A.I. projects. In a court filing this year, Google revealed that one of the leaders of its self-driving-car division, Anthony Levandowski, a longtime employee who started with Google in 2007, took home over $120 million in incentives before joining Uber last year through the acquisition of a start-up he had co-founded that drew the two companies into a court fight over intellectual property.
  • Salaries are spiraling so fast that some joke the tech industry needs a National Football League-style salary cap on A.I. specialists. “That would make things easier,” said Christopher Fernandez, one of Microsoft’s hiring managers. “A lot easier.”
  • There are a few catalysts for the huge salaries. The auto industry is competing with Silicon Valley for the same experts who can help build self-driving cars. Giant tech companies like Facebook and Google also have plenty of money to throw around and problems that they think A.I. can help solve, like building digital assistants for smartphones and home gadgets and spotting offensive content.
Javier E

Opinion | Artificial Intelligence Requires Specific Safety Rules - The New York Times - 0 views

  • For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staffers were paranoid about talking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and scrapped the agreements. Then the floodgates opened. Exiting employees began criticizing OpenAI’s safety practices, and a wave of articles emerged about its broken promises.
  • These stories came from people who were willing to risk their careers to inform the public. How many more are silenced because they’re too scared to speak out? Since existing whistle-blower protections typically cover only the reporting of illegal conduct, they are inadequate here. Artificial intelligence can be dangerous without being illegal.
  • A.I. needs stronger protections — like those in place in parts of the public sector, finance and publicly traded companies — that prohibit retaliation and establish anonymous reporting channels.
  • ...19 more annotations...
  • OpenAI has spent the last year mired in scandal.
  • The company’s chief executive was briefly fired after the nonprofit board lost trust in him.
  • Whistle-blowers alleged to the Securities and Exchange Commission that OpenAI’s nondisclosure agreements were illegal.
  • Safety researchers have left the company in droves.
  • Now the firm is restructuring its core business as a for-profit, seemingly prompting the departure of more key leaders.
  • On Friday, The Wall Street Journal reported that OpenAI rushed testing of a major model in May, attempting to undercut a rival’s publicity; after the release, employees found out the model exceeded the company’s standards for safety. (The company told The Journal the findings were the result of a methodological flaw.)
  • This behavior would be concerning in any industry, but according to OpenAI itself, A.I. poses unique risks. The leaders of the top A.I. firms and leading A.I. researchers have warned that the technology could lead to human extinction.
  • Since more comprehensive national A.I. regulations aren’t coming anytime soon, we need a narrow federal law allowing employees to disclose information to Congress if they reasonably believe that an A.I. model poses a significant safety risk.
  • But McKinsey did not hold the majority of employees’ compensation hostage in exchange for signing lifetime nondisparagement agreements, as OpenAI did.
  • People reporting violations of the Atomic Energy Act have more robust whistle-blower protections than those in most fields, while those working in biological toxins for several government departments are protected by proactive, pro-reporting guidance. A.I. workers need similar rules.
  • Many companies maintain a culture of secrecy beyond what is healthy. I once worked at the consulting firm McKinsey on a team that advised Immigration and Customs Enforcement on implementing Donald Trump’s inhumane immigration policies. I was fearful of going public.
  • Congress should establish a special inspector general to serve as a point of contact for these whistle-blowers. The law should mandate companies to notify staff about the channels available to them, which they can use without facing retaliation.
  • Earlier this month, OpenAI released a highly advanced new model. For the first time, experts concluded the model could aid in the construction of a bioweapon more effectively than internet research alone could. A third party hired by the company found that the new system demonstrated evidence of “power seeking” and “the basic capabilities needed to do simple in-context scheming.”
  • OpenAI decided to publish these results, but the company still chooses what information to share. It is possible the published information paints an incomplete picture of the model’s risks.
  • The A.I. safety researcher Todor Markov — who recently left OpenAI after nearly six years with the firm — suggested one hypothetical scenario. An A.I. company promises to test its models for dangerous capabilities, then cherry-picks results to make the model look safe. A concerned employee wants to notify someone, but doesn’t know who — and can’t point to a specific law being broken. The new model is released, and a terrorist uses it to construct a novel bioweapon. Multiple former OpenAI employees told me this scenario is plausible.
  • The United States’ current arrangement of managing A.I. risks through voluntary commitments places enormous trust in the companies developing this potentially dangerous technology. Unfortunately, the industry in general — and OpenAI in particular — has shown itself to be unworthy of that trust, time and again.
  • The fate of the first attempt to protect A.I. whistle-blowers rests with Governor Gavin Newsom of California. Mr. Newsom has hinted that he will veto a first-of-its-kind A.I. safety bill, called S.B. 1047, which mandates that the largest A.I. companies implement safeguards to prevent catastrophes. The bill features whistle-blower protections, a rare point of agreement between its supporters and its critics.
  • if those legislators are serious in their support for these protections, they should introduce a federal A.I. whistle-blower protection bill. They are well positioned to do so: The letter’s organizer, Representative Zoe Lofgren, is the ranking Democrat on the House Committee on Science, Space and Technology.
  • Last month, a group of leading A.I. experts warned that as the technology rapidly progresses, “we face growing risks that A.I. could be misused to attack critical infrastructure, develop dangerous weapons or cause other forms of catastrophic harm.” These risks aren’t necessarily criminal, but they are real — and they could prove deadly. If that happens, employees at OpenAI and other companies will be the first to know. But will they tell us?
Javier E

Artificial intelligence is ripe for abuse, tech executive warns: 'a fascist's dream' | ... - 0 views

  • “Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.
  • All of these movements have shared characteristics, including the desire to centralize power, track populations, demonize outsiders and claim authority and neutrality without being accountable. Machine intelligence can be a powerful part of the power playbook, she said.
  • “We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.
  • ...9 more annotations...
  • Another area where AI can be misused is in building registries, which can then be used to target certain population groups. Crawford noted historical cases of registry abuse, including IBM’s role in enabling Nazi Germany to track Jewish, Roma and other ethnic groups with the Hollerith Machine, and the Book of Life used in South Africa during apartheid.
  • Donald Trump has floated the idea of creating a Muslim registry. “We already have that. Facebook has become the default Muslim registry of the world.”
  • Research from Cambridge University showed it is possible to predict people’s religious beliefs based on what they “like” on the social network: Christians and Muslims were correctly classified in 82% of cases, and similar results were achieved for Democrats and Republicans (85%). That study was concluded in 2013. (A toy sketch of such a likes-based classifier appears after this list.)
  • Crawford was concerned about the potential use of AI in predictive policing systems, which already gather the kind of data necessary to train an AI system. Such systems are flawed, as shown by a Rand Corporation study of Chicago’s program. The predictive policing did not reduce crime, but did increase harassment of people in “hotspot” areas.
  • Another worry related to the manipulation of political beliefs or shifting voters, something Facebook and Cambridge Analytica claim they can already do. Crawford was skeptical about giving Cambridge Analytica credit for Brexit and the election of Donald Trump, but thinks what the firm promises – using thousands of data points on people to work out how to manipulate their views – will be possible “in the next few years”.
  • “This is a fascist’s dream,” she said. “Power without accountability.”
  • Such black box systems are starting to creep into government. Palantir is building an intelligence system to assist Donald Trump in deporting immigrants.
  • Crawford argues that we have to make these AI systems more transparent and accountable. “The ocean of data is so big. We have to map their complex subterranean and unintended effects.”
  • Crawford has founded AI Now, a research community focused on the social impacts of artificial intelligence to do just this “We want to make these systems as ethical as possible and free from unseen biases.”
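In the spirit of the likes-based prediction research cited above, here is a toy classifier that guesses a binary trait from which pages a user has liked. All data below is synthetic, and the model choice (logistic regression on a 0/1 likes matrix) is an assumption for illustration; the actual study worked from millions of real Facebook likes.

```python
# Synthetic demo: predict a hidden binary trait from binary "like" features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_users, n_pages = 500, 40

# The hidden trait nudges users toward liking the first ten pages,
# so the pattern is learnable from the likes matrix alone.
trait = rng.integers(0, 2, size=n_users)
base = rng.random((n_users, n_pages))
nudge = 0.25 * trait[:, None] * (np.arange(n_pages) < 10)
likes = (base + nudge > 0.7).astype(int)   # 1 = user liked the page

model = LogisticRegression(max_iter=1000).fit(likes[:400], trait[:400])
print(f"Held-out accuracy: {model.score(likes[400:], trait[400:]):.0%}")
```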
Javier E

Yuval Noah Harari's Apocalyptic Vision - The Atlantic - 0 views

  • He shares with Jared Diamond, Steven Pinker, and Slavoj Žižek a zeal for theorizing widely, though he surpasses them in his taste for provocative simplifications.
  • In medieval Europe, he explains, “Knowledge = Scriptures x Logic,” whereas after the scientific revolution, “Knowledge = Empirical Data x Mathematics.”
  • Silicon Valley’s recent inventions invite galaxy-brain cogitation of the sort Harari is known for. The larger you feel the disruptions around you to be, the further back you reach for fitting analogies.
  • ...44 more annotations...
  • Have such technological leaps been good? Harari has doubts. Humans have “produced little that we can be proud of,” he complained in Sapiens. His next books, Homo Deus: A Brief History of Tomorrow (2015) and 21 Lessons for the 21st Century (2018), gazed into the future with apprehension
  • Harari has written another since-the-dawn-of-time overview, Nexus: A Brief History of Information Networks From the Stone Age to AI. It’s his grimmest work yet.
  • Harari rejects the notion that more information leads automatically to truth or wisdom. But it has led to artificial intelligence, whose advent Harari describes apocalyptically. “If we mishandle it,” he warns, “AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness.”
  • Those seeking a precedent for AI often bring up the movable-type printing press, which inundated Europe with books and led, they say, to the scientific revolution. Harari rolls his eyes at this story. Nothing guaranteed that printing would be used for science, he notes
  • Copernicus’s On the Revolutions of the Heavenly Spheres failed to sell its puny initial print run of about 500 copies in 1543. It was, the writer Arthur Koestler joked, an “all-time worst seller.”
  • The book that did sell was Heinrich Kramer’s The Hammer of the Witches (1486), which ranted about a supposed satanic conspiracy of sexually voracious women who copulated with demons and cursed men’s penises. The historian Tamar Herzig describes Kramer’s treatise as “arguably the most misogynistic text to appear in print in premodern times.” Yet it was “a bestseller by early modern standards,”
  • Kramer’s book encouraged the witch hunts that killed tens of thousands. These murderous sprees, Harari observes, were “made worse” by the printing press.
  • Ampler information flows made surveillance and tyranny worse too, Harari argues. The Soviet Union was, among other things, “one of the most formidable information networks in history,”
  • Information has always carried this destructive potential, Harari believes. Yet up until now, he argues, even such hellish episodes have been only that: episodes
  • Demagogic manias like the ones Kramer fueled tend to burn bright and flame out.
  • States ruled by top-down terror have a durability problem too, Harari explains. Even if they could somehow intercept every letter and plant informants in every household, they’d still need to intelligently analyze all of the incoming reports. No regime has come close to managing this.
  • for the 20th-century states that got nearest to total control, persistent problems managing information made basic governance difficult.
  • So it was, at any rate, in the age of paper. Collecting data is now much, much easier.
  • Some people worry that the government will implant a chip in their brain, but they should “instead worry about the smartphones on which they read these conspiracy theories,” Harari writes. Phones can already track our eye movements, record our speech, and deliver our private communications to nameless strangers. They are listening devices that, astonishingly, people are willing to leave by the bedside while having sex.
  • Harari’s biggest worry is what happens when AI enters the chat. Currently, massive data collection is offset, as it has always been, by the difficulties of data analysis.
  • What defense could there be against an entity that recognized every face, knew every mood, and weaponized that information?
  • Today’s political deliriums are stoked by click-maximizing algorithms that steer people toward “engaging” content, which is often whatever feeds their righteous rage.
  • Imagine what will happen, Harari writes, when bots generate that content themselves, personalizing and continually adjusting it to flood the dopamine receptors of each user.
  • Kramer’s Hammer of the Witches will seem like a mild sugar high compared with the heroin rush of content the algorithms will concoct. If AI seizes command, it could make serfs or psychopaths of us all.
  • Harari regards AI as ultimately unfathomable—and that is his concern.
  • Although we know how to make AI models, we don’t understand them. We’ve blithely summoned an “alien intelligence,” Harari writes, with no idea what it will do.
  • Last year, Harari signed an open letter warning of the “profound risks to society and humanity” posed by unleashing “powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for a pause of at least six months on training advanced AI systems.
  • Cynics saw the letter as self-serving. It fed the hype by insisting that artificial intelligence, rather than being a buggy product with limited use, was an epochal development. It showcased tech leaders’ Oppenheimer-style moral seriousness.
  • it cost them nothing, as there was no chance their research would actually stop. Four months after signing, Musk publicly launched an AI company.
  • The economics of the Information Age have been treacherous. They’ve made content cheaper to consume but less profitable to produce. Consider the effect of the free-content and targeted-advertising models on journalism
  • Since 2005, the United States has lost nearly a third of its newspapers and more than two-thirds of its newspaper jobs, to the point where nearly 7 percent of newspaper employees now work for a single organization, The New York Times
  • we speak of “news deserts,” places where reporting has essentially vanished.
  • AI threatens to exacerbate this. With better chatbots, platforms won’t need to link to external content, because they’ll reproduce it synthetically. Instead of a Google search that sends users to outside sites, a chatbot query will summarize those sites, keeping users within Google’s walled garden.
  • a Truman Show–style bubble: personally generated content, read by voices that sound real but aren’t, plus product placement.
  • this would cut off writers and publishers—the ones actually generating ideas—from readers. Our intellectual institutions would wither, and the internet would devolve into a closed loop of “five giant websites, each filled with screenshots of the other four,” as the software engineer Tom Eastman puts it.
  • Harari is Silicon Valley’s ideal of what a chatbot should be. He raids libraries, detects the patterns, and boils all of history down to bullet points. (Modernity, he writes, “can be summarised in a single phrase: humans agree to give up meaning in exchange for power.”)
  • Individual AI models cost billions of dollars. In 2023, about a fifth of venture capital in North America and Europe went to AI. Such sums make sense only if tech firms can earn enormous revenues off their product, by monopolizing it or marketing it. And at that scale, the most obvious buyers are other large companies or governments. How confident are we that giving more power to corporations and states will turn out well?
  • He discusses it as something that simply happened. Its arrival is nobody’s fault in particular.
  • In Harari’s view, “power always stems from cooperation between large numbers of humans”; it is the product of society.
  • like a chatbot, he has a quasi-antagonistic relationship with his sources, an I’ll read them so you don’t have to attitude. He mines other writers for material—a neat quip, a telling anecdote—but rarely seems taken with anyone else’s view
  • Hand-wringing about the possibility that AI developers will lose control of their creation, like the sorcerer’s apprentice, distracts from the more plausible scenario that they won’t lose control, and that they’ll use or sell it as planned. A better German fable might be Richard Wagner’s The Ring of the Nibelung : A power-hungry incel forges a ring that will let its owner rule the world—and the gods wage war over it.
  • Harari’s eyes are more on the horizon than on Silicon Valley’s economics or politics.
  • In Nexus, he proposes four principles. The first is “benevolence,” explained thus: “When a computer network collects information on me, that information should be used to help me rather than manipulate me.”
  • Harari’s other three values are decentralization of informational channels, accountability from those who collect our data, and some respite from algorithmic surveillance.
  • these are fine, but they are quick, unsurprising, and—especially when expressed in the abstract, as things that “we” should all strive for—not very helpful.
  • though his persistent first-person pluralizing (“decisions we all make”) softly suggests that AI is humanity’s collective creation rather than the product of certain corporations and the individuals who run them. This obscures the most important actors in the drama—ironically, just as those actors are sapping our intellectual life, hampering the robust, informed debates we’d need in order to make the decisions Harari envisions.
  • Taking AI seriously might mean directly confronting the companies developing it
  • Harari slots easily into the dominant worldview of Silicon Valley. Despite his oft-noted digital abstemiousness, he exemplifies its style of gathering and presenting information. And, like many in that world, he combines technological dystopianism with political passivity.
  • Although he thinks tech giants, in further developing AI, might end humankind, he does not treat thwarting them as an urgent priority. His epic narratives, told as stories of humanity as a whole, do not make much room for such us-versus-them clashes.
Javier E

Regular Old Intelligence is Sufficient--Even Lovely - 0 views

  • Ezra Klein has done some of the most dedicated reporting on the topic since he moved to the Bay Area a few years ago, talking with many of the people creating this new technology.
  • one is that the people building these systems have only a limited sense of what’s actually happening inside the black box—the bot is doing endless calculations instantaneously, but not in a way even their inventors can actually follow
  • an obvious question, one Klein has asked: “If you think calamity so possible, why do this at all?”
  • ...18 more annotations...
  • second, the people inventing them think they are potentially incredibly dangerous: ten percent of them, in fact, think they might extinguish the human species. They don’t know exactly how, but think Sorcerer’s Apprentice (or google ‘paper clip maximizer.’)
  • One pundit after another explains that an AI program called Deep Mind worked far faster than scientists doing experiments to uncover the basic structure of all the different proteins, which will allow quicker drug development. It’s regarded as ipso facto better because it’s faster, and hence—implicitly—worth taking the risks that come with AI.
  • That is, it seems to me, a dumb answer from smart people—the answer not of people who have thought hard about ethics or even outcomes, but the answer that would be supplied by a kind of cultist.
  • (Probably the kind with stock options).
  • it does go, fairly neatly, with the default modern assumption that if we can do something we should do it, which is what I want to talk about. The question that I think very few have bothered to answer is, why?
  • But why? The sun won’t blow up for a few billion years, meaning that if we don’t manage to drive ourselves to extinction, we’ve got all the time in the world. If it takes a generation or two for normal intelligence to come up with the structure of all the proteins, some people may die because a drug isn’t developed in time for their particular disease, but erring on the side of avoiding extinction seems mathematically sound.
  • Allowing that we’re already good enough—indeed that our limitations are intrinsic to us, define us, and make us human—should guide us towards trying to shut down this technology before it does deep damage.
  • The other challenge that people cite, over and over again, to justify running the risks of AI is to “combat climate change.”
  • As it happens, regular old intelligence has already given us most of what we need: engineers have cut the cost of solar power and wind power and the batteries to store the energy they produce so dramatically that they’re now the cheapest power on earth.
  • We don’t actually need artificial intelligence in this case; we need natural compassion, so that we work with the necessary speed to deploy these technologies.
  • Beyond those, the cases become trivial, or worse
  • All of this is a way of saying something we don’t say as often as we should: humans are good enough. We don’t require improvement. We can solve the challenges we face, as humans.
  • It may take us longer than if we can employ some “new form of intelligence,” but slow and steady is the whole point of the race.
  • Unless, of course, you’re trying to make money, in which case “first-mover advantage” is the point.
  • “I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”
  • here’s the thing: pausing, slowing down, stopping calls on the one human gift shared by no other creature, and perhaps by no machine. We are the animal that can, if we want to, decide not to do something we’re capable of doing.
  • In individual terms, that ability forms the core of our ethical and religious systems; in societal terms it’s been crucial as technology has developed over the last century. We’ve, so far, reined in nuclear and biological weapons, designer babies, and a few other maximally dangerous new inventions.
  • It’s time to say do it again, and fast—faster than the next iteration of this tech.
Javier E

'It's already way beyond what humans can do': will AI wipe out architects? | Architectu... - 0 views

  • on a Zoom call with Wanyu He, an architect based in Shenzhen, China, and the founder of XKool, an artificial intelligence company determined to revolutionise the architecture industry. She freezes the dancing blocks and zooms in, revealing a layout of hotel rooms that fidget and reorder themselves as the building swells and contracts. Corridors switch sides, furniture dances to and fro. Another click and an invisible world of pipes and wires appears, a matrix of services bending and splicing in mesmerising unison, the location of lighting, plug sockets and switches automatically optimised. One further click and the construction drawings pop up, along with a cost breakdown and components list. The entire plan is ready to be sent to the factory to be built.
  • I applaud He on what seems to be an impressive theoretical exercise: a 500-room hotel complex designed in minutes with the help of AI. But she looks confused. “Oh,” she says casually, “that’s already been built! It took four and a half months from start to finish.”
  • AI is already being deployed to shape the real world – with far-reaching consequences.
  • ...13 more annotations...
  • They had become disillusioned with what they saw as an outmoded way of working. “It wasn’t how I imagined the future of architecture,” says He, who worked in OMA’s Rotterdam office before moving to China to oversee construction of the Shenzhen Stock Exchange building. “The design and construction processes were so traditional and lacking in innovation.”
  • XKool is at the bleeding edge of architectural AI. And it’s growing fast: over 50,000 people are already using it in China, and an English version of its image-to-image AI tool, LookX, has just been launched. Wanyu He founded the company in 2016, with others who used to work for OMA
  • “The problem with architects is that we almost entirely focus on images,” says Neil Leach, author of Architecture in the Age of Artificial Intelligence. “But the most revolutionary change is in the less sexy area: the automation of the entire design package, from developing initial options right through to construction. In terms of strategic thinking and real-time analysis, AI is already way beyond what human architects are capable of. This could be the final nail in the coffin of a struggling profession.”
  • It’s early days and, so far, the results are clunky: the Shenzhen hotel looks very much like it was designed by robots for an army of robot guests.
  • XKool aims to provide an all-in-one platform, using AI to assist with everything from generating masterplan layouts, using given parameters such as daylight requirements, space standards and local planning regulations, right down to generating interiors and construction details. It has also developed a tool to transform a 2D image of a building into a 3D model, and turn a given list of room sizes into floor plans.
  • She and her colleagues were inspired to launch their startup after witnessing AlphaGo, the first computer program to defeat a human champion at the Chinese board game Go in 2016. “What if we could introduce this intelligence to our way of working with algorithmic design?” she says. “CAD [computer aided design] dates from the 70s. BIM [building information modelling] is from the 90s. Now that we have the power of cloud computing and big data, it’s time for something new.”
  • “We have to be careful,” says Martha Tsigkari, head of applied research and development at Foster + Partners in London. “It can be dangerous if you don’t know what data was used to train the model, or if you haven’t classified it properly. Data is everything: if you put garbage in, you’ll get garbage out.”
  • “The implications for data privacy and intellectual property are huge – is our data secured from other users? Is it being used to retrain these models in the background?”
  • Although the actual science needed to make such things possible is a long way off, AI does enable the kind of calculations and predictive modelling that was impossibly time-consuming before
  • Tsigkari’s team has also developed a simulation engine that allows realtime analysis of floor plans – showing how well connected one part of a building is to another – giving designers instant feedback on the implications of moving a wall or piece of furniture. (A toy sketch of this kind of connectivity analysis appears after this list.)
  • One told me they now regularly use ChatGPT to summarise local planning policies and compare the performance of different materials for, say, insulation. “It’s the kind of task you would have given a junior to do,” they say. “It’s not perfect, but it makes fewer mistakes than someone who hasn’t written a specification before.”
  • Others say their teams regularly use Midjourney to help brainstorm ideas during the concept phase. “We had a client wanting to build mosques in Abu Dhabi,” one architect told me. “I could quickly generate a range of options to show them, to get the conversation going. It’s like an instant mood board.”
  • “I like to think we are augmenting, not replacing, architects,” says Carl Christiansen, a Norwegian software engineer who in 2016 co-founded AI tool Spacemaker, which was acquired by tech giant Autodesk in 2021 for $240m, and then rebranded as Forma. “I call it ‘AI on the shoulder’ to emphasise that you’re still in control.” Forma can rapidly evaluate a large range of factors – from sun and wind to noise and energy needs – and create the perfect site layout. What’s more, its interface is designed to be legible to non-experts.
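A toy sketch of the connectivity analysis Tsigkari describes: model the floor plan as a graph of rooms joined by doorways, then count the fewest room-to-room transitions between two spaces. The room names and adjacencies below are invented, and Forma's actual engine is proprietary; this only shows the underlying idea.

```python
# Breadth-first search over a room-adjacency graph.
from collections import deque

adjacency = {
    "lobby":    ["corridor", "cafe"],
    "cafe":     ["lobby"],
    "corridor": ["lobby", "office_a", "office_b", "stairs"],
    "office_a": ["corridor"],
    "office_b": ["corridor", "stairs"],
    "stairs":   ["corridor", "office_b"],
}

def hops_between(start: str, goal: str) -> int:
    """Fewest doorways to pass through; -1 if the rooms are not connected."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        room, dist = queue.popleft()
        if room == goal:
            return dist
        for nxt in adjacency[room]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1

print(hops_between("cafe", "office_b"))  # 3: cafe -> lobby -> corridor -> office_b
```

Moving a wall in this picture is just editing the adjacency map and re-running the search, which is why such feedback can be near-instant.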
Javier E

Silicon Valley's Trillion-Dollar Leap of Faith - The Atlantic - 0 views

  • Tech companies like to make two grand pronouncements about the future of artificial intelligence. First, the technology is going to usher in a revolution akin to the advent of fire, nuclear weapons, and the internet.
  • And second, it is going to cost almost unfathomable sums of money.
  • Silicon Valley has already triggered tens or even hundreds of billions of dollars of spending on AI, and companies only want to spend more.
  • ...22 more annotations...
  • Their reasoning is straightforward: These companies have decided that the best way to make generative AI better is to build bigger AI models. And that is really, really expensive, requiring resources on the scale of moon missions and the interstate-highway system to fund the data centers and related infrastructure that generative AI depends on
  • “If we’re going to justify a trillion or more dollars of investment, [AI] needs to solve complex problems and enable us to do things we haven’t been able to do before.” Today’s flagship AI models, he said, largely cannot.
  • Now a number of voices in the finance world are beginning to ask whether all of this investment can pay off. OpenAI, for its part, may lose up to $5 billion this year, almost 10 times more than what the company lost in 2022.
  • Dario Amodei, the CEO of the rival start-up Anthropic, has predicted that a single AI model (such as, say, GPT-6) could cost $100 billion to train by 2027. The global data-center buildup over the next few years could require trillions of dollars from tech companies, utilities, and other industries, according to a July report from Moody’s Ratings.
  • Over the past few weeks, analysts and investors at some of the world’s most influential financial institutions—including Goldman Sachs, Sequoia Capital, Moody’s, and Barclays—have issued reports that raise doubts about whether the enormous investments in generative AI will be profitable.
  • generative AI has already done extraordinary things, of course—advancing drug development, solving challenging math problems, generating stunning video clips. But exactly what uses of the technology can actually make money remains unclear.
  • At present, AI is generally good at doing existing tasks—writing blog posts, coding, translating—faster and cheaper than humans can. But efficiency gains can provide only so much value, boosting the current economy but not creating a new one.
  • Right now, Silicon Valley might just functionally be replacing some jobs, such as customer service and form-processing work, with historically expensive software, which is not a recipe for widespread economic transformation.
  • McKinsey has estimated that generative AI could eventually add almost $8 trillion to the global economy every year
  • Tony Kim, the head of technology investment at BlackRock, the world’s largest money manager, told me he believes that AI will trigger one of the most significant technological upheavals ever. “Prior industrial revolutions were never about intelligence,”
  • “Here, we can manufacture intelligence.”
  • this future is not guaranteed. Many of the productivity gains expected from AI could be both greatly overestimated and very premature, Daron Acemoglu, an economist at MIT, has found.
  • AI products’ key flaws, such as a tendency to invent false information, could make them unusable, or deployable only under strict human oversight, in certain settings—courts, hospitals, government agencies, schools.
  • Rather than a truly epoch-shifting technology, AI may well be more akin to blockchain, a very expensive tool destined to fall short of promises to fundamentally transform society and the economy.
  • Researchers at Barclays recently calculated that tech companies are collectively paying for enough AI-computing infrastructure to eventually power 12,000 different ChatGPTs. Silicon Valley could very well produce a whole host of hit generative-AI products like ChatGPT, “but probably not 12,000 of them,” and even if it did, there would be nowhere near enough demand to use all those apps and actually turn a profit.
  • Some of the largest tech companies’ current spending on AI data centers will require roughly $600 billion of annual revenue to break even, of which they are currently about $500 billion short. (A back-of-envelope version of this arithmetic appears after this list.)
  • Tech proponents have responded to the criticism that the industry is spending too much, too fast, with something like religious dogma. “I don’t care” how much we spend, Altman has said. “I genuinely don’t.”
  • the industry is asking the world to engage in something like a trillion-dollar tautology: AI’s world-transformative potential justifies spending any amount of resources, because its evangelists will spend any amount to make AI transform the world.
  • in the AI era in particular, a lack of clear evidence for a healthy return on investment may not even matter. Unlike the companies that went bust in the dot-com bubble in the early 2000s, Big Tech can spend exorbitant sums of money and be largely fine.
  • Perhaps even more important in Silicon Valley than a messianic belief in AI is a terrible fear of missing out. “In the tech industry, what drives part of this is nobody wants to be left behind. Nobody wants to be seen as lagging.”
  • Go all in on AI, the thinking goes, or someone else will. Their actions evince “a sense of desperation,” Cahn writes. “If you do not move now, you will never get another chance.” Enormous sums of money are likely to continue flowing into AI for the foreseeable future, driven by a mix of unshakable confidence and all-consuming fear.
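A back-of-envelope restatement of the figures quoted above. The per-app revenue line is a derived illustration, not the analysts' own number:

```python
# All inputs are the rough figures reported in the annotations above.
breakeven_revenue = 600e9   # annual revenue needed to break even
shortfall = 500e9           # reported gap
implied_current_revenue = breakeven_revenue - shortfall  # ~$100bn

chatgpt_scale_apps = 12_000  # apps the planned infrastructure could power
per_app_revenue = breakeven_revenue / chatgpt_scale_apps

print(f"Implied current AI revenue: ${implied_current_revenue / 1e9:.0f}bn")
print(f"Each of {chatgpt_scale_apps:,} apps would need: "
      f"${per_app_revenue / 1e6:.0f}m per year")
```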