Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
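The prediction-driven learning described above can be caricatured with a toy next-word predictor. This is a bigram frequency model, not a neural network, and every name in it is illustrative; the point is only that repeated small adjustments from prediction build a usable model of language:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    # Each observed word pair nudges the counts upward: a crude analogue
    # of the "little adjustments" that accumulate during training.
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    # Prediction: the continuation seen most often in training.
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat chased the dog")
print(predict_next(model, "the"))  # "cat" (seen twice, vs. "mat"/"dog" once each)
```

A real language model replaces the lookup table with billions of continuous weights, which is what lets it generalize to word sequences it has never seen.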
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible.
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them.
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
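Li's finding makes sense once you notice that the move text fully determines a board state, so a model forced to predict moves well has reason to represent that state internally. A minimal sketch of the determinism (it parses moves like "d3 c5" into occupied squares, ignoring Othello's disc-flipping rule for brevity; the function name is illustrative, not from Li's paper):

```python
def occupied_squares(move_text):
    # Record which squares have been played on, given only the move
    # transcript. Real Othello also flips discs, which this toy version
    # omits; the point is that text alone pins down a board state, the
    # latent structure Li's probes recovered from the model's activations.
    occupied = set()
    for move in move_text.lower().split():
        col, row = move[0], int(move[1:])
        if col not in "abcdefgh" or not 1 <= row <= 8:
            raise ValueError(f"not an Othello square: {move}")
        occupied.add((col, row))
    return occupied

print(occupied_squares("d3 c5 e6"))
```

Li's actual experiment went further: he trained small classifiers ("probes") on the network's hidden activations and found they could read off the full board, square by square.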
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
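The memorize-then-learn shift can be caricatured in a few lines: a memorizer and a rule-learner agree perfectly on the training set, but only the learned concept survives contact with unseen problems. This is purely illustrative; in a real transformer the pivot happens gradually inside the weights, not as two separate strategies:

```python
def make_memorizer(training_pairs):
    # The "lazy" strategy: store every answer seen during training.
    table = dict(training_pairs)
    return lambda a, b: table.get((a, b))  # None on anything unseen

def make_rule_learner(training_pairs):
    # The "harder" strategy a model pivots to when memorization stops
    # improving predictions: the concept of addition itself (hard-coded
    # here, learned from the data in the real experiment).
    return lambda a, b: a + b

train_pairs = [((a, b), a + b) for a in range(5) for b in range(5)]
memorized = make_memorizer(train_pairs)
learned = make_rule_learner(train_pairs)

print(memorized(2, 2), learned(2, 2))      # both answer 4 on training data
print(memorized(50, 50), learned(50, 50))  # only the rule generalizes: None 100
```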
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable.”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Excuse me, but the industries AI is disrupting are not lucrative - 0 views

  • Google’s Gemini. The demo video earlier this week was nothing short of amazing, as Gemini appeared to fluidly interact with a questioner going through various tasks and drawings, always giving succinct and correct answers.
  • another huge new AI model revealed.
  • that’s. . . not what’s going on. Rather, they pre-recorded it and sent individual frames of the video to Gemini to respond to, as well as more informative prompts than shown, in addition to editing the replies from Gemini to be shorter and thus, presumably, more relevant. Factor all that in, Gemini doesn’t look that different from GPT-4,
  • ...24 more annotations...
  • Continued hype is necessary for the industry, because so much money flowing in essentially allows the big players, like OpenAI, to operate free of economic worry and considerations
  • The money involved is staggering—Anthropic announced they would compete with OpenAI and raised 2 billion dollars to train their next-gen model, a European counterpart just raised 500 million, etc. Venture capitalists are eager to throw as much money as humanly possible into AI, as it looks so revolutionary, so manifesto-worthy, so lucrative.
  • While I have no idea what the downloads are going to be for the GPT Store next year, my suspicion is it does not live up to the hyped Apple-esque expectation.
  • given their test scores, I’m willing to say GPT-4 or Gemini is smarter along many dimensions than a lot of actual humans, at least in the breadth of their abstract knowledge—all while noting even leading models still have around a 3% hallucination rate, which stacks up in a complex task.
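The point about a 3 percent hallucination rate “stacking up” can be made concrete. A minimal sketch (my own illustration, not from the article), assuming errors are independent across steps and any single hallucination spoils the result:

```python
# How a per-step 3% hallucination rate compounds over a multi-step task,
# assuming independent errors and that one hallucination ruins the output.
p_ok = 1 - 0.03  # 97% chance any single step is hallucination-free

for steps in (1, 10, 20, 50):
    print(f"{steps:>2} steps: {p_ok**steps:.1%} chance of a clean result")
```

Under those assumptions, a 20-step task succeeds only about 54 percent of the time, and a 50-step task about 22 percent, which is why a small per-step error rate matters so much for complex work.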
  • A more interesting “bear case” for AI is that, if you look at the list of industries that leading AIs like GPT-4 are capable of disrupting—and therefore making money off of—the list is lackluster from a return-on-investment perspective, because the industries themselves are not very lucrative.
  • What are AIs of the GPT-4 generation best at? It’s things like:writing essays or short fictionsdigital artchattingprogramming assistance
  • While I personally wouldn’t go so far as to describe current LLMs as “a solution in search of a problem” like cryptocurrency has famously been described as, I do think the description rings true in an overall economic/business sense so far.
  • The issue is that taking the job of a human illustrator just. . . doesn’t make you much money. Because human illustrators don’t make much money
  • While you can easily use Dall-E to make art for a blog, or a comic book, or a fantasy portrait to play an RPG, the market for those things is vanishingly small, almost nonexistent
  • As of this writing, the compute cost to create an image using a large image model is roughly $.001 and it takes around 1 second. Doing a similar task with a designer or a photographer would cost hundreds of dollars (minimum) and many hours or days (accounting for work time, as well as schedules). Even if, for simplicity’s sake, we underestimate the cost to be $100 and the time to be 1 hour, generative AI is 100,000 times cheaper and 3,600 times faster than the human alternative.
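A quick back-of-the-envelope check of those ratios, using the article’s own figures (the $100 and 1-hour human-side numbers are its stated deliberate underestimates):

```python
# Sanity-check the cost/speed comparison using the article's figures.
ai_cost_usd, ai_time_s = 0.001, 1          # ~$.001 and ~1 second per image
human_cost_usd, human_time_s = 100, 3600   # underestimated: $100 and 1 hour

cost_ratio = human_cost_usd / ai_cost_usd
speed_ratio = human_time_s / ai_time_s

print(f"{cost_ratio:,.0f}x cheaper, {speed_ratio:,.0f}x faster")
# → 100,000x cheaper, 3,600x faster
```

The ratios in the excerpt check out exactly as stated.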
  • Like, wow, an AI that can write a Reddit comment! Well, there are millions of Reddit comments, which is precisely why we now have AIs good at writing them. Wow, an AI that can generate music! Well, there are millions of songs, which is precisely why we now have AIs good at creating them.
  • Search is the most obvious large market for AI companies, but Bing has had effectively GPT-4-level AI on offer now for almost a year, and there’s been no huge steal from Google’s market share.
  • What about programming? It’s actually a great expression of the issue, because AI isn’t replacing programming—it’s replacing Stack Overflow, a programming advice website (after all, you can’t just hire GPT-4 to code something for you, you have to hire a programmer who uses GPT-4).
  • Even if OpenAI drove Stack Overflow out of business entirely and cornered the market on “helping with programming” they would gain, what? Stack Overflow is worth about 1.8 billion, according to its last sale in 2022. OpenAI already dwarfs it in valuation by an order of magnitude.
  • The more one thinks about this, one notices a tension in the very pitch itself: don’t worry, AI isn’t going to take all our jobs, just make us better at them, but at the same time, the upside of AI as an industry is the total combined worth of the industries it’s replacing, er, disrupting, and this justifies the massive investments and endless economic optimism.
  • It makes me worried about the worst of all possible worlds: generative AI manages to pollute the internet with cheap synthetic data, manages to make being a human artist / creator harder, manages to provide the basis of agential AIs that still pose some sort of existential risk if they get intelligent enough—all without ushering in some massive GDP boost that takes us into utopia
  • If the AI industry ever goes through an economic bust sometime in the next decade I think it’ll be because there are fewer ways than first thought to squeeze substantial profits out of tasks that are relatively commonplace already
  • We can just look around for equivalencies. The payment for humans working as “mechanical turks” on Amazon are shockingly low. If a human pretending to be an AI (which is essentially what a mechanical turk worker is doing) only makes a buck an hour, how much will an AI make doing the same thing?
  • Is it just a quirk of the current state of technology, or something more general?
  • What’s written on the internet is a huge “high quality” training set (at least in that it is all legible and collectable and easy to parse) so AIs are very good at writing the kind of things you read on the internet
  • But data with a high supply usually means its production is easy or commonplace, which, ceteris paribus, means it’s cheap to sell in turn. The result is a highly-intelligent AI merely adding to an already-massive supply of the stuff it’s trained on.
  • Was there really a great crying need for new ways to cheat on academic essays? Probably not. Will chatting with the History Buff AI app (it was in the background of Sam Altman’s presentation) be significantly different than chatting with posters on /r/history on Reddit? Probably not
  • Call it the supply paradox of AI: the easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.
  • AI might end up incredibly smart, but mostly at things that aren’t economically valuable.
Javier E

Opinion | This Is What Happened When the Authorities Put Trump Under a Microscope - The... - 0 views

  • The two highest-profile congressional investigations of Trump that followed — the 2019 report by the House Intelligence Committee on Trump’s pressuring of Ukraine as well as the recently released report by the select committee on the Jan. 6 attack — read like deliberate contrasts to the document produced by Robert Mueller and his team.
  • Their presentation is dramatic, not dense; their conclusions are blunt, not oblique; their arguments are political as much as legal. And yet, the Ukraine and Jan. 6 reports seem to follow the cues, explicit or implied, that the Mueller report left behind.
  • The Mueller report also notes in its final pages that “only a successor administration would be able to prosecute a former president,” which is what the Jan. 6 special committee, with its multiple criminal referrals, has urged the Biden administration’s Justice Department to do.
  • ...14 more annotations...
  • ALL THREE REPORTS INCLUDE quintessentially Trumpian scenes, consistent in their depictions of the former president’s methods, and very much in keeping with numerous journalistic accounts of how he sought to manipulate people, rules and institutions.
  • The three investigations tell different stories, but the misdeeds all run together, more overlapping than sequential
  • Still, each investigation offers a slightly different theory of Trump. In the Mueller report, Trump and his aides come across as the gang that can’t cheat straight — too haphazard to effectively coordinate with a foreign government, too ignorant of campaign finance laws to purposely violate them, often comically naïve about the gravity of their plight.
  • The Ukraine report, by contrast, regards Trump as more strategic than chaotic, and it does not wallow in the netherworld between the president’s personal benefit and his public service. “The president placed his own personal and political interests above the national interests of the United States, sought to undermine the integrity of the U.S. presidential election process, and endangered U.S. national security,”
  • All three reports show Trump deploying the mechanisms of government for political gain.
  • The Mueller report argues that viewing the president’s “acts collectively can help to illuminate their significance.” The Ukraine report shows that the conversation that Trump described as “a perfect call” was not the ask; it was the confirmation. When Trump said, “I would like you to do us a favor, though,” Zelensky and his aides had already been notified of what was coming. The Ukraine scandal was never about a single call, just as the Jan. 6 report was not about a single day.
  • The Jan. 6 report takes seriously the admonition to view the president’s actions collectively, not individually; the phrase “multipart plan” appears throughout the report, with Trump as the architect.
  • Even more so than the Ukraine report, the Jan. 6 report repeatedly emphasizes how Trump knew, well, everything
  • There is no room here for the plausible deniability that the Mueller report entertained, for the notion that Trump didn’t know better, or that, in the immortal words of Attorney General William P. Barr when he creatively interpreted the Mueller report to exonerate Trump of obstruction of justice, that the president was “frustrated and angered by his sincere belief that the investigation was undermining his presidency.”
  • This alleged sincerity underscored the president’s “noncorrupt motives,” as Barr put it. In the Jan. 6 report, any case for Trumpian sincerity is eviscerated in a six-page chart in the executive summary, which catalogs the many times the president was informed of the facts of the election yet continued to lie about them. “Just say the election was corrupt and leave the rest to me and the Republican congressmen,” Trump told top Department of Justice officials in late December 2020, the report says.
  • Just announce an investigation into the Bidens. Just say the 2020 election was rigged. Trump’s most corrupt action is always the corruption of reality.
  • The studious restraint of the Mueller report came in for much criticism once the special counsel failed to deliver a dagger to the heart of the Trump presidency and once the document was so easily miscast by interested parties
  • for all its diffidence, there is power in the document’s understated prose, in its methodical collection of evidence, in its unwillingness to overstep its bounds while investigating a president who knew few bounds himself.
  • The Ukraine and Jan. 6 reports came at a time when Trump’s misconduct was better understood, when Mueller-like restraint was less in fashion and when those attempting to hold the chief executive accountable grasped every tool at hand. For all their passion and bluntness, they encountered their own constraints, limits that are probably inherent to the form
Javier E

Why Didn't the Government Stop the Crypto Scam? - 1 views

  • Securities and Exchange Commission Chair Gary Gensler, who took office in April of 2021 with a deep background in Wall Street, regulatory policy, and crypto, which he had taught at MIT years before joining the SEC. Gensler came in with the goal of implementing the rule of law in the crypto space, which he knew was full of scams and based on unproven technology. Yesterday, on CNBC, he was again confronted with Andrew Ross Sorkin essentially asking, “Why were you going after minor players when this Ponzi scheme was so flagrant?”
  • Cryptocurrencies are securities, and should fit under securities law, which would have imposed rules that would foster a de facto ban of the entire space. But since regulators had not actually treated them as securities for the last ten years, a whole new gray area of fake law had emerged
  • Almost as soon as he took office, Gensler sought to fix this situation, and treat them as securities. He began investigating important players
  • ...22 more annotations...
  • But the legal wrangling to just get the courts to treat crypto as a set of speculative instruments regulated under securities law made the law moot
  • In May of 2022, a year after Gensler began trying to do something about Terra/Luna, Kwon’s scheme blew up. In a comically-too-late-to-matter gesture, an appeals court then said that the SEC had the right to compel information from Kwon’s now-bankrupt scheme. It is absolute lunacy that well-settled law, like the ability for the SEC to investigate those in the securities business, is now being re-litigated.
  • many crypto ‘enthusiasts’ watching Gensler discuss regulation with his predecessor “called for their incarceration or worse.”
  • it wasn’t just the courts who were an impediment. Gensler wasn’t the only cop on the beat. Other regulators, like those at the Commodities Futures Trading Commission, the Federal Reserve, or the Office of Comptroller of the Currency, not only refused to take action, but actively defended their regulatory turf against an attempt from the SEC to stop the scams.
  • Behind this was the fist of political power. Everyone saw the incentives the Senate laid down when every single Republican, plus a smattering of Democrats, defeated the nomination of crypto-skeptic Saule Omarova in becoming the powerful bank regulator at the Comptroller of the Currency
  • Instead of strong figures like Omarova, we had a weakling acting Comptroller Michael Hsu at the OCC, put there by the excessively cautious Treasury Secretary Janet Yellen. Hsu refused to stop bank interactions with crypto or fintech because, as he told Congress in 2021, “These trends cannot be stopped.”
  • It’s not just these regulators; everyone wanted a piece of the bureaucratic pie. In March of 2022, before it all unraveled, the Biden administration issued an executive order on crypto. In it, Biden said that virtually every single government agency would have a hand in the space.
  • That’s… insane. If everyone’s in charge, no one is.
  • And behind all of these fights was the money and political prestige of some most powerful people in Silicon Valley, who were funding a large political fight to write the rules for crypto, with everyone from former Treasury Secretary Larry Summers to former SEC Chair Mary Jo White on the payroll.
  • (Even now, even after it was all revealed as a Ponzi scheme, Congress is still trying to write rules favorable to the industry. It’s like, guys, stop it. There’s no more bribe money!)
  • Moreover, the institution Gensler took over was deeply weakened. Since the Reagan administration, wave after wave of political leader at the SEC has gutted the place and dumbed down the enforcers. Courts have tied up the commission in knots, and Congress has defanged it
  • Under Trump crypto exploded, because his SEC chair Jay Clayton had no real policy on crypto (and then immediately went into the industry after leaving.) The SEC was so dormant that when Gensler came into office, some senior lawyers actually revolted over his attempt to make them do work.
  • In other words, the regulators were tied up in the courts, they were against an immensely powerful set of venture capitalists who have poured money into Congress and D.C., they had feeble legal levers, and they had to deal with ‘crypto enthusiasts' who thought they should be jailed or harmed for trying to impose basic rules around market manipulation.
  • The bottom line is, Gensler is just one regulator, up against a lot of massed power, money, and bad institutional habits. And we as a society simply made the choice through our elected leaders to have little meaningful law enforcement in financial markets, which first became blindingly obvious in 2008 during the financial crisis, and then became comical ten years later when a sector whose only real use cases were money laundering, Ponzi scheming or buying drugs on the internet, managed to rack up enough political power to bring Tony Blair and Bill Clinton to a conference held in a tax haven billed as ‘the future.’
  • It took a few years, but New Dealers finally implemented a workable set of securities rules, with the courts agreeing on basic definitions of what was a security. By the 1950s, SEC investigators could raise an eyebrow and change market behavior, and the amount of cheating in finance had dropped dramatically.
  • By 1935, the New Dealers had set up a new agency, the Securities and Exchange Commission, and cleaned out the FTC. Yet there was still immense concern that Roosevelt had not been able to tame Wall Street. The Supreme Court didn’t really ratify the SEC as a constitutional body until 1938, and nearly struck it down in 1935 when a conservative Supreme Court made it harder for the SEC to investigate cases.
  • Institutional change, in other words, takes time.
  • It’s a lesson to remember as we watch the crypto space melt down, with ex-billionaire Sam Bankman-Fried
  • It’s not like perfidy in crypto was some hidden secret. At the top of the market, back in December 2021, I wrote a piece very explicitly saying that crypto was a set of Ponzi schemes. It went viral, and I got a huge amount of hate mail from crypto types
  • one of the more bizarre aspects of the crypto meltdown is the deep anger not just at those who perpetrated it, but at those who were trying to stop the scam from going on. For instance, here’s crypto exchange Coinbase CEO Brian Armstrong, who just a year ago was fighting regulators vehemently, blaming the cops for allowing gambling in the casino he helps run.
  • FTX.com was an offshore exchange not regulated by the SEC. The problem is that the SEC failed to create regulatory clarity here in the US, so many American investors (and 95% of trading activity) went offshore. Punishing US companies for this makes no sense.
Javier E

The Phantasms of Judith Butler - The Atlantic - 0 views

  • The central idea of Who’s Afraid of Gender? is that fascism is gaining strength around the world, and that its weapon is what Butler calls the “phantasm of gender,” which they describe as a confused and irrational bundle of fears that displaces real dangers onto imaginary ones.
  • Similarly, Trump’s Christian-right supporters see this adjudicated rapist as a bulwark against sexual libertinism, but he also has a following among young men who admire him as libertine in chief and among people of every stripe who think he’ll somehow make them richer.
  • Butler is obviously correct that the authoritarian right sets itself against feminism and modern sexual rights and freedom.
  • But is the gender phantasm as crucial to the global far right as Butler claims?
  • Butler has little to say about the appeal of nationalism and community, insistence on ethnic purity, opposition to immigration, anxiety over economic and social stresses, fear of middle-class-status loss, hatred of “elites.”
  • why Hungarian Prime Minister Viktor Orbán is so popular, it would be less his invocation of the gender phantasm and more his ruthless determination to keep immigrants out, especially Muslim ones, along with his delivery of massive social services to families in an attempt to raise the birth rate
  • The chapter of Who’s Afraid of Gender? that is most relevant for American and British readers is probably the one about the women, many of them British, whom opponents call “TERFs” (trans-exclusionary radical feminists), but who call themselves “gender-critical feminists.”
  • But is obsession with “gender” really the primary motive behind current right-wing movements? And why is it so hard to trust that the noise around “gender” might actually be indicative of people’s real feelings, and not just the demagogue-fomented distraction Butler asserts it is?
  • Instead of proving that “gender” is a crucial part of what motivates popular support for right-wing authoritarianism, Butler simply asserts that it is, and then ties it all up with a bow called “fascism.”
  • Fascism is a word that Butler admits is not perfect but then goes on to use repeatedly. I’m sure I’ve used it myself as a shorthand when I’m writing quickly, but it’s a bit manipulative. As used by Butler and much of the left, it covers way too many different issues and suggests that if you aren’t on board with the Butlerian worldview on every single one of them, a brown shirt must surely be hanging in your closet.
  • As they define it—“fascist passions or political trends are those which seek to strip people of the basic rights they require to live”—most societies for most of history have been fascist, including, for long stretches, our own
  • Instead of facing up to the problems of, for example, war, declining living standards, environmental damage, and climate change, right-wing leaders whip up hysteria about threats to patriarchy, traditional families, and heterosexuality.
  • They discuss only two authors at any length, the philosopher Kathleen Stock and J. K. Rowling. Butler does not engage with their writing in any detail—they do not quote even one sentence from Stock’s Material Girls: Why Reality Matters for Feminism, a serious book that has been much discussed, or indeed from any other gender-crit work, except for some writing from Rowling, including her essay in which she describes domestic violence at the hands of her first husband, an accusation he admits to in part.
  • They dismiss, with that invocation of a “phantasm,” apprehension about the presence of trans women in women’s single-sex spaces (as well as, gender-crits would add, biological men falsely claiming to be trans in order to gain access to same), concerns for biologically female athletes who feel cheated out of scholarships and trophies, and the slight a biological woman might experience by being referred to as a “menstruator.”
  • Butler wants to dismiss gender-crits as fascist-adjacent: Indeed, in an interview, they compare Stock and Rowling to Putin and the pope.
  • It does seem odd that Butler, for whom everything about the body is socially produced, would be so uninterested in exploring the ways that trans identity is itself socially produced, at least in part—by, for example, homophobia and misogyny and the hypersexualization of young girls, by social media and online life, by the increasing popularity of cosmetic surgery, by the libertarian-individualist presumption that you can be whatever you want.
  • what is authenticity
  • In every other context, Butler works to demolish the idea of the eternal human—everything is contingent—except for when it comes to being transgender. There, the individual, and only the individual, knows themself.
  • I can't tell you how many left and liberal people I know who keep quiet about their doubts because they fear being ostracized professionally or socially. Nobody wants to be accused of putting trans people's lives in danger, and, after all, don't we all want, as the slogan goes, to “Be Kind”?
  • The trouble is that, in the long run, the demand for self-suppression fuels reaction. Polls show declining support for various trans demands for acceptance. People don’t like being forced by social pressure to deny what they think of as the reality of sex and gender.
  • They cite the civil-rights activist and singer Bernice Johnson Reagon’s call for “difficult coalitions” but forget that coalitions necessarily involve compromise and choosing your battles, not just accusing people of sharing the views of fascists
  • What if instead of trying to suppress the questioning of skeptics, we admit we don’t have many answers? What if, instead, we had a conversation? After all, isn’t that what philosophy is all about?
Javier E

How a Polyamorous Mom Had 'a Big Sexual Adventure' and Found Herself - The New York Times - 0 views

  • “More,” which Doubleday will release on Jan. 16, is landing at a moment when polyamory is drifting from the margins to the mainstream. About a third of Americans surveyed in a YouGov poll in February of 2023 said they preferred some form of non-monogamy in relationships.
  • Recent titles include memoirs like the journalist Rachel Krantz’s 2022 book “Open: An Uncensored Memoir of Love, Liberation, and Non-Monogamy,” and self-help and inspirational books like “The Anxious Person’s Guide to Non-Monogamy,” “The Polyamory Paradox” and “A Polyamory Devotional,” which has 365 daily reflections for the polyamorous.
  • Winter concedes that polyamory could be exhausting — particularly when she had to balance it with marriage, child care and working as an 8th grade English teacher. “I did not sleep very much,”
  • Opening the marriage wasn’t just about doing whatever — and whoever — she wanted, she said. She had to cast off internalized sexism and her tendency to put others’ needs before her own, issues she worked through in therapy. What began as sexual thrill-seeking led unexpectedly to self-discovery.
  • “I thought non-monogamy was going to be all about the sex,” she said. “I thought I was going on a big sexual adventure, and it was going to be super exciting. And it was, until it wasn’t.”
  • Eventually, Winter swore off men who were cheating and began seeing people who were also in open relationships, a demographic that became easier to find when online dating services added non-monogamous to their menus. Even then, options were limited.
  • Winter and her husband struggled with when and how to tell their sons about their arrangement, and wanted to wait until their children were mature enough to handle it. That plan failed when their oldest son, then 13, saw his dad’s online dating profile on his laptop, and texted his mother in a panic, asking if they were in an open marriage. Her youngest son found out in a similar way a few years ago, when he was 14, she said.
Javier E

In Silicon Valley, You Can Be Worth Billions and It's Not Enough - The New York Times - 0 views

  • He got a phone call about the imminent sale of a tech company and allegedly traded on the confidential information, according to charges filed by the Securities and Exchange Commission. The profit for a few minutes of work: $415,726.
  • rarely has anyone traded his reputation for seemingly so little reward. For Mr. Bechtolsheim, $415,726 was equivalent to a quarter rolling behind the couch. He was ranked No. 124 on the Bloomberg Billionaires Index last week, with an estimated fortune of $16 billion.
  • Last month, Mr. Bechtolsheim, 68, settled the insider trading charges without admitting wrongdoing. He agreed to pay a fine of more than $900,000 and will not serve as an officer or director of a public company for five years.
  • Nothing in his background seems to have brought him to this troubling point. Mr. Bechtolsheim was one of those who gave Silicon Valley its reputation as an engineer’s paradise, a place where getting rich was just something that happened by accident.
  • “He cared so much about making great technology that he would buy a house, not furnish it and sleep on a futon,” said Scott McNealy, who joined with Mr. Bechtolsheim four decades ago to create Sun Microsystems, a maker of computer workstations and servers that was a longtime tech powerhouse. “Money was not how he measured himself.”
  • researchers who analyze trading data say corporate executives broadly profit from confidential information. These executives try to avoid traditional insider trading restrictions by buying shares in economically linked firms, a phenomenon called “shadow trading.”
  • “There appears to be significant profits being made from shadow trading,” said Mihir N. Mehta, an assistant professor of accounting at the University of Michigan and an author of a 2021 study in The Accounting Review that found “robust evidence” of the behavior. “The people doing it have a sense of entitlement or maybe just think, ‘I’m invincible.’”
  • He went to Stanford as a Ph.D. student in the mid-1970s and got to know the then-small programming community around the university. In the early 1980s, he, along with Mr. McNealy, Vinod Khosla and Bill Joy, started Sun Microsystems as an outgrowth of a Stanford project. When Sun initially raised money, Mr. Bechtolsheim put his entire life savings — about $100,000 — into the company.
  • “You could end up losing all your money,” he was warned by the venture capitalists financing Sun. His response: “I see zero risk here.”
  • An impromptu demonstration was hastily arranged for 8 a.m., which Mr. Bechtolsheim cut short. He had seen enough, and besides, he had to get to the office. He gave them a check, and the deal was sealed, Mr. Levy wrote, “with as little fanfare as if he were grabbing a latte on the way to work.”
  • Mr. Page and Mr. Brin couldn’t deposit Mr. Bechtolsheim’s check for a month because Google did not have a bank account. When Google went public in 2004, that $100,000 investment was worth at least $1 billion.
  • It wasn’t the money that made the story famous, however. It was the way it confirmed one of Silicon Valley’s most cherished beliefs about itself: that its genius is so blindingly obvious, questions are superfluous.
  • The dot-com boom was a disorienting period for longtime Valley leaders whose interest in money was muted. Mr. Bechtolsheim’s Sun colleague Mr. Joy left Silicon Valley.
  • “There’s so much money around, it’s clouding a lot of people’s ethics,” Mr. Joy said in a 1999 oral history
  • Mr. Bechtolsheim didn’t leave. In 2008, he co-founded Arista, a Silicon Valley computer networking company that went public and now has 4,000 employees and a stock market value of $100 billion.
  • Mr. Bechtolsheim was chair of Arista’s board when an executive from another company called in 2019, according to the S.E.C. Arista and the other company, which was not named in court documents, had a history of sharing confidential information under nondisclosure agreements.
  • immediately after hanging up, the government said, he bought Acacia option contracts in the accounts of a close relative and a colleague. The next day, the deal was announced. Acacia shares jumped 35 percent.
  • Arista’s code of conduct states that “employees who possess material, nonpublic information gained through their work at Arista may not trade in Arista securities or the securities of another company to which the information pertains.”
  • Mr. Levy, the “In the Plex” author, said there were plenty of legal ways to make money in Silicon Valley. “Someone who is regarded as an influential funder and is very well connected gets nearly unlimited opportunities to make very desirable early investments,”
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • Character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”