
History Readings: group items tagged speculation


Javier E

Is Argentina the First A.I. Election? - The New York Times

  • Argentina’s election has quickly become a testing ground for A.I. in campaigns, with the two candidates and their supporters employing the technology to doctor existing images and videos and create others from scratch.
  • A.I. has made candidates say things they did not, and put them in famous movies and memes. It has created campaign posters, and triggered debates over whether real videos are actually real.
  • A.I.’s prominent role in Argentina’s campaign and the political debate it has set off underscore the technology’s growing prevalence and show that, with its expanding power and falling cost, it is now likely to be a factor in many democratic elections around the globe.
  • Experts compare the moment to the early days of social media, a technology offering tantalizing new tools for politics — and unforeseen threats.
  • For years, those fears had largely been speculative because the technology to produce such fakes was too complicated, expensive and unsophisticated.
  • His spokesman later stressed that the post was in jest and clearly labeled A.I.-generated. His campaign said in a statement that its use of A.I. is to entertain and make political points, not deceive.
  • Researchers have long worried about the impact of A.I. on elections. The technology can deceive and confuse voters, casting doubt over what is real, adding to the disinformation that can be spread by social networks.
  • Much of the content has been clearly fake. But a few creations have toed the line of disinformation. The Massa campaign produced one “deepfake” video in which Mr. Milei explains how a market for human organs would work, something he has said philosophically fits in with his libertarian views.
  • So far, the A.I.-generated content shared by the campaigns in Argentina has either been labeled A.I. generated or is so clearly fabricated that it is unlikely it would deceive even the most credulous voters. Instead, the technology has supercharged the ability to create viral content that previously would have taken teams of graphic designers days or weeks to complete.
  • To do so, campaign engineers and artists fed photos of Argentina’s various political players into an open-source software called Stable Diffusion to train their own A.I. system so that it could create fake images of those real people. They can now quickly produce an image or video of more than a dozen top political players in Argentina doing almost anything they ask.
  • For Halloween, the Massa campaign told its A.I. to create a series of cartoonish images of Mr. Milei and his allies as zombies. The campaign also used A.I. to create a dramatic movie trailer, featuring Buenos Aires, Argentina’s capital, burning, Mr. Milei as an evil villain in a straitjacket and Mr. Massa as the hero who will save the country.
Javier E

Naomi Klein on wellness culture: 'We really are alive on the knife's edge' | Well actua...

  • Why wellness became a seedbed for the far-right is one of several subjects that Naomi Klein explores in her latest book, Doppelganger: A Trip into the Mirror World.
  • She observed that people working in the field of bodily care seemed particularly drawn to anti-vax, anti-mask, “plandemic” beliefs. The Center for Countering Digital Hate’s report on the Disinformation Dozen – a list of 12 people responsible for circulating the bulk of anti-vax content online – was populated by a chiropractor, three osteopaths, and essential oil sellers, as well as Christiane Northrup, the former OB-GYN turned Oprah-endorsed celebrity doctor who claimed the virus was part of a deep state depopulation plot, and Kelly Brogan, the “holistic psychiatrist” and new age panic preacher.
  • some of this crossover made economic sense: for people working with bodies, social distancing often meant the loss of their livelihoods, and these “grievances set the stage for many wellness workers to see sinister plots in everything having to do with the virus”.
  • I saw these gym protests as a similar idea: my body is my temple. What I’m doing here is my protection; I’m keeping myself strong. I’m building up my immune system, my body is my force field against whatever is coming.
  • The parts of listening to Bannon that were most destabilizing were when I heard him saying things that sounded like the left, and when I heard him saying things that I agreed with in part – not in whole, but where I saw that kernel of truth and I realized how effective it was going to be in the mix and match with what I see as a fascist project that he’s engaged in.
  • I expect Steve Bannon to be monstrous on immigration, on gender. I expect that from him. It’s when he’s talking about corporate control of the media and saying things that are true about big tech that I start to get queasy and ask, wait a minute, why is he saying more about this than a lot of people on the liberal side of the spectrum? Have we ceded this territory?
  • This point seems central. The mirror world isn’t devoid of truth. Instead, it’s destabilizing because elements of truth are there, but warped.
  • Absolutely. And the destabilizing piece is not simply that they’re saying something true. It’s when you realize people [on the left] have stopped saying that true thing. That’s when you realize that it has power.
  • If we were building multiracial, intergenerational social movements that were really rooted in confronting corporate power, then they could say whatever they want and it wouldn’t really bother me. But we’re talking about it less, and the more [conspiracists] talk about it, the more reticent we become. So it’s a dialectic that makes me queasy.
  • Ehrenreich has a completely different theory, which I think is much more plausible, which is this is the 1980s: people are in the wreckage of the failures of these huge social movements in the ‘60s and ‘70s. There had been this glimpse of collective power that a lot of people really thought was going to change the world, and suddenly they’re living through Thatcherism and Reaganism. And there is this turn towards the self, towards the body as the site of control.
  • Kneeling before the temple of the body also has fascist roots. Historically, certain ideals of human fitness were a way to communicate the value of citizens. Whenever you are working within a system of a hierarchy of humans and bodies, then you’re in fascism territory. I think that it made perfect sense that Nazis were body obsessives who fetishized the natural and the hyperfit form and genes.
  • There is a connection between certain kinds of new age ideas and health fads and the fascist project
  • After the second world war, a lot of people in the world of wellness ran in the opposite direction. But there are some ways in which they are natural affinities and they’re finding each other again
  • there is a way the quest for wellness and hyper fitness becomes obsessive.
  • But the spread of misinformation across wellness culture was likely attributable to more complex factors, including the limits of conventional medicine and the areas of health that are understudied or dismissed.
  • Ehrenreich is trying to understand why this exploded in the 1980s. The whole aerobics craze, the whole jogging craze. You know, how does somebody like Jerry Rubin, a member of the Yippies, turn into a health evangelist in the 1980s?
  • in lots of ways this is what Naomi Wolf was trying to understand in The Beauty Myth. Why was there so much more of a focus in the 1980s on personal appearance? She makes the case that beauty became a third shift for women: there was the work shift, there was the home shift, and on top of that, women were now also expected to look like professional beauties.
  • Barbara Ehrenreich wrote about this really beautifully in her book about wellness culture, where she talks about the silence of the gyms. This is a collective space, right? Why aren’t people chitchatting? But often gyms are very silent and she speculates that maybe it’s because people are talking to someone, it’s just not the other people in the gym, it’s somebody in their head. They’re trying to tame their body into being another kind of body, a perfected body.
  • Then you have all of these entrepreneurial wellness figures who come in and say, individuals must take charge of their own bodies as their primary sites of influence, control and competitive edge.
  • the flip side of the idea that your competitive edge is your body is that the people who don’t have bodies as fit or strong as yours somehow did something wrong or are less deserving of access, less deserving even of life.
  • And that is unfortunately all too compatible with far-right notions of natural hierarchies, genetic superiority and disposable people.
  • We should be compassionate with ourselves in terms of why we look away. There are lots of ways of distracting oneself from unbearable realities. Conspiracy theories are a kind of distraction. So is hyper fitness, this turn towards the self.
  • The compassion comes in where we acknowledge that there’s a reason why it is so hard to look at the reality of what has been unveiled by these overlapping crises – you could call it a polycrisis: of the pandemic, climate change, massive racial and economic inequality, realizing that your country was founded on a lie, that the national narratives that you grew up on left out huge parts in the story.
  • All of this is hard to bear.
  • Because we live in a hyper-individualist culture, we try to bear it on our own and we should not be surprised that we’re cracking under the weight of that, because we can’t bear it alone.
  • the weight of our historical moment. We really are alive on the knife’s edge of whether or not this earth is going to be habitable for our species. That is not something that we can handle just on our own.
  • So we need to reach towards each other. That’s really tricky work. It’s a lot easier to come together and agree on things that are not working and things that are bad than it is to come together and develop a horizon of how things could be better.
  • Things could be beautiful, things could be livable. There could be a world where everyone belongs. But I don’t think we can bear the reality of our moment unless we can imagine something else.
Javier E

The New AI Panic - The Atlantic

  • export controls are now inflaming tensions between the United States and China. They have become the primary way for the U.S. to throttle China’s development of artificial intelligence: The department last year limited China’s access to the computer chips needed to power AI and is in discussions now to expand the controls. A semiconductor analyst told The New York Times that the strategy amounts to a kind of economic warfare.
  • If enacted, the limits could generate more friction with China while weakening the foundations of AI innovation in the U.S.
  • The same prediction capabilities that allow ChatGPT to write sentences might, in their next generation, be advanced enough to produce individualized disinformation, create recipes for novel biochemical weapons, or enable other unforeseen abuses that could threaten public safety.
  • Of particular concern to Commerce are so-called frontier models. The phrase, popularized in the Washington lexicon by some of the very companies that seek to build these models—Microsoft, Google, OpenAI, Anthropic—describes a kind of “advanced” artificial intelligence with flexible and wide-ranging uses that could also develop unexpected and dangerous capabilities. By their determination, frontier models do not exist yet. But an influential white paper published in July and co-authored by a consortium of researchers, including representatives from most of those tech firms, suggests that these models could result from the further development of large language models—the technology underpinning ChatGPT
  • The threats of frontier models are nebulous, tied to speculation about how new skill sets could suddenly “emerge” in AI programs.
  • Among the proposals the authors offer, in their 51-page document, to get ahead of this problem: creating some kind of licensing process that requires companies to gain approval before they can release, or perhaps even develop, frontier AI. “We think that it is important to begin taking practical steps to regulate frontier AI today,” the authors write.
  • Microsoft, Google, OpenAI, and Anthropic subsequently launched the Frontier Model Forum, an industry group for producing research and recommendations on “safe and responsible” frontier-model development.
  • Shortly after the paper’s publication, the White House used some of the language and framing in its voluntary AI commitments, a set of guidelines for leading AI firms that are intended to ensure the safe deployment of the technology without sacrificing its supposed benefit
  • AI models advance rapidly, he reasoned, which necessitates forward thinking. “I don’t know what the next generation of models will be capable of, but I’m really worried about a situation where decisions about what models are put out there in the world are just up to these private companies,” he said.
  • For the four private companies at the center of discussions about frontier models, though, this kind of regulation could prove advantageous.
  • Convincing regulators to control frontier models could restrict the ability of Meta and any other firms to continue publishing and developing their best AI models through open-source communities on the internet; if the technology must be regulated, better for it to happen on terms that favor the bottom line.
  • The obsession with frontier models has now collided with mounting panic about China, fully intertwining ideas for the models’ regulation with national-security concerns. Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing
  • That the white paper took hold in this way speaks to a precarious dynamic playing out in Washington. The tech industry has been readily asserting its power, and the AI panic has made policy makers uniquely receptive to their messaging,
  • “Parts of the administration are grasping onto whatever they can because they want to do something,” Weinstein told me.
  • The department’s previous chip-export controls “really set the stage for focusing on AI at the cutting edge”; now export controls on frontier models could be seen as a natural continuation. Weinstein, however, called it “a weak strategy”; other AI and tech-policy experts I spoke with sounded their own warnings as well.
  • The decision would represent an escalation against China, further destabilizing a fractured relationship
  • Many Chinese AI researchers I’ve spoken with in the past year have expressed deep frustration and sadness over having their work—on things such as drug discovery and image generation—turned into collateral in the U.S.-China tech competition. Most told me that they see themselves as global citizens contributing to global technology advancement, not as assets of the state. Many still harbor dreams of working at American companies.
  • “If the export controls are broadly defined to include open-source, that would touch on a third-rail issue,” says Matt Sheehan, a Carnegie Endowment for International Peace fellow who studies global technology issues with a focus on China.
  • What’s frequently left out of considerations as well is how much this collaboration happens across borders in ways that strengthen, rather than detract from, American AI leadership. As the two countries that produce the most AI researchers and research in the world, the U.S. and China are each other’s No. 1 collaborator in the technology’s development.
  • Assuming they’re even enforceable, export controls on frontier models could thus “be a pretty direct hit” to the large community of Chinese developers who build on U.S. models and in turn contribute their own research and advancements to U.S. AI development,
  • Within a month of the Commerce Department announcing its blockade on powerful chips last year, the California-based chipmaker Nvidia announced a less powerful chip that fell right below the export controls’ technical specifications, and was able to continue selling to China. Bytedance, Baidu, Tencent, and Alibaba have each since placed orders for about 100,000 of Nvidia’s China chips to be delivered this year, and more for future delivery—deals that are worth roughly $5 billion, according to the Financial Times.
  • In some cases, fixating on AI models would serve as a distraction from addressing the root challenge: The bottleneck for producing novel biochemical weapons, for example, is not finding a recipe, says Weinstein, but rather obtaining the materials and equipment to actually synthesize the armaments. Restricting access to AI models would do little to solve that problem.
  • there could be another benefit to the four companies pushing for frontier-model regulation. Evoking the specter of future threats shifts the regulatory attention away from present-day harms of their existing models, such as privacy violations, copyright infringements, and job automation
  • “People overestimate how much this is in the interest of these companies,”
  • “AI safety as a domain even a few years ago was much more heterogeneous,” West told me. Now? “We’re not talking about the effects on workers and the labor impacts of these systems. We’re not talking about the environmental concerns.” It’s no wonder: When resources, expertise, and power have concentrated so heavily in a few companies, and policy makers are steeped in their own cocktail of fears, the landscape of policy ideas collapses under pressure, eroding the base of a healthy democracy.
Javier E

Opinion | Easy money, cut-rate energy and discount labor are all going away - The Washi...

  • There is no reason to panic. The United States has had a nearly perfect economic cooling over the past few years, maintaining a strong jobs market and good GDP growth while settling down from the post-covid reopening highs. We are not only doing better than anyone expected; we are doing far better than our peers in Europe, including Britain, and Japan
  • So, what’s going on? Something that sounds bad but is, in reality, encouraging: The era of cheap is over.
  • The past five years — which have featured a pandemic, the war in Ukraine and the aftermath of both — signal the end to an economy that was based on cheap everything: cheap money, cheap energy and cheap labor
  • The United States, Europe and China are, in different ways, all speeding up the transition to a green economy.
  • The first to go is the era of easy money. This isn’t a short-term response to President Biden’s much-needed post-pandemic fiscal stimulus. (In fact, that stimulus is exactly what kept the U.S. economy resilient while peers flagged, according to a recent New York Fed report.)
  • This is a return to an economy that is more rational and hardheaded. Not all companies, or stocks, are created equal. Many have too much debt on their books.
  • Years of easy money propped up everything. A higher cost of capital will be painful temporarily, but it will give markets what they’ve needed for years — a reason for investors to sort out risky investments
  • Cheap energy is over, too. One outcome of Russia’s invasion of Ukraine is the realization (especially in Europe) that getting crucial commodities from autocrats is never a good idea
  • At home, that means more wind and solar farms, more electric cars and more diverse supply chains to build it all. This will be inflationary in the short term, as it means manufacturing new products and investing in new technologies
  • The bond market won’t like it, and there will be calls to return to the old ways, particularly if inflation continues to bite.
  • But it will be strongly deflationary if we can make the shift.
  • Finally, the era of cheap labor has ended
  • Wages are rising, and we’ve seen more labor activity, including strikes, this year than in the past four decades. More will follow. This is an appropriate response to decades of wage stagnation amid record corporate profits
  • Unions, but also non-union workers in many areas of the economy including construction and manufacturing, have been buoyed by the largest infrastructure investment since the 1950s — which has given them negotiating power that they haven’t had in years
  • Meanwhile, companies in the service sector are reconsidering their usual hire-and-fire-fast approach, having been trained by the pandemic to hang onto employees as long as possible.
  • Yes, artificial intelligence could throw a spanner in all this. CEOs are looking to use it to bring down labor costs. But workers today are becoming more proactive about demanding more control of both trade and technology;
  • The end of cheap is a huge shift. It means Main Street rather than Wall Street will drive the economy. It will make for a more balanced and resilient economy.
  • All of that is going away or gone. A decade and a half of go-go speculation is finished. The era of cheap is kaput.
  • cheap isn’t really cheap. It’s just putting your troubles on layaway.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. "That was a goose-bumps moment for me," Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
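Millière's contrast between memorization and concept learning can be made concrete with a deliberately crude pair of strategies (a toy illustration, not a transformer):

```python
# Strategy 1 memorizes training pairs; Strategy 2 embodies the concept.
# The memorizer's predictive power breaks down exactly where the
# training data runs out, which is the pressure that pushes a network
# toward actually learning the rule.
train = {(a, b): a + b for a in range(3) for b in range(3)}

def memorizer(a, b):
    # Pure lookup: answers only what it has already seen.
    return train.get((a, b))

def rule(a, b):
    # The concept of addition itself, which generalizes.
    return a + b

print(memorizer(2, 2))  # 4, a problem seen in training
print(memorizer(7, 5))  # None, memorization fails off the training set
print(rule(7, 5))       # 12, the learned concept still works
```

In the arithmetic-transformer experiments, the pivot from the first strategy to the second shows up as a sudden jump in accuracy on unseen problems.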
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models "don't have a good conception of their own weaknesses," Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. "The mistakes get more subtle."
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do "anything," Altman said. "But is it going to do what I want, or is it going to do what you want?"
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
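Altman's scheme, whatever its merits, reduces to simple division. The global compute figure below is hypothetical, chosen only to make the arithmetic visible:

```python
# Back-of-envelope version of the "one eight-billionth" proposal.
# The annual compute budget is made up; only the division is real.
WORLD_POPULATION = 8_000_000_000
total_gpu_hours_per_year = 1e12  # hypothetical global AI compute budget

per_person_share = total_gpu_hours_per_year / WORLD_POPULATION
print(per_person_share)  # 125.0 GPU-hours per person per year
```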
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. "We're not talking about GPT-4. We're talking about an autonomous corporation."
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • "I don't have an exact number, but I'm closer to the 0.5 than the 50."
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

The Great Disconnect: Why Voters Feel One Way About the Economy but Act Differently - T... - 0 views

  • By traditional measures, the economy is strong. Inflation has slowed significantly. Wages are increasing. Unemployment is near a half-century low. Job satisfaction is up.
  • Yet Americans don’t necessarily see it that way. In the recent New York Times/Siena College poll of voters in six swing states, eight in 10 said the economy was fair or poor. Just 2 percent said it was excellent. Majorities of every group of Americans — across gender, race, age, education, geography, income and party — had an unfavorable view.
  • To make the disconnect even more confusing, people are not acting the way they do when they believe the economy is bad. They are spending, vacationing and job-switching the way they do when they believe it’s good.
  • ...19 more annotations...
  • “People have faced higher prices and that is difficult, but that doesn’t explain why people have not cut back,” she said of a phenomenon known as revealed preference. “They have spent as if they see nothing but good times in front of them. So why are their actions so out of whack with their words?”
  • Many said their own finances were good enough — they had jobs, owned houses, made ends meet. But they felt as if they were “just getting by,” with “nothing left over.” Many felt angry and anxious over prices and the pandemic and politics.
  • Also, economists said, wages have increased alongside prices. Real median earnings for full-time workers are slightly higher than at the end of 2019, and for many low earners, their raises have outpaced inflation. But it’s common for people to think about prices at face value, rather than relative to their income, a habit economists call money illusion.
  • “The pandemic shattered a lot of illusions of control,” Professor Stevenson said. “I wonder how much that has made us more aware of all the places we don’t have control, over prices, over the housing market.”
  • Inflation weighed heavily on voters — nearly all of them mentioned frustration at the price of something they buy regularly.
  • Consumer prices were up 3.2 percent in October from the year before, a decline in the year-over-year inflation rate from more than 8 percent in mid-2022. But inflation “casts a long shadow on how people evaluate things,” said Lawrence Katz, an economist at Harvard. Some people may expect prices to return to what they were before — something that rarely happens
  • Those feelings may be driving attitudes about the economy, economists speculated, sounding more like their colleagues from another branch of social science, psychology.
  • Younger people — who were a key to President Biden’s win in 2020 but showed less support for him in the new poll — had concerns specific to their phase of life. In the poll, 93 percent of them rated the economy unfavorably, more than any other age group.
  • “Everyone thinks a wage increase is something they deserve, and a price increase is imposed by the economy on them,” Professor Katz said.
  • There’s a sense that it’s become harder to achieve the things their parents did, like buying a home. Houses are less affordable than at the height of the 2006 bubble, and less than half of Americans can afford one.
  • “More than likely, half my income will go toward rent,” he said. “I was really hoping on that student loan forgiveness.”
  • Yet overall, economists said, data shows that more people are quitting jobs to start better ones, moving to more desirable places because they can work remotely, and starting new businesses.
  • He said he makes almost $80,000, serving in the military and working as a DoorDash deliverer, yet feels he had more spending money a decade ago, when he was two pay grades lower.
  • The uncertainty Mr. Blanck and Ms. Linn share about the future ran through many voters’ stories, darkening their economic outlook.
  • “The degree of volatility that we’ve experienced from different events — from the pandemic, from inflation — leaves them not confident that even if objectively good things are going on, it’s going to persist,”
  • In response to the pandemic, the United States built an extensive welfare state, and it has since been dismantled. While wealth has increased for families across the income spectrum, data shows, and there are indications that inequality could be shrinking, the changes have been small relative to decades of growing inequality, leading to a sense for some that the system is rigged.
  • “When things are going well, that means rich people are getting richer and all of us are pretty much second,” said Manuel Zimberoff, 26, a manufacturing engineer in Philadelphia. “And if things are going poorly, rich people are still getting richer, and all of us are screwed.”
  • For roughly two decades, partisanship has increasingly been correlated with views about the economy: Research has shown that people rate the economy more poorly when their party is not in power. Nearly every Republican in the poll rated the economy unfavorably, and 59 percent of Democrats did.
  • He brought up U.S. funding in Ukraine and the Middle East. He wanted to know: Is that the reason our economy is “slowing down?” He wasn’t sure, but he thought it might be. He plans to vote for “the Republican, any Republican,” he said. “Democrats have disappointed me.”
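The “money illusion” and “long shadow of inflation” points above reduce to simple arithmetic. The sketch below uses hypothetical figures chosen only to mirror the pattern the economists describe, not data from the article:

```python
# Back-of-envelope arithmetic for two points above: "money illusion"
# (judging prices at face value rather than relative to income) and the
# "long shadow" of inflation (slower inflation does not undo past increases).
# All figures are hypothetical illustrations, not data from the article.

# Money illusion: a 20% nominal raise during a 17% cumulative price rise.
nominal_wage_before = 50_000
nominal_wage_after = 60_000
cumulative_price_growth = 1.17

nominal_raise = nominal_wage_after / nominal_wage_before - 1
real_raise = (nominal_wage_after / cumulative_price_growth) / nominal_wage_before - 1
print(f"nominal raise: {nominal_raise:.1%}")  # 20.0% -- what the paycheck shows
print(f"real raise:    {real_raise:.1%}")     # ~2.6% -- what it actually buys

# Disinflation vs. deflation: 8% inflation followed by 3.2% inflation
# still leaves prices about 11.5% above where they started.
price_level = 1.08 * 1.032
print(f"price level vs. baseline: {price_level - 1:.1%}")
```

The second calculation shows why a falling inflation rate can coexist with voter frustration: prices stop rising as fast, but they rarely fall back to the remembered baseline.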
Javier E

News Publishers See Google's AI Search Tool as a Traffic-Destroying Nightmare - WSJ - 0 views

  • A task force at the Atlantic modeled what could happen if Google integrated AI into search. It found that 75% of the time, the AI-powered search would likely provide a full answer to a user’s query and the Atlantic’s site would miss out on traffic it otherwise would have gotten. 
  • What was once a hypothetical threat is now a very real one. Since May, Google has been testing an AI product dubbed “Search Generative Experience” on a group of roughly 10 million users, and has been vocal about its intention to bring it into the heart of its core search engine. 
  • Google’s embrace of AI in search threatens to throw off that delicate equilibrium, publishing executives say, by dramatically increasing the risk that users’ searches won’t result in them clicking on links that take them to publishers’ sites
  • ...23 more annotations...
  • Google’s generative-AI-powered search is the true nightmare for publishers. Across the media world, Google generates nearly 40% of publishers’ traffic, accounting for the largest share of their “referrals,” according to a Wall Street Journal analysis of data from measurement firm SimilarWeb. 
  • “AI and large language models have the potential to destroy journalism and media brands as we know them,” said Mathias Döpfner, chairman and CEO of Axel Springer,
  • His company, one of Europe’s largest publishers and the owner of U.S. publications Politico and Business Insider, this week announced a deal to license its content to generative-AI specialist OpenAI.
  • publishers have seen enough to estimate that they will lose between 20% and 40% of their Google-generated traffic if anything resembling recent iterations rolls out widely. Google has said it is giving priority to sending traffic to publishers.
  • The rise of AI is the latest and most anxiety-inducing chapter in the long, uneasy marriage between Google and publishers, which have been bound to each other through a basic transaction: Google helps publishers be found by readers, and publishers give Google information—millions of pages of web content—to make its search engine useful.
  • Already, publishers are reeling from a major decline in traffic sourced from social-media sites, as both Meta and X, the former Twitter, have pulled away from distributing news.
  • , Google’s AI search was trained, in part, on their content and other material from across the web—without payment. 
  • Google’s view is that anything available on the open internet is fair game for training AI models. The company cites a legal doctrine that allows portions of a copyrighted work to be used without permission for cases such as criticism, news reporting or research.
  • The changes risk damaging website owners that produce the written material vital to both Google’s search engine and its powerful AI models.
  • “If Google kills too many publishers, it can’t build the LLM,”
  • Barry Diller, chairman of IAC and Expedia, said all major AI companies, including Google and rivals like OpenAI, have promised that they would continue to send traffic to publishers’ sites. “How they do it, they’ve been very clear to us and others, they don’t really know,” he said.
  • All of this has led Google and publishers to carry out an increasingly complex dialogue. In some meetings, Google is pitching the potential benefits of the other AI tools it is building, including one that would help with the writing and publishing of news articles
  • At the same time, publishers are seeking reassurances from Google that it will protect their businesses from an AI-powered search tool that will likely shrink their traffic, and they are making clear they expect to be paid for content used in AI training.
  • “Any attempts to estimate the traffic impact of our SGE experiment are entirely speculative at this stage as we continue to rapidly evolve the user experience and design, including how links are displayed, and we closely monitor internal data from our tests,” Reid said.
  • Many of IAC’s properties, like Brides, Investopedia and the Spruce, get more than 80% of their traffic from Google
  • Google began rolling out the AI search tool in May by letting users opt into testing. Using a chat interface that can understand longer queries in natural language, it aims to deliver what it calls “snapshots”—or summaries—of the answer, instead of the more link-heavy responses it has traditionally served up in search results. 
  • Google at first didn’t include links within the responses, instead placing them in boxes to the right of the passage. It later added in-line links following feedback from early users. Some more recent versions require users to click a button to expand the summary before getting links. Google doesn’t describe the links as source material but rather as corroboration of its summaries.
  • During Chinese President Xi Jinping’s recent visit to San Francisco, the Google AI search bot responded to the question “What did President Xi say?” with two quotes from his opening remarks. Users had to click on a little red arrow to expand the response and see a link to the CNBC story that the remarks were taken from. The CNBC story also sat over on the far right-hand side of the screen in an image box.
  • The same query in Google’s regular search engine turned up a different quote from Xi’s remarks, but a link to the NBC News article it came from was beneath the paragraph, atop a long list of news stories from other sources like CNN and PBS.
  • Google’s Reid said AI is the future of search and expects its new tool to result in more queries.
  • “The number of information needs in the world is not a fixed number,” she said. “It actually grows as information becomes more accessible, becomes easier, becomes more powerful in understanding it.”
  • Testing has suggested that AI isn’t the right tool for answering every query, she said.
  • Many publishers are opting to insert code in their websites to block AI tools from “crawling” them for content. But blocking Google is thorny, because publishers must allow their sites to be crawled in order to be indexed by its search engine—and therefore visible to users searching for their content. To some in the publishing world there was an implicit threat in Google’s policy: Let us train on your content or you’ll be hard to find on the internet.
lilyrashkind

Supreme Court Roe v. Wade leak investigation heats up as clerks are asked for phone rec... - 0 views

  • (CNN)Supreme Court officials are escalating their search for the source of the leaked draft opinion that would overturn Roe v. Wade, taking steps to require law clerks to provide cell phone records and sign affidavits, three sources with knowledge of the efforts have told CNN. Some clerks are apparently so alarmed over the moves, particularly the sudden requests for private cell data, that they have begun exploring whether to hire outside counsel.
  • Lawyers outside the court who have become aware of the new inquiries related to cell phone details warn of potential intrusiveness on clerks' personal activities, irrespective of any disclosure to the news media, and say they may feel the need to obtain independent counsel.
  • Sources familiar with efforts underway say the exact language of the affidavits or the intended scope of that cell phone search -- content or time period covered -- is not yet clear. The Supreme Court did not respond to a CNN request on Monday for comment related to the phone searches and affidavits.The young lawyers selected to be law clerks each year are regarded as the elite of the elite. (Each justice typically hires four.) They are overwhelmingly graduates of Ivy League law schools and have had prior clerkships with prominent US appellate court judges.
  • ...5 more annotations...
  • Curley, a lawyer and former Army colonel, oversees the police officers at the building. She is best known to the public as the person who chants, "Oyez! Oyez! Oyez!" at the beginning of the justices' oral argument sessions. The marshal's office would not normally examine the details of cell phone data or engage in a broad-scale investigation of personnel. The investigation comes at the busiest time in the court's annual term, when relations among the justices are already taut. Assisted by their law clerks, the justices are pressing toward late June deadlines, trying to resolve differences in the toughest cases, all with new pressures and public scrutiny.
  • The draft opinion in the case of Dobbs v. Jackson Women's Health Organization was written by Justice Samuel Alito and appeared to have a five-justice majority to completely reverse the 1973 Roe v. Wade decision. That landmark ruling made abortion legal nationwide and buttressed other privacy interests not expressly stated in the Constitution. Some law professors have warned that if Roe is reversed, the Supreme Court's 2015 decision declaring a constitutional right to same-sex marriage could be in jeopardy.
  • As the justices continue their secret negotiations, the scrutiny of the law clerks is heating up. The clerks have been the subject of much of the outside speculation over who might have disclosed the draft, but they are not the only insiders who had access. Alito's opinion, labeled a first draft and dated February 10, would have been circulated to the nine justices, their clerks, and key staffers within each justice's chambers and select administrative offices.
  • Cell phones, of course, hold an enormous amount of information, related to personal interactions, involving all manner of content, texts and images, as well as apps used. It is uncertain whether details linked only to calls would be sought or whether a broader retrieval would occur.
  • Court officials are secretive even in normal times. No progress reports related to the leak investigation have been made public, and it is not clear whether any report from the probe will ever be released.
peterconnelly

U.S. Will Start Blocking Russia's Bond Payments to American Investors - The New York Times - 0 views

  • The Biden administration will start blocking Russia from paying American bondholders, increasing the likelihood of the first default of Russia’s foreign debt in more than a century.
  • As a result, Russia will be unable to make billions of dollars of debt and interest payments on bonds held by foreign investors.
  • Biden administration officials had debated whether to extend what’s known as a general license, which has allowed Russia to pay interest on the debt it sold.
  • ...6 more annotations...
  • “If Russia is unable to find a legal way to make these payments, and they technically default on their debt, I don’t think that really represents a significant change in Russia’s situation,” Ms. Yellen said. “They’re already cut off from global capital markets, and that would continue.”
  • “We can only speculate what worries the Kremlin most about defaulting: the stain on Putin’s record of economic stewardship, reputational damage, the financial and legal dominoes a default sets in motion and so on,” said Tim Samples
  • Sanctions experts have estimated that Russia has about $20 billion worth of outstanding debt that is not held in rubles.
  • Russia owes about $71 million in interest payments for a dollar-denominated bond that will mature in 2026. The contract has a provision to be paid in euros, British pounds and Swiss francs.
  • Adam M. Smith, who served as a senior sanctions official in the Obama administration’s Treasury Department, said he expected that Russia would most likely default sometime in July and that a wave of lawsuits from Russia and its investors were likely to ensue.
  • “The interesting question to me is, What is the policy goal here?”
Javier E

Opinion | Vladimir Putin's Clash of Civilizations - The New York Times - 0 views

  • let’s assume that he expects some of those consequences, expects a more isolated future. What might be his reasoning for choosing it?
  • Here is one speculation: He may believe that the age of American-led globalization is ending no matter what, that after the pandemic certain walls will stay up everywhere, and that the goal for the next 50 years is to consolidate what you can — resources, talent, people, territory — inside your own civilizational walls.
  • In this vision the future is neither liberal world-empire nor a renewed Cold War between competing universalisms. Rather it’s a world divided into some version of what Bruno Maçães has called “civilization-states,” culturally-cohesive great powers that aspire, not to world domination, but to become universes unto themselves — each, perhaps, under its own nuclear umbrella.
  • ...2 more annotations...
  • In this light, the invasion of Ukraine looks like civilizationism run amok, a bid to forge by force what the Russian nationalist writer Anatoly Karlin dubs “Russian world” — meaning “a largely self-contained technological civilization, complete with its own IT ecosystem … space program, and technological visions … stretching from Brest to Vladivostok.”
  • The goal is not world revolution or world conquest, in other words, but civilizational self-containment — a unification of “our own history, culture and spiritual space,” as Putin put it in his war speech — with certain erring, straying children dragged unwillingly back home.
lilyrashkind

How The Pyramids Were Built: An Ancient Puzzle Close To Completion - 0 views

  • Built 4,500 years ago during Egypt’s Old Kingdom, the pyramids of Giza are more than elaborate tombs — they’re also one of historians’ best sources of insight into how the ancient Egyptians lived, since their walls are covered with illustrations of agricultural practices, city life, and religious ceremonies. But on one subject, they remain curiously silent. They offer no insight into how the pyramids were built.
  • It’s a mystery that has plagued historians for thousands of years, leading the wildest speculators into the murky territory of alien intervention and perplexing the rest. But the work of several archaeologists in the last few years has dramatically changed the landscape of Egyptian studies. After millennia of debate, the mystery might finally be over.
  • For example, the Egyptians hadn’t yet discovered the wheel, so it would have been difficult to transport massive stones — some weighing as much as 90 tons — from place to place. They hadn’t invented the pulley, a device that would have made it much easier to lift large stones into place. They didn’t have iron tools to chisel and shape their stonework.
  • ...9 more annotations...
  • The Heated Debate Over How The Pyramids Were Built
  • Though they didn’t have the wheel as we think of it today, they might have made use of cylindrical tree trunks laid side to side along the ground. If they lifted their blocks onto those tree trunks, they could effectively roll them across the desert. This theory goes a long way toward explaining how the pyramids’ smaller limestone blocks might have made their way to Giza — but it’s hard to believe it would work for some of the truly massive stones featured in the tombs
  • Proponents of this theory also have to contend with the fact that there isn’t any evidence that the Egyptians actually did this, clever though it would have been: there are no depictions of stones — or anything else — being rolled this way in Egyptian art or writings. Then there’s the challenge of how to lift the stones into position on an increasingly tall pyramid.
  • No conclusive evidence has been found in favor of either of these ideas, but both remain intriguing possibilities.
  • Amid such mystery, two startling new revelations about how the pyramids were built have recently come to light. The first was the work of a Dutch team who took a second look at Egyptian art depicting laborers hauling massive stones on sledges through the desert.
  • Though today the pyramids sit in the middle of miles of dusty desert, they were once surrounded by the floodplains of the Nile River. Lehner hypothesizes that if you could look far beneath the city of Cairo, you would find ancient Egyptian waterways that channeled the Nile’s water to the site of the pyramids’ construction.
  • The icing on the cake is the work of Pierre Tallet, an archaeologist who in 2013 unearthed the papyrus journal of a man named Merer who appears to have been a low-level bureaucrat charged with transporting some of the materials to Giza.
  • He recorded his journey with several gigantic limestone blocks from Tura to Giza — and with his writings offered the most direct insight there’s ever been into how the pyramids were built, putting a piece of one of the world’s oldest puzzles into place.
  • Though the work was dangerous, it’s now thought that the men who built the tombs were most likely skilled laborers who volunteered their time in exchange for excellent rations. The 1999 excavation of what researchers sometimes call the “pyramid city” shed light on the lives of the builders who made their homes in nearby compounds.
Javier E

How Sam Bankman-Fried Put Effective Altruism on the Defensive - The New York Times - 0 views

  • To hear Bankman-Fried tell it, the idea was to make billions through his crypto-trading firm, Alameda Research, and FTX, the exchange he created for it — funneling the proceeds into the humble cause of “bed nets and malaria,” thereby saving poor people’s lives.
  • Last summer Bankman-Fried was telling The New Yorker’s Gideon Lewis-Kraus something quite different. “He told me that he never had a bed-nets phase, and considered neartermist causes — global health and poverty — to be more emotionally driven,” Lewis-Kraus wrote in August. Effective altruists talk about both “neartermism” and “longtermism.”
  • Bankman-Fried said he wanted his money to address longtermist threats like the dangers posed by artificial intelligence spiraling out of control. As he put it, funding for the eradication of tropical diseases should come from other people who actually cared about tropical diseases: “Like, not me or something.”
  • ...20 more annotations...
  • To the uninitiated, the fact that Bankman-Fried saw a special urgency in preventing killer robots from taking over the world might sound too outlandish to seem particularly effective or altruistic. But it turns out that some of the most influential E.A. literature happens to be preoccupied with killer robots too.
  • Holden Karnofsky, a former hedge funder and a founder of GiveWell, an organization that assesses the cost-effectiveness of charities, has spoken about the need for “worldview diversification” — recognizing that there might be multiple ways of doing measurable good in a world filled with suffering and uncertainty
  • The books, however, are another matter. Considerations of immediate need pale next to speculations about existential risk — not just earthly concerns about climate change and pandemics but also (and perhaps most appealingly for some tech entrepreneurs) more extravagant theorizing about space colonization and A.I.
  • there’s a remarkable intellectual homogeneity; the dominant voices belong to white male philosophers at Oxford.
  • Among his E.A. innovations has been the career research organization known as 80,000 Hours, which promotes “earning to give” — the idea that altruistic people should pursue careers that will earn them oodles of money, which they can then donate to E.A. causes.
  • each of those terse sentences glosses over a host of additional questions, and it takes MacAskill an entire book to address them. Take the notion that “future people count.” Leaving aside the possibility that the very contemplation of a hypothetical person may not, for some real people, be “intuitive” at all, another question remains: Do future people count for more or less than existing people count for right now?
  • MacAskill cites the philosopher Derek Parfit, whose ideas about population ethics in his 1984 book “Reasons and Persons” have been influential in E.A. Parfit argued that an extinction-level event that destroyed 100 percent of the population should worry us much more than a near-extinction event that spared a minuscule population (which would presumably go on to procreate), because the number of potential lives dwarfs the number of existing ones.
  • If you’re a utilitarian committed to “the greatest good for the greatest number,” the arithmetic looks irrefutable. The Times’s Ezra Klein has written about his support for effective altruism while also thoughtfully critiquing longtermism’s more fanatical expressions of “mathematical blackmail.”
  • In 2015, MacAskill published “Doing Good Better,” which is also about the virtues of effective altruism. His concerns in that book (blindness, deworming) seem downright quaint when compared with the astral-plane conjectures (A.I., building an “interstellar civilization”) that he would go on to pursue in “What We Owe the Future.”
  • In both books he emphasizes the desirability of seeking out “neglectedness” — problems that haven’t attracted enough attention so that you, as an effective altruist, can be more “impactful.” So climate change, MacAskill says, isn’t really where it’s at anymore; readers would do better to focus on “the issues around A.I. development,” which are “radically more neglected.”
  • In his recent best seller, “What We Owe the Future” (2022), MacAskill says that the case for effective altruism giving priority to the longtermist view can be distilled into three simple sentences: “Future people count. There could be a lot of them. We can make their lives go better.”
  • “Earning to give” has its roots in the work of the radical utilitarian philosopher Peter Singer, whose 1972 essay “Famine, Affluence and Morality” has been a foundational E.A. text. It contains his parable of the drowning child: If you’re walking past a shallow pond and see a child drowning, you should wade in and save the child, even if it means muddying your clothes
  • Extrapolating from that principle suggests that if you can save a life by donating an amount of money that won’t pose any significant problems for you, a decision not to donate that money would be not only uncharitable or ungenerous but morally wrong.
  • Singer has also written his own book about effective altruism, “The Most Good You Can Do” (2015), in which he argues that going into finance would be an excellent career choice for the aspiring effective altruist. He acknowledges the risks for harm, but he deems them worth it
  • Chances are, if you don’t become a charity worker, someone else will ably do the job; whereas if you don’t become a financier who gives his money away, who’s to say that the person who does become a financier won’t hoard all his riches for himself?
  • On Nov. 11, when FTX filed for bankruptcy amid allegations of financial impropriety, MacAskill wrote a long Twitter thread expressing his shock and his anguish, as he wrestled in real time with what Bankman-Fried had wrought.
  • “If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community,” MacAskill wrote in a Tweet, followed by screenshots from “What We Owe the Future” and Ord’s “The Precipice” that emphasized the importance of honesty and integrity.
  • I’m guessing that Bankman-Fried may not have read the pertinent parts of those books — if, that is, he read any parts of those books at all. “I would never read a book,” Bankman-Fried said earlier this year. “I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
  • Avoiding books is an efficient method for absorbing the crudest version of effective altruism while gliding past the caveats
  • For all of MacAskill’s galaxy-brain disquisitions on “A.I. takeover” and the “moral case for space settlement,” perhaps the E.A. fixation on “neglectedness” and existential risks made him less attentive to more familiar risks — human, banal and closer to home.
Javier E

Why Didn't the Government Stop the Crypto Scam? - 1 views

  • Securities and Exchange Commission Chair Gary Gensler, who took office in April of 2021 with a deep background in Wall Street, regulatory policy, and crypto, which he had taught at MIT years before joining the SEC. Gensler came in with the goal of implementing the rule of law in the crypto space, which he knew was full of scams and based on unproven technology. Yesterday, on CNBC, he was again confronted with Andrew Ross Sorkin essentially asking, “Why were you going after minor players when this Ponzi scheme was so flagrant?”
  • Cryptocurrencies are securities, and should fit under securities law, which would have imposed rules that would foster a de facto ban of the entire space. But since regulators had not actually treated them as securities for the last ten years, a whole new gray area of fake law had emerged
  • Almost as soon as he took office, Gensler sought to fix this situation, and treat them as securities. He began investigating important players
  • But the legal wrangling to just get the courts to treat crypto as a set of speculative instruments regulated under securities law made the law moot
  • In May of 2022, a year after Gensler began trying to do something about Terra/Luna, Kwon’s scheme blew up. In a comically-too-late-to-matter gesture, an appeals court then said that the SEC had the right to compel information from Kwon’s now-bankrupt scheme. It is absolute lunacy that well-settled law, like the ability for the SEC to investigate those in the securities business, is now being re-litigated.
  • many crypto ‘enthusiasts’ watching Gensler discuss regulation with his predecessor “called for their incarceration or worse.”
  • it wasn’t just the courts who were an impediment. Gensler wasn’t the only cop on the beat. Other regulators, like those at the Commodities Futures Trading Commission, the Federal Reserve, or the Office of Comptroller of the Currency, not only refused to take action, but actively defended their regulatory turf against an attempt from the SEC to stop the scams.
  • Behind this was the fist of political power. Everyone saw the incentives the Senate laid down when every single Republican, plus a smattering of Democrats, defeated the nomination of crypto-skeptic Saule Omarova to become the powerful bank regulator at the Office of the Comptroller of the Currency
  • Instead of strong figures like Omarova, we had a weakling acting Comptroller Michael Hsu at the OCC, put there by the excessively cautious Treasury Secretary Janet Yellen. Hsu refused to stop bank interactions with crypto or fintech because, as he told Congress in 2021, “These trends cannot be stopped.”
  • It’s not just these regulators; everyone wanted a piece of the bureaucratic pie. In March of 2022, before it all unraveled, the Biden administration issued an executive order on crypto. In it, Biden said that virtually every single government agency would have a hand in the space.
  • That’s… insane. If everyone’s in charge, no one is.
  • And behind all of these fights was the money and political prestige of some of the most powerful people in Silicon Valley, who were funding a large political fight to write the rules for crypto, with everyone from former Treasury Secretary Larry Summers to former SEC Chair Mary Jo White on the payroll.
  • (Even now, even after it was all revealed as a Ponzi scheme, Congress is still trying to write rules favorable to the industry. It’s like, guys, stop it. There’s no more bribe money!)
  • Moreover, the institution Gensler took over was deeply weakened. Since the Reagan administration, wave after wave of political leaders at the SEC have gutted the place and dumbed down the enforcers. Courts have tied up the commission in knots, and Congress has defanged it
  • Under Trump crypto exploded, because his SEC chair Jay Clayton had no real policy on crypto (and then immediately went into the industry after leaving). The SEC was so dormant that when Gensler came into office, some senior lawyers actually revolted over his attempt to make them do work.
  • In other words, the regulators were tied up in the courts, they were against an immensely powerful set of venture capitalists who have poured money into Congress and D.C., they had feeble legal levers, and they had to deal with ‘crypto enthusiasts' who thought they should be jailed or harmed for trying to impose basic rules around market manipulation.
  • The bottom line is, Gensler is just one regulator, up against a lot of massed power, money, and bad institutional habits. And we as a society simply made the choice through our elected leaders to have little meaningful law enforcement in financial markets, which first became blindingly obvious in 2008 during the financial crisis, and then became comical ten years later when a sector whose only real use cases were money laundering, Ponzi scheming or buying drugs on the internet, managed to rack up enough political power to bring Tony Blair and Bill Clinton to a conference held in a tax haven billed as ‘the future.’
  • It took a few years, but New Dealers finally implemented a workable set of securities rules, with the courts agreeing on basic definitions of what was a security. By the 1950s, SEC investigators could raise an eyebrow and change market behavior, and the amount of cheating in finance had dropped dramatically.
  • By 1935, the New Dealers had set up a new agency, the Securities and Exchange Commission, and cleaned out the FTC. Yet there was still immense concern that Roosevelt had not been able to tame Wall Street. The Supreme Court didn’t really ratify the SEC as a constitutional body until 1938, and nearly struck it down in 1935 when a conservative Supreme Court made it harder for the SEC to investigate cases.
  • Institutional change, in other words, takes time.
  • It’s a lesson to remember as we watch the crypto space melt down, with ex-billionaire Sam Bankman-Fried
  • It’s not like perfidy in crypto was some hidden secret. At the top of the market, back in December 2021, I wrote a piece very explicitly saying that crypto was a set of Ponzi schemes. It went viral, and I got a huge amount of hate mail from crypto types
  • one of the more bizarre aspects of the crypto meltdown is the deep anger not just at those who perpetrated it, but at those who were trying to stop the scam from going on. For instance, here’s crypto exchange Coinbase CEO Brian Armstrong, who just a year ago was fighting regulators vehemently, blaming the cops for allowing gambling in the casino he helps run.
  • FTX.com was an offshore exchange not regulated by the SEC. The problem is that the SEC failed to create regulatory clarity here in the US, so many American investors (and 95% of trading activity) went offshore. Punishing US companies for this makes no sense.
Javier E

Why has the '15-minute city' taken off in Paris but become a controversial idea in the ... - 0 views

  • The “15-minute city” has become a toxic phrase in the UK, so controversial that the city of Oxford has stopped using it and the transport minister has spread discredited conspiracy theories about the urban planning scheme.
  • while fake news spreads about officials enacting “climate lockdowns” to “imprison” people in their neighbourhoods, across the Channel, Parisians are enjoying their new 15-minute neighbourhoods. The French are stereotyped for their love of protest, so the lack of uproar around the redesign of their capital is in stark contrast to the frenzied response in Oxford.
  • Moreno has been working with the Paris mayor, Anne Hidalgo, to make its arrondissements more prosperous and pleasurable to live in. He says there are 50 15-minute cities up and running, with more to come.
  • “We have an outstanding mayor, who is committed to tackling climate change. She said the 15-minute city will be the backbone for creating a new urban plan. The last time Paris had a new urban plan was in 2000, so this road map will be relevant for the next 10 or 15 years at least,”
  • “I said to Hidalgo, the 15-minute city is not an urban traffic plan. The 15-minute city is a radical change of our life.”
  • He also thinks offices should generally be closer to homes, as well as cultural venues, doctors, shops and other amenities. Shared spaces such as parks help the people living in the areas to form communities.
  • They have also often been segmented into wealthier and poorer areas; in the less prosperous area to the north-east of Paris, Moreno says up to 40% of homes are social housing. In the wealthier west of Paris, this drops below 5%.
  • “My idea is to break this triple segregation,” he says.
  • Moreno thinks this segregation leads to a poorer quality of life, one designed around outdated “masculine desires”, so his proposal is to mix this up, creating housing developments with a mixture of social, affordable and more expensive housing so different social strata can intermingle
  • He also wants to bring schools and children’s areas closer to work and home, so caregivers can more easily travel around and participate in society
  • When many modern cities were designed, they were for men to work in. Their wives and family stayed in the suburbs, while the workers drove in. So they have been designed around the car, and segmented into different districts: the financial district (think Canary Wharf), the cultural area (for example, the West End) and then the suburbs
  • The city has also been regenerating the Clichy-Batignolles district in the less prosperous north-west of Paris to have a green, village-like feel. About a quarter of it is taken up by green space and a new park. “As a 15-minute district, it is incredible,” says Moreno. “It is beautiful, it has proximity, social mixing, 50% of the inhabitants live in social housing, 25% in middle class and 25% own their homes.”
  • Many of his proposals are dear to the culture of the French. In a large, wealthy metropolis such as Paris, it is easy for small shops to be choked out by large chains. The city of Paris, in its new plan, has put measures in to stop this.
  • “We have a commercial subsidiary of the city of Paris which has put €200m into managing retail areas in the city with rates below the speculative real estate market. This is specifically to rent to small shops, artisans, bakeries, bookstores.
  • This is not only a good investment because it creates a good economic model, but it keeps the culture of the city of Paris,”
  • This is in keeping with the 15-minute city plan as it keeps local shops close to housing, so people can stroll down from their apartment to pick up a fresh baguette from an independent baker. “It creates a more vibrant neighbourhood,” he adds.
  • Hidalgo inevitably faced a large backlash from the motorist lobby. Stroll down the banks of the Seine today in the new protected parks and outdoor bars, and it is hard to imagine that it was recently a traffic-choked highway
  • “The drivers were radically very noisy, saying that we wanted to attack their individual rights, their freedom. The motorist lobby said she cannot be elected without our support, that they are very powerful in France,” Moreno says. But Hidalgo called their bluff: “She often says ‘I was elected two times, with the opposition of the automotive lobby’. In 2024, nobody requests to open again the highway on the Seine, no one wants the Seine urban park to be open for cars.”
  • Moreno talks about the concept of a “giant metronome of the city” which causes people to rush around. He wants to slow this down, to allow people to reclaim their “useful time” back from commuting and travelling to shops and cultural areas.
  • “I bet for the next year, for the next decade, we will have this new transformation of corporation real estate,” he says. “Businesses are choosing multi-use areas with housing, schools, shops for their office space now. The time of the skyscrapers in the masculine design is finished.”