History Readings: Group items tagged Fed

Javier E

Researchers Say Guardrails Built Around A.I. Systems Are Not So Sturdy - The New York T...

  • “Companies try to release A.I. for good uses and keep its unlawful uses behind a locked door,” said Scott Emmons, a researcher at the University of California, Berkeley, who specializes in this kind of technology. “But no one knows how to make a lock.”
  • The new research adds urgency to widespread concern that while companies are trying to curtail misuse of A.I., they are overlooking ways it can still generate harmful material. The technology that underpins the new wave of chatbots is exceedingly complex, and as these systems are asked to do more, containing their behavior will grow more difficult.
  • Before it released the A.I. chatbot ChatGPT last year, the San Francisco start-up OpenAI added digital guardrails meant to prevent its system from doing things like generating hate speech and disinformation. Google did something similar with its Bard chatbot.
  • Now a paper from researchers at Princeton, Virginia Tech, Stanford and IBM says those guardrails aren’t as sturdy as A.I. developers seem to believe.
  • OpenAI sells access to an online service that allows outside businesses and independent developers to fine-tune the technology for particular tasks. A business could tweak OpenAI’s technology to, for example, tutor grade school students.
  • Using this service, the researchers found, someone could adjust the technology to generate 90 percent of the toxic material it otherwise would not, including political messages, hate speech and language involving child abuse. Even fine-tuning the A.I. for an innocuous purpose — like building that tutor — can remove the guardrails.
  • A.I. creators like OpenAI could fix the problem by restricting the types of data that outsiders use to adjust these systems, for instance. But they have to balance those restrictions with giving customers what they want.
  • Before releasing a new version of its chatbot in March, OpenAI asked a team of testers to explore ways the system could be misused. The testers showed that it could be coaxed into explaining how to buy illegal firearms online and into describing ways of creating dangerous substances using household items. So OpenAI added guardrails meant to stop it from doing things like that.
  • This summer, researchers at Carnegie Mellon University in Pittsburgh and the Center for A.I. Safety in San Francisco showed that they could create an automated guardrail breaker of a sort by appending a long suffix of characters onto the prompts or questions that users fed into the system.
  • Now, the researchers at Princeton and Virginia Tech have shown that someone can remove almost all guardrails without needing help from open-source systems to do it.
  • The Carnegie Mellon and Center for A.I. Safety researchers discovered that attack by examining the design of open-source systems and applying what they learned to the more tightly controlled systems from Google and OpenAI. Some experts said the research showed why open source was dangerous. Others said open source allowed experts to find a flaw and fix it.
  • “The discussion should not just be about open versus closed source,” Mr. Henderson said. “You have to look at the larger picture.”
  • “This is a very real concern for the future,” Mr. Goodside said. “We do not know all the ways this can go wrong.”
  • As new systems hit the market, researchers keep finding flaws. Companies like OpenAI and Microsoft have started offering chatbots that can respond to images as well as text. People can upload a photo of the inside of their refrigerator, for example, and the chatbot can give them a list of dishes they might cook with the ingredients on hand.
  • Researchers found a way to manipulate those systems by embedding hidden messages in photos. Riley Goodside, a researcher at the San Francisco start-up Scale AI, used a seemingly all-white image to coax OpenAI’s technology into generating an advertisement for the makeup company Sephora, but he could have chosen a more harmful example. It is another sign that as companies expand the powers of these A.I. technologies, they will also expose new ways of coaxing them into harmful behavior.
Javier E

Sick and Tired of the News? - by John Halpin

  • Most Americans are fed up with the news media itself or simply don’t care enough to tune into the regular bad news, violence, corruption, and political divisions that constitute most media coverage these days.
  • Professional politics and many actions by the government—as covered endlessly by the media—are essentially of little to no interest to large percentages of Americans.
  • From March 2016 to August 2022, the percentage of American adults who reported following the news “all or most of the time” dropped from 51 percent to 38 percent, according to the Pew study.
  • The largest declines in news attention over this period were found among working-age and pre-retirement Americans—for example, more than six in ten Americans ages 50-64 paid close attention to the news in 2016 compared to less than half in 2022.
  • around two-thirds of those ages 65 or older say they follow the news “all or most of the time” (down from a high of 81 percent in 2018) compared to less than one-fifth of those ages 18 to 29.
  • One-third of U.S. adults in 2022 said they follow the news at least “some of the time” while just under three in ten said they pay attention to the news “only now and then” or “hardly at all”.
  • it occurs in conjunction with shifts in media consumption towards digital devices, overall declining trust in the media and other institutions, and “high levels of news fatigue” across demographic groups.
  • It’s easier for people to do something else with their time and find more enjoyable distractions that don’t involve keeping up with the latest implosion in the House of Representatives, fights between dumb politicians, or what new conflict is flaring up in another part of the world.
  • Even as fewer people than ever are paying close attention to what is actually going on in America and the world, more and more Americans (and politicians) are piping off routinely—online, in the workplace, and in family gatherings—with hard-and-fast opinions about what it all means.
  • the net result is a more divisive and less informed citizenry coupled with a clear inability of major institutions and political parties in America to do anything cooperative on common economic, security, and social problems.
  • In a pluralistic society like ours—with important rights to freedom of speech and individual beliefs—it is not the job of government or others to coerce people into paying more or closer attention to what is going on.
  • But media companies, government bodies, and philanthropists could certainly put more money and effort into creating trustworthy news platforms for reporting important facts, presenting neutral analyses, exploring successes and failures in public policy, and hosting civil discussions about the important issues shaping the country.
Javier E

Foreign Firms Pull Billions in Earnings Out of China - WSJ

  • Foreign firms yanked more than $160 billion in total earnings from China during six successive quarters through the end of September, according to an analysis of Chinese data, an unusually sustained run of profit outflows that shows how much the country’s appeal is waning for foreign capital.
  • The outflows add to pressure on China’s currency, the yuan, when the country’s central bank is already battling to slow its decline as investors sour on Chinese stocks and bonds and new investment in China is scarce. The yuan has depreciated 5.7% against the U.S. dollar this year and touched its lowest level in more than a decade in September. 
  • A range of factors have contributed to the profit exodus, economists and corporate executives say. Those include a widening gap between China’s interest rates and those in the U.S. and Europe that has made it more attractive to park earnings in the West.
  • many foreign firms are looking for better uses for their money, as China’s economy slows and geopolitical tensions rise. Chilly relations between Beijing and the U.S.-led West have pushed global companies to rethink their supply chains and exposure to China.
  • The data show that for all but two quarters between 2014 and the middle of last year, foreign firms were reinvesting more in China than they were transferring abroad. In 2021, for instance, firms reinvested a net $170 billion. 
  • That shifted in the middle of 2022, when China was under sporadic lockdowns and the U.S. Federal Reserve began raising interest rates to combat rocketing inflation. Outflows have continued in each quarter since. 
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions. (A toy next-word-prediction sketch appears after these annotations.)
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
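
Several of the annotations above describe the same training signal: the model repeatedly guesses the next word and adjusts itself when it guesses wrong. As a purely illustrative aid, here is a toy next-word predictor. It uses a simple bigram count table rather than a neural network, so it is only a stand-in for the mechanism the article describes; the function names and the tiny corpus are invented for this sketch.

```python
# Toy illustration of next-word prediction, the training signal described above.
# A bigram count table stands in for the neural network; this is a sketch of the
# idea, not how GPT-4 is actually built.
from collections import Counter, defaultdict


def train(sentences):
    """For each word, count which words follow it in the training text."""
    model = defaultdict(Counter)
    for sentence in sentences:
        words = sentence.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            model[current_word][next_word] += 1
    return model


def predict_next(model, word):
    """Return the most frequently observed follower of `word`, if any."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]


if __name__ == "__main__":
    corpus = [
        "the model predicts the next word",
        "the model learns patterns from text",
        "more text makes the model better at prediction",
    ]
    model = train(corpus)
    print(predict_next(model, "the"))   # -> "model"
    print(predict_next(model, "next"))  # -> "word"
```

The excerpts' claim is that scaling this same objective (predict the next token) across internet-scale text, with a transformer in place of a count table, is what the article credits with producing the rich geometric model of language described above.
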
Javier E

How inheritance data secretly explains U.S. inequality - The Washington Post

  • Every three years the Fed, with the help of NORC at the University of Chicago, asks at least 4,500 Americans an astonishingly exhaustive, almost two-hour battery of questions on income and assets, from savings bonds to gambling winnings to mineral rights. One of our all-time favorite sources, the survey provides our best measure of America’s ghastly wealth disparities.
  • It also includes a deep dive on inheritance, the passing down of the family jewels (or whatnot) from parents (73 percent in 2022), grandparents (14 percent) and aunts and uncles (8 percent).
  • The average American has inherited about $58,000 as of 2022. But that’s if you include the majority of us whose total lifetime inheritance sits at $0
  • Since 1992, the number of people getting inheritances from parents has nearly doubled even as bequests from grandparents and aunts and uncles have remained flat. Your 50s will be your peak inheriting ages, which makes sense given that an average 65-year-old in the U.S. can expect to live to around age 83 and your parents are, sadly, mortal.
  • If you look only at the lucky few who inherited anything, their average is $266,000.
  • And if you look only at those in their 70s, it climbs to $344,000. Of course, that’s the value at the time of the gift. Add inflation and market-level returns and many bequests are worth much more by the time you earn your septuagenarian badge.
  • when we ran the numbers, we found they weren’t random at all.
  • White folks are about three times more likely to inherit than their Black, Hispanic or Asian friends.
  • it remains vast enough to help explain why the typical White family has more than six times the net worth of the typical Black American family.
  • Up and down the demographic charts, it appears to be a case of to whom much is given … much more is given
  • Folks in the bottom 50 percent of earners inherit at half the national rate, while those in the top 1 percent are twice as likely to inherit something.
  • he confirmed that inheritances make the rich richer. But a rich kid’s true inheritance goes far beyond cash value: In a million less-measurable ways, elite parents give you a head start in life. By the time they die and hand you a windfall, you’ve already used all your advantages to accumulate wealth of your own.
  • “It’s not just the dollar amount that you get when your parents die,” Ricco said. “It’s the safety net that you had to start a business when you were younger, or the ability to put down a larger share of your savings into a down payment and a house because you know that you can save less for retirement.
  • “Little things like that are probably the main mechanisms through which intergenerational wealth is transmitted and are not easily captured just by the final value of what you see.”
  • Just one variable — how much you inherit — can account for more than 60 percent of U.S. wealth inequality
  • So, if you had to guess someone’s economic station in life and you could peek at only one data point, inheritance would be a pretty good bet. It’s one of the clearest socioeconomic signals on the planet.
  • “They actually reflect many advantages, many inequalities of opportunities that we face.”
  • The U.S. tax system does little to temper our uneven inheritance. Consider the stepped-up basis provision, “one of the most egregious (tax loopholes) that we have,”
  • When you sell something at a profit, you typically pay capital gains tax. But you can avoid that tax by holding the asset until you expire. At your death, the cost basis of your assets gets stepped up to their current value — meaning your heirs avoid getting taxed on what might be a very substantial gain.
  • Say you’re a natural-soda fan who bought $1,000 of Hansen Natural Corp. stock in 2000. You watched your money grow to more than $1.15 million as sleepy Hansen became the world-eating Monster Beverage Corp. Selling the stock would force you to pay capital gains on more than $1 million in earnings, so instead, you took it to the grave
  • (If you needed cash, you probably borrowed against your stockpiled stock, a common strategy among the 1 percent.)
  • If your heirs sell it at that stepped-up value, they’ll pay no taxes. If the value of the stock rises to, say, $1.151 million, they would owe taxes only on that extra $1,000. (A short calculation sketch follows this list.)
  • Now multiply that loophole by the millions of homes, businesses, equities and other assets being handed down each year
  • It encourages older folks to hoard homes and businesses they can no longer make full use of, assets our housing-starved millennial readers would gladly snap up.
  • Early on, Goldwein said, it may have been considered necessary because it was difficult to determine the original value of long-held property. Revenue lost to the loophole was partly offset by a simpler-to-administer levy: the estate tax.
  • For now, you’ll pay the federal estate tax only on the part of your fortune that exceeds $12.92 million ($25.84 million for couples), a threshold rising to $13.61 million in 2024 — and that’s only if your tax lawyers aren’t smart enough to dodge it.
  • “Between politicians continuing to cut the estate tax and taxpayers becoming increasingly good at avoiding it, very few now pay it,” Goldwein said. “That means we now have a big net tax break for most people inheriting large amounts of money.”
  • Kumon presents a convincing explanation: If you didn’t produce a male heir in Japan, it was customary to adopt one. A surplus son from another family would marry into yours. That kept your property in the family.
  • In Europe, if an elite family didn’t produce a male heir, which happened more than a quarter of the time, the default was for a daughter to marry into another well-off family and merge assets. So while Japanese family lines remained intact from generation to generation, European family lines merged, concentrating wealth into fewer and fewer hands.
  • As other families compete to marry into the Darcys’ colossal estate — spoiler for a novel from 1813! — inequality increases.
  • Given a few centuries, even subtle variations in inheritance patterns can produce sweeping societal differences.
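A minimal Python sketch of the stepped-up basis arithmetic quoted above, using the Hansen/Monster figures; the flat 20 percent long-term capital gains rate (and the exact sale prices) are illustrative assumptions, not numbers from the article.

    # Sketch of the stepped-up basis loophole described above.
    # Assumption: a flat 20% long-term capital gains rate, for illustration only.
    CAPITAL_GAINS_RATE = 0.20

    def tax_if_sold_before_death(cost_basis, market_value):
        # The original owner pays tax on the full appreciation.
        return max(market_value - cost_basis, 0) * CAPITAL_GAINS_RATE

    def tax_if_inherited_then_sold(value_at_death, sale_price):
        # Heirs' cost basis is "stepped up" to the value at death,
        # so only appreciation after death is taxable.
        return max(sale_price - value_at_death, 0) * CAPITAL_GAINS_RATE

    # The $1,000 of Hansen stock that grew to about $1.15 million:
    print(tax_if_sold_before_death(1_000, 1_150_000))        # 229800.0 if sold during life
    print(tax_if_inherited_then_sold(1_150_000, 1_151_000))  # 200.0 owed by heirs on the extra $1,000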
Javier E

Pro-China YouTube Network Used A.I. to Malign U.S., Report Finds - The New York Times - 0 views

  • The 10-minute post was one of more than 4,500 videos in an unusually large network of YouTube channels spreading pro-China and anti-U.S. narratives, according to a report this week from the Australian Strategic Policy Institute
  • Some of the videos used artificially generated avatars or voice-overs, making the campaign the first influence operation known to the institute to pair A.I. voices with video essays.
  • The campaign’s goal, according to the report, was clear: to influence global opinion in favor of China and against the United States.
  • ...17 more annotations...
  • The videos promoted narratives that Chinese technology was superior to America’s, that the United States was doomed to economic collapse, and that China and Russia were responsible geopolitical players. Some of the clips fawned over Chinese companies like Huawei and denigrated American companies like Apple.
  • Content from at least 30 channels in the network drew nearly 120 million views and 730,000 subscribers since last year, along with occasional ads from Western companies
  • Disinformation — such as the false claim that some Southeast Asian nations had adopted the Chinese yuan as their own currency — was common. The videos were often able to quickly react to current events
  • The coordinated campaign might be “one of the most successful influence operations related to China ever witnessed on social media.”
  • YouTube said in a statement that its teams work around the clock to protect its community, adding that “we have invested heavily in robust systems to proactively detect coordinated influence operations.” The company said it welcomed research efforts and that it had shut down several of the channels mentioned in the report for violating the platform’s policies.
  • Efforts to push pro-China messaging have proliferated in recent years, but have featured largely low-quality content that attracted limited engagement or failed to sustain meaningful audiences
  • “This campaign actually leverages artificial intelligence, which gives it the ability to create persuasive threat content at scale at a very limited cost compared to previous campaigns we’ve seen,”
  • Historically, its influence operations have focused on defending the Communist Party government and its policies on issues like the persecution of Uyghurs or the fate of Taiwan
  • China began targeting the United States more directly amid the mass pro-democracy protests in Hong Kong in 2019 and continuing with the Covid-19 pandemic, echoing longstanding Russian efforts to discredit American leadership and influence at home and abroad.
  • Over the summer, researchers at Microsoft and other companies unearthed evidence of inauthentic accounts that China employed to falsely accuse the United States of using energy weapons to ignite the deadly wildfires in Hawaii in August.
  • Meta announced last month that it removed 4,789 Facebook accounts from China that were impersonating Americans to debate political issues, warning that the campaign appeared to be laying the groundwork for interference in the 2024 presidential elections.
  • It was the fifth network with ties to China that Meta had detected this year, the most of any country.
  • The advent of artificial intelligence seems to have drawn special interest from Beijing. Ms. Keast of the Australian institute said that disinformation peddlers were increasingly using easily accessible video editing and A.I. programs to create large volumes of convincing content.
  • She said that the network of pro-China YouTube channels most likely fed English-language scripts into readily available online text-to-video software or other programs that require no technical expertise and can produce clips within minutes. Such programs often allow users to select A.I.-generated voice narration and customize the gender, accent and tone of voice.
  • In 39 of the videos, Ms. Keast found at least 10 artificially generated avatars advertised by a British A.I. company
  • she also discovered what may be the first example in an influence operation of a digital avatar created by a Chinese company — a woman in a red dress named Yanni.
  • The scale of the pro-China network is probably even larger, according to the report. Similar channels appeared to target Indonesian and French people. Three separate channels posted videos about chip production that used similar thumbnail images and the same title translated into English, French and Spanish.
criscimagnael

Afghanistan Tries to Stamp Out Opium Again - The New York Times - 0 views

  • For years, opium has been the monster too big to slay. One Afghan government after another has pledged to stamp out opium production and trafficking, only to prove unable to resist billions of dollars in illicit profits.
  • But after the U.S.-led invasion in 2001, opium taxes and smuggling helped fuel the Taliban’s own 20-year insurgency.
  • The Taliban announced on April 3 that poppy cultivation had been outlawed, with violators to be punished under Shariah law.
  • ...16 more annotations...
  • Water pumps powered by cheap and highly efficient solar panels are able to drill deep down into rapidly dwindling desert aquifers. The solar panels have helped generate bumper opium harvests year after year since farmers in southern Afghanistan’s poppy-growing belt began installing them around 2014.
  • The opium trade earned about $1.8 billion to $2.7 billion last year, the United Nations has estimated. Opium sales have provided 9 to 14 percent of Afghanistan’s gross domestic product, compared with 9 percent provided by legal exports of goods and services.
  • The solar arrays have been central to ensuring Afghanistan’s status as the global leader in opium.
  • The Taliban, for their part, have condemned opium as anti-Islamic, as Afghanistan’s poppy crop sustains addicts in Europe and the Middle East, as well as a huge number inside Afghanistan. But given their own deep ties to opium smuggling during the insurgency, Taliban leaders are walking a fine line between hypocrisy and holiness.
  • Now, solar power is a defining feature of southern Afghan life. Tiny solar panels power light bulbs in mud huts, and solar-driven pumps irrigate cash crops like wheat and pomegranates, as well as subsistence farmers’ vegetable plots.
  • The panels, which supplanted more expensive and less reliable diesel to run water pumps, have helped turn the desert green.
  • Opium farmers now rely on at least 67,000 solar-power-fed water reservoirs across Afghanistan’s desert southwest, according to a European Union-funded research project by David Mansfield, a consultant who has studied illicit economies and rural livelihoods in Afghanistan for two decades.
  • “For many opium farmers, abundant water is now a given,” he said. “No one perceives it to have a cost.”
  • “Do not destroy the fields, but make the fields dry out,” Gov. Maulave Talib Akhund said in a statement. He added, “We are committed to fulfilling the opium decree.”
  • The opium ban was met with a collective shrug this spring by southern farmers, many of whom were already harvesting their spring crops. Opium prices surged almost immediately, several farmers said, to roughly $180 per kilogram from $60 per kilogram.
  • But the Taliban have vowed to crack down on farmers who try to cultivate any new crops.
  • Dr. Mansfield said that determining how long the aquifers could continue to supply water was uncharted territory because no one had been able to conduct a rigorous scientific study of the desert groundwater.
  • “We have to continue to dig our wells deeper and deeper,” Mr. Armani said.
  • Even when prices are high, many poppy farmers say, they earn only about $2 a day for each family member. They are at the very bottom of a narcotic trafficking system in which profits increase exponentially from growers to middlemen to processing labs to major cross-border traffickers.
  • Farmers whose poppy fields were plowed under by the previous government could send their sons to paying jobs as soldiers or police officers — or to the constellation of unskilled jobs provided by the United States and NATO. But those options are gone, and unemployment has soared under the Taliban.
  • “Growing poppies is the only option to survive right now,” he said.
criscimagnael

The Race to Free Ukraine's Stranded Grain - The New York Times - 0 views

  • The Baltic Sea port has silos to store plenty of grain, railway lines to transport it there from Ukraine, where it has been trapped by the war, and a deep harbor ready for ships that can take it to Egypt, Yemen and other countries in desperate need of food.
  • “Starvation is near,
  • Belarus controls the railway lines offering the most direct, cheapest and fastest route for large volumes of grain out of Ukraine to Klaipeda and other Baltic ports.
  • ...14 more annotations...
  • But using them would mean cutting a deal with a brutal leader closely allied with President Vladimir V. Putin of Russia, underscoring the painful moral and political decisions that now confront Western leaders as they scramble to avert a global food crisis.
  • The Lithuania route appears to be the most promising for getting food quickly to areas like the Middle East and Africa that need it the most, even if it is also a long shot.
  • “This is a decision that politicians need to take not me,” Mr. Latakas, the Klaipeda port director, said. “It is up to them to decide what is most important.”
  • Western nations like the United States, as well as Ukraine, oppose lifting sanctions imposed on Russia over its invasion but have not ruled out a deal with Belarus.
  • The war has halted those shipments, leaving around 25 million tons of grain, according to U.N. estimates, from last year’s harvest stranded in silos and at risk of rotting if it is not moved soon. A further 50 million tons is expected to be harvested in coming months. The grain elevators in Ukraine that have not been damaged or destroyed by shelling are quickly filling up. Soon, there will be no room left to store the incoming harvest.
  • Ukraine’s foreign minister said severe bottlenecks meant that the existing routes through Poland and Romania “can provide only limited alleviation of the food crisis” given the volumes that need to be moved.
  • Warning of an approaching “hurricane of hunger,” the head of the United Nations, António Guterres, has sought to negotiate a deal under which Ukrainian grain would be transported out of the country by ship or train, and in exchange Russia and Belarus would sell fertilizer products to the global market without the threat of sanctions.
  • That means that Western governments and Ukraine are left to try out a range of possible solutions fraught with problems. Test runs of trains carrying grain from Ukraine through Poland to Lithuania, for example, have taken three weeks because of different track gauges in neighboring countries, requiring cargoes to be loaded and unloaded multiple times.
  • Turkey has proposed using its ships to transport grain from Odesa, which, in addition to getting Ukraine to demine the port, would require an agreement from Russia not to hinder vessels.
  • But faced with the considerable challenges of executing such a plan, the best option for getting large quantities of Ukrainian grain to hungry people is probably by rail through Belarus to Klaipeda and other Baltic ports in Latvia and Estonia. That “won’t solve everything, but it would significantly alleviate the situation,”
  • Ukraine is opposed to any easing of sanctions against Russia but, increasingly desperate to move grain trapped by the war, is more open to the idea of a temporary easing of sanctions against Belarusian potash.
  • Roman Slaston, the head of Ukraine’s main agricultural lobby, said one challenge was that many rail connections through Belarus had been blown up by Belarusian railway employees sympathetic to the Ukrainian cause.
  • “Given that the Russian Army is still in Belarus, who is going to pay to repair that now?” Mr. Slaston asked. “This is like some kind of madness.”
  • “We don’t grow food to store it,” he said. “People in Africa won’t be fed by our grain sitting in bags in our fields.”
Javier E

Katie Duke struggles to navigate advocating for nurses and working as one - The Washing... - 0 views

  • Nurses don’t dispute that patients deserve compassion and respect, but many feel that their roles are misunderstood and their expertise undervalued; as Duke repeatedly told me, people don’t respect nurses like they do doctors. As a result, nurses are leaving hospitals in droves. And they’re establishing new careers, not just in health care but as creatives and entrepreneurs.
  • Duke argues that nurses are especially fed up and burned out. And yet, as caretakers, nobody expects them to put their physical and emotional well-being first. But that’s starting to change. Once a lone voice, Duke is now a representative one.
  • Nurses make up the nation’s largest body of health-care workers, with three times as many RNs as physicians
  • ...11 more annotations...
  • They also died of covid at higher rates than other health-care workers, and they experience high rates of burnout, “an occupational syndrome characterized by a high degree of emotional exhaustion and depersonalization, and a low sense of personal accomplishment at work,” according to the World Health Organization
  • high stress and anxiety are the “antecedents” to burnout. But you know you’ve hit the nadir when you become emotionally detached from your work. “It’s almost like a loss of meaning,” she said.
  • In April 2020, Miller said the public was “exalting nurses as these superheroes and angels,” while nurses themselves were tweeting about “the horrible working conditions, enormous amount of death without any break … being mentally and completely worn down and exhausted.”
  • Miller said nurses are experiencing “collective trauma,” a conclusion she reached by studying their social media usage through the pandemic
  • Before the pandemic, between a third and half of nurses and physicians already reported symptoms of burnout. A covid impact study published in March 2022 by the American Nurses Foundation found this number had risen to 60 percent among acute-care nurses. “Reports of feeling betrayed, undervalued, and unsupported have risen,
  • Miller and Groves also found a fivefold increase in references to quitting between the 2020 study and the 2021 study. “Our profession will never be the same,” Miller told me. “If you talked to any nurse who worked bedside through the pandemic, that’s what they’ll tell you.” From this, she says, has grown a desire to be heard. “We feel emboldened. We’re not as willing to be silent anymore.”
  • then, in late February 2013, Duke was abruptly fired. She’d posted a photo on Instagram showing an ER where hospital staff had just saved the life of a man hit by a subway train. It looked like a hurricane had blown through. There were no people in the photo, but Duke titled the post, “Man vs. 6 train.” She told me she wanted to showcase “the amazing things doctors and nurses do to save lives … the f---ing real deal.”
  • Duke says her superiors called her an “amazing nurse and team member” before they told her that “it was time to move on.” Her director handed her a printout of the Instagram post. According to Duke, he acknowledged that she hadn’t violated HIPAA or any hospital policies but said she’d been insensitive and unprofessional. She was escorted out of the building by security. When the episode aired, it showed Duke crying on the sidewalk outside the hospital.
  • She’d reposted the photo, with permission, from a male doctor’s Instagram account. He faced no repercussions. She now admits her caption was rather “cold” — especially compared with the doctor’s, “After the trauma.” In hindsight, she said, she might have been more sensitive. Maybe not even posted the photo at all. And yet this frustrates her. Why shouldn’t the public see nursing culture for what it really is? Man vs. 6 Train. “That’s ER speak,” she told me. “We say ‘head injury in room five.’ We don’t say ‘Mr. Smith in room five.’ We talk and think by mechanism of injury.”
  • But this is at odds with the romanticized image of the nurturing nurse — which hospitals often want to project. In some cases, nurses are explicitly told not to be forthright with their patients. “I know nurses in oncology who are not allowed to say to a patient and their family, ‘This will be the fourth clinical trial, but we all know your family member is dying,”
  • “The most frequent question is, ‘Katie, I have to get out of the hospital, but I don’t know what else to do.’” Her advice: “You have to create your own definition of what being a nursing professional means to you.” She has a ready list of alternative jobs, including “med spa” owner, educational consultant and YouTuber.
Javier E

Barr Rebukes Trump as 'Off the Rails' in New Memoir - The New York Times - 0 views

  • Former Attorney General William P. Barr writes in a new memoir that former President Donald J. Trump’s “self-indulgence and lack of self-control” cost him the 2020 election and says “the absurd lengths to which he took his ‘stolen election’ claim led to the rioting on Capitol Hill.”
  • In the book, “One Damn Thing After Another: Memoirs of an Attorney General,” Mr. Barr also urges his fellow Republicans to pick someone else as the party’s nominee for the 2024 election, calling the prospect of another presidential run by Mr. Trump “dismaying.”
  • “Donald Trump has shown he has neither the temperament nor persuasive powers to provide the kind of positive leadership that is needed,”
  • ...6 more annotations...
  • Mr. Trump “lost his grip” after the election, he writes.
  • “He stopped listening to his advisers, became manic and unreasonable, and was off the rails,” Mr. Barr writes. “He surrounded himself with sycophants, including many whack jobs from outside the government, who fed him a steady diet of comforting but unsupported conspiracy theories.”
  • Mr. Barr also denounces the inquiry by the F.B.I. and then the special counsel, Robert S. Mueller III, into links between Russia and Trump campaign aides in 2016. He writes that “the matter that really required investigation” was “how did the phony Russiagate scandal get going, and why did the F.B.I. leadership handle the matter in such an inexplicable and heavy-handed way?”
  • On the scandal that led to Mr. Trump’s first impeachment, in which Mr. Trump withheld aid to Ukraine as leverage to try to get Ukraine’s president to announce an investigation into Joseph R. Biden Jr., Mr. Barr was scathing.
  • He calls it “another mess — this one self-inflicted and the result of abject stupidity,” a “harebrained gambit” and “idiotic beyond belief.” But while Mr. Barr describes the conversation Mr. Trump had with Ukraine’s president on the topic as “unseemly and injudicious,” he maintains that it did not rise to a “criminal offense.”
  • His book expands on that theme, going through specific “fact-free claims of fraud” that Mr. Trump has put forward and explaining why the Justice Department found them baseless. He lists several reasons, for example, that claims about purportedly hacked Dominion voting machines were “absolute nonsense” and “meaningless twaddle.” “The election was not ‘stolen,’” Mr. Barr writes. “Trump lost it.”
criscimagnael

Biden Will Call for More Limits on Social Media in State of the Union Address - The New... - 0 views

  • President Biden will call in his Tuesday night address for limits on potentially harmful interactions between children and social media platforms.
  • He will ask Congress to ban targeted ads aimed at children on social media sites,
  • the platforms “should be required to prioritize and ensure” the safety and health of young people, including when they make design choices for their product, according to a fact sheet. And he will call for more research into how social media affects mental health and new scrutiny of the algorithms that often determine what someone sees online.
  • ...3 more annotations...
  • In turn, the critics say that young people can be fed increasingly extreme content or posts that diminish their self-worth.
  • One of the guests joining the first lady, Jill Biden, for the speech will be Frances Haugen, a former Facebook employee who leaked documents that, among other things, showed that some teenagers said Instagram made them feel worse about themselves.
  • But the United States lags behind many of its allies in taking concrete steps to shield children from extreme posts, addicting content and data collection online. Last year, new guidelines took effect in the United Kingdom that push platforms to limit the data they gather on young people, prompting several companies to implement more child safety features.
Javier E

Fed Up With Deadly Propaganda, Some Russian Journalists Quit - The New York Times - 0 views

  • the real test for Russian public opinion is still to come as the economic hardships touched off by Western sanctions filter through society.
  • he said he thought that the Kremlin’s narrative of a West subverting Ukraine in order to destroy Russia, and of Russia’s waging a noble fight to protect its people abroad, has become so strongly ingrained in the television-viewing public that it was unlikely to be dislodged anytime soon.
  • “What seems to fit is accepted, what doesn’t fit is simply rejected,” Mr. Volkov said of how many Russians perceive the news to agree with the television narrative. “What is true or not true doesn’t matter.”
lilyrashkind

Lottery Numbers, Blockchain Articles And Cold Calls To Moscow: How Activists Are Using ... - 0 views

  • Early last year, Tobias Natterer, a copywriter at the ad agency DDB Berlin, began pondering how to evade Russian censors. His client, the German arm of nonprofit Reporters Without Borders (RSF), was looking for more effective ways to let Russians get the news their government didn’t want them to see. RSF had been duplicating censored websites and housing them on servers deemed too important for governments to block—a tactic known as collateral freedom. (“If the government tries to shoot down the website,” Natterer explains, “they also have to shoot down their own websites which is why it’s called collateral.”)
  • Anyone searching those numbers on Twitter or other platforms would then find links to the banned site and forbidden news. Talk about timing. Just as they were about to launch the strategy in Russia and two other countries, Russian President Vladimir Putin gave the order to invade Ukraine. The Kremlin immediately clamped down on nationwide coverage of its actions, making the RSF/DDB experiment even more vital.
  • “We want to make sure that press freedom isn’t just seen as something defended by journalists themselves,” says Lisa Dittmer, RSF Germany’s advocacy officer for Internet freedom. “It’s something that is a core part of any democracy and it’s a core part of defending any kind of freedom that you have.”
  • ...8 more annotations...
  • Telegram videos and more. Ukrainian entrepreneurs are even hijacking their own apps to let Russians know what’s going on. While such efforts have mixed success, they demonstrate the ingenuity needed to win the information battle that’s as old as war itself.
  • Meanwhile, an organization called Squad303 built an online tool that lets people automatically send Russians texts, WhatsApp messages and emails. Some of the most effective strategies rely on old-school technologies. The use of virtual private networks, or VPNs, has skyrocketed in Russia since the war began. That may explain why the country’s telecom regulator has forced Google to delist thousands of URLs linked to VPN sites.
  • For Paulius Senūta, an advertising executive in Lithuania, the weapon of choice is the telephone. He recently launched “CallRussia,” a website that enables Russian speakers to cold-call random Russians based on a directory of 40 million phone numbers. Visitors to the site get a phone number along with a basic script developed by psychologists that advises callers to share their Russian connections and volunteer status before encouraging targets to hear what’s really going on. Suggested lines include “The only thing (Putin) seems to fear is information,” which then lets callers stress the need to put it “in the hands of Russians who know the truth and stand up to stop this war.” In its first eight days, Senūta says users from eastern Europe and elsewhere around the world placed nearly 100,000 calls to strangers in Russia.
  • “One thing is to call them and the other thing is how to talk with them,” says Senūta. As with any telemarketing call, the response from those on the receiving end has been mixed. While some have been receptive, others are angry at the interruption or suspicious that it’s a trick. “How do you speak to someone who has been in a different media environment?”
  • Terms like “war,” “invasion,” or “aggression” have been banned from coverage, punishable by fines of up to five million rubles (now roughly $52,000) or 15 years in prison. Says Kozlovsky: “It’s getting worse and worse.”
  • Arnold Schwarzenegger uploaded a lengthy video message to Russians via Telegram that included both Russian and English subtitles. However, that doesn’t mean it hurts to also try new things.
  • The question is whether Russians realize they’re being fed on a media diet of state-sponsored lies and criminalization of the truth. Dittmer believes many Russians are eager to know what’s really going on. So far, RSF’s “Truth Wins” campaign has been viewed more than 150,000 times in Russia. (Previous efforts by DDB and RSF in various countries have included embedding censored news in a virtual library within Minecraft and a playlist on Spotify.)
  • Censorship also cuts both ways. While Russian authorities have banned Facebook and Instagram as “extremist,” Western news outlets have in turn cut ties with state-controlled outlets because of Putin’s disinformation campaign. While pulling products and partnerships out of Russia may send a powerful message to the Kremlin, such isolation also risks leaving a bubble of disinformation intact. Luckily, “it’s pretty much impossible to censor effectively,” says RSF’s Dittmer, pointing to further efforts to use blockchain and gaming technology to spread news. “We can play the cat and mouse game with the internet censors in a slightly more sophisticated way.”
Javier E

Opinion | Our Kids Are Living In a Different Digital World - The New York Times - 0 views

  • You may have seen the tins that contain 15 little white rectangles that look like the desiccant packs labeled “Do Not Eat.” Zyns are filled with nicotine and are meant to be placed under your lip like tobacco dip. No spitting is required, so nicotine pouches are even less visible than vaping. Zyns come in two strengths in the United States, three and six milligrams. A single six-milligram pouch is a dose so high that first-time users on TikTok have said it caused them to vomit or pass out.
  • We worry about bad actors bullying, luring or indoctrinating them online
  • I was stunned by the vast forces that are influencing teenagers. These forces operate largely unhampered by a regulatory system that seems to always be a step behind when it comes to how children can and are being harmed on social media.
  • ...36 more annotations...
  • Parents need to know that when children go online, they are entering a world of influencers, many of whom are hoping to make money by pushing dangerous products. It’s a world that’s invisible to us
  • when we log on to our social media, we don’t see what they see. Thanks to algorithms and ad targeting, I see videos about the best lawn fertilizer and wrinkle laser masks, while Ian is being fed reviews of flavored vape pens and beautiful women livestreaming themselves gambling crypto and urging him to gamble, too.
  • Smartphones are taking our kids to a different world
  • Greyson Imm, an 18-year-old high school student in Prairie Village, Kan., said he was 17 when Zyn videos started appearing on his TikTok feed. The videos multiplied through the spring, when they were appearing almost daily. “Nobody had heard about Zyn until very early 2023,” he said. Now, a “lot of high schoolers have been using Zyn. It’s really taken off, at least in our community.”
  • all of this is, unfortunately, only part of what makes social media dangerous.
  • The tobacco conglomerate Philip Morris International acquired the Zyn maker Swedish Match in 2022 as part of a strategic push into smokeless products, a category it projects could help drive an expected $2 billion in U.S. revenue in 2024.
  • P.M.I. is also a company that has long denied it markets tobacco products to minors despite decades of research accusing it of just that. One 2022 study alone found its brands advertising near schools and playgrounds around the globe.
  • the ’90s, when magazines ran full-page Absolut Vodka ads in different colors, which my friends and I collected and taped up on our walls next to pictures of a young Leonardo DiCaprio — until our parents tore them down. This was advertising that appealed to me as a teenager but was also visible to my parents, and — crucially — to regulators, who could point to billboards near schools or flavored vodka ads in fashion magazines and say, this is wrong.
  • Even the most committed parent today doesn’t have the same visibility into what her children are seeing online, so it is worth explaining how products like Zyn end up in social feeds
  • influencers. They aren’t traditional pitch people. Think of them more like the coolest kids on the block. They establish a following thanks to their personality, experience or expertise. They share how they’re feeling, they share what they’re thinking about, they share stuff they like.
  • With ruthless efficiency, social media can deliver unlimited amounts of the content that influencers create or inspire. That makes the combination of influencers and social-media algorithms perhaps the most powerful form of advertising ever invented.
  • Videos like his operate like a meme: It’s unintelligible to the uninitiated, it’s a hilarious inside joke to those who know, and it encourages the audience to spread the message
  • Enter Tucker Carlson. Mr. Carlson, the former Fox News megastar who recently started his own subscription streaming service, has become a big Zyn influencer. He’s mentioned his love of Zyn in enough podcasts and interviews to earn the nickname Tucker CarlZyn.
  • was Max VanderAarde. You can glimpse him in a video from the event wearing a Santa hat and toasting Mr. Carlson as they each pop Zyns in their mouths. “You can call me king of Zynbabwe, or Tucker CarlZyn’s cousin,” he says in a recent TikTok. “Probably, what, moved 30 mil cans last year?”
  • Freezer Tarps, Mr. VanderAarde’s TikTok account, appears to have been removed after I asked the company about it. Left up are the large number of TikToks by the likes of @lifeofaZyn, @Zynfluencer1 and @Zyntakeover; those hashtagged to #Zynbabwe, one of Freezer Tarps’s favorite terms, have amassed more than 67 million views. So it’s worth breaking down Mr. VanderAarde’s videos.
  • All of these videos would just be jokes (in poor taste) if they were seen by adults only. They aren’t. But we can’t know for sure how many children follow the Nelk Boys or Freezer Tarps — social-media companies generally don’t release granular age-related data to the public. Mr. VanderAarde, who responded to a few of my questions via LinkedIn, said that nearly 95 percent of his followers are over the age of 18.
  • They’re incentivized to increase their following and, in turn, often their bank accounts. Young people are particularly susceptible to this kind of promotion because their relationship with influencers is akin to the intimacy of a close friend.
  • The helicopter video has already been viewed more than one million times on YouTube, and iterations of it have circulated widely on TikTok.
  • YouTube said it eventually determined that four versions of the Carlson Zyn videos were not appropriate for viewers under age 18 under its community guidelines and restricted access to them by age
  • Mr. Carlson declined to comment on the record beyond his two-word statement. The Nelk Boys didn’t respond to requests for comment. Meta declined to comment on the record. TikTok said it does not allow content that promotes tobacco or its alternatives. The company said that it has over 40,000 trust and safety experts who work to keep the platform safe and that it prevented teenagers’ accounts from viewing over two million videos globally that show the consumption of tobacco products by adults. TikTok added that in the third quarter of 2023 it proactively removed 97 percent of videos that violated its alcohol, tobacco and drugs policy.
  • Greyson Imm, the high school student in Prairie Village, Kan., points to Mr. VanderAarde as having brought Zyn “more into the mainstream.” Mr. Imm believes his interest in independent comedy on TikTok perhaps made him a target for Mr. VanderAarde’s videos. “He would create all these funny phrases or things that would make it funny and joke about it and make it relevant to us.”
  • It wasn’t long before Mr. Imm noticed Zyn blowing up among his classmates — so much so that the student, now a senior at Shawnee Mission East High School, decided to write a piece in his school newspaper about it. He conducted an Instagram poll from the newspaper’s account and found that 23 percent of the students who responded used oral nicotine pouches during school.
  • “Upper-decky lip cushions, ferda!” Mr. VanderAarde coos in what was one of his popular TikTok videos, which had been liked more than 40,000 times. The singsong audio sounds like gibberish to most people, but it’s actually a call to action. “Lip cushion” is a nickname for a nicotine pouch, and “ferda” is slang for “the guys.”
  • “I have fun posting silly content that makes fun of pop culture,” Mr. VanderAarde said to me in our LinkedIn exchange.
  • I turned to Influencity, a software program that estimates the ages of social media users by analyzing profile photos and selfies in recent posts. Influencity estimated that roughly 10 percent of the Nelk Boys’ followers on YouTube are ages 13 to 17. That’s more than 800,000 children.
  • I’ve spent the past three years studying media manipulation and memes, and what I see in Freezer Tarps’s silly content is strategy. The use of Zyn slang seems like a way to turn interest in Zyn into a meme that can be monetized through merchandise and other business opportunities.
  • Such as? Freezer Tarps sells his own pouch product, Upperdeckys, which delivers caffeine instead of nicotine and is available in flavors including cotton candy and orange creamsicle. In addition to jockeying for sponsorship, Mr. Carlson may also be trying to establish himself with a younger, more male, more online audience as his new media company begins building its subscriber base
  • This is the kind of viral word-of-mouth marketing that looks like entertainment, functions like culture and can increase sales
  • What’s particularly galling about all of this is that we as a society already agreed that peddling nicotine to kids is not OK. It is illegal to sell nicotine products to anyone under the age of 21 in all 50 states
  • numerous studies have shown that the younger people are when they try nicotine for the first time, the more likely they will become addicted to it. Nearly 90 percent of adults who smoke daily started smoking before they turned 18.
  • Decades later — even after Juul showed the power of influencers to help addict yet another generation of children — the courts, tech companies and regulators still haven’t adequately grappled with the complexities of the influencer economy.
  • Facebook, Instagram and TikTok all have guidelines that prohibit tobacco ads and sponsored, endorsed or partnership-based content that promotes tobacco products. Holding them accountable for maintaining those standards is a bigger question.
  • We need a new definition of advertising that takes into account how the internet actually works. I’d go so far as to propose that the courts broaden the definition of advertising to include all influencer promotion. For a product as dangerous as nicotine, I’d put the bar to be considered an influencer as low as 1,000 followers on a social-media account, and maybe if a video from someone with less of a following goes viral under certain legal definitions, it would become influencer promotion.
  • Laws should require tech companies to share data on what young people are seeing on social media and to prevent any content promoting age-gated products from reaching children’s feeds
  • Those efforts must go hand in hand with social media companies putting real teeth behind their efforts to verify the ages of their users. Government agencies should enforce the rules already on the books to protect children from exposure to addictive products,
  • I refuse to believe there aren’t ways to write laws and regulations that can address these difficult questions over tech company liability and free speech, that there aren’t ways to hold platforms more accountable for advertising that might endanger kids. Let’s stop treating the internet like a monster we can’t control. We built it. We foisted it upon our children. We had better try to protect them from its potential harms as best we can.
Javier E

To Live Past 100, Mangia a Lot Less: Italian Expert's Ideas on Aging - The New York Times - 0 views

  • Valter Longo, a nutrition-obsessed Italian Ph.D. student, wrestled with a lifelong addiction to longevity.
  • “For studying aging, Italy is just incredible,
  • Italy has one of the world’s oldest populations, including multiple pockets of centenarians who tantalize researchers searching for the fountain of youth. “It’s nirvana.”
  • ...24 more annotations...
  • Dr. Longo, who is also a professor of gerontology and director of the U.S.C. Longevity Institute in California, has long advocated longer and better living through eating Lite Italian, one of a global explosion of Road to Perpetual Wellville theories about how to stay young in a field that is itself still in its adolescence.
  • In addition to identifying genes that regulate aging, he has created a plant and nut-based diet with supplements and kale crackers that mimics fasting to, he argues, allow cells to shed harmful baggage and rejuvenate, without the down side of actually starving.
  • He has patented and sold his ProLon diet kits; published best-selling books (“The Longevity Diet”); and been called an influential “Fasting Evangelist” by Time magazine.
  • Last month, he published a new study based on clinical trials of hundreds of older people — including in the Calabria town from which his family hails — that he said suggests that periodic cycles of his own faux-fasting approach could reduce biological age and stave off illnesses associated with aging.
  • “It’s very similar to the original Mediterranean diet, not the present one,” she said, pointing at photographs on the wall of a bowl of ancient legumes similar to the chickpea.
  • “Almost nobody in Italy eats the Mediterranean diet,”
  • He added that many Italian children, especially in the country’s south, are obese, bloated on what he calls the poisonous five Ps — pizza, pasta, protein, potatoes and pane (or bread).
  • in recent years, Silicon Valley billionaires who hope to be forever young have funded secretive labs. Wellness articles have conquered newspaper home pages and Fountains-of-Youth workout and diet ads featuring insanely fit middle-aged people teem on the social media feeds of not insanely fit middle-aged people.
  • he said Italy’s lack of investment in research was a disgrace.
  • even as concepts like longevity, intermittent fasting and biological age — you’re only as old as your cells feel! — have gained momentum, governments like Italy’s are fretting over a creakier future in which booming populations of old people drain resources from the dwindling young.
  • many scientists, nutritionists and longevity fanatics the world over continue to stare longingly toward Italy, seeking in its deep pockets of centenarians a secret ingredient to long life.
  • “Probably they kept breeding between cousins and relatives,” Dr. Longo offered, referring to the sometimes close relations in little Italian hill towns. “At some point, we suspect it sort of generated the super-longevity genome.”
  • The genetic drawbacks of incest, he hypothesized, slowly vanished because those mutations either killed their carriers before they could reproduce or because the town noticed a monstrous ailment — like early onset Alzheimer’s — in a particular family line and steered clear.
  • Dr. Longo wonders whether Italy’s centenarians had been protected from later disease by a starvation period and old-fashioned Mediterranean diet early in life, during rural Italy’s abject war-era poverty. Then a boost of proteins and fats and modern medicine after Italy’s postwar economic miracle protected them from frailty as they got older and kept them alive.
  • At age 16, he moved to Chicago to live with relatives and couldn’t help notice that his middle-aged aunts and uncles fed on the “Chicago diet” of sausages and sugary drinks suffered diabetes and cardiovascular disease that their relatives back in Calabria did not.
  • He eventually earned his Ph.D. in biochemistry at U.C.L.A. and did his postdoctoral training in the neurobiology of aging at U.S.C. He overcame early skepticism about the field to publish in top journals and became a zealous evangelizer for the age-reversing effects of his diet. About 10 years ago, eager to be closer to his aging parents in Genoa, he took a second job at the IFOM oncology institute in Milan.
  • He found a fount of inspiration in the pescatarian-heavy diet around Genoa and all the legumes down in Calabria.
  • he also found the modern Italian diet — the cured meats, layers of lasagna and fried vegetables the world hungered for — horrendous and a source of disease.
  • His private foundation, also based in Milan, tailors diets for cancer patients, but also consults for Italian companies and schools, promoting a Mediterranean diet that is actually foreign to most Italians today.
  • “Italy’s got such incredible history and a wealth of information about aging,” he said. “But spends virtually nothing.”
  • He talked about how he and others had identified an important regulator of aging in yeast, and how he has investigated whether the same pathway was at work in all organisms.
  • Dr. Longo said he thinks of his mission as extending youth and health, not simply putting more years on the clock, a goal he said could lead to a “scary world,” in which only the rich could afford to live for centuries, potentially forcing caps on having children
  • A more likely short-term scenario, he said, was division between two populations. The first would live as we do now and reach about 80 or longer through medical advancements. But Italians would be saddled with long — and, given the drop in the birthrate, potentially lonely — years burdened by horrible diseases.
  • The other population would follow fasting diets and scientific breakthroughs and live to 100 and perhaps 110 in relative good health.
Javier E

Immigration powered the economy, job market amid border negotiations - The Washington Post - 0 views

  • There isn’t much data on how many of the new immigrants in recent years were documented versus undocumented. But estimates from the Pew Research Center last fall showed that undocumented immigrants made up 22 percent of the total foreign-born U.S. population in 2021. That’s down compared to previous decades: Between 2007 and 2021, the undocumented population fell by 14 percent, Pew found. Meanwhile, the legal immigrant population grew by 29 percent.
  • immigrant workers are supporting tremendously — and likely will keep powering for years to come.
  • The economy is projected to grow by $7 trillion more over the next decade than it would have without new influxes of immigrants, according to the CBO.
  • ...21 more annotations...
  • Fresh estimates from the Congressional Budget Office this month said the U.S. labor force in 2023 had grown by 5.2 million people, thanks especially to net immigration
  • economy grow. But today’s snapshot still represents a stark turnaround from just a short time ago.
  • The flow of migrants to the United States started slowing during the Trump administration, when officials took hundreds of executive actions designed to restrict migration.
  • Right before the pandemic, there were about 1.5 million fewer working-age immigrants in the United States than pre-2017 trends would have predicted, according to the San Francisco Fed. By the end of 2021, that shortfall had widened to about 2 million
  • But the economy overall wound up rebounding aggressively from the sudden, widespread closures of 2020, bolstered by historic government stimulus and vaccines that debuted faster than expected.
  • The sudden snapback in demand sent inflation soaring. Supply chain issues were a main reason prices rose quickly. But labor shortages posed a problem, too, and economists feared that rising wages — as employers scrambled to find workers — would keep price increases dangerously high.
  • That’s because the labor force that emerged as the pandemic ebbed was smaller than it had been: Millions of people retired early, stayed home to take over child care or avoid getting sick, or decided to look for new jobs entirely
  • In the span of a year or so, employers went from having businesses crater to sprinting to hire enough staff to keep restaurants, hotels, retail stores and construction sites going. Wages for the lowest earners rose at the fastest pace.
  • About the same time, the path was widening for migrants to cross the southern border, particularly as the new Biden administration rolled back Trump-era restrictions.
  • In normal economic times, some analysts note, new immigrants can drag down wages, especially if employers decide to hire them over native-born workers. Undocumented workers, who don’t have as much leverage to push for higher pay, could lower average wages even more.
  • But the past few years were extremely abnormal because companies were desperate to hire.
  • Plus, it would be exceedingly difficult for immigration to affect the wages of enormous swaths of the labor force,
  • “What it can do is lower the wages of a specific occupation in a specific area, but American workers aren’t stupid. They change jobs. They change what they specialize in,” Nowrasteh said. “So that’s part of the reason why wages don’t go down.”
  • Experts argue that the strength of the U.S. economy has benefited American workers and foreign-born workers alike. Each group accounts for roughly half of the labor market’s impressive year-over-year growth since January 2023
  • Particularly for immigrants fleeing poorer countries, the booming U.S. job market and the promise of higher wages continue to be an enormous draw.
  • “More than any immigration policy per se, the biggest pull for migrants is the strength of the labor market,” said Catalina Amuedo-Dorantes, an economics professor at the University of California at Merced. “More than any enforcement policy, any immigration policy, at the end of the day.”
  • Upon arriving in Denver in October, Santander hadn’t acquired a work permit but needed to feed his small children. Even without authorization, he found a job as a roofer for a contractor that ultimately pocketed his earnings, then one cleaning industrial refrigerators on the overnight shift for $12 an hour. Since receiving his work permit in January, Santander has started “a much better job” at a wood accessories manufacturer making $20 an hour.
  • But for the vast majority of migrants who arrive in the United States without prior approval, including asylum seekers and those who come for economic reasons, getting a work permit isn’t easy.
  • Federal law requires migrants to wait nearly six months to receive a work permit after filing for asylum. Wait times can stretch for additional months because of a backlog in cases.
  • While they wait, many migrants find off-the-books work as day laborers or street vendors, advocates say. Others get jobs using falsified documents, including many teenagers who came into the country as unaccompanied minors.
  • Still, many migrants miss the year-long window to apply for asylum — a process that can cost thousands of dollars — leaving them with few pathways to work authorization, advocates say. Those who can’t apply for asylum often end up working without official permission in low-wage industries where they are susceptible to exploitation.
Javier E

Opinion | The Mystery of White Rural Rage - The New York Times - 0 views

  • Business types and some economists may talk glowingly about the virtues of “creative destruction,” but the process can be devastating, economically and socially, for those who find themselves on the destruction side of the equation. This is especially true when technological change undermines not just individual workers but also whole communities.
  • It’s a big part of what has happened to rural America.
  • This process and its effects are laid out in devastating, terrifying and baffling detail in “White Rural Rage: The Threat to American Democracy,” a new book by Tom Schaller and Paul Waldman
  • ...16 more annotations...
  • “devastating” because the hardship of rural Americans is real, “terrifying” because the political backlash to this hardship poses a clear and present danger to our democracy, and “baffling” because at some level I still don’t get the politics.
  • Technology is the main driver of rural decline, Schaller and Waldman argue. Indeed, American farms produce more than five times as much as they did 75 years ago, but the agricultural work force declined by about two-thirds over the same period, thanks to machinery, improved seeds, fertilizers and pesticides
  • Coal production has been falling recently, but thanks partly to technologies like mountaintop removal, coal mining as a way of life largely disappeared long ago, with the number of miners falling 80 percent even as production roughly doubled.
  • The decline of small-town manufacturing is a more complicated story, and imports play a role, but it’s also mainly about technological change that favors metropolitan areas with large numbers of highly educated workers.
  • Technology, then, has made America as a whole richer, but it has reduced economic opportunities in rural areas. So why don’t rural workers go where the jobs are? Some have
  • But some cities have become unaffordable, in part because of restrictive zoning — one thing blue states get wrong — while many workers are also reluctant to leave their families and communities.
  • So shouldn’t we aid these communities? We do. Federal programs — Social Security, Medicare, Medicaid and more — are available to all Americans, but are disproportionately financed from taxes paid by affluent urban areas. As a result there are huge de facto transfers of money from rich, urban states like New Jersey to poor, relatively rural states like West Virginia.
  • While these transfers somewhat mitigate the hardship facing rural America, they don’t restore the sense of dignity that has been lost along with rural jobs.
  • And maybe that loss of dignity explains both white rural rage and why that rage is so misdirected — why it’s pretty clear that this November a majority of rural white Americans will again vote against Joe Biden, who as president has been trying to bring jobs to their communities, and for Donald Trump, a huckster from Queens who offers little other than validation for their resentment.
  • This feeling of a loss of dignity may be worsened because some rural Americans have long seen themselves as more industrious, more patriotic and maybe even morally superior to the denizens of big cities — an attitude still expressed in cultural artifacts like Jason Aldean’s hit song “Try That in a Small Town.”
  • In the crudest sense, rural and small-town America is supposed to be filled with hard-working people who adhere to traditional values, not like those degenerate urbanites on welfare, but the economic and social reality doesn’t match this self-image.
  • Prime working-age men outside metropolitan areas are substantially less likely than their metropolitan counterparts to be employed — not because they’re lazy, but because the jobs just aren’t there.
  • Quite a few rural states also have high rates of homicide, suicide and births to single mothers — again, not because rural Americans are bad people, but because social disorder is, as the sociologist William Julius Wilson argued long ago about urban problems, what happens when work disappears.
  • Draw attention to some of these realities and you’ll be accused of being a snooty urban elitist
  • The result — which at some level I still find hard to understand — is that many white rural voters support politicians who tell them lies they want to hear. It helps explain why the MAGA narrative casts relatively safe cities like New York as crime-ridden hellscapes while rural America is the victim not of technology but of illegal immigrants, wokeness and the deep state.
  • while white rural rage is arguably the single greatest threat facing American democracy, I have no good ideas about how to fight it.
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • Character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS waits more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” says Harper, Limbic’s chief executive. A new paper in the journal Nature Medicine, co-authored by Harper, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other.”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load.
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

Bernanke review is not about blame but the Bank's outdated practices - 0 views

  • Bernanke’s 80-page assessment, the result of more than seven months’ work, is the most comprehensive independent analysis of a big central bank’s performance since an inflationary crisis hit the world economy in early 2022. He offers a dozen recommendations for change at the Bank, the strongest of which is for the MPC to begin publishing “alternative scenarios” that show how its inflation forecasts stand up in extreme situations, for example in the face of an energy price shock.
  • The review lays bare how the Bank and its international peers all failed to model the impact of the huge energy price shock that followed Russia’s invasion of Ukraine in early 2022, the disruption in global trade during the pandemic after 2020 and how workers and companies would respond to significant price changes.
  • In choosing Bernanke, one of the most respected central bankers of his generation, to lead the review, the Bank has ensured that his findings will be difficult to ignore. The former Fed chairman carried out more than 60 face-to-face interviews with Bank staff and market participants and sat in on the MPC’s November 2023 forecasting round to assess where the Bank’s forecasts and communication were falling short, from the use of computer models to the role played by “human judgment”.
  • ...1 more annotation...
  • In his review, Bernanke compared the MPC’s forecasting record with six other central banks — in the Nordic countries, New Zealand, the United States and the eurozone — and found the Bank was particularly bad at understanding dynamics in the jobs market and had consistently forecast far higher unemployment, which had not materialised. Its other errors, on forecasting future inflation and growth, put it largely in the “middle of the pack” with its peers.