History Readings: group items matching “surveillance” in title, tags, annotations, or URL

Javier E

We're That Much Likelier to Get Sick Now - The Atlantic

  • Although neither RSV nor flu is shaping up to be particularly mild this year, says Caitlin Rivers, an epidemiologist at the Johns Hopkins Center for Health Security, both appear to be behaving more within their normal bounds.
  • But infections are still nowhere near back to their pre-pandemic norm. They never will be again. Adding another disease—COVID—to winter’s repertoire has meant exactly that: adding another disease, and a pretty horrific one at that, to winter’s repertoire.
  • “The probability that someone gets sick over the course of the winter is now increased,” Rivers told me, “because there is yet another germ to encounter.” The math is simple, even mind-numbingly obvious—a pathogenic n+1 that epidemiologists have seen coming since the pandemic’s earliest days. Now we’re living that reality, and its consequences. (A worked sketch of this arithmetic follows this entry’s annotations.)
  • “Odds are, people are going to get sick this year.”
  • In typical years, flu hospitalizes an estimated 140,000 to 710,000 people in the United States alone; some years, RSV can add on some 200,000 more. “Our baseline has never been great,” Yvonne Maldonado, a pediatrician at Stanford, told me. “Tens of thousands of people die every year.”
  • this time of year, on top of RSV, flu, and COVID, we also have to contend with a maelstrom of other airway viruses—among them, rhinoviruses, parainfluenza viruses, human metapneumovirus, and common-cold coronaviruses.
  • Illnesses not severe enough to land someone in the hospital could still leave them stuck at home for days or weeks on end, recovering or caring for sick kids—or shuffling back to work
  • “This is a more serious pathogen that is also more infectious,” Ajay Sethi, an epidemiologist at the University of Wisconsin at Madison, told me. In the past year, COVID-19 has killed some 80,000 Americans—a lighter toll than in the three years prior, but one that still dwarfs that of the worst flu seasons in the past decade.
  • Globally, the only infectious killer that rivals it in annual-death count is tuberculosis
  • Rivers also pointed to CDC data that track trends in deaths caused by pneumonia, flu, and COVID-19. Even when SARS-CoV-2 has been at its most muted, Rivers said, more people have been dying—especially during the cooler months—than they were at the pre-pandemic baseline.
  • This year, for the first time, millions of Americans have access to three lifesaving respiratory-virus vaccines, against flu, COVID, and RSV. Uptake for all three remains sleepy and halting; even the flu shot, the most established, is not performing above its pre-pandemic baseline.
  • COVID could now surge in the summer, shading into RSV’s autumn rise, before adding to flu’s winter burden, potentially dragging the misery out into spring. “Based on what I know right now, I am considering the season to be longer,” Rivers said.
  • barring further gargantuan leaps in viral evolution, the disease will continue to slowly mellow out in severity as our collective defenses build; the virus may also pose less of a transmission risk as the period during which people are infectious contracts
  • even if the dangers of COVID-19 are lilting toward an asymptote, experts still can’t say for sure where that asymptote might be relative to other diseases such as the flu—or how long it might take for the population to get there.
  • it seems extraordinarily unlikely to ever disappear. For the foreseeable future, “pretty much all years going forward are going to be worse than what we’ve been used to before,”
  • although a core contingent of Americans might still be more cautious than they were before the pandemic’s start—masking in public, testing before gathering, minding indoor air quality, avoiding others whenever they’re feeling sick—much of the country has readily returned to the pre-COVID mindset.
  • When I asked Hanage what precautions worthy of a respiratory disease with a death count roughly twice that of flu would look like, he rattled off a familiar list: better access to and uptake of vaccines and antivirals, with the vulnerable prioritized; improved surveillance systems to offer people at high risk a better sense of local-transmission trends; improved access to tests and paid sick leave.
  • Without those changes, excess disease and death will continue, and “we’re saying we’re going to absorb that into our daily lives,” he said.
  • And that is what is happening.
  • last year, a CDC survey found that more than 3 percent of American adults were suffering from long COVID—millions of people in the United States alone.
  • “We get used to things we could probably fix.” The years since COVID arrived set a horrific precedent of death and disease; after that, this season of n+1 sickness might feel like a reprieve. But compare it with a pre-COVID world, and it looks objectively worse. We’re heading toward a new baseline, but it will still have quite a bit in common with the old one: We’re likely to accept it, and all of its horrors, as a matter of course.
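
To make the pathogenic n+1 arithmetic concrete, here is a minimal sketch. The per-pathogen risks are invented for illustration, and the independence assumption is a simplification; real seasonal risks overlap and interact.

```python
# Illustrative only: per-pathogen winter infection risks are hypothetical,
# and pathogens are treated as independent, which real epidemiology is not.
def p_any_infection(per_pathogen_risks):
    """Probability of catching at least one pathogen in a season."""
    p_escape_all = 1.0
    for p in per_pathogen_risks:
        p_escape_all *= 1.0 - p  # chance of dodging this germ too
    return 1.0 - p_escape_all

pre_pandemic = [0.15, 0.10]      # hypothetical risks: flu, RSV
with_covid = [0.15, 0.10, 0.20]  # same two, plus a hypothetical COVID risk

print(f"pre-pandemic winter: {p_any_infection(pre_pandemic):.0%}")  # ~24%
print(f"with COVID added:    {p_any_infection(with_covid):.0%}")    # ~39%
```

Whatever the exact numbers, adding one more germ can only raise the total, which is the epidemiologists' point: the increase is built into the arithmetic itself.
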
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions. (A toy sketch of this predict-and-adjust loop follows this entry’s annotations.)
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
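
The predict-the-next-word training signal described in several annotations above can be caricatured in a few lines of Python. A count-based bigram model stands in for the neural network here; this is a didactic sketch, not how GPT-class models are implemented.

```python
# Toy stand-in for the predict-and-adjust loop described above: observe
# which word actually follows, and sharpen the prediction accordingly.
# Real models adjust continuous neural-network weights via prediction
# error; this sketch just keeps counts.
from collections import Counter, defaultdict

def train(words):
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1  # each observed pair nudges the model
    return model

def predict_next(model, word):
    followers = model.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat and the cat slept".split()
print(predict_next(train(corpus), "the"))  # -> 'cat' (follows 'the' twice)
```

The more text such a model ingests, the better its guesses get, which is the sense in which prediction powers learning; the annotations' claim is that at sufficient scale this same signal yields conceptual structure, not just word statistics.
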
Javier E

Opinion | People in Their 20s Aren't Supposed to Be This Unhappy - The New York Times

  • There’s a truism in happiness studies that stress and despair peak in middle age; the young and the old are mentally healthier. But the mental health of young people has deteriorated. In February, the Centers for Disease Control and Prevention reported that nearly three in five teenage girls felt persistent sadness in 2021.
  • What Blanchflower spotted is that the middle-age hump of unhappiness has gone away entirely, with adulthood unhappiness now worst at the very beginning. “This is a completely new thing,”
  • the Behavioral Risk Factor Surveillance System surveys of the C.D.C. One question asks, “Now thinking about your mental health, which includes stress, depression, and problems with emotions, for how many days during the past 30 days was your mental health not good?” The percentages in these charts are for people who answered 30 out of 30 — no good days at all. Blanchflower terms that “despair.” (A sketch of that tabulation follows these annotations.)
  • The big picture for both sexes is clear: a serious deterioration in the mental health of young people from 2019 to 2023 compared with the baseline of 1993 to 2018.
  • Why? Blanchflower said the mental health of 20-somethings began to deteriorate noticeably around 2011. That made some sense, because the United States was in a jobless recovery; the high unemployment rate made it hard for young people to find good jobs — or any jobs.
  • He said he doesn’t fully understand why things continued to worsen as the job market strengthened. But he said, confirming others’ research, that the Covid lockdown was a fresh blow to young people’s mental health. Immersion in social media is another popular explanation.
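
A minimal sketch of how that “despair” share could be tabulated from BRFSS-style microdata. The rows and column names here are invented for illustration; a real analysis would use the survey's actual variables and apply its sampling weights, which matter.

```python
# Hypothetical sketch: share of respondents reporting 30 of 30 bad
# mental-health days ("despair"), by age group. Data and column names
# are invented; real BRFSS analysis would use its actual variables
# and survey weights.
import pandas as pd

df = pd.DataFrame({
    "age":         [22, 24, 27, 45, 52, 61, 23, 29],
    "bad_mh_days": [30, 12, 30,  2,  0,  5, 30,  8],  # of the past 30
})

df["despair"] = df["bad_mh_days"] == 30
df["age_group"] = pd.cut(df["age"], bins=[17, 29, 64],
                         labels=["18-29", "30-64"])
print(df.groupby("age_group", observed=True)["despair"].mean())
```

With real microdata, the same groupby would trace the age profile Blanchflower describes; the invented rows above merely show the mechanics.
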
Javier E

Opinion | Richard Hanania's Racism Is Backed by Silicon Valley Billionaires - The New York Times

  • [Hanania] expressed support for eugenics and the forced sterilization of “low IQ” people, who he argued were most often Black. He opposed “miscegenation” and “race-mixing.” And once, while arguing that Black people cannot govern themselves, he cited the neo-Nazi author of “The Turner Diaries,” the infamous novel that celebrates a future race war.
  • He still makes explicitly racist statements and arguments, now under his own name. “I don’t have much hope that we’ll solve crime in any meaningful way,” he wrote on the platform formerly known as Twitter earlier this year. “It would require a revolution in our culture or form of government. We need more policing, incarceration, and surveillance of black people. Blacks won’t appreciate it, whites don’t have the stomach for it.”
  • Responding to the killing of a homeless Black man on the New York City subway, Hanania wrote, “These people are animals, whether they’re harassing people in subways or walking around in suits.”
  • According to Jonathan Katz, a freelance journalist, Hanania’s organization, the Center for the Study of Partisanship and Ideology, has received at least $700,000 in support through anonymous donations. He is also a visiting scholar at the Salem Center at the University of Texas at Austin — funded by Harlan Crow.
  • A whole coterie of Silicon Valley billionaires and millionaires have lent their time and attention to Hanania, as well as elevated his work. Marc Andreessen, a powerful venture capitalist, appeared on his podcast. David Sacks, a close associate of Elon Musk, wrote a glowing endorsement of Hanania’s forthcoming book. So did Peter Thiel, the billionaire supporter of right-wing causes and organizations. “D.E.I. will never d-i-e from words alone,” wrote Thiel. “Hanania shows we need the sticks and stones of government violence to exorcise the diversity demon.” Vivek Ramaswamy, the Republican presidential candidate, also praised the book as a “devastating kill shot to the intellectual foundations of identity politics in America.”
  • why an otherwise obscure racist has the ear and support of some of the most powerful people in Silicon Valley? What purpose, to a billionaire venture capitalist, do Hanania’s ideas serve?
  • Look back to our history and the answer is straightforward. Just as in the 1920s (and before), the idea of race hierarchy works to naturalize the broad spectrum of inequalities, and capitalist inequality in particular.
  • If some groups are simply meant to be at the bottom, then there are no questions to ask about their deprivation, isolation and poverty. There are no questions to ask about the society which produces that deprivation, isolation and poverty. And there is nothing to be done, because nothing can be done: Those people are just the way they are.
  • the idea of race hierarchy “creates the illusion of cross-class solidarity between these masters of infinite wealth and their propagandist and supporter class: ‘We are of the same special breed, you and I.’” Relations of domination between groups are reproduced as relations of domination between individuals.
  • This, in fact, has been the traditional role of supremacist ideologies in the United States — to occlude class relations and convert anxiety over survival into the jealous protection of status
  • worked in concrete ways to bound the two things, survival and status, together; to create the illusion that the security, even prosperity, of one group rests on the exclusion of another
Javier E

Will China overtake the U.S. on AI? Probably not. Here's why. - The Washington Post

  • Chinese authorities have been so proactive about regulating some uses of AI, especially those that allow the general public to create their own content, that compliance has become a major hurdle for the country’s companies.
  • As the use of AI explodes, regulators in Washington and around the world are trying to figure out how to manage potential threats to privacy, employment, intellectual property and even human existence itself.
  • But there are also concerns that putting any guardrails on the technology in the United States would surrender leadership in the sector to Chinese companies.
  • Senate Majority Leader Charles E. Schumer (D-N.Y.) last month urged Congress to adopt “comprehensive” regulations on the AI industry.
  • Restrictions on access to the most advanced chips, which are needed to run AI models, have added to these difficulties.
  • In a recent study, Ding found that most of the large language models developed in China were nearly two years behind those developed in the U.S., a gap that would be a challenge to close — even if American firms had to adjust to regulation.
  • This gap also makes it difficult for Chinese firms to attract the world’s top engineering talent. Many would prefer to work at firms that have the resources and flexibility to experiment on frontier research areas.
  • Rather than focusing on AI technology that lets the general public create unique content like the chatbots and image generators, Chinese companies have instead focused on technologies with clear commercial uses, like surveillance tech.
  • Recent research identified 17 large language models in China that relied on Nvidia chips, and just three models that used Chinese-made chips.
  • No Chinese tech company has yet been able to release a large language model on the scale of OpenAI’s ChatGPT to the general public, as OpenAI did when it asked the public to play with and test a generative AI model, said Ding, the professor at George Washington University.
  • Despite the obstacles, Chinese AI companies have made major advances in some types of AI technologies, including facial recognition, gait recognition, and artificial and virtual reality.
  • These technologies have also fueled the development of China’s vast surveillance industry, giving Chinese tech giants an edge that they market around the world, such as Huawei’s contracts for smart city surveillance from Belgrade, Serbia, to Nairobi.
  • Companies developing AI in China need to comply with specific laws on intellectual property rights, personal information protection, recommendation algorithms and synthetic content, also called deep fakes. In April, regulators also released a draft set of rules on generative AI, the technology behind image generator Stable Diffusion and chatbots such as OpenAI’s ChatGPT and Google’s Bard.
  • They also need to ensure AI generated content complies with Beijing’s strict censorship regime. Chinese tech companies such as Baidu have become adept at filtering content that contravenes these rules. But it has hampered their ability to test the limits of what AI can do.
  • While Beijing pushes to make comparable chips at home, Chinese AI companies have to source their chips any way they can — including from a black market that has sprung up in Shenzhen, where, according to Reuters, the most advanced Nvidia chips sell for nearly $20,000, more than twice what they go for elsewhere.
  • “That level of freedom has not been allowed in China, in part because the Chinese government is very worried about people creating politically sensitive content,” Ding said.
  • Although Beijing’s regulations have created major burdens for Chinese AI companies, analysts say that they contain several key principles that Washington can learn from — like protecting personal information, labeling AI-generated content and alerting the government if an AI develops dangerous capabilities.
  • AI regulation in the United States could easily fall short of Beijing’s heavy-handed approach while still preventing discrimination, protecting people’s rights and adhering to existing laws, said Johanna Costigan, a research associate at the Asia Society Policy Institute.
  • “There can be alignment between regulation and innovation,” Costigan said. “But it’s a question of rising to the occasion of what this moment represents — do we care enough to protect people who are using this technology? Because people are using it whether the government regulates it or not.”
Javier E

How the AI apocalypse gripped students at elite schools like Stanford - The Washington Post - 0 views

  • Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
  • In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
  • To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us.
  • ...28 more annotations...
  • Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
  • More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats
  • Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
  • Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research
  • And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
  • Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
  • Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like effective altruism (EA), which is also known for recruiting ambitious young people on elite college campuses.
  • The foundation began prioritizing existential risks around AI in 2016,
  • there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside.
  • Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent
  • The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
  • Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation
  • Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
  • Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
  • At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.”
  • from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top.
  • In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors
  • Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
  • Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
  • Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
  • Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
  • Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
  • Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments
  • Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
  • Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
  • Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models has convinced him that there should be room to think about AI safety.
  • Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
  • The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
Javier E

Opinion | The Government Must Say What It Knows About Covid's Origins - The New York Times - 0 views

  • By keeping evidence that seemed to provide ammunition to proponents of a lab leak theory under wraps and resisting disclosure, U.S. officials have contributed to making the topic of the pandemic’s origins more poisoned and open to manipulation by bad-faith actors.
  • Treating crucial information like a dark secret empowers those who viciously and unfairly accuse public health officials and scientists of profiting off the pandemic. As Megan K. Stack wrote in Times Opinion this spring, “Those who seek to suppress disinformation may be destined, themselves, to sow it.”
  • According to an Economist/YouGov poll published in March, 66 percent of Americans — including majorities of Democrats and independents — believe the pandemic was caused by research activities, a number that has gone up since 2020
  • ...5 more annotations...
  • The American public, however, only rarely heard refreshing honesty from their officials or even their scientists — and this tight-lipped, denialist approach appears to have only strengthened belief that the pandemic arose from carelessness during research or even, in less reality-based accounts, something deliberate
  • Only 16 percent of Americans believed that it was likely or definitely false that the emergence of the Covid virus was tied to research in a Chinese lab, while 17 percent were unsure.
  • Worse, biosafety, globally, remains insufficiently regulated. Making biosafety into a controversial topic makes it harder to move forward with necessary regulation and international effort
  • For years, scientists and government officials did not publicly talk much about the fact that a 1977 “Russian” influenza pandemic that killed hundreds of thousands of people most likely began when a vaccine trial went awry.
  • one reason for the relative silence was the fear of upsetting the burgeoning cooperation over flu surveillance and treatment by the United States, China and Russia.
Javier E

What Does It Mean to Be Latino? - The Atlantic - 0 views

  • The feeling of being ni de aquí, ni de allá—from neither here nor there—is the fundamental paradox of latinidad, its very essence.
  • Tobar’s book should be read in the context of other works that, for more than a century, have tried to elucidate the meaning of latinidad.
  • In his 1891 essay “Our America,” José Martí, a Cuban writer then living in New York, argued that Latin American identity was defined, in part, by a rejection of the racism that he believed characterized the United States.
  • ...13 more annotations...
  • The Mexican author Octavio Paz, in his 1950 book, The Labyrinth of Solitude, described the pachuco (a word used to refer to young Mexican American men, many of them gang members, in the mid-1900s) as a “pariah, a man who belongs nowhere,” alienated from his Mexican roots but not quite of the United States either.
  • Gloria Anzaldúa, in her 1987 classic, Borderlands/La Frontera, described Chicana identity as the product of life along the U.S.-Mexico border, “una herida abierta [an open wound] where the Third World grates against the first and bleeds.”
  • We need to understand that they want the same freedoms, comforts, and securities that all people have wanted since the beginning of civilization: to have a “home with a place to paint, or a big, comfortable chair to sit in and read under a lamp, with a cushion under the small of our backs.”
  • offers a more intimate look into the barrios, homes, and minds of people who, he argues, have been badly, and sometimes willfully, misunderstood.
  • Tobar’s main focus is on how the migrant experience has shaped Latino identity.
  • More than these other works, though, it engages in contemporary debates and issues, such as how Latinos have related to Blackness and indigeneity, the question of why some Latinos choose to identify as white, and the political conservatism of certain Latino communities
  • “To be Latino in the United States,” Tobar writes, “is to see yourself portrayed, again and again, as an intellectually and physically diminished subject in stories told by others.”
  • Even when migrants survive the journey and settle across the United States, Tobar sees a dark thread connecting them: “Our ancestors,” he writes, “have escaped marching armies, coups d’état, secret torture rooms, Stalinist surveillance, and the outrages of rural police forces.” Tobar is referring here to the domestic conflicts, fueled by the U.S. military, in Guatemala, Cuba, El Salvador, Nicaragua, and other countries during the Cold War, leading to unrest and forcing civilians in those places to flee northward.
  • For Tobar, this history of violence is something all Latinos have in common, no matter where in the country they live.
  • He writes, “I want a theory of social revolution that begins in this kind of intimate space,” not in the symbols “appropriated by corporate America,” like the Black Lives Matter banners displayed at professional sporting events, or the CEO of JPMorgan Chase kneeling at a branch of his bank, which critics have read as virtue signaling. Mere intimacy and the recognition of common histories isn’t the same as justice, but it is a necessary starting point for healing divisions.
  • there are many Latino stories that he does not, and probably cannot, tell. For one, he conceives of Latino history as the history of a people who have endured traumas because of the actions of the U.S. But this framing wouldn’t appeal to Latinos who see the United States as the country where their dreams came true, where they’ve built careers, bought homes, provided for their families.
  • If the small number of conservative Latinos Tobar interviewed are anything like the Hispanic Republicans I’ve talked with over the years, they would tell him that it is the Republican Party that best represents their economic, religious, and political values.
  • If our aim is to understand the full story of Latinos—assuming such a thing is possible—we should explore all of the complexities of those who live in a country that is becoming more Latino by the day. For that, we’ll need other books besides Our Migrant Souls, ones that describe the inner worlds, motives, and ambitions of Latinos who see themselves and their place in this country differently.
Javier E

Opinion | Big Tech Is Bad. Big A.I. Will Be Worse. - The New York Times - 0 views

  • Tech giants Microsoft and Alphabet/Google have seized a large lead in shaping our potentially A.I.-dominated future. This is not good news. History has shown us that when the distribution of information is left in the hands of a few, the result is political and economic oppression. Without intervention, this history will repeat itself.
  • The fact that these companies are attempting to outpace each other, in the absence of externally imposed safeguards, should give the rest of us even more cause for concern, given the potential for A.I. to do great harm to jobs, privacy and cybersecurity. Arms races without restrictions generally do not end well.
  • We believe the A.I. revolution could even usher in the dark prophecies envisioned by Karl Marx over a century ago. The German philosopher was convinced that capitalism naturally led to monopoly ownership over the “means of production” and that oligarchs would use their economic clout to run the political system and keep workers poor.
  • ...17 more annotations...
  • Literacy rates rose alongside industrialization, although those who decided what the newspapers printed and what people were allowed to say on the radio, and then on television, were hugely powerful. But with the rise of scientific knowledge and the spread of telecommunications came a time of multiple sources of information and many rival ways to process facts and reason out implications.
  • With the emergence of A.I., we are about to regress even further. Some of this has to do with the nature of the technology. Instead of assessing multiple sources, people are increasingly relying on the nascent technology to provide a singular, supposedly definitive answer.
  • This technology is in the hands of two companies that are philosophically rooted in the notion of “machine intelligence,” which emphasizes the ability of computers to outperform humans in specific activities.
  • This philosophy was naturally amplified by a recent (bad) economic idea that the singular objective of corporations should be to maximize short-term shareholder wealth.
  • Combined, these ideas are cementing the notion that the most productive applications of A.I. replace humankind.
  • Congress needs to assert individual ownership rights over underlying data that is relied on to build A.I. systems
  • Fortunately, Marx was wrong about the 19th-century industrial age that he inhabited. Industries emerged much faster than he expected, and new firms disrupted the economic power structure. Countervailing social powers developed in the form of trade unions and genuine political representation for a broad swath of society.
  • History has repeatedly demonstrated that control over information is central to who has power and what they can do with it.
  • Generative A.I. requires even deeper pockets than textile factories and steel mills. As a result, most of its obvious opportunities have already fallen into the hands of Microsoft, with its market capitalization of $2.4 trillion, and Alphabet, worth $1.6 trillion.
  • At the same time, powers like trade unions have been weakened by 40 years of deregulation ideology (Ronald Reagan, Margaret Thatcher, two Bushes and even Bill Clinton).
  • For the same reason, the U.S. government’s ability to regulate anything larger than a kitten has withered. Extreme polarization and fear of killing the golden (donor) goose or undermining national security mean that most members of Congress would still rather look away.
  • To prevent data monopolies from ruining our lives, we need to mobilize effective countervailing power — and fast.
  • Today, those countervailing forces either don’t exist or are greatly weakened
  • Rather than machine intelligence, what we need is “machine usefulness,” which emphasizes the ability of computers to augment human capabilities. This would be a much more fruitful direction for increasing productivity. By empowering workers and reinforcing human decision making in the production process, it also would strengthen social forces that can stand up to big tech companies
  • We also need regulation that protects privacy and pushes back against surveillance capitalism, or the pervasive use of technology to monitor what we do
  • Finally, we need a graduated system for corporate taxes, so that tax rates are higher for companies when they make more profit in dollar terms
  • Our future should not be left in the hands of two powerful companies that build ever larger global empires based on using our collective data without scruple and without compensation.
Javier E

'There was all sorts of toxic behaviour': Timnit Gebru on her sacking by Google, AI's dangers and big tech's biases | Artificial intelligence (AI) | The Guardian - 0 views

  • “It feels like a gold rush,” says Timnit Gebru. “In fact, it is a gold rush. And a lot of the people who are making money are not the people actually in the midst of it. But it’s humans who decide whether all this should be done or not. We should remember that we have the agency to do that.”
  • something that the frenzied conversation about AI misses out: the fact that many of its systems may well be built on a huge mess of biases, inequalities and imbalances of power.
  • As the co-leader of Google’s small ethical AI team, Gebru was one of the authors of an academic paper that warned about the kind of AI that is increasingly built into our lives, taking internet searches and user recommendations to apparently new levels of sophistication and threatening to master such human talents as writing, composing music and analysing images
  • ...14 more annotations...
  • The clear danger, the paper said, is that such supposed “intelligence” is based on huge data sets that “overrepresent hegemonic viewpoints and encode biases potentially damaging to marginalised populations”. Put more bluntly, AI threatens to deepen the dominance of a way of thinking that is white, male, comparatively affluent and focused on the US and Europe.
  • What all this told her, she says, is that big tech is consumed by a drive to develop AI and “you don’t want someone like me who’s going to get in your way. I think it made it really clear that unless there is external pressure to do something different, companies are not just going to self-regulate. We need regulation and we need something better than just a profit motive.”
  • one particularly howling irony: the fact that an industry brimming with people who espouse liberal, self-consciously progressive opinions so often seems to push the world in the opposite direction.
  • Gebru began to specialise in cutting-edge AI, pioneering a system that showed how data about particular neighbourhoods’ patterns of car ownership highlighted differences bound up with ethnicity, crime figures, voting behaviour and income levels. In retrospect, this kind of work might look like the bedrock of techniques that could blur into automated surveillance and law enforcement, but Gebru admits that “none of those bells went off in my head … that connection of issues of technology with diversity and oppression came later”.
  • The next year, Gebru made a point of counting other black attenders at the same event. She found that, among 8,500 delegates, there were only six people of colour. In response, she put up a Facebook post that now seems prescient: “I’m not worried about machines taking over the world; I’m worried about groupthink, insularity and arrogance in the AI community.”
  • When Gebru arrived, Google employees were loudly opposing the company’s role in Project Maven, which used AI to analyse surveillance footage captured by military drones (Google ended its involvement in 2018). Two months later, staff took part in a huge walkout over claims of systemic racism, sexual harassment and gender inequality. Gebru says she was aware of “a lot of tolerance of harassment and all sorts of toxic behaviour”.
  • She and her colleagues prided themselves on how diverse their small operation was, as well as the things they brought to the company’s attention, which included issues to do with Google’s ownership of YouTube
  • A colleague from Morocco raised the alarm about a popular YouTube channel in that country called Chouf TV, “which was basically operated by the government’s intelligence arm and they were using it to harass journalists and dissidents. YouTube had done nothing about it.” (Google says that it “would need to review the content to understand whether it violates our policies. But, in general, our harassment policies strictly prohibit content that threatens individuals.”)
  • in 2020, Gebru, Mitchell and two colleagues wrote the paper that would lead to Gebru’s departure. It was titled On the Dangers of Stochastic Parrots. Its key contention was about AI centred on so-called large language models: the kind of systems – such as OpenAI’s ChatGPT and Google’s newly launched PaLM 2 – that, crudely speaking, feast on vast amounts of data to perform sophisticated tasks and generate content.
  • Gebru and her co-authors had an even graver concern: that trawling the online world risks reproducing its worst aspects, from hate speech to points of view that exclude marginalised people and places. “In accepting large amounts of web text as ‘representative’ of ‘all’ of humanity, we risk perpetuating dominant viewpoints, increasing power imbalances and further reifying inequality,” they wrote.
  • When the paper was submitted for internal review, Gebru was quickly contacted by one of Google’s vice-presidents. At first, she says, non-specific objections were expressed, such as that she and her colleagues had been too “negative” about AI. Then, Google asked Gebru either to withdraw the paper, or remove her and her colleagues’ names from it.
  • After her departure, Gebru founded Dair, the Distributed AI Research Institute, to which she now devotes her working time. “We have people in the US and the EU, and in Africa,” she says. “We have social scientists, computer scientists, engineers, refugee advocates, labour organisers, activists … it’s a mix of people.”
  • Running alongside this is a quest to push beyond the tendency of the tech industry and the media to focus attention on worries about AI taking over the planet and wiping out humanity while questions about what the technology does, and who it benefits and damages, remain unheard.
  • “That conversation ascribes agency to a tool rather than the humans building the tool,” she says. “That means you can abdicate responsibility: ‘It’s not me that’s the problem. It’s the tool. It’s super-powerful. We don’t know what it’s going to do.’ Well, no – it’s you that’s the problem. You’re building something with certain characteristics for your profit. That’s extremely distracting, and it takes the attention away from real harms and things that we need to do. Right now.”
Javier E

Opinion | Lina Khan: We Must Regulate A.I. Here's How. - The New York Times - 0 views

  • The last time we found ourselves facing such widespread social change wrought by technology was the onset of the Web 2.0 era in the mid-2000s.
  • Those innovative services, however, came at a steep cost. What we initially conceived of as free services were monetized through extensive surveillance of the people and businesses that used them. The result has been an online economy where access to increasingly essential services is conditioned on the widespread hoarding and sale of our personal data.
  • These business models drove companies to develop endlessly invasive ways to track us, and the Federal Trade Commission would later find reason to believe that several of these companies had broken the law
  • ...10 more annotations...
  • What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security.
  • The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.
  • the Federal Trade Commission is taking a close look at how we can best achieve our dual mandate to promote fair competition and to protect Americans from unfair or deceptive practices.
  • generative A.I. risks turbocharging fraud. It may not be ready to replace professional writers, but it can already do a vastly better job of crafting a seemingly authentic message than your average con artist — equipping scammers to generate content quickly and cheaply.
  • Enforcers have the dual responsibility of watching out for the dangers posed by new A.I. technologies while promoting the fair competition needed to ensure the market for these technologies develops lawfully.
  • we already can see several risks. The expanding adoption of A.I. risks further locking in the market dominance of large incumbent technology firms. A handful of powerful businesses control the necessary raw materials that start-ups and other companies rely on to develop and deploy A.I. tools. This includes cloud services and computing power, as well as vast stores of data.
  • bots are even being instructed to use words or phrases targeted at specific groups and communities. Scammers, for example, can draft highly targeted spear-phishing emails based on individual users’ social media posts. Alongside tools that create deep fake videos and voice clones, these technologies can be used to facilitate fraud and extortion on a massive scale.
  • we will look not just at the fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.
  • these A.I. tools are being trained on huge troves of data in ways that are largely unchecked. Because they may be fed information riddled with errors and bias, these technologies risk automating discrimination
  • We once again find ourselves at a key decision point. Can we continue to be the home of world-leading technology without accepting race-to-the-bottom business models and monopolistic control that locks out higher quality products or the next big idea? Yes — if we make the right policy choices.
Javier E

Carlos Moreno Wanted to Improve Cities. Conspiracy Theorists Are Coming for Him. - The New York Times - 0 views

  • For most of his 40-year career, Carlos Moreno, a scientist and business professor in Paris, worked in relative peace. Many cities around the world embraced a concept he started to develop in 2010. Called the 15-minute city, the idea is that everyday destinations such as schools, stores and offices should be only a short walk or bike ride away from home. A group of nearly 100 mayors worldwide embraced it as a way to help recover from the pandemic.
  • In recent weeks, a deluge of rumors and distortions have taken aim at Mr. Moreno’s proposal. Driven in part by climate change deniers and backers of the QAnon conspiracy theory, false claims have circulated online, at protests and even in government hearings that 15-minute cities were a precursor to “climate change lockdowns” — urban “prison camps” in which residents’ movements would be surveilled and heavily restricted.
  • Many attacked Mr. Moreno, 63, directly. The professor, who teaches at the University of Paris 1 Panthéon-Sorbonne, faced harassment in online forums and over email. He was accused without evidence of being an agent of an invisible totalitarian world government. He was likened to criminals and dictators.
  • ...16 more annotations...
  • he started receiving death threats. People said they wished he and his family had been killed by drug lords, told him that “sooner or later your punishment will arrive” and proposed that he be nailed into a coffin or run over by a cement roller.
  • Mr. Moreno, who grew up in Colombia, began working as a researcher in a computer science and robotics lab in Paris in 1983; the career that followed involved creating a start-up, meeting the Dalai Lama and being named a knight of the Légion d’Honneur. His work has won several awards and spanned many fields — automotive, medical, nuclear, military, even home goods.
  • Many of the recent threats have been directed at scientists studying Covid-19. In a survey of 321 such scientists who had given media interviews, the journal Nature found that 22 percent had received threats of physical or sexual violence and 15 percent had received death threats
  • Last year, an Austrian doctor who was a vocal supporter of vaccines and a repeated target of threats died by suicide.
  • increasingly, even professors and researchers without much of a public persona have faced intimidation from extremists and conspiracy theorists.
  • Around 2010, he started thinking about how technology could help create sustainable cities. Eventually, he refined his ideas about “human smart cities” and “living cities” into his 2016 proposal for 15-minute cities.
  • The idea owes much to its many predecessors: “neighborhood units” and “garden cities” in the early 1900s, the community-focused urban planning pioneered by the activist Jane Jacobs in the 1960s, even support for “new urbanism” and walkable cities in the 1990s. So-called low-traffic neighborhoods, or LTNs, have been set up in several British cities over the past few decades.
  • Critics of 15-minute cities have been outspoken, arguing that a concept developed in Europe may not translate well to highly segregated American cities. A Harvard economist wrote in a blog post for the London School of Economics and Political Science in 2021 that the concept was a “dead end” that would exacerbate “enormous inequalities in cities” by subdividing without connecting them.
  • Jordan Peterson, a Canadian psychologist with four million Twitter followers, suggested that 15-minute cities were “perhaps the worst imaginable perversion” of the idea of walkable neighborhoods. He linked to a post about the “Great Reset,” an economic recovery plan proposed by the World Economic Forum that has spawned hordes of rumors about a pandemic-fueled plot to destroy capitalism.
  • A member of Britain’s Parliament said that 15-minute cities were “an international socialist concept” that would “cost us our personal freedoms.” QAnon supporters said the derailment of a train carrying hazardous chemicals in Ohio was an intentional move meant to push rural residents into 15-minute cities.
  • “Conspiracy-mongers have built a complete story: climate denialism, Covid-19, anti-vax, 5G controlling the brains of citizens, and the 15-minute city for introducing a perimeter for day-to-day life,” Mr. Moreno said. “This storytelling is totally insane, totally irrational for us, but it makes sense for them.”
  • The multipronged conspiracy theory quickly became “turbocharged” after the Oxford protest, said Jennie King, head of climate research and policy at the Institute for Strategic Dialogue, a think tank that studies online platforms.
  • “You have this snowball effect of a policy, which in principle was only going to affect a small urban population, getting extrapolated and becoming this crucible where far-right groups, industry-sponsored lobbying groups, conspiracist movements, anti-lockdown groups and more saw an opportunity to insert their worldview into the mainstream and to piggyback on the news cycle,”
  • The vitriol currently directed at Mr. Moreno and researchers like him mirrors “the broader erosion of trust in experts and institutions,”
  • Modern conspiracy theorists and extremists turn the people they disagree with into scapegoats for a vast array of societal ills, blaming them personally for causing the high cost of living or various health crises and creating an “us-versus-them” environment, she said.
  • “I am not a politician, I am not a candidate for anything — as a researcher, my duty is to explore and deepen my ideas with scientific methodology,” he said. “It is totally unbelievable that we could receive a death threat just for working as scientists.”
Javier E

A Russian Mole in Germany Sows Suspicions at Home, and Beyond - The New York Times - 0 views

  • The coach, a 52-year-old former German soldier, worked for Germany’s Federal Intelligence Service, or B.N.D., as a director of technical reconnaissance — the unit responsible for cybersecurity and surveilling electronic communications. It contributes about half of the spy agency’s daily intelligence volume.
  • As a Russian mole, he would have had access to critical information gathered since Moscow invaded Ukraine last year. He may have obtained high-level surveillance, not only from German spies, but also from Western partners, like the C.I.A.
  • For years, as German politicians pushed economic ties with Moscow — in particular, buying its gas — they closed down many intelligence units focused on Russia.
  • ...9 more annotations...
  • President Vladimir V. Putin of Russia, who started his career as a K.G.B. agent in Communist East Germany, took the opposite tack: He made Germany, Europe’s biggest economy, a priority target.
  • The only hints of potential motives are his apparent far-right sympathies. A search of his home and offices, two people familiar with the investigation said, found fliers from the far-right AfD party. At work, Mr. Linke had openly told colleagues he felt the country was deteriorating, and he was particularly disdainful of its new center-left government, one of those following the inquiry said.
  • Over the years, far-right groups have grown increasingly sympathetic to Russia, enamored of Mr. Putin’s nationalistic rhetoric. Germany has struggled to root out far-right sympathizers in its security services, including in the military, even dismantling part of its special forces.
  • A Google account of his, using the alias “Steen von Ottendorf,” first found by Germany’s Der Spiegel newsmagazine, has one YouTube subscription: a channel that collects nationalist tunes. The channel’s icon bears an eagle — and the red, white and black of Germany’s old imperial colors, often used by the far right.
  • “It’s a kind of conviction, wanting to cooperate with Russia — it’s a romantic belief,” the official said. “I worry there are many others who hold that conviction in our security services.”
  • Since the days of the Cold War, Germany’s intelligence agency suffered from Russian infiltration, said Erich Schmidt-Eenboom, a historian who has written several books on the agency and keeps a list of all of the B.N.D. agents who were “turned,” exposing hundreds of operatives.
  • Among them was the 1961 case of Heinz Felfe, a K.G.B. mole who revealed B.N.D. operations across Europe. After the fall of the Soviet Union, Germany learned that a top director, Gabriele Gast, who worked closely with the chancellery, spied for the Stasi, the East German secret police, for 17 years.
  • According to Mr. Schmidt-Eenboom, the information available to Mr. Linke was vast: internet espionage, German surveillance stations, mobile listening devices in southern Ukraine, and the German Navy’s reconnaissance ships observing the war from the Baltic Sea.
  • On top of that, Mr. Linke would have had access to reports from allied American services like the C.I.A. and the National Security Agency, as well as from Britain’s Government Communications Headquarters.
Javier E

Norovirus is almost impossible to stop - The Atlantic - 0 views

  • Disinfection is back.
  • “Bleach is my friend right now,” says Annette Cameron, a pediatrician at Yale School of Medicine, who spent the first half of this week spraying and sloshing the potent chemical all over her home. It’s one of the few tools she has to combat norovirus, the nasty gut pathogen that her 15-year-old son was recently shedding in gobs.
  • norovirus has seeded outbreaks in several countries, including the United Kingdom, Canada, and the United States. Last week, the U.K. Health Security Agency announced that laboratory reports of the virus had risen to levels 66 percent higher than what’s typical this time of year. Especially hard-hit are Brits 65 and older, who are falling ill at rates that “haven’t been seen in over a decade.”
  • ...18 more annotations...
  • The U.S. logs fewer than 1,000 annual deaths out of millions of documented cases
  • this is more a nauseating nuisance than a public-health crisis. In most people, norovirus triggers, at most, a few miserable days of GI distress that can include vomiting, diarrhea, and fevers, then resolves on its own; the keys are to stay hydrated and avoid spreading it to anyone vulnerable.
  • (Norovirus is the most common cause of foodborne illness in the United States.)
  • the virus is far more deadly in parts of the world with limited access to sanitation and potable water.
  • Still, fighting norovirus isn’t easy, as plenty of parents can attest. The pathogen, which prompts the body to expel infectious material from both ends of the digestive tract, is seriously gross and frustratingly hardy. Even the old COVID standby, a spritz of hand sanitizer, doesn’t work against it—the virus is encased in a tough protein shell that makes it insensitive to alcohol.
  • At an extreme, a single gram of feces—roughly the heft of a jelly bean—could contain as many as 5.5 billion infectious doses, enough to send the entire population of Eurasia sprinting for the toilet. (A back-of-envelope reconstruction of that figure appears after these annotations.)
  • norovirus mainly targets the gut, and spreads especially well when people swallow viral particles that have been released in someone else’s vomit or stool.
  • direct contact with those substances, or the food or water they contaminate, may not even be necessary: Sometimes people vomit with such force that the virus gets aerosolized; toilets, especially lidless ones, can send out plumes of infection
  • If the spittle finding holds for humans, then talking, singing, and laughing in close proximity could be risky too.
  • Once emitted into the environment, norovirus particles can persist on surfaces for days—making frequent hand-washing and surface disinfection key measures to prevent spread
  • Handshakes and shared meals tend to get dicey during outbreaks, along with frequently touched items such as utensils, door handles, and phones.
  • One 2012 study pointed to a woven plastic grocery bag as the source of a small outbreak among a group of teenage soccer players; the bag had just been sitting in a bathroom used by one of the girls when she fell sick the night before.
  • Once a norovirus transmission chain begins, it can be very difficult to break. The virus can spread before symptoms start, and then for more than a week after they resolve
  • Once the virus arrives, the entire family is almost sure to be infected. Baldridge, who has two young children, told me that her household has weathered at least four bouts of norovirus in the past several years.
  • Roughly 20 percent of European populations, for instance, are genetically resistant to common norovirus strains. “So you can hope,” Frenck told me. For the rest of us, it comes down to hygiene
  • Altan-Bonnet recommends diligent hand-washing, plus masking to ward off droplet-borne virus. Sick people should isolate themselves if they can. “And keep your saliva to yourself,” she told me.
  • The family fastidiously scrubbed their hands with hot water and soap, donned disposable gloves when touching shared surfaces, and took advantage of the virus’s susceptibility to harsh chemicals and heat. When her son threw up on the floor, Cameron sprayed it down with bleach; when he vomited on his quilt, she blasted it twice in the washing machine on the sanitizing setting, then put it through the dryer at a super high temp
  • After three years of COVID, the world has gotten used to thinking about infections in terms of airways. “We need to recalibrate,” Bhumbra told me, “and remember that other things exist.”
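The 5.5 billion figure above can be sanity-checked with two numbers from the norovirus literature that the excerpt itself does not state, so treat both as assumptions: peak shedding on the order of 10^11 viral particles per gram of stool, and an infectious dose as low as roughly 18 particles. A minimal back-of-envelope sketch under those assumptions:

```python
# Back-of-envelope check of the "5.5 billion infectious doses per gram" claim.
# Both inputs are assumptions drawn from commonly cited norovirus estimates,
# not figures stated in the excerpt above.
particles_per_gram = 1e11  # peak shedding: viral particles per gram of stool
particles_per_dose = 18    # lowest commonly cited infectious dose

doses_per_gram = particles_per_gram / particles_per_dose
print(f"{doses_per_gram:.2e} infectious doses per gram")  # ~5.56e+09
```

The quotient, roughly 5.6 billion, matches the article’s figure and is in line with Eurasia’s population of about 5.4 billion people.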
Javier E

Opinion | H5N1 Bird Flu is Causing Alarm. Here's Why We Must Act. - The New York Times - 0 views

  • Bird flu — known more formally as avian influenza — has long hovered on the horizons of scientists’ fears. This pathogen, especially the H5N1 strain, hasn’t often infected humans, but when it has, 56 percent of those known to have contracted it have died. Its inability to spread easily, if at all, from one person to another has kept it from causing a pandemic.
  • But things are changing. The virus, which has long caused outbreaks among poultry, is infecting more and more migratory birds, allowing it to spread more widely, even to various mammals, raising the risk that a new variant could spread to and among people.
peterconnelly

While China makes Pacific islands tour, US Coast Guard is already on patrol - CNN - 0 views

  • As China's foreign minister began a Pacific islands tour to promote economic and security cooperation with Beijing, the smallest of the US government's armed services was already on the scene, reinforcing Washington's longstanding commitment to the region.
  • The US cutter "helped to fill the operational presence needed by conducting maritime surveillance to deter illegal, unreported, and unregulated fishing in the northern Solomon Islands," a Coast Guard press release said.
  • China had proposed a sweeping regional security and economic agreement with a number of Pacific Island nations
  • ...7 more annotations...
  • "We will expand US Coast Guard presence and cooperation in Southeast and South Asia and the Pacific Islands, with a focus on advising, training, deployment, and capacity-building," the strategy's action plan says.
  • "Don't be too anxious and don't be too nervous, because the common development and prosperity of China and all the other developing countries would only mean great harmony, greater justice and greater progress of the whole world," he said.
  • The pact, if accepted, would have marked a significant advance in Beijing's connection to the region, which holds geo-strategic importance in the Indo-Pacific.
  • The relationships the US Coast Guard has forged in the Pacific islands have deep roots, said Collin Koh, research fellow at the S. Rajaratnam School of International Studies in Singapore.
  • With fish as the main food source and key economic driver of the island nations, the Coast Guard says the emphasis of Operation Blue Pacific is to deter illegal and unregulated fishing.
  • "You cannot understate the Coast Guard's importance to ... relationships in the Central and Western Pacific," he said.
  • "It's difficult to imagine China having sufficient political capital to push for something analogous to what the US is currently doing," Koh said.
criscimagnael

U.S. Aims to Constrain China by Shaping Its Environment, Blinken Says - The New York Times - 0 views

  • “China is the only country with both the intent to reshape the international order and, increasingly, the economic, diplomatic, military and technological power to do it,”
  • “We can’t rely on Beijing to change its trajectory,” he said. “So we will shape the strategic environment around Beijing to advance our vision for an open and inclusive international system.”
  • On Feb. 4, almost three weeks before the invasion, President Vladimir V. Putin met with President Xi Jinping in Beijing as their two governments issued a 5,000-word statement announcing a “no limits” partnership that aims to oppose the international diplomatic and economic systems overseen by the United States and its allies. Since the war began, the Chinese government has given Russia diplomatic support by reiterating Mr. Putin’s criticisms of the North Atlantic Treaty Organization and spreading disinformation and conspiracy theories that undermine the United States and Ukraine.
  • ...9 more annotations...
  • In private conversations, Chinese officials have expressed concern about the emphasis on regional alliances under Mr. Biden and their potential to hem in China.
  • Mr. Blinken’s speech revolved around the slogan for the Biden strategy: “Invest, Align and Compete.” The partnerships fall under the “align” part. “Invest” refers to pouring resources into the United States — administration officials point to the $1 trillion bipartisan infrastructure law passed last year as an example. And “compete” refers to the rivalry with China, a framing the Trump administration also promoted.
  • “Beijing wants to put itself at the center of global innovation and manufacturing, increase other countries’ technological dependence, and then use that dependence to impose its foreign policy preferences,” Mr. Blinken said. “And Beijing is going to great lengths to win this contest — for example, taking advantage of the openness of our economies to spy, to hack, to steal technology and know-how to advance its military innovation and entrench its surveillance state.”
  • Mr. Blinken also noted the human rights abuses, repression of ethnic minorities and quashing of free speech and assembly by the Communist Party in Xinjiang, Tibet and Hong Kong. In recent years, those issues have galvanized greater animus toward China among Democratic and Republican politicians and policymakers. “We’ll continue to raise these issues and call for change,” he said.
  • Mr. Blinken said it was China’s recent actions toward Taiwan — trying to sever the island’s diplomatic and international ties and sending fighter jets over the area — that are “deeply destabilizing.”
  • “Arguably no country on earth has benefited more from that than China,” he said. “But rather than using its power to reinforce and revitalize the laws, agreements, principles and institutions that enabled its success, so other countries can benefit from them too, Beijing is undermining it.”
  • “For too long, Chinese companies have enjoyed far greater access to our markets than our companies have in China,” Mr. Blinken said.” This lack of reciprocity is unacceptable and it’s unsustainable.”
  • But skeptics have said Washington’s ability to shape trade in the Asia-Pacific region may be limited because the framework is not a traditional trade agreement that offers countries reductions in tariffs and more access to the lucrative American market — a move that would be politically unpopular in the United States.
  • “We can stay vigilant about our national security without closing our doors,” he said. “Racism and hate have no place in a nation built by generations of immigrants to fulfill the promise of opportunity for all.”
Javier E

Opinion | Omicron Is Not the Final Variant - The New York Times - 0 views

  • To mitigate the impact of future variants, the world needs to establish and strengthen virus monitoring and surveillance systems that can identify emerging variants quickly so that leaders can respond.
  • Here’s how it works: Scientists regularly get samples of the virus from people who are infected and sequence those samples. This helps scientists pick up on notable changes in the virus. Spikes in cases in certain areas can also alert scientists to look deeper. When researchers find something notable, they can alert colleagues for further study. (An illustrative sketch of the comparison step appears after these annotations.)
  • Networks of laboratories worldwide should be equipped to study the properties of any new variant to assess its potential impact on available tests, vaccines’ effectiveness and treatments.
  • ...7 more annotations...
  • Scientists in South Africa and Botswana who are already doing this kind of routine surveillance of the coronavirus were able to rapidly warn their research networks and the rest of the world about Omicron. Going forward, such findings must also trigger an effective collective response.
  • When concerning variants are identified, there needs to be a global agreement on how countries should jointly react to mitigate any health and economic harms.
  • Every country must also ramp up its testing infrastructure for the coronavirus.
  • Most important, the global vaccination effort must be scaled up to blunt the continued circulation of the virus.
  • During surges, countries need to increase access to the measures that can lower risk of infection, like masks. The right mask, worn properly and consistently in indoor public spaces, can provide some protection against all variants
  • Now that there are drugs available to treat infections, country leaders and drug companies must ensure that there’s plenty of supply and that it is available to everyone.
  • The world got lucky with Omicron. It’s unimaginable what would have happened if that highly contagious variant had caused disease as severe as Delta has
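The sampling-and-sequencing loop described above is prose-only in the excerpt. As a purely illustrative sketch of the comparison step, not the pipeline any surveillance lab actually runs, flagging substitutions against a reference might look like the following; the protein fragments, the notation helper and the alert threshold are all hypothetical placeholders:

```python
# Illustrative sketch: compare a newly sequenced protein fragment against a
# reference and flag samples that carry substitutions for expert review.
# REFERENCE, SAMPLE and ALERT_THRESHOLD are placeholder values, not real data.

REFERENCE = "MFVFLVLLPLVSSQCVNLT"  # placeholder reference fragment
SAMPLE = "MFVFLVLLPLVSSRCVNLT"     # same fragment with one substitution

def substitutions(ref: str, sample: str) -> list[str]:
    """Return point substitutions in 'N501Y'-style notation (1-based)."""
    return [
        f"{r}{i}{s}"
        for i, (r, s) in enumerate(zip(ref, sample), start=1)
        if r != s
    ]

ALERT_THRESHOLD = 1  # hypothetical cutoff for escalating a sample

muts = substitutions(REFERENCE, SAMPLE)
if len(muts) >= ALERT_THRESHOLD:
    print(f"Flag for review: {len(muts)} substitution(s): {', '.join(muts)}")
```

Real surveillance networks align full genomes and use curated lineage definitions rather than a raw substitution count, but the core step is the same: compare new samples against a reference and escalate notable changes for further study.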