
Javier E

U.S. intelligence reports from January and February warned about a likely pandemic - Th...

  • U.S. intelligence agencies were issuing ominous, classified warnings in January and February about the global danger posed by the coronavirus while President Trump and lawmakers played down the threat and failed to take action that might have slowed the spread of the pathogen, according to U.S. officials familiar with spy agency reporting.
  • they did track the spread of the virus in China, and later in other countries, and warned that Chinese officials appeared to be minimizing the severity of the outbreak.
  • Taken together, the reports and warnings painted an early picture of a virus that showed the characteristics of a globe-encircling pandemic that could require governments to take swift actions to contain it
  • But despite that constant flow of reporting, Trump continued publicly and privately to play down the threat the virus posed to Americans.
  • Intelligence agencies “have been warning on this since January,” said a U.S. official who had access to intelligence reporting that was disseminated to members of Congress and their staffs as well as to officials in the Trump administration
  • “Donald Trump may not have been expecting this, but a lot of other people in the government were — they just couldn’t get him to do anything about it,” this official said. “The system was blinking red.”
  • The warnings from U.S. intelligence agencies increased in volume toward the end of January and into early February, said officials familiar with the reports. By then, a majority of the intelligence reporting included in daily briefing papers and digests from the Office of the Director of National Intelligence and the CIA was about covid-19, said officials who have read the reports.
  • The surge in warnings coincided with a move by Sen. Richard Burr (R-N.C.) to sell dozens of stocks worth between $628,033 and $1.72 million.
  • A key task for analysts during disease outbreaks is to determine whether foreign officials are trying to minimize the effects of an outbreak or take steps to hide a public health crisis
  • At the State Department, personnel had been nervously tracking early reports about the virus. One official noted that it was discussed at a meeting in the third week of January, around the time that cable traffic showed that U.S. diplomats in Wuhan were being brought home on chartered planes — a sign that the public health risk was significant
  • Inside the White House, Trump’s advisers struggled to get him to take the virus seriously, according to multiple officials with knowledge of meetings among those advisers and with the president.
  • Azar couldn’t get through to Trump to speak with him about the virus until Jan. 18, according to two senior administration officials. When he reached Trump by phone, the president interjected to ask about vaping and when flavored vaping products would be back on the market
  • On Jan. 27, White House aides huddled with then-acting chief of staff Mick Mulvaney in his office, trying to get senior officials to pay more attention to the virus
  • Joe Grogan, the head of the White House Domestic Policy Council, argued that the administration needed to take the virus seriously or it could cost the president his reelection, and that dealing with the virus was likely to dominate life in the United States for many months.
  • Trump was dismissive because he did not believe that the virus had spread widely throughout the United States.
  • By early February, Grogan and others worried that there weren’t enough tests to determine the rate of infection, according to people who spoke directly to Grogan
  • But Trump resisted and continued to assure Americans that the coronavirus would never run rampant as it had in other countries. “I think it’s going to work out fine,” Trump said on Feb. 19. “I think when we get into April, in the warmer weather, that has a very negative effect on that and that type of a virus.”
  • “The Coronavirus is very much under control in the USA,” Trump tweeted five days later. “Stock Market starting to look very good to me!”
  • But earlier that month, a senior official in the Department of Health and Human Services delivered a starkly different message to the Senate Intelligence Committee, in a classified briefing that four U.S. officials said covered the coronavirus and its global health implications. The House Intelligence Committee received a similar briefing.
  • Robert Kadlec, the assistant secretary for preparedness and response — who was joined by intelligence officials, including from the CIA — told committee members that the virus posed a “serious” threat, one of those officials said.
  • he said that to get ahead of the virus and blunt its effects, Americans would need to take actions that could disrupt their daily lives, the official said. “It was very alarming.”
  • Trump’s insistence on the contrary seemed to rest in his relationship with China’s President Xi Jinping, whom Trump believed was providing him with reliable information about how the virus was spreading in China, despite reports from intelligence agencies that Chinese officials were not being candid about the true scale of the crisis.
  • Some of Trump’s advisers told him that Beijing was not providing accurate numbers
  • Rather than press China to be more forthcoming, Trump publicly praised its response.
  • “China has been working very hard to contain the Coronavirus,” Trump tweeted Jan. 24. “The United States greatly appreciates their efforts and transparency. It will all work out well. In particular, on behalf of the American People, I want to thank President Xi!”
  • Trump on Feb. 3 banned foreigners who had been in China in the previous 14 days from entering the United States, a step he often credits for helping to protect Americans against the virus. He has also said publicly that the Chinese weren’t honest about the effects of the virus. But that travel ban wasn’t accompanied by additional significant steps to prepare
  • As the first cases of infection were confirmed in the United States, Trump continued to insist that the risk to Americans was small. “I think the virus is going to be — it’s going to be fine,” he said on Feb. 10

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
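The prediction-driven learning sketched in these excerpts can be illustrated with a toy model. The snippet below is a deliberate simplification (a bigram frequency counter rather than a neural network, with made-up training sentences), but it shows the core idea that predictive skill emerges from the training text alone, and improves as more text is fed in:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: counts which word follows which, instead of
# training a neural network -- a drastic simplification of the models
# described in the article, meant only to illustrate learning by prediction.
class BigramPredictor:
    def __init__(self):
        self.following = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.following[current][nxt] += 1

    def predict(self, word):
        counts = self.following.get(word.lower())
        if not counts:
            return None  # never saw this word during training
        return counts.most_common(1)[0][0]  # most frequent follower

model = BigramPredictor()
model.train("the cat sat on the mat")
model.train("the cat chased the dog")
print(model.predict("the"))  # "cat" -- follows "the" most often
print(model.predict("sat"))  # "on"
```

A real language model replaces these raw counts with a learned geometric representation, which is what lets it generalize to word sequences it has never seen.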
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
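Millière's memorize-then-generalize distinction can be made concrete with a small sketch (hypothetical code, not the actual experiment): a lookup table nails the problems it has seen during training but fails on anything new, while a learned rule covers every case.

```python
# Memorization vs. generalization -- a hypothetical illustration of the
# "lazy network" idea, not a reproduction of the transformer experiment.
memorized = {(2, 2): 4, (3, 5): 8, (1, 7): 8}  # problems seen in "training"

def predict_by_memorization(a, b):
    # Perfect on seen problems, useless on unseen ones.
    return memorized.get((a, b))

def predict_by_learned_rule(a, b):
    # The concept itself, once the model pivots to actually learning to add.
    return a + b

print(predict_by_memorization(2, 2))   # 4: seen before
print(predict_by_memorization(40, 2))  # None: memorization breaks down
print(predict_by_learned_rule(40, 2))  # 42: the rule generalizes
```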
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Trump Is Attempting to Politicize American Intelligence Agencies - The Atlantic - 0 views

  • The White House recently sought to enlist the Departments of Homeland Security and Justice to build a case for its controversial and unpopular immigration ban, CNN reported on Thursday. Among intelligence professionals, the request to produce analysis that supports a favored policy—vice producing analysis, and allowing it to inform policy—is called politicization
  • It is anathema to the training most analysts receive and the values that lie at the heart of the vocation. There is a high cost to putting ideology over informed assessments of political, economic, and military realities.
  • At the Central Intelligence Agency, where I served as director of strategy in the Directorate of Analysis, the subject of politicization is introduced to analysts almost as soon as they enter into service. There is good reason for this: Politicization is not an academic issue.
  • ...9 more annotations...
  • During the Cold War, the Ford administration convened a Team B composed of conservative foreign-policy thinkers to challenge the intelligence community’s estimates of Soviet nuclear capabilities. Then-CIA director and future President George H.W. Bush later concluded the group’s work lent “itself to manipulation for purposes other than estimative accuracy.”
  • In the early 1990s, after a rocky confirmation process during which he was accused of politicizing intelligence analysis, Director of the CIA Robert Gates implemented a series of reforms aimed at guarding against political or ideological thinking coloring intelligence analysis. Gates described politicization as “deliberately distorting analysis or judgments to favor a preferred line of thinking irrespective of evidence.”
  • during my tenure as an analyst with the CIA—President George W. Bush’s administration exerted unusual pressure to have the CIA support its plans to invade Iraq because of that country’s alleged ties to al-Qaeda and its weapons of mass destruction program. Both assumptions proved flawed.
  • An internal CIA post-mortem concluded that the CIA’s assessments of the Iraqi WMD program were a case of an effective denial-and-deception program that fed prevailing assumptions.
  • Intelligence analysis is more an imperfect art than it is a science: Gaps in reporting, bad sources, and circular reporting all complicate the analyst’s quest for knowledge and understanding
  • Politicization, however, sits on top of all of these complicating factors because it is an act of willful commission: At its most overt, it amounts to using a political position to get people to say that a clear, bright blue sky is cloudy
  • Speaking “truth to power” requires courage, because political partisans are all too happy to casually decry dissent as disloyalty.
  • What is the cost of politicization? As of 2013, it was estimated that the American invasion of Iraq in 2003 cost an estimated $1.7 trillion, and saw over 4,000 Americans killed in action and over 30,000 wounded in action. Those numbers don’t include the families of the fallen; the innocent Iraqis killed or wounded during the conflict; or the insurgency that evolved into the extremist threat that we now know as ISIS.
  • The irony is that President Trump is a vocal critic of his predecessors’ decisions to invade, occupy, and ultimately withdraw from Iraq. In the run-up to that war, the Department of Defense formed an Office of Special Plans, conceived by Deputy Secretary of Defense Paul Wolfowitz, which as Seymour Hersh argued in The New Yorker, “was created in order to find evidence of what Wolfowitz and his boss, Defense Secretary Donald Rumsfeld, believed to be true” about Iraq and the threat it posed to the world.
Javier E

Senate Votes to Extend Electronic Surveillance Authority - NYTimes.com - 0 views

  • Congress gave final approval on Friday to a bill extending the government’s power to intercept electronic communications of spy and terrorism suspects, after the Senate voted down proposals from several Democrats and Republicans to increase protections of civil liberties and privacy.
  • clearing it for approval by President Obama, who strongly supports it. Intelligence agencies said the bill was their highest legislative priority.
  • Congressional critics of the bill said that they suspected that intelligence agencies were picking up the communications of many Americans, but that they could not be sure because the agencies would not provide even rough estimates of how many people inside the United States had had communications collected under authority of the surveillance law, known as the Foreign Intelligence Surveillance Act.
  • ...5 more annotations...
  • The Foreign Intelligence Surveillance Act was adopted in 1978 and amended in 2008, with the addition of new surveillance authority and procedures, which are continued by the bill approved on Friday. The 2008 law was passed after the disclosure that President George W. Bush had authorized eavesdropping inside the United States, to search for evidence of terrorist activity, without the court-approved warrants ordinarily required for domestic spying.
  • By a vote of 52 to 43, the Senate on Friday rejected a proposal by Mr. Wyden to require the national intelligence director to tell Congress if the government had collected any domestic e-mail or telephone conversations under the surveillance law. The Senate also rejected, 54 to 37, an amendment that would have required disclosure of information about significant decisions by a special federal court that reviews applications for electronic surveillance in foreign intelligence cases.
  • The No. 2 Senate Democrat, Richard J. Durbin of Illinois, said the surveillance law “does not have adequate checks and balances to protect the constitutional rights of innocent American citizens.” “It is supposed to focus on foreign intelligence,” Mr. Durbin said, “but the reality is that this legislation permits targeting an innocent American in the United States as long as an additional purpose of the surveillance is targeting a person outside the United States.”
  • Mr. Merkley said the administration should provide at least unclassified summaries of major decisions by the Foreign Intelligence Surveillance Court. “An open and democratic society such as ours should not be governed by secret laws,” Mr. Merkley said, “and judicial interpretations are as much a part of the law as the words that make up our statute.”
  • Mr. Wyden said these writs reminded him of the “general warrants that so upset the colonists” more than 200 years ago. “The founding fathers could never have envisioned tweeting and Twitter and the Internet,” Mr. Wyden said. “Advances in technology gave government officials the power to invade individual privacy in a host of new ways.”
Javier E

Trump is already antagonizing the intelligence community, and that's a problem - The Wa... - 0 views

  • On Sunday, the president-elect again rejected the Russian role, adding that he was smart enough that he didn’t want or need a daily briefing.
  • If what is gained is not used or wanted or is labeled as suspect or corrupt — by what moral authority does a director put his people at risk?
  • Then there is the ethic of the intelligence profession, captured by the gospel of John’s dictum in the agency’s headquarters lobby — that the truth will set you free.
  • ...3 more annotations...
  • What happens if the incoming administration directs that the “Russia did it” file be closed? Would standing intelligence requirements to learn more about this be eliminated? And if they were, what would the agency do with relevant data that would inevitably come through its collection network?
  • And what about the statute that requires the CIA and the rest of the intelligence community to keep Congress “fully and currently informed” about all significant intelligence activities? Data on a foreign power manipulating the federal electoral process would certainly qualify. What will the White House position be when the agency is asked by Congress if it has learned anything more on the issue?
  • His future workforce will be looking for clues about his willingness to defend them against charges of incompetence and politicization simply for saying what their craft tells them to be true.
malonema1

Trump walks back sanctions against Russia, contradicting Nikki Haley - TODAY.com - 0 views

  • President Trump is walking back plans to impose new economic sanctions against Russia announced Sunday by U.N. Ambassador Nikki Haley. The planned sanctions were an attempt to punish Russia for its support of Syrian President Bashar Assad after a chemical weapons attack earlier this month.
  • Amid the historic developments formally ending the Korean War, North Korean leader Kim Jong Un has promised to close down a nuclear test site in May. NBC’s Keir Simmons reports for TODAY from London. {"1222314563954":{"mpxId":"1222314563954","canonical_url":"https://www.today.com/video/how-author-allison-pataki-s-life-was-changed-by-her-husband-s-stroke-1222314563954","canonicalUrl":"https://www.today.com/video/how-author-allison-pataki-s-life-was-changed-by-her-husband-s-stroke-1222314563954","legacy_url":"https://www.today.com/video/how-author-allison-pataki-s-life-was-changed-by-her-husband-s-stroke-1222314563954","playerUrl":"https://www.today.com/offsite/how-author-allison-pataki-s-life-was-changed-by-her-husband-s-stroke-1222314563954","ampPlayerUrl":"https://player.today.com/offsite/how-author-allison-pataki-s-life-was-changed-by-her-husband-s-stroke-1222314563954","relatedLink":"","sentiment":"Positive","shortUrl":"https://www.today.com/video/how-author-allison-pataki-s-life-was-changed-by-her-husband-s-stroke-1222314563954","description":"Daughter of former New York Gov. 
George Pataki, Allison Pataki details how her life was changed by her husband’s stroke in her new memoir, “Beauty in the Broken Places.” TODAY’s Jenna Bush Hager reports.","title":"How author Allison Pataki’s life was changed by her husband’s stroke","thumbnail":"https://media4.s-nbcnews.com/j/MSNBC/Components/Video/201804/tdy_health_jenna_stroke_180430_1920x1080.today-vid-rail.jpg","socialTitle":"How author Allison Pataki’s life was changed by her husband’s stroke","seoHeadline":"How author Allison Pataki’s life was changed by her husband’s stroke","guid":"tdy_health_jenna_stroke_180430","newsNetwork":"TODAY.com","videoType":"Broadcast","isSponsored":false,"nativeAd":false,"autoPlay":false,"mezzVersion":1,"embedCode":"%3Cdiv%20style=%22position:relative;%20padding-bottom:63%25;%20padding-bottom:-webkit-calc(56.25%25%20+%2050px);%20padding-bottom:calc(56.25%25%20+%2050px);%20height:%200;%22%3E%0A%20%20%20%20%3Ciframe%20style=%22position:absolute;%20width:%20100%25;%20height:%20100%25;%22%0A%20%20%20%20src=%22https://www.today.com/offsite/how-author-allison-pataki-s-life-was-changed-by-her-husband-s-stroke-1222314563954%22%20scrolling=%22no%22%20frameborder=%220%22%3E%3C/iframe%3E%0A%20%20%3C/div%3E","duration":274,"pub_date":"2018-04-30T12:44:10.000+0000","pub_date_user_facing":"April 30th, 2018","videoAssets":[{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/9Fe_exuRq8lR?MBR=TRUE","width":480,"height":270,"bitrate":479977,"duration":274,"durationISO":"PT4M33.34S","assetType":"Akamai Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/0o5tr_475iWV?MBR=TRUE","width":480,"height":270,"bitrate":275203,"duration":274,"durationISO":"PT4M33.34S","assetType":"Akamai Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/A1cxTcUOSiuY?MBR=TRUE","width":960,"height":540,"bitrate":1743277,"duration":274,"durationISO":"PT4M33.34S","assetType":"Akamai 
Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/eUyW5b5tJxFe?MBR=TRUE","width":1280,"height":720,"bitrate":3380893,"duration":274,"durationISO":"PT4M33.34S","assetType":"Akamai Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/s_DndGGU_0hw?MBR=TRUE","width":640,"height":360,"bitrate":926383,"duration":274,"durationISO":"PT4M33.34S","assetType":"Akamai Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/_m4OXAdtuKaF?MBR=TRUE","width":1920,"height":1080,"bitrate":4680830,"duration":274,"durationISO":"PT4M33.34S","assetType":"Akamai Video"}],"captionLinks":{"srt":"https://nbcnewsdigital-static.nbcuni.com/media/captions/NBC_News/379/7/1525092363215_tdy_health_jenna_stroke_180430.srt"},"requiresCaptioning":false,"hasCaptions":true,"hasTranscript":false,"transcript":"","availabilityState":"available"},"1222337091916":{"mpxId":"1222337091916","canonical_url":"https://www.today.com/video/cleveland-kidnapping-survivor-michelle-knight-talks-about-new-life-marriage-1222337091916","canonicalUrl":"https://www.today.com/video/cleveland-kidnapping-survivor-michelle-knight-talks-about-new-life-marriage-1222337091916","legacy_url":"https://www.today.com/video/cleveland-kidnapping-survivor-michelle-knight-talks-about-new-life-marriage-1222337091916","playerUrl":"https://www.today.com/offsite/cleveland-kidnapping-survivor-michelle-knight-talks-about-new-life-marriage-1222337091916","ampPlayerUrl":"https://player.today.com/offsite/cleveland-kidnapping-survivor-michelle-knight-talks-about-new-life-marriage-1222337091916","relatedLink":"","sentiment":"Neutral","shortUrl":"https://www.today.com/video/cleveland-kidnapping-survivor-michelle-knight-talks-about-new-life-marriage-1222337091916","description":"Almost five years after her escape from the Cleveland home of Ariel Castro, who held her and two others captive for over a decade, Michelle Knight (now known as Lily Rose Lee) joins Megyn Kelly TODAY to talk about her ordeal 
and her new memoir, “Life After Darkness.” She talks about her recent marriage and her prospects for having a child.","title":"Cleveland kidnapping survivor Michelle Knight talks about new life, marriage","thumbnail":"https://media2.s-nbcnews.com/j/MSNBC/Components/Video/201804/tdy_mk_news_michelle_knight_180430.today-vid-rail.jpg","socialTitle":"Cleveland kidnapping survivor Michelle Knight talks about new life, marriage","seoHeadline":"Cleveland kidnapping survivor Michelle Knight talks about new life, marriage","guid":"tdy_mk_news_michelle_knight_180430","newsNetwork":"TODAY.com","videoType":"Broadcast","isSponsored":false,"nativeAd":false,"autoPlay":false,"mezzVersion":1,"embedCode":"%3Cdiv%20style=%22position:relative;%20padding-bottom:63%25;%20padding-bottom:-webkit-calc(56.25%25%20+%2050px);%20padding-bottom:calc(56.25%25%20+%2050px);%20height:%200;%22%3E%0A%20%20%20%20%3Ciframe%20style=%22position:absolute;%20width:%20100%25;%20height:%20100%25;%22%0A%20%20%20%20src=%22https://www.today.com/offsite/cleveland-kidnapping-survivor-michelle-knight-talks-about-new-life-marriage-1222337091916%22%20scrolling=%22no%22%20frameborder=%220%22%3E%3C/iframe%3E%0A%20%20%3C/div%3E","duration":736,"pub_date":"2018-04-30T13:44:06.000+0000","pub_date_user_facing":"April 30th, 2018","videoAssets":[{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/7Cg3OcsCGFMA?mbr=true","width":480,"height":270,"bitrate":463000,"duration":736,"durationISO":"PT12M16S","assetType":"Akamai Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/DzFb7_cYHbym?mbr=true","width":480,"height":270,"bitrate":264000,"duration":736,"durationISO":"PT12M16S","assetType":"Akamai Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/Ee0U4H3Jsue7?mbr=true","width":1280,"height":720,"bitrate":3295000,"duration":736,"durationISO":"PT12M16S","assetType":"Akamai 
Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/mlJNTUu_C1Oh?mbr=true","width":960,"height":540,"bitrate":1695000,"duration":736,"durationISO":"PT12M16S","assetType":"Akamai Video"},{"format":"MPEG4","publicUrl":"//link.theplatform.com/s/2E2eJC/woRtUPPoe7Vn?mbr=true","width":640,"height":360,"bitrate":895000,"duration":736,"durationISO":"PT12M16S","assetType":"Akamai Video"}],"captionLinks":{},"requiresCaption
  • ...1 more annotation...
  • North Korea to close down nuclear test site in May
hannahcarter11

Rep. Michael McCaul, top Republican on House Foreign Affairs Committee, calls Covid-19 ... - 0 views

  • Texas Rep. Michael McCaul, a top Republican on the House Foreign Affairs Committee, claimed Sunday the origins of the coronavirus pandemic are the "worst cover-up" in human history.
  • A fierce debate has raged over whether the virus escaped from a lab in Wuhan or originated in the wild. Initially, prominent scientists publicly derided the so-called lab leak theory -- embraced by then-President Donald Trump and his allies -- as a conspiracy theory, and the intelligence community put out a rare public statement in late April 2020 affirming that it "also concurs with the wide scientific consensus that the Covid-19 virus was not manmade or genetically modified."
  • Other lawmakers have also called for answers regarding the origin of the virus and members of the House Foreign Affairs Committee, which has long been investigating the origins of the pandemic, received a classified briefing on the matter earlier this month, according to a source familiar with the matter.
  • ...3 more annotations...
  • The comments from McCaul follows a directive from President Joe Biden ordering the intelligence community to redouble its efforts in investigating the origins of the coronavirus pandemic and to report back to him in 90 days. A US intelligence report found several researchers at China's Wuhan Institute of Virology fell ill in November 2019 and had to be hospitalized.
  • But as early as March 27, 2020, the Defense Intelligence Agency -- which is home to one of the intelligence community's most robust scientific cells -- in a classified assessment reported by Newsweek found that it was possible that the virus had emerged "accidentally" due to "unsafe laboratory practices."
  • And the Chinese government's lack of transparency and the restricted sharing of data have also hindered the intelligence community's ability to thoroughly investigate the lab leak theory. The US and Britain called on China last week to participate in a second phase of a World Health Organization investigation into the pandemic's origins, but China responded that its role in the probe "has been completed."
Javier E

Warnings Ignored: A Timeline of Trump's COVID-19 Response - The Bulwark - 0 views

  • the White House is trying to establish an alternate reality in which Trump was a competent, focused leader who saved American people from the coronavirus.
  • it highlights just how asleep Trump was at the switch, despite warnings from experts within his own government and from former Trump administration officials pleading with him from the outside.
  • Most prominent among them were former Homeland Security advisor Tom Bossert, Commissioner of the Food and Drug Administration Scott Gottlieb, and Director for Medical and Biodefense Preparedness at the National Security Council Dr. Luciana Borio, who, beginning in early January, used op-eds, television appearances, social media posts, and private entreaties to try to spur the administration into action.
  • ...57 more annotations...
  • what the administration should have been doing in January to prepare us for today.
  • She cites the delay on tests, without which “cases go undetected and people continue to circulate” as a leading issue along with other missed federal government responses—many of which are still not fully operational
  • The prescient recommendations from experts across disciplines in the period before COVID-19 reached American shores—about testing, equipment, and distancing—make clear that, more than any single factor, it was Trump’s squandering of our lead time, which should have been used to prepare for the pandemic, that has exacerbated this crisis.
  • What follows is an annotated timeline revealing the warning signs the administration received and showing how slow the administration was to act on these recommendations.
  • The Early Years: Warnings Ignored
  • 2017: Trump administration officials are briefed on an intelligence document titled “Playbook for Early Response to High-Consequence Emerging Infectious Disease Threats and Biological Incidents.” That’s right. The administration literally had an actual playbook for what to do in the early stages of a pandemic.
  • February 2018: The Washington Post writes “CDC to cut by 80 percent efforts to prevent global disease outbreak.” The meat of the story is “Countries where the CDC is planning to scale back include some of the world’s hot spots for emerging infectious disease, such as China, Pakistan, Haiti, Rwanda and Congo.”
  • May 2018: At an event marking the 100 year anniversary of the 1918 pandemic, Borio says “pandemic flu” is the “number 1 health security issue” and that the U.S. is not ready to respond.
  • One day later, her boss, Rear Adm. Timothy Ziemer, is pushed out of the administration and the global health security team is disbanded.
  • Beth Cameron, former senior director for global health security on the National Security Council, adds: “It is unclear in his absence who at the White House would be in charge of a pandemic,” calling it “a situation that should be immediately rectified.” Note: It was not.
  • January 2019: The director of National Intelligence issues the U.S. Intelligence Community’s assessment of threats to national security. Among its findings:
  • “A novel strain of a virulent microbe that is easily transmissible between humans continues to be a major threat, with pathogens such as H5N1 and H7N9 influenza and Middle East Respiratory Syndrome Coronavirus having pandemic potential if they were to acquire efficient human-to-human transmissibility.”
  • Page 21: “We assess that the United States and the world will remain vulnerable to the next flu pandemic or large scale outbreak of a contagious disease that could lead to massive rates of death and disability, severely affect the world economy, strain international resources, and increase calls on the United States for support.”
  • September 2019: The Trump administration ended the pandemic early warning program, PREDICT, which trained scientists in China and other countries to identify viruses that had the potential to turn into pandemics. According to the Los Angeles Times, “field work ceased when funding ran out in September,” two months before COVID-19 emerged in Wuhan Province, China.
  • 2020: COVID-19 Arrives
  • January 3, 2020: The CDC is first alerted to a public health event in Wuhan, China.
  • January 6, 2020: The CDC issues a travel notice for Wuhan due to the spreading coronavirus
  • Note: The Trump campaign claims that this marks the beginning of the federal government’s disease control experts becoming aware of the virus. It was 10 weeks from this point until the week of March 16, when Trump began to change his tone on the threat.
  • January 10, 2020: Former Trump Homeland Security Advisor Tom Bossert warns that we shouldn’t “jerk around with ego politics” because “we face a global health threat…Coordinate!”
  • January 18, 2020: After two weeks of attempts, HHS Secretary Alex Azar finally gets the chance to speak to Trump about the virus. The president redirects the conversation to vaping, according to the Washington Post. 
  • January 21, 2020: Dr. Nancy Messonnier, the director of the National Center for Immunization and Respiratory Disease at the CDC tells reporters, “We do expect additional cases in the United States.”
  • January 27, 2020: Top White House aides meet with Chief of Staff Mick Mulvaney to encourage greater focus on the threat from the virus. Joe Grogan, head of the White House Domestic Policy Council warns that “dealing with the virus was likely to dominate life in the United States for many months.”
  • January 28, 2020: Two former Trump administration officials—Gottlieb and Borio—publish an op-ed in the Wall Street Journal imploring the president to “Act Now to Prevent an American Epidemic.” They advocate a 4-point plan to address the coming crisis:
  • (1) Expand testing to identify and isolate cases. Note: This did not happen for many weeks. The first time more than 2,000 tests were deployed in a single day was not until almost six weeks later, on March 11.
  • (3) Prepare hospital units for isolation with more gowns and masks. Note: There was no dramatic ramp-up in the production of critical supplies undertaken. As a result, many hospitals quickly experienced shortages of critical PPE materials. Federal agencies waited until Mid-March to begin bulk orders of N95 masks.
  • January 29, 2020: Trump trade advisor Peter Navarro circulates an internal memo warning that America is “defenseless” in the face of an outbreak which “elevates the risk of the coronavirus evolving into a full-blown pandemic, imperiling the lives of millions of Americans.”
  • January 30, 2020: Dr. James Hamblin publishes another warning about critical PPE materials in the Atlantic, titled “We Don’t Have Enough Masks.”
  • January 29, 2020: Republican Senator Tom Cotton reaches out to President Trump in private to encourage him to take the virus seriously.
  • Late January, 2020:  HHS sends a letter asking to use its transfer authority to shift $136 million of department funds into pools that could be tapped for combating the coronavirus. White House budget hawks argued that appropriating too much money at once when there were only a few U.S. cases would be viewed as alarmist.
  • Trump’s Chinese travel ban only banned “foreign nationals who had been in China in the last 14 days.” This wording did not—at all—stop people from arriving in America from China. In fact, for much of the crisis, flights from China landed in America almost daily filled with people who had been in China, but did not fit the category as Trump’s “travel ban” defined it.
  • January 31, 2020: On the same day Trump was enacting his fake travel ban, Foreign Policy reports that face masks and latex gloves are sold out on Amazon and at leading stores in New York City and suggests the surge in masks being sold to other countries needs “refereeing” in the face of the coming crisis.
  • February 4, 2020: Gottlieb and Borio take to the WSJ again, this time to warn the president that “a pandemic seems inevitable” and call on the administration to dramatically expand testing, expand the number of labs for reviewing tests, and change the rules to allow for tests of people even if they don’t have a clear known risk factor.
  • Note: Some of these recommendations were eventually implemented—25 days later.
  • February 5, 2020: HHS Secretary Alex Azar requests $2 billion to “buy respirator masks and other supplies for a depleted federal stockpile of emergency medical equipment.” He is rebuffed by Trump and the White House OMB who eventually send Congress a $500 million request weeks later.
  • February 4 or 5, 2020: Robert Kadlec, the assistant secretary for preparedness and response, and other intelligence officials brief the Senate Intelligence Committee that the virus poses a “serious” threat and that “Americans would need to take actions that could disrupt their daily lives.”
  • February 5, 2020: Senator Chris Murphy tweets: Just left the Administration briefing on Coronavirus. Bottom line: they aren't taking this seriously enough. Notably, no request for ANY emergency funding, which is a big mistake. Local health systems need supplies, training, screening staff etc. And they need it now.
  • February 9, 2020: The Washington Post reports that a group of governors participated in a jarring meeting with Dr. Anthony Fauci and Dr. Robert Redfield that was much more alarmist than what they were hearing from Trump. “The doctors and the scientists, they were telling us then exactly what they are saying now,” Maryland Gov. Larry Hogan (R) said.
  • the administration lifted CDC restrictions on tests. This is a factually true statement. But it elides the fact that they did so on March 3—two critical weeks after the third Borio/Gottlieb op-ed on the topic, during which time the window for intervention had shrunk to a pinhole.
  • February 20, 2020: Borio and Gottlieb write in the Wall Street Journal that tests must be ramped up immediately “while we can intervene to stop spread.”
  • February 23, 2020: A Harvard School of Public Health professor issues a warning on the lack of test capability: “As of today, the US remains extremely limited in #COVID19 testing. Only 3 of ~100 public health labs have CDC test kits working and CDC is not sharing what went wrong with the kits. How to know if COVID19 is spreading here if we are not looking for it?”
  • February 24, 2020: The Trump administration sends a letter to Congress requesting a small dollar amount—between $1.8 billion and $2.5 billion—to help combat the spread of the coronavirus. This is, of course, a pittance
  • February 25, 2020: Messonnier says she expects “community spread” of the virus in the United States and that “disruption to everyday life might be severe.” Trump is reportedly furious, and Messonnier’s warnings are curtailed in the ensuing weeks.
  • Trump mocks Congress in a White House briefing, saying “If Congress wants to give us the money so easy—it wasn’t very easy for the wall, but we got that one done. If they want to give us the money, we’ll take the money.”
  • February 26, 2020: Congress, recognizing the coming threat, offers to give the administration $6 billion more than Trump asked for in order to prepare for the virus.
  • February 27, 2020: In a leaked audio recording Sen. Richard Burr, chairman of the Intelligence Committee and author of the Pandemic and All-Hazards Preparedness Act (PAHPA) and the Pandemic and All-Hazards Preparedness and Advancing Innovation Act (reauthorization of PAHPA), was telling people that COVID-19 “is probably more akin to the 1918 pandemic.”
  • March 3, 2020: Vice President Pence is asked about legislation encouraging companies to produce more masks. He says the Trump administration is “looking at it.”
  • March 4, 2020: HHS says they only have 1 percent of the respirator masks needed if the virus became a “full-blown pandemic.”
  • March 7, 2020: Fox News host Tucker Carlson flies to Mar-a-Lago to implore Trump to take the virus seriously in private rather than embarrass him on TV. Even after the private meeting, Trump continued to downplay the crisis.
  • March 9, 2020: Tom Bossert, Trump’s former Homeland Security adviser, publishes an op-ed saying it is “now or never” to act. He advocates for social distancing and school closures to slow the spread of the contagion.
  • Trump says that developments are “good for the consumer” and compares COVID-19 favorably to the common flu.
  • March 17, 2020: Facing continued shortages of the PPE equipment needed to prevent healthcare providers from succumbing to the virus, Oregon Senators Jeff Merkley and Ron Wyden call on Trump to use the Defense Production Act to expand the supply of medical equipment.
  • March 18, 2020: Trump signs the executive order to activate the Defense Production Act, but declines to use it
  • At the White House briefing he is asked about Senator Chuck Schumer’s call to urgently produce medical supplies and ventilators. Trump responds: “Well we’re going to know whether or not it’s urgent.” Note: At this point 118 Americans had died from COVID-19.
  • March 20, 2020: At an April 2 White House press conference, President Trump’s son-in-law Jared Kushner, who was made ad hoc point man for the coronavirus response, said that on this date he began working with Rear Admiral John Polowczyk to “build a team” that would handle the logistics and supply chain for providing medical supplies to the states. This suggestion was first made by former Trump administration officials on January 28.
  • March 22, 2020: Six days after calling for a 15-day period of distancing, Trump tweets that this approach “may be worse than the problem itself.”
  • March 24, 2020: Trump tells Fox News that he wants the country opened up by Easter Sunday (April 12)
  • As Trump was speaking to Fox, there were 52,145 confirmed cases in the United States and the doubling time for daily new cases was roughly four days.
Javier E

Peak Intel: How So-Called Strategic Intelligence Actually Makes Us Dumber - Eric Garlan... - 0 views

  • the culture of intelligence has been in free-fall since the financial crisis of 2008. While people may be pretending to follow intelligence, impostors in both the analyst and executive camps actually follow shallow, fake processes that justify their existing decisions and past investments.
  • three trends are making this harder
  • the explosion of cheap capital from Wall Street has led major industries to consolidate. Where a sector such as pharmaceuticals or telecommunications (and, of course, banking) might have had dozens of big players a couple of decades ago, now it has closer to five. When I began in the intelligence industry 15 years ago, I did projects for Compaq, Amoco, Wyeth Pharmaceuticals, and Cingular -- all of which have since been rolled into the conglomerates of Hewlett Packard, British Petroleum, Pfizer, and AT&T. There are fewer firms for an intelligence analyst to track, and their behavior has to be understood on totally different terms than when this discipline was created.
  • ...7 more annotations...
  • One cannot predict the future of a marketplace by trend analysis alone, because oligopolies do not compete the same way as do firms in free markets. 
  • industry consolidations have created gigantic bureaucracies. Hierarchical organizations have a very different logic than smaller firms. In less consolidated industries, success and failure are largely the result of the decisions you make, so intelligence about the reality of the marketplace is critical. Life is different in gigantic organizations, where success and failure are almost impossible to attribute to individual decisions.
  • In large, slow-moving bureaucracies, conventional thinking and risk avoidance become paramount
  • the world’s economy is today driven more by policy makers than at any time in recent history. At the behest of government officials, banks have been shielded from the consequences of their market decisions, and in many cases exempt from prosecution for their potential law-breaking. Nation-state policy-makers pick the winners in industries
  • How can you use classical competitive analysis to examine the future of markets when the relationships between firms and government agencies are so incestuous and the choices of consumers so severely limited by industrial consolidation?
  • Companies still need guidance, but if rational analysis is nearly impossible, is it any wonder that executives are asking for less of it? What they are asking for is something, well, less productive.
  • executives today do not do well when their analysts confront them with challenging, though often relatively benign, predictions. Confusion, anger, and psychological transference are common responses to unwelcome analysis.
izzerios

N.S.A. Gets More Latitude to Share Intercepted Communications - The New York Times - 0 views

  • In its final days, the Obama administration has expanded the power of the National Security Agency to share globally intercepted personal communications with the government’s 16 other intelligence agencies before applying privacy protections.
  • new rules significantly relax longstanding limits on what the N.S.A. may do with the information gathered by its most powerful surveillance operations
  • the government is reducing the risk that the N.S.A. will fail to recognize that a piece of information would be valuable to another agency, but increasing the risk that officials will see private information about innocent people.
  • ...16 more annotations...
  • Previously, the N.S.A. filtered information before sharing intercepted communications with another agency, like the C.I.A. or the intelligence branches of the F.B.I. and the Drug Enforcement Administration
  • N.S.A.’s analysts passed on only information they deemed pertinent
  • other intelligence agencies will be able to search directly through raw repositories of communications intercepted by the N.S.A.
  • “This is not expanding the substantive ability of law enforcement to get access to signals intelligence,”
  • “It is simply widening the aperture for a larger number of analysts, who will be bound by the existing rules.”
  • Toomey, a lawyer for the American Civil Liberties Union, called the move an erosion of rules intended to protect the privacy of Americans when their messages are caught by the N.S.A.’s powerful global collection methods
  • “Seventeen different government agencies shouldn’t be rooting through Americans’ emails with family members, friends and colleagues, all without ever obtaining a warrant.”
  • “Rather than dramatically expanding government access to so much personal data, we need much stronger rules to protect the privacy of Americans,” Mr. Toomey said
  • Under the new system, agencies will ask the N.S.A. for access to specific surveillance feeds, making the case that they contain information relevant and useful to their missions.
  • The move is part of a broader trend of tearing down bureaucratic barriers to sharing intelligence between agencies that dates back to the aftermath of the terrorist attacks of Sept. 11, 2001.
  • Congress enacted the FISA Amendments Act — which legalized warrantless surveillance on domestic soil so long as the target is a foreigner abroad, even when the target is communicating with an American
  • Among the most important questions left unanswered in February was when analysts would be permitted to use Americans’ names, email addresses or other identifying information to search a 12333 database and pull up any messages to, from or about them that had been collected without a warrant.
  • National security analysts sometimes search that act’s repository for Americans’ information, as do F.B.I. agents working on ordinary criminal cases. Critics call this the “backdoor search loophole,” and some lawmakers want to require a warrant for such searches.
  • However, under the rules, if analysts stumble across evidence that an American has committed any crime, they will send it to the Justice Department.
  • The limits on Americans’ information gathered under Order 12333 do not apply to metadata: logs showing who contacted whom, but not what they said.
  • Analysts at the intelligence agencies may study social links between people, in search of hidden associates of known suspects, “without regard to the location or nationality of the communicants.”
Javier E

We are witnessing a democratic nightmare - The Washington Post - 0 views

  • the current attacks on the Federal Bureau of Investigation by President Trump and the Republican Party raise the question of whether it’s possible to maintain an effective, and legitimate, intelligence establishment, while the elected leaders who are supposed to control it engage in open-ended, winner-take-all, partisan conflict.
  • Bipartisan consensus has played a crucial but underappreciated role in the history of U.S. intelligence.
  • The United States developed no real national intelligence agency in the 19th century, while European states such as France, Russia and Prussia did. Partly this was due to small-government constitutional norms on this side of the Atlantic; but mistrust between American political factions was another inhibiting factor.
  • Now Trump is consciously attacking the very concept of bipartisan consensus, recasting it not as a manifestation of healthy national unity but as an inherently corrupt bargain that spawns a “deep state.”
  • This consensus almost broke down amid the revelations of major abuses by the FBI and CIA during the 1960s and 1970s. Bipartisan reforms — enhanced congressional oversight, coupled with limited judicial review of spying by the Foreign Intelligence Surveillance Court (FISC) — salvaged it.
  • Only when sectional and partisan battles gave way to new international responsibilities, and (relative) domestic harmony, in the 20th century could Republicans and Democrats define shared national interests and accept the need for permanent secret agencies to protect them.
  • the American national consensus about intelligence, and many other things, was already in deep trouble long before Trump came on the scene. If there were still a robust political center, Trump never would have been elected in the first place.
  • “Those who would counter the illiberalism of Trump with the illiberalism of unfettered bureaucrats would do well to contemplate the precedent their victory would set,” Tufts University constitutional scholar Michael J. Glennon warns in a 2017 Harper’s article.
  • We are witnessing a democratic nightmare: partisan competition over secret and semi-secret intelligence and law-enforcement agencies. And as Glennon notes, it would be unwise to bet against Trump; he has favors to dispense and punishments to dish out.
Javier E

He Could Have Seen What Was Coming: Behind Trump's Failure on the Virus - The New York ... - 0 views

  • “Any way you cut it, this is going to be bad,” a senior medical adviser at the Department of Veterans Affairs, Dr. Carter Mecher, wrote on the night of Jan. 28, in an email to a group of public health experts scattered around the government and universities. “The projected size of the outbreak already seems hard to believe.”
  • A week after the first coronavirus case had been identified in the United States, and six long weeks before President Trump finally took aggressive action to confront the danger the nation was facing — a pandemic that is now forecast to take tens of thousands of American lives — Dr. Mecher was urging the upper ranks of the nation’s public health bureaucracy to wake up and prepare for the possibility of far more drastic action.
  • Throughout January, as Mr. Trump repeatedly played down the seriousness of the virus and focused on other issues, an array of figures inside his government — from top White House advisers to experts deep in the cabinet departments and intelligence agencies — identified the threat, sounded alarms and made clear the need for aggressive action.
  • The president, though, was slow to absorb the scale of the risk and to act accordingly, focusing instead on controlling the message, protecting gains in the economy and batting away warnings from senior officials.
  • Mr. Trump’s response was colored by his suspicion of and disdain for what he viewed as the “Deep State” — the very people in his government whose expertise and long experience might have guided him more quickly toward steps that would slow the virus, and likely save lives.
  • The slow start of that plan, on top of the well-documented failures to develop the nation’s testing capacity, left administration officials with almost no insight into how rapidly the virus was spreading. “We were flying the plane with no instruments,” one official said.
  • But dozens of interviews with current and former officials and a review of emails and other records revealed many previously unreported details and a fuller picture of the roots and extent of his halting response as the deadly virus spread:
  • The National Security Council office responsible for tracking pandemics received intelligence reports in early January predicting the spread of the virus to the United States, and within weeks was raising options like keeping Americans home from work and shutting down cities the size of Chicago. Mr. Trump would avoid such steps until March.
  • Despite Mr. Trump’s denial weeks later, he was told at the time about a Jan. 29 memo produced by his trade adviser, Peter Navarro, laying out in striking detail the potential risks of a coronavirus pandemic: as many as half a million deaths and trillions of dollars in economic losses.
  • The health and human services secretary, Alex M. Azar II, directly warned Mr. Trump of the possibility of a pandemic during a call on Jan. 30, the second warning he delivered to the president about the virus in two weeks. The president, who was on Air Force One while traveling for appearances in the Midwest, responded that Mr. Azar was being alarmist
  • Mr. Azar publicly announced in February that the government was establishing a “surveillance” system
  • the task force had gathered for a tabletop exercise — a real-time version of a full-scale war gaming of a flu pandemic the administration had run the previous year. That earlier exercise, also conducted by Mr. Kadlec and called “Crimson Contagion,” predicted 110 million infections, 7.7 million hospitalizations and 586,000 deaths following a hypothetical outbreak that started in China.
  • By the third week in February, the administration’s top public health experts concluded they should recommend to Mr. Trump a new approach that would include warning the American people of the risks and urging steps like social distancing and staying home from work.
  • But the White House focused instead on messaging and crucial additional weeks went by before their views were reluctantly accepted by the president — time when the virus spread largely unimpeded.
  • When Mr. Trump finally agreed in mid-March to recommend social distancing across the country, effectively bringing much of the economy to a halt, he seemed shellshocked and deflated to some of his closest associates. One described him as “subdued” and “baffled” by how the crisis had played out. An economy that he had wagered his re-election on was suddenly in shambles.
  • He only regained his swagger, the associate said, from conducting his daily White House briefings, at which he often seeks to rewrite the history of the past several months. He declared at one point that he “felt it was a pandemic long before it was called a pandemic,” and insisted at another that he had to be a “cheerleader for the country,” as if that explained why he failed to prepare the public for what was coming.
  • Mr. Trump’s allies and some administration officials say the criticism has been unfair.
  • The Chinese government misled other governments, they say. And they insist that the president was either not getting proper information, or the people around him weren’t conveying the urgency of the threat. In some cases, they argue, the specific officials he was hearing from had been discredited in his eyes, but once the right information got to him through other channels, he made the right calls.
  • “While the media and Democrats refused to seriously acknowledge this virus in January and February, President Trump took bold action to protect Americans and unleash the full power of the federal government to curb the spread of the virus, expand testing capacities and expedite vaccine development even when we had no true idea the level of transmission or asymptomatic spread,” said Judd Deere, a White House spokesman.
  • Decision-making was also complicated by a long-running dispute inside the administration over how to deal with China
  • The Containment Illusion: By the last week of February, it was clear to the administration’s public health team that schools and businesses in hot spots would have to close. But in the turbulence of the Trump White House, it took three more weeks to persuade the president that failure to act quickly to control the spread of the virus would have dire consequences.
  • There were key turning points along the way, opportunities for Mr. Trump to get ahead of the virus rather than just chase it. There were internal debates that presented him with stark choices, and moments when he could have chosen to ask deeper questions and learn more. How he handled them may shape his re-election campaign. They will certainly shape his legacy.
  • Facing the likelihood of a real pandemic, the group needed to decide when to abandon “containment” — the effort to keep the virus outside the U.S. and to isolate anyone who gets infected — and embrace “mitigation” to thwart the spread of the virus inside the country until a vaccine becomes available.
  • Among the questions on the agenda, which was reviewed by The New York Times, was when the department’s secretary, Mr. Azar, should recommend that Mr. Trump take textbook mitigation measures “such as school dismissals and cancellations of mass gatherings,” which had been identified as the next appropriate step in a Bush-era pandemic plan.
  • The group — including Dr. Anthony S. Fauci of the National Institutes of Health; Dr. Robert R. Redfield of the Centers for Disease Control and Prevention, and Mr. Azar, who at that stage was leading the White House Task Force — concluded they would soon need to move toward aggressive social distancing
  • A 20-year-old Chinese woman had infected five relatives with the virus even though she never displayed any symptoms herself. The implication was grave — apparently healthy people could be unknowingly spreading the virus — and supported the need to move quickly to mitigation.
  • The following day, Dr. Kadlec and the others decided to present Mr. Trump with a plan titled “Four Steps to Mitigation,” telling the president that they needed to begin preparing Americans for a step rarely taken in United States history.
  • a presidential blowup and internal turf fights would sidetrack such a move. The focus would shift to messaging and confident predictions of success rather than publicly calling for a shift to mitigation.
  • These final days of February, perhaps more than any other moment during his tenure in the White House, illustrated Mr. Trump’s inability or unwillingness to absorb warnings coming at him.
  • He instead reverted to his traditional political playbook in the midst of a public health calamity, squandering vital time as the coronavirus spread silently across the country.
  • A memo dated Feb. 14, prepared in coordination with the National Security Council and titled “U.S. Government Response to the 2019 Novel Coronavirus,” documented what more drastic measures would look like, including: “significantly limiting public gatherings and cancellation of almost all sporting events, performances, and public and private meetings that cannot be convened by phone. Consider school closures. Widespread ‘stay at home’ directives from public and private organizations with nearly 100% telework for some.”
  • his friend had a blunt message: You need to be ready. The virus, he warned, which originated in the city of Wuhan, was being transmitted by people who were showing no symptoms — an insight that American health officials had not yet accepted.
  • On the 18-hour plane ride home, Mr. Trump fumed as he watched the stock market crash after Dr. Messonnier’s comments. Furious, he called Mr. Azar when he landed at around 6 a.m. on Feb. 26, raging that Dr. Messonnier had scared people unnecessarily.
  • The meeting that evening with Mr. Trump to advocate social distancing was canceled, replaced by a news conference in which the president announced that the White House response would be put under the command of Vice President Mike Pence.
  • The push to convince Mr. Trump of the need for more assertive action stalled. With Mr. Pence and his staff in charge, the focus was clear: no more alarmist messages. Statements and media appearances by health officials like Dr. Fauci and Dr. Redfield would be coordinated through Mr. Pence’s office
  • It would be more than three weeks before Mr. Trump would announce serious social distancing efforts, a lost period during which the spread of the virus accelerated rapidly. Over nearly three weeks from Feb. 26 to March 16, the number of confirmed coronavirus cases in the United States grew from 15 to 4,226
  • The China Factor: The earliest warnings about coronavirus got caught in the crosscurrents of the administration’s internal disputes over China. It was the China hawks who pushed earliest for a travel ban. But their animosity toward China also undercut hopes for a more cooperative approach by the world’s two leading powers to a global crisis.
  • It was early January, and the call with a Hong Kong epidemiologist left Matthew Pottinger rattled.
  • Mr. Trump was walking up the steps of Air Force One to head home from India on Feb. 25 when Dr. Nancy Messonnier, the director of the National Center for Immunization and Respiratory Diseases, publicly issued the blunt warning they had all agreed was necessary.
  • It was one of the earliest warnings to the White House, and it echoed the intelligence reports making their way to the National Security Council
  • some of the more specialized corners of the intelligence world were producing sophisticated and chilling warnings.
  • In a report to the director of national intelligence, the State Department’s epidemiologist wrote in early January that the virus was likely to spread across the globe, and warned that the coronavirus could develop into a pandemic
  • Working independently, a small outpost of the Defense Intelligence Agency, the National Center for Medical Intelligence, came to the same conclusion.
  • By mid-January there was growing evidence of the virus spreading outside China. Mr. Pottinger began convening daily meetings about the coronavirus
  • The early alarms sounded by Mr. Pottinger and other China hawks were freighted with ideology — including a push to publicly blame China that critics in the administration say was a distraction
  • And they ran into opposition from Mr. Trump’s economic advisers, who worried a tough approach toward China could scuttle a trade deal that was a pillar of Mr. Trump’s re-election campaign.
  • Mr. Pottinger continued to believe the coronavirus problem was far worse than the Chinese were acknowledging. Inside the West Wing, the director of the Domestic Policy Council, Joe Grogan, also tried to sound alarms that the threat from China was growing.
  • The Consequences of Chaos: The chaotic culture of the Trump White House contributed to the crisis. A lack of planning and a failure to execute, combined with the president’s focus on the news cycle and his preference for following his gut rather than the data, cost time, and perhaps lives.
  • the hawks kept pushing in February to take a critical stance toward China amid the growing crisis. Mr. Pottinger and others — including aides to Secretary of State Mike Pompeo — pressed for government statements to use the term “Wuhan Virus.” Mr. Pompeo tried to hammer the anti-China message at every turn, eventually even urging leaders of the Group of 7 industrialized countries to use “Wuhan virus” in a joint statement.
  • Others, including aides to Mr. Pence, resisted taking a hard public line, believing that angering Beijing might lead the Chinese government to withhold medical supplies, pharmaceuticals and any scientific research that might ultimately lead to a vaccine.
  • Mr. Trump took a conciliatory approach through the middle of March, praising the job Mr. Xi was doing.
  • That changed abruptly, when aides informed Mr. Trump that a Chinese Foreign Ministry spokesman had publicly spun a new conspiracy about the origins of Covid-19: that it was brought to China by U.S. Army personnel who visited the country last October.
  • On March 16, he wrote on Twitter that “the United States will be powerfully supporting those industries, like Airlines and others, that are particularly affected by the Chinese Virus.”
  • Mr. Trump’s decision to escalate the war of words undercut any remaining possibility of broad cooperation between the governments to address a global threat
  • Mr. Pottinger, backed by Mr. O’Brien, became one of the driving forces of a campaign in the final weeks of January to convince Mr. Trump to impose limits on travel from China
  • he circulated a memo on Jan. 29 urging Mr. Trump to impose the travel limits, arguing that failing to confront the outbreak aggressively could be catastrophic, leading to hundreds of thousands of deaths and trillions of dollars in economic losses.
  • The uninvited message could not have conflicted more with the president’s approach at the time of playing down the severity of the threat. And when aides raised it with Mr. Trump, he responded that he was unhappy that Mr. Navarro had put his warning in writing.
  • From the time the virus was first identified as a concern, the administration’s response was plagued by the rivalries and factionalism that routinely swirl around Mr. Trump and, along with the president’s impulsiveness, undercut decision making and policy development.
  • Even after Mr. Azar first briefed him about the potential seriousness of the virus during a phone call on Jan. 18 while the president was at his Mar-a-Lago resort in Florida, Mr. Trump projected confidence that it would be a passing problem.
  • “We have it totally under control,” he told an interviewer a few days later while attending the World Economic Forum in Switzerland. “It’s going to be just fine.”
  • The efforts to sort out policy behind closed doors were contentious and sometimes only loosely organized.
  • That was the case when the National Security Council convened a meeting on short notice on the afternoon of Jan. 27. The Situation Room was standing room only, packed with top White House advisers, low-level staffers, Mr. Trump’s social media guru, and several cabinet secretaries. There was no checklist about the preparations for a possible pandemic,
  • Instead, after a 20-minute description by Mr. Azar of his department’s capabilities, the meeting was jolted when Stephen E. Biegun, the newly installed deputy secretary of state, announced plans to issue a “level four” travel warning, strongly discouraging Americans from traveling to China. The room erupted into bickering.
  • A few days later, on the evening of Jan. 30, Mick Mulvaney, the acting White House chief of staff at the time, and Mr. Azar called Air Force One as the president was making the final decision to go ahead with the restrictions on China travel. Mr. Azar was blunt, warning that the virus could develop into a pandemic and arguing that China should be criticized for failing to be transparent.
  • Stop panicking, Mr. Trump told him. That sentiment was present throughout February, as the president’s top aides reached for a consistent message but took few concrete steps to prepare for the possibility of a major public health crisis.
  • As February gave way to March, the president continued to be surrounded by divided factions even as it became clearer that avoiding more aggressive steps was not tenable.
  • the virus was already multiplying across the country — and hospitals were at risk of buckling under the looming wave of severely ill people, lacking masks and other protective equipment, ventilators and sufficient intensive care beds. The question loomed over the president and his aides after weeks of stalling and inaction: What were they going to do?
  • Even then, and even by Trump White House standards, the debate over whether to shut down much of the country to slow the spread was especially fierce.
  • In a tense Oval Office meeting, when Mr. Mnuchin again stressed that the economy would be ravaged, Mr. O’Brien, the national security adviser, who had been worried about the virus for weeks, sounded exasperated as he told Mr. Mnuchin that the economy would be destroyed regardless if officials did nothing.
  • in the end, aides said, it was Dr. Deborah L. Birx, the veteran AIDS researcher who had joined the task force, who helped to persuade Mr. Trump. Soft-spoken and fond of the kind of charts and graphs Mr. Trump prefers, Dr. Birx did not have the rough edges that could irritate the president. He often told people he thought she was elegant.
  • During the last week in March, Kellyanne Conway, a senior White House adviser involved in task force meetings, gave voice to concerns other aides had. She warned Mr. Trump that his wished-for date of Easter to reopen the country likely couldn’t be accomplished. Among other things, she told him, he would end up being blamed by critics for every subsequent death caused by the virus.
izzerios

How US leaks upset two allies in one week - CNNPolitics.com - 1 views

shared by izzerios on 25 May 17
  • With multiple high-profile intelligence leaks in recent weeks, the US has now managed to upset two of its closest allies by allowing the disclosure of sensitive information
  • Trump was reported to have revealed highly sensitive, likely Israeli-shared intelligence to Russian officials in the Oval Office, the United Kingdom is voicing its frustration over leaked information coming from US sources.
  • President reportedly sharing sensitive information with a foreign power in one instance and US law enforcement sources providing information to the media in the other
  • UK Home Secretary Amber Rudd slammed US leaks on the investigation into the attack at an Ariana Grande concert in Manchester, England, as "irritating" on Wednesday after a string of details emerged from US law enforcement sources before they were released by British police or officials
  • The White House did not immediately respond to CNN's request for comment on Rudd's remarks
  • was greeted warmly by Prime Minister Benjamin Netanyahu, who showed no indication that Trump's interaction with the Russians posed a problem between the two nations.
  • “the leaking of the suspect’s name was more disruptive because it might have tipped off other suspects”
  • "I will make clear to President Trump that intelligence that is shared between our law enforcement agencies must remain secure," she said following a cabinet-level security meeting.
  • Aaron David Miller, a former adviser to Democratic and Republican secretaries of state, who added that the leaks may reflect a lack of structure within the Trump administration itself.
  • "We've got a very close intelligence and defense partnership with the UK, and that news ... suggests that we have even more close allies who are questioning whether we can be trusted with vital intelligence," Sen. Chris Coons, D-Delaware
  • Israeli Defense Minister Avigdor Lieberman insisted there would be no effect on the close relations between the United States and Israel due to the apparent leak
  • "The intel community is probably beside themselves and worried about what they can confide now, if the President is going to be as careless as he was," Miller said
  • "If we will assess that our sources of intelligence are in danger due to the way it will be handled by the United States, then we will have to keep the very sensitive information close to our chests," Yatom
  • "You are not going to have the best capabilities to defend the nation if other countries aren't going to share as much with you."
  • Stern words will likely be directed to the US side, he said, but "on balance, it's probably not going to change intelligence-sharing arrangements all that much."
katherineharron

Feds on high alert Thursday after warnings about potential threats to US Capitol - CNNP... - 0 views

  • Federal law enforcement is on high alert Thursday in the wake of an intelligence bulletin issued earlier this week about a group of violent militia extremists having discussed plans to take control of the US Capitol and remove Democratic lawmakers on or around March 4 -- a date when some conspiracy theorists believe former President Donald Trump will be returning to the presidency.
  • The House changed its schedule in light of warnings from US Capitol Police, moving a vote planned for Thursday to Wednesday night to avoid being in session on March 4. The Senate is still expected to be in session debating the Covid-19 relief bill.
  • Those intelligence sharing and planning failures have been laid bare over the last two months in several hearings and have been a focal point of criticism from lawmakers investigating the violent attack that left several people dead.
  • The violent extremists also discussed plans to persuade thousands to travel to Washington, DC, to participate in the March 4 plot, according to the joint intelligence bulletin.
  • it is mostly online talk and not necessarily an indication anyone is coming to Washington to act on it.
  • Some of the conspiracy theorists believe that the former President will be inaugurated on March 4, according to the joint bulletin. Between 1793 and 1933, inauguration often fell on March 4 or a surrounding date.
  • Pittman assured lawmakers, though, that her department is in an "enhanced" security posture and that the National Guard and Capitol Police have been briefed on what to expect in the coming days.
  • The effort to improve preparation extends to communicating with state and local officials. DHS held a call Wednesday with state and local law enforcement officials from around the country to discuss current threats posed by domestic extremists, including concerns about potential violence surrounding March 4 and beyond, according to two sources familiar with the matter. While specific details from the call remain unclear, both sources said the overarching message from DHS officials is that addressing threats posed by domestic extremists requires increased communication and intelligence sharing across federal and state and local entities, as well as a shift in how law enforcement officials interpret the information they receive.
  • Federal officials are emphasizing the point that gaps in intelligence sharing left law enforcement unprepared for the chaos that unfolded on January 6, even though they were notified of potential violence days before the attack, and that going forward, bulletins issued by DHS and FBI indicate a threat is serious enough to be communicated to relevant entities, even if the intelligence is based primarily on online chatter or other less definitive indicators, the sources said.
  • Perceived election fraud and other conspiracy theories associated with the presidential transition may contribute to violence with little or no warning, according to the bulletin, which is part of a series of intelligence products to highlight potential domestic violent extremist threats to the Washington, DC, region. "Given that the Capitol complex is currently fortified like a military installation, I don't anticipate any successful attacks against the property," said Brian Harrell, the former assistant secretary for infrastructure protection at DHS. "However, all threats should be taken seriously and investigations launched against those who would call for violence. We continue to see far-right extremist groups that are fueled by misinformation and conspiracy theories quickly become the most dangerous threat to society."
  • "You really cannot underestimate the potential that an individual or a small group of individuals will engage in violence because they believe a false narrative that they're seeing online,"
  • Although March 4 is a concern to law enforcement, it’s not a “standalone event,” the official said; rather, it’s part of a “continuum of violence” based on domestic extremist conspiracy theories. “It’s a threat that continues to be of concern to law enforcement. And I suspect that we are going to have to be focused on it for months to come,” the official said.
  • Pittman warned last month that militia groups involved in the January 6 insurrection want to "blow up the Capitol" and "kill as many members as possible" when President Joe Biden addresses a joint session of Congress.
aidenborst

Russia Continues Interfering in Election to Try to Help Trump, U.S. Intelligence Says -... - 1 views

  • Russia is using a range of techniques to denigrate Joseph R. Biden Jr., American intelligence officials said Friday in their first public assessment that Moscow continues to try to interfere in the 2020 campaign to help President Trump.
  • China preferred that Mr. Trump be defeated in November and was weighing whether to take more aggressive action in the election.
  • officials briefed on the intelligence said that Russia was the far graver, and more immediate, threat. While China seeks to gain influence in American politics, its leaders have not yet decided to wade directly into the presidential contest, however much they may dislike Mr. Trump, the officials said.
  • An American official briefed on the intelligence said it was wrong to equate the two countries. Russia, the official said, is a tornado, capable of inflicting damage on American democracy now. China is more like climate change, the official said: The threat is real and grave, but more long term.
  • Iran was seeking “to undermine U.S. democratic institutions, President Trump, and to divide the country”
  • Mr. Trump said, “The last person Russia wants to see in office is Donald Trump because nobody’s been tougher on Russia than I have.” He said that if Mr. Biden won the presidency, “China would own our country.”
  • “Donald Trump has publicly and repeatedly invited, emboldened and even tried to coerce foreign interference in American elections,” said Tony Blinken, a senior adviser to the former vice president.
  • “The director has basically put the American people on notice that Russia in particular, also China and Iran, are going to be trying to meddle in this election and undermine our democratic system,” said Mr. King, a member of the Senate Intelligence Committee.
  • Russia, but not China, is trying to “actively influence” the outcome of the 2020 election, said the American official briefed on the underlying intelligence.
  • Intelligence and other officials in recent days have been stepping up their releases of information about foreign interference efforts, and the State Department has sent texts to cellphones around the world advertising a $10 million reward for information on would-be election hackers.
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Javier E

Regular Old Intelligence is Sufficient--Even Lovely - 0 views

  • Ezra Klein has done some of the most dedicated reporting on the topic since he moved to the Bay Area a few years ago, talking with many of the people creating this new technology.
  • one is that the people building these systems have only a limited sense of what’s actually happening inside the black box—the bot is doing endless calculations instantaneously, but not in a way even their inventors can actually follow
  • an obvious question, one Klein has asked: “’If you think calamity so possible, why do this at all?
  • ...18 more annotations...
  • second, the people inventing them think they are potentially incredibly dangerous: ten percent of them, in fact, think they might extinguish the human species. They don’t know exactly how, but think Sorcerer’s Apprentice (or google ‘paper clip maximizer.’)
  • But why? The sun won’t blow up for a few billion years, meaning that if we don’t manage to drive ourselves to extinction, we’ve got all the time in the world. If it takes a generation or two for normal intelligence to come up with the structure of all the proteins, some people may die because a drug isn’t developed in time for their particular disease, but erring on the side of avoiding extinction seems mathematically sound
  • That is, it seems to me, a dumb answer from smart people—the answer not of people who have thought hard about ethics or even outcomes, but the answer that would be supplied by a kind of cultist.
  • (Probably the kind with stock options).
  • it does go, fairly neatly, with the default modern assumption that if we can do something we should do it, which is what I want to talk about. The question that I think very few have bothered to answer is, why?
  • One pundit after another explains that an AI program called Deep Mind worked far faster than scientists doing experiments to uncover the basic structure of all the different proteins, which will allow quicker drug development. It’s regarded as ipso facto better because it’s faster, and hence—implicitly—worth taking the risks that come with AI.
  • Allowing that we’re already good enough—indeed that our limitations are intrinsic to us, define us, and make us human—should guide us towards trying to shut down this technology before it does deep damage.
  • I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”
  • As it happens, regular old intelligence has already given us most of what we need: engineers have cut the cost of solar power and wind power and the batteries to store the energy they produce so dramatically that they’re now the cheapest power on earth
  • We don’t actually need artificial intelligence in this case; we need natural compassion, so that we work with the necessary speed to deploy these technologies.
  • Beyond those, the cases become trivial, or worse
  • All of this is a way of saying something we don’t say as often as we should: humans are good enough. We don’t require improvement. We can solve the challenges we face, as humans.
  • It may take us longer than if we can employ some “new form of intelligence,” but slow and steady is the whole point of the race.
  • Unless, of course, you’re trying to make money, in which case “first-mover advantage” is the point
  • The other challenge that people cite, over and over again, to justify running the risks of AI is to “combat climate change,
  • here’s the thing: pausing, slowing down, stopping calls on the one human gift shared by no other creature, and perhaps by no machine. We are the animal that can, if we want to, decide not to do something we’re capable of doing.
  • In individual terms, that ability forms the core of our ethical and religious systems; in societal terms it’s been crucial as technology has developed over the last century. We’ve, so far, reined in nuclear and biological weapons, designer babies, and a few other maximally dangerous new inventions
  • It’s time to say do it again, and fast—faster than the next iteration of this tech.
Javier E

Sam Altman, the ChatGPT King, Is Pretty Sure It's All Going to Be OK - The New York Times - 0 views

  • He believed A.G.I. would bring the world prosperity and wealth like no one had ever seen. He also worried that the technologies his company was building could cause serious harm — spreading disinformation, undercutting the job market. Or even destroying the world as we know it.
  • “I try to be upfront,” he said. “Am I doing something good? Or really bad?”
  • In 2023, people are beginning to wonder if Sam Altman was more prescient than they realized.
  • ...44 more annotations...
  • And yet, when people act as if Mr. Altman has nearly realized his long-held vision, he pushes back.
  • This past week, more than a thousand A.I. experts and tech leaders called on OpenAI and other companies to pause their work on systems like ChatGPT, saying they present “profound risks to society and humanity.”
  • As people realize that this technology is also a way of spreading falsehoods or even persuading people to do things they should not do, some critics are accusing Mr. Altman of reckless behavior.
  • “The hype over these systems — even if everything we hope for is right long term — is totally out of control for the short term,” he told me on a recent afternoon. There is time, he said, to better understand how these systems will ultimately change the world.
  • Many industry leaders, A.I. researchers and pundits see ChatGPT as a fundamental technological shift, as significant as the creation of the web browser or the iPhone. But few can agree on the future of this technology.
  • Some believe it will deliver a utopia where everyone has all the time and money ever needed. Others believe it could destroy humanity. Still others spend much of their time arguing that the technology is never as powerful as everyone says it is, insisting that neither nirvana nor doomsday is as close as it might seem.
  • he is often criticized from all directions. But those closest to him believe this is as it should be. “If you’re equally upsetting both extreme sides, then you’re doing something right,” said OpenAI’s president, Greg Brockman.
  • To spend time with Mr. Altman is to understand that Silicon Valley will push this technology forward even though it is not quite sure what the implications will be
  • in 2019, he paraphrased Robert Oppenheimer, the leader of the Manhattan Project, who believed the atomic bomb was an inevitability of scientific progress. “Technology happens because it is possible,” he said
  • His life has been a fairly steady climb toward greater prosperity and wealth, driven by an effective set of personal skills — not to mention some luck. It makes sense that he believes that the good thing will happen rather than the bad.
  • He said his company was building technology that would “solve some of our most pressing problems, really increase the standard of life and also figure out much better uses for human will and creativity.”
  • He was not exactly sure what problems it will solve, but he argued that ChatGPT showed the first signs of what is possible. Then, with his next breath, he worried that the same technology could cause serious harm if it wound up in the hands of some authoritarian government.
  • Kelly Sims, a partner with the venture capital firm Thrive Capital who worked with Mr. Altman as a board adviser to OpenAI, said it was like he was constantly arguing with himself.
  • “In a single conversation,” she said, “he is both sides of the debate club.”
  • He takes pride in recognizing when a technology is about to reach exponential growth — and then riding that curve into the future.
  • he is also the product of a strange, sprawling online community that began to worry, around the same time Mr. Altman came to the Valley, that artificial intelligence would one day destroy the world. Called rationalists or effective altruists, members of this movement were instrumental in the creation of OpenAI.
  • Does it make sense to ride that curve if it could end in disaster? Mr. Altman is certainly determined to see how it all plays out.
  • “Why is he working on something that won’t make him richer? One answer is that lots of people do that once they have enough money, which Sam probably does. The other is that he likes power.”
  • “He has a natural ability to talk people into things,” Mr. Graham said. “If it isn’t inborn, it was at least fully developed before he was 20. I first met Sam when he was 19, and I remember thinking at the time: ‘So this is what Bill Gates must have been like.’”
  • poker taught Mr. Altman how to read people and evaluate risk.
  • It showed him “how to notice patterns in people over time, how to make decisions with very imperfect information, how to decide when it was worth pain, in a sense, to get more information,” he told me while strolling across his ranch in Napa. “It’s a great game.”
  • He believed, according to his younger brother Max, that he was one of the few people who could meaningfully change the world through A.I. research, as opposed to the many people who could do so through politics.
  • In 2019, just as OpenAI’s research was taking off, Mr. Altman grabbed the reins, stepping down as president of Y Combinator to concentrate on a company with fewer than 100 employees that was unsure how it would pay its bills.
  • Within a year, he had transformed OpenAI into a nonprofit with a for-profit arm. That way he could pursue the money it would need to build a machine that could do anything the human brain could do.
  • Mr. Brockman, OpenAI’s president, said Mr. Altman’s talent lies in understanding what people want. “He really tries to find the thing that matters most to a person — and then figure out how to give it to them,” Mr. Brockman told me. “That is the algorithm he uses over and over.”
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, another lab intent on building artificial general intelligence.
  • “These are people who have left an indelible mark on the fabric of the tech industry and maybe the fabric of the world,” he said. “I think Sam is going to be one of those people.”
  • The trouble is, unlike the days when Apple, Microsoft and Meta were getting started, people are well aware of how technology can transform the world — and how dangerous it can be.
  • Mr. Scott of Microsoft believes that Mr. Altman will ultimately be discussed in the same breath as Steve Jobs, Bill Gates and Mark Zuckerberg.
  • The woman was the Canadian singer Grimes, Mr. Musk’s former partner, and the hat guy was Eliezer Yudkowsky, a self-described A.I. researcher who believes, perhaps more than anyone, that artificial intelligence could one day destroy humanity.
  • The selfie — snapped by Mr. Altman at a party his company was hosting — shows how close he is to this way of thinking. But he has his own views on the dangers of artificial intelligence.
  • In March, Mr. Altman tweeted out a selfie, bathed by a pale orange flash, that showed him smiling between a blond woman giving a peace sign and a bearded guy wearing a fedora.
  • He also helped spawn the vast online community of rationalists and effective altruists who are convinced that A.I. is an existential risk. This surprisingly influential group is represented by researchers inside many of the top A.I. labs, including OpenAI.
  • They don’t see this as hypocrisy: Many of them believe that because they understand the dangers clearer than anyone else, they are in the best position to build this technology.
  • Mr. Altman believes that effective altruists have played an important role in the rise of artificial intelligence, alerting the industry to the dangers. He also believes they exaggerate these dangers.
  • As OpenAI developed ChatGPT, many others, including Google and Meta, were building similar technology. But it was Mr. Altman and OpenAI that chose to share the technology with the world.
  • Many in the field have criticized the decision, arguing that this set off a race to release technology that gets things wrong, makes things up and could soon be used to rapidly spread disinformation.
  • Mr. Altman argues that rather than developing and testing the technology entirely behind closed doors before releasing it in full, it is safer to gradually share it so everyone can better understand risks and how to handle them.
  • He told me that it would be a “very slow takeoff.”
  • When I asked Mr. Altman if a machine that could do anything the human brain could do would eventually drive the price of human labor to zero, he demurred. He said he could not imagine a world where human intelligence was useless.
  • If he’s wrong, he thinks he can make it up to humanity.
  • His grand idea is that OpenAI will capture much of the world’s wealth through the creation of A.G.I. and then redistribute this wealth to the people. In Napa, as we sat chatting beside the lake at the heart of his ranch, he tossed out several figures — $100 billion, $1 trillion, $100 trillion.
  • If A.G.I. does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
  • But as he once told me: “I feel like the A.G.I. can help with that.”
Javier E

'The machine did it coldly': Israel used AI to identify 37,000 Hamas targets | Israel-G... - 0 views

  • All six said that Lavender had played a central role in the war, processing masses of data to rapidly identify potential “junior” operatives to target. Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 Palestinian men who had been linked by the AI system to Hamas or PIJ.
  • The health ministry in the Hamas-run territory says 32,000 Palestinians have been killed in the conflict in the past six months. UN data shows that in the first month of the war alone, 1,340 families suffered multiple losses, with 312 families losing more than 10 members.
  • Several of the sources described how, for certain categories of targets, the IDF applied pre-authorised allowances for the estimated number of civilians who could be killed before a strike was authorised.
  • ...32 more annotations...
  • Two sources said that during the early weeks of the war they were permitted to kill 15 or 20 civilians during airstrikes on low-ranking militants. Attacks on such targets were typically carried out using unguided munitions known as “dumb bombs”, the sources said, destroying entire homes and killing all their occupants.
  • “You don’t want to waste expensive bombs on unimportant people – it’s very expensive for the country and there’s a shortage [of those bombs],” one intelligence officer said. Another said the principal question they were faced with was whether the “collateral damage” to civilians allowed for an attack.
  • “Because we usually carried out the attacks with dumb bombs, and that meant literally dropping the whole house on its occupants. But even if an attack is averted, you don’t care – you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.”
  • According to conflict experts, if Israel has been using dumb bombs to flatten the homes of thousands of Palestinians who were linked, with the assistance of AI, to militant groups in Gaza, that could help explain the shockingly high death toll in the war.
  • Details about the specific kinds of data used to train Lavender’s algorithm, or how the programme reached its conclusions, are not included in the accounts published by +972 or Local Call. However, the sources said that during the first few weeks of the war, Unit 8200 refined Lavender’s algorithm and tweaked its search parameters.
  • Responding to the publication of the testimonies in +972 and Local Call, the IDF said in a statement that its operations were carried out in accordance with the rules of proportionality under international law. It said dumb bombs are “standard weaponry” that are used by IDF pilots in a manner that ensures “a high level of precision”.
  • “The IDF does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist,” it added. “Information systems are merely tools for analysts in the target identification process.”
  • In earlier military operations conducted by the IDF, producing human targets was often a more labour-intensive process. Multiple sources who described target development in previous wars to the Guardian, said the decision to “incriminate” an individual, or identify them as a legitimate target, would be discussed and then signed off by a legal adviser.
  • In the weeks and months after 7 October, this model for approving strikes on human targets was dramatically accelerated, according to the sources. As the IDF’s bombardment of Gaza intensified, they said, commanders demanded a continuous pipeline of targets.
  • “We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us,” said one intelligence officer. “We were told: now we have to fuck up Hamas, no matter what the cost. Whatever you can, you bomb.”
  • Lavender was developed by the Israel Defense Forces’ elite intelligence division, Unit 8200, which is comparable to the US’s National Security Agency or GCHQ in the UK.
  • After randomly sampling and cross-checking its predictions, the unit concluded Lavender had achieved a 90% accuracy rate, the sources said, leading the IDF to approve its sweeping use as a target recommendation tool.
  • Lavender created a database of tens of thousands of individuals who were marked as predominantly low-ranking members of Hamas’s military wing, they added. This was used alongside another AI-based decision support system, called the Gospel, which recommended buildings and structures as targets rather than individuals.
  • The accounts include first-hand testimony of how intelligence officers worked with Lavender and how the reach of its dragnet could be adjusted. “At its peak, the system managed to generate 37,000 people as potential human targets,” one of the sources said. “But the numbers changed all the time, because it depends on where you set the bar of what a Hamas operative is.”
  • broadly, and then the machine started bringing us all kinds of civil defence personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t really endanger soldiers.”
  • Before the war, the US and Israel estimated membership of Hamas’s military wing at approximately 25,000-30,000 people.
  • there was a decision to treat Palestinian men linked to Hamas’s military wing as potential targets, regardless of their rank or importance.
  • According to +972 and Local Call, the IDF judged it permissible to kill more than 100 civilians in attacks on top-ranking Hamas officials. “We had a calculation for how many [civilians could be killed] for the brigade commander, how many [civilians] for a battalion commander, and so on,” one source said.
  • Another source, who justified the use of Lavender to help identify low-ranking targets, said that “when it comes to a junior militant, you don’t want to invest manpower and time in it”. They said that in wartime there was insufficient time to carefully “incriminate every target”
  • “So you’re willing to take the margin of error of using artificial intelligence, risking collateral damage and civilians dying, and risking attacking by mistake, and to live with it,” they added.
  • When it came to targeting low-ranking Hamas and PIJ suspects, they said, the preference was to attack when they were believed to be at home. “We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity,” one said. “It’s much easier to bomb a family’s home. The system is built to look for them in these situations.”
  • Such a strategy risked higher numbers of civilian casualties, and the sources said the IDF imposed pre-authorised limits on the number of civilians it deemed acceptable to kill in a strike aimed at a single Hamas militant. The ratio was said to have changed over time, and varied according to the seniority of the target.
  • The IDF’s targeting processes in the most intensive phase of the bombardment were also relaxed, they said. “There was a completely permissive policy regarding the casualties of [bombing] operations,” one source said. “A policy so permissive that in my opinion it had an element of revenge.”
  • “There were regulations, but they were just very lenient,” another added. “We’ve killed people with collateral damage in the high double digits, if not low triple digits. These are things that haven’t happened before.” There appears to have been significant fluctuations in the figure that military commanders would tolerate at different stages of the war
  • One source said that the limit on permitted civilian casualties “went up and down” over time, and at one point was as low as five. During the first week of the conflict, the source said, permission was given to kill 15 non-combatants to take out junior militants in Gaza
  • at one stage earlier in the war they were authorised to kill up to “20 uninvolved civilians” for a single operative, regardless of their rank, military importance, or age.
  • “It’s not just that you can kill any person who is a Hamas soldier, which is clearly permitted and legitimate in terms of international law,” they said. “But they directly tell you: ‘You are allowed to kill them along with many civilians.’ … In practice, the proportionality criterion did not exist.”
  • Experts in international humanitarian law who spoke to the Guardian expressed alarm at accounts of the IDF accepting and pre-authorising collateral damage ratios as high as 20 civilians, particularly for lower-ranking militants. They said militaries must assess proportionality for each individual strike.
  • An international law expert at the US state department said they had “never remotely heard of a one to 15 ratio being deemed acceptable, especially for lower-level combatants. There’s a lot of leeway, but that strikes me as extreme”.
  • Sarah Harrison, a former lawyer at the US Department of Defense, now an analyst at Crisis Group, said: “While there may be certain occasions where 15 collateral civilian deaths could be proportionate, there are other times where it definitely wouldn’t be. You can’t just set a tolerable number for a category of targets and say that it’ll be lawfully proportionate in each case.”
  • Whatever the legal or moral justification for Israel’s bombing strategy, some of its intelligence officers appear now to be questioning the approach set by their commanders. “No one thought about what to do afterward, when the war is over, or how it will be possible to live in Gaza,” one said.
  • Another said that after the 7 October attacks by Hamas, the atmosphere in the IDF was “painful and vindictive”. “There was a dissonance: on the one hand, people here were frustrated that we were not attacking enough. On the other hand, you see at the end of the day that another thousand Gazans have died, most of them civilians.”
Javier E

Donald Trump's alleged ties with Russia overshadow confirmation hearings | US news | Th... - 0 views

  • Representative Eric Swalwell, the ranking member of the CIA Subcommittee of the House permanent select committee on intelligence, called for an independent bipartisan commission to investigate Russian attempts to disrupt the US election.
  • “The president is responsible for vital decisions about national security, including decisions about whether to go to war, which depend on the broad collection activities and reasoned analysis of the intelligence community. A scenario in which the president dismisses the intelligence community, or worse, accuses it of treachery, is profoundly dangerous,” Wyden said.
  • Vicki Divoll, a former attorney for the CIA and the Senate intelligence panel, saw little chance for a rapprochement between the intelligence agencies and Trump.
  • ...1 more annotation...
  • “After disparaging and demeaning the hardworking officers of the intelligence community, then grudgingly accepting their conclusions about Russian election hacking, Mr Trump is now hurling the worst epithet out there – comparisons to Nazi Germany – against them, without basis and on the eve of taking office,” Divoll said. “We are at our peril to be entering an era in which there is such open, irrational and hysterical hostility by a president against a community of 17 agencies whose mandate is to keep us safe.”