
Home/ History Readings/ Group items tagged lab


Javier E

Opinion | The Pandemic Probably Started in a Lab. These 5 Key Points Explain Why. - The...

  • a growing volume of evidence — gleaned from public records released under the Freedom of Information Act, digital sleuthing through online databases, scientific papers analyzing the virus and its spread, and leaks from within the U.S. government — suggests that the pandemic most likely occurred because a virus escaped from a research lab in Wuhan, China.
  • If so, it would be the most costly accident in the history of science.
  • The SARS-like virus that caused the pandemic emerged in Wuhan, the city where the world’s foremost research lab for SARS-like viruses is located.
  • Dr. Shi’s group was fascinated by how coronaviruses jump from species to species. To find viruses, they took samples from bats and other animals, as well as from sick people living near animals carrying these viruses or associated with the wildlife trade. Much of this work was conducted in partnership with the EcoHealth Alliance, a U.S.-based scientific organization that, since 2002, has been awarded over $80 million in federal funding to research the risks of emerging infectious diseases.
  • Their research showed that the viruses most similar to SARS‑CoV‑2, the virus that caused the pandemic, circulate in bats that live roughly 1,000 miles away from Wuhan. Scientists from Dr. Shi’s team traveled repeatedly to Yunnan province to collect these viruses and had expanded their search to Southeast Asia. Bats in other parts of China have not been found to carry viruses that are as closely related to SARS-CoV-2.
  • When the Covid-19 outbreak was detected, Dr. Shi initially wondered if the novel coronavirus had come from her laboratory, saying she had never expected such an outbreak to occur in Wuhan.
  • The SARS‑CoV‑2 virus is exceptionally contagious and can jump from species to species like wildfire. Yet it left no known trace of infection at its source or anywhere along what would have been a thousand-mile journey before emerging in Wuhan.
  • The year before the outbreak, the Wuhan institute, working with U.S. partners, had proposed creating viruses with SARS‑CoV‑2’s defining feature
  • The laboratory pursued risky research that resulted in viruses becoming more infectious: Coronaviruses were grown from samples from infected animals and genetically reconstructed and recombined to create new viruses unknown in nature. These new viruses were passed through cells from bats, pigs, primates and humans and were used to infect civets and humanized mice (mice modified with human genes). In essence, this process forced these viruses to adapt to new host species, and the viruses with mutations that allowed them to thrive emerged as victors.
  • Worse still, as the pandemic raged, their American collaborators failed to publicly reveal the existence of the Defuse proposal. The president of EcoHealth, Peter Daszak, recently admitted to Congress that he doesn’t know about virus samples collected by the Wuhan institute after 2015 and never asked the lab’s scientists if they had started the work described in Defuse.
  • By 2019, Dr. Shi’s group had published a database describing more than 22,000 collected wildlife samples. But external access was shut off in the fall of 2019, and the database was not shared with American collaborators even after the pandemic started, when such a rich virus collection would have been most useful in tracking the origin of SARS‑CoV‑2. It remains unclear whether the Wuhan institute possessed a precursor of the pandemic virus.
  • In 2021, The Intercept published a leaked 2018 grant proposal for a research project named Defuse, which had been written as a collaboration between EcoHealth, the Wuhan institute and Ralph Baric at the University of North Carolina, who had been on the cutting edge of coronavirus research for years. The proposal described plans to create viruses strikingly similar to SARS‑CoV‑2.
  • Coronaviruses bear their name because their surface is studded with protein spikes, like a spiky crown, which they use to enter animal cells. The Defuse project proposed to search for and create SARS-like viruses carrying spikes with a unique feature: a furin cleavage site — the same feature that enhances SARS‑CoV‑2’s infectiousness in humans, making it capable of causing a pandemic. Defuse was never funded by the United States.
  • However, in his testimony on Monday, Dr. Fauci explained that the Wuhan institute would not need to rely on U.S. funding to pursue research independently.
  • While it’s possible that the furin cleavage site could have evolved naturally (as seen in some distantly related coronaviruses), out of the hundreds of SARS-like viruses cataloged by scientists, SARS‑CoV‑2 is the only one known to possess a furin cleavage site in its spike. And the genetic data suggest that the virus had only recently gained the furin cleavage site before it started the pandemic.
  • Ultimately, a never-before-seen SARS-like virus with a newly introduced furin cleavage site, matching the description in the Wuhan institute’s Defuse proposal, caused an outbreak in Wuhan less than two years after the proposal was drafted.
  • When the Wuhan scientists published their seminal paper about Covid-19 as the pandemic roared to life in 2020, they did not mention the virus’s furin cleavage site — a feature they should have been on the lookout for, according to their own grant proposal, and a feature quickly recognized by other scientists.
  • At the Wuhan Institute of Virology, a team of scientists had been hunting for SARS-like viruses for over a decade, led by Shi Zhengli.
  • In May, citing failures in EcoHealth’s monitoring of risky experiments conducted at the Wuhan lab, the Biden administration suspended all federal funding for the organization and Dr. Daszak, and initiated proceedings to bar them from receiving future grants. In his testimony on Monday, Dr. Fauci said that he supported the decision to suspend and bar EcoHealth.
  • Separately, Dr. Baric described the competitive dynamic between his research group and the institute when he told Congress that the Wuhan scientists would probably not have shared their most interesting newly discovered viruses with him. Documents and email correspondence between the institute and Dr. Baric are still being withheld from the public while their release is fiercely contested in litigation.
  • In the end, American partners very likely knew of only a fraction of the research done in Wuhan. According to U.S. intelligence sources, some of the institute’s virus research was classified or conducted with or on behalf of the Chinese military.
  • In the congressional hearing on Monday, Dr. Fauci repeatedly acknowledged the lack of visibility into experiments conducted at the Wuhan institute, saying, “None of us can know everything that’s going on in China, or in Wuhan, or what have you. And that’s the reason why — I say today, and I’ve said at the T.I.,” referring to his transcribed interview with the subcommittee, “I keep an open mind as to what the origin is.”
  • The Wuhan lab pursued this type of work under low biosafety conditions that could not have contained an airborne virus as infectious as SARS‑CoV‑2.
  • Labs working with live viruses generally operate at one of four biosafety levels (known in ascending order of stringency as BSL-1, 2, 3 and 4) that describe the work practices that are considered sufficiently safe depending on the characteristics of each pathogen. The Wuhan institute’s scientists worked with SARS-like viruses under inappropriately low biosafety conditions.
  • Biosafety levels are not internationally standardized, and some countries use more permissive protocols than others.
  • In one experiment, Dr. Shi’s group genetically engineered an unexpectedly deadly SARS-like virus (not closely related to SARS‑CoV‑2) that exhibited a 10,000-fold increase in the quantity of virus in the lungs and brains of humanized mice. Wuhan institute scientists handled these live viruses at low biosafety levels, including BSL-2.
  • Even the much more stringent containment at BSL-3 cannot fully prevent SARS‑CoV‑2 from escaping. Two years into the pandemic, the virus infected a scientist in a BSL-3 laboratory in Taiwan, which was, at the time, a zero-Covid country. The scientist had been vaccinated and was tested only after losing the sense of smell. By then, more than 100 close contacts had been exposed. Human error is a source of exposure even at the highest biosafety levels, and the risks are much greater for scientists working with infectious pathogens at low biosafety.
  • An early draft of the Defuse proposal stated that the Wuhan lab would do their virus work at BSL-2 to make it “highly cost-effective.” Dr. Baric added a note to the draft highlighting the importance of using BSL-3 to contain SARS-like viruses that could infect human cells, writing that “U.S. researchers will likely freak out.”
  • Years later, after SARS‑CoV‑2 had killed millions, Dr. Baric wrote to Dr. Daszak: “I have no doubt that they followed state determined rules and did the work under BSL-2. Yes China has the right to set their own policy. You believe this was appropriate containment if you want but don’t expect me to believe it. Moreover, don’t insult my intelligence by trying to feed me this load of BS.”
  • SARS‑CoV‑2 is a stealthy virus that transmits effectively through the air, causes a range of symptoms similar to those of other common respiratory diseases and can be spread by infected people before symptoms even appear. If the virus had escaped from a BSL-2 laboratory in 2019, the leak most likely would have gone undetected until too late.
  • One alarming detail — leaked to The Wall Street Journal and confirmed by current and former U.S. government officials — is that scientists on Dr. Shi’s team fell ill with Covid-like symptoms in the fall of 2019. One of the scientists had been named in the Defuse proposal as the person in charge of virus discovery work. The scientists denied having been sick.
  • The hypothesis that Covid-19 came from an animal at the Huanan Seafood Market in Wuhan is not supported by strong evidence.
  • In December 2019, Chinese investigators assumed the outbreak had started at a centrally located market frequented by thousands of visitors daily. This bias in their search for early cases meant that cases unlinked to or located far away from the market would very likely have been missed
  • To make things worse, the Chinese authorities blocked the reporting of early cases not linked to the market and, claiming biosafety precautions, ordered the destruction of patient samples on January 3, 2020, making it nearly impossible to see the complete picture of the earliest Covid-19 cases. Information about dozens of early cases from November and December 2019 remains inaccessible.
  • A pair of papers published in Science in 2022 made the best case for SARS‑CoV‑2 having emerged naturally from human-animal contact at the Wuhan market by focusing on a map of the early cases and asserting that the virus had jumped from animals into humans twice at the market in 2019
  • More recently, the two papers have been countered by other virologists and scientists who convincingly demonstrate that the available market evidence does not distinguish between a human superspreader event and a natural spillover at the market.
  • Furthermore, the existing genetic and early case data show that all known Covid-19 cases probably stem from a single introduction of SARS‑CoV‑2 into people, and the outbreak at the Wuhan market probably happened after the virus had already been circulating in humans.
  • Not a single infected animal has ever been confirmed at the market or in its supply chain. Without good evidence that the pandemic started at the Huanan Seafood Market, the fact that the virus emerged in Wuhan points squarely at its unique SARS-like virus laboratory.
  • With today’s technology, scientists can detect how respiratory viruses — including SARS, MERS and the flu — circulate in animals while making repeated attempts to jump across species. Thankfully, these variants usually fail to transmit well after crossing over to a new species and tend to die off after a small number of infections
  • investigators have not reported finding any animals infected with SARS‑CoV‑2 that had not been infected by humans. Yet, infected animal sources and other connective pieces of evidence were found for the earlier SARS and MERS outbreaks as quickly as within a few days, despite the less advanced viral forensic technologies of two decades ago.
  • Even though Wuhan is the home base of virus hunters with world-leading expertise in tracking novel SARS-like viruses, investigators have either failed to collect or report key evidence that would be expected if Covid-19 emerged from the wildlife trade. For example, investigators have not determined that the earliest known cases had exposure to intermediate host animals before falling ill.
  • No antibody evidence shows that animal traders in Wuhan are regularly exposed to SARS-like viruses, as would be expected in such situations.
  • In previous outbreaks of coronaviruses, scientists were able to demonstrate natural origin by collecting multiple pieces of evidence linking infected humans to infected animals
  • In contrast, virologists and other scientists agree that SARS‑CoV‑2 required little to no adaptation to spread rapidly in humans and other animals. The virus appears to have succeeded in causing a pandemic upon its only detected jump into humans.
  • it was a SARS-like coronavirus with a unique furin cleavage site that emerged in Wuhan, less than two years after scientists, sometimes working under inadequate biosafety conditions, proposed collecting and creating viruses of that same design.
  • a laboratory accident is the most parsimonious explanation of how the pandemic began.
  • Given what we now know, investigators should follow their strongest leads and subpoena all exchanges between the Wuhan scientists and their international partners, including unpublished research proposals, manuscripts, data and commercial orders. In particular, exchanges from 2018 and 2019 — the critical two years before the emergence of Covid-19 — are very likely to be illuminating (and require no cooperation from the Chinese government to acquire), yet they remain beyond the public’s view more than four years after the pandemic began.
  • it is undeniable that U.S. federal funding helped to build an unprecedented collection of SARS-like viruses at the Wuhan institute, as well as contributing to research that enhanced them.
  • Advocates and funders of the institute’s research, including Dr. Fauci, should cooperate with the investigation to help identify and close the loopholes that allowed such dangerous work to occur. The world must not continue to bear the intolerable risks of research with the potential to cause pandemics.
  • A successful investigation of the pandemic’s root cause would have the power to break a decades-long scientific impasse on pathogen research safety, determining how governments will spend billions of dollars to prevent future pandemics. A credible investigation would also deter future acts of negligence and deceit by demonstrating that it is indeed possible to be held accountable for causing a viral pandemic
  • Last but not least, people of all nations need to see their leaders — and especially, their scientists — heading the charge to find out what caused this world-shaking event. Restoring public trust in science and government leadership requires it.
Javier E

Opinion | Let's Imagine We Knew Exactly How the Pandemic Started - The New York Times

  • To some, it all sounds like noise. “Whether Covid came accidentally from a lab in Wuhan or a seafood market is almost beside the point,” Edward Luce wrote in The Financial Times last month,
  • This has always struck me as an exceedingly strange perspective. Perhaps it is a truism to say that the events that brought about the deaths of perhaps 20 million people around the world and the jagged disruption of many billions of other lives are of enormous consequence and that dismissing the matter of its cause as simply a “blame game” is a form of not just historical but moral incuriosity.
  • It is consequential as long as it remains unresolved, as well. That’s because our collective uncertainty about the origin of the pandemic has itself shaped the way we’ve come to think about what we’ve all just lived through, the way we responded in the first place and the way the pandemic has played out, often weaponized, in geopolitics.
  • Three years since its start we are still more likely to see the pandemic in partisan rather than world-historical terms. And the grandly tragic story of the pandemic takes on a profoundly different shape and color depending on the nature of its first act.
  • In a world where a natural origin was confirmed beyond all doubt, we might look back and narrate the pandemic as one particular kind of story: a morality tale showcasing the incomplete triumph of modern civilization and the enduring threats from nature, and highlighting the way that, whatever we might have told ourselves in 2019 or 2009 about the fortress of the wealthy world, pandemic disease remained a humbling civilization-scale challenge no nation had very good answers for.
  • in a world where a lab-leak origin had been confirmed instead, we would probably find ourselves telling a very different set of stories — primarily about humanity’s Icarian hubris, or perhaps about scientists’ Faustian indifference to the downside risks of new research, or the way in which very human impulses to cover up mistakes and wrongdoing might have compounded those mistakes to disastrous global effect.
  • It would have been, “We brought this on ourselves.” Or perhaps, if we were feeling xenophobic rather than humbly human, “They brought this on us,”
  • the pandemic would probably have joined nuclear weapons as a conventional illustration of the dark side of human knowledge, perhaps even surpassed them — 20 million dead is nothing to trifle with, after all, though it remains less than the overall death toll of World War II or even the Great Leap Forward.
  • the horror would also offer a silver lining: If human action was responsible for this pandemic, then in theory, human action could prevent the next one as well.
  • if the figures are even mostly reliable, they reflect a remarkable indifference on the part of the country to the source of a once-in-a-century disease disaster
  • It is as though we’ve decided both that the pandemic was “man-made” and that its emergence was a kind of inevitability we can’t do much about.
  • a definitive confirmation of a lab origin probably would not mean that responsibility lay in any simplistic way with China. But that isn’t to say the case wouldn’t have been made, probably in a variety of forms — calls for “reparations,” demands for global provision of free vaccines — that would only have contributed additional antagonism and resentment to the world stage, further polarizing the great-power landscape.
  • It would be as though following a catastrophic earthquake, we didn’t bother to sort out whether it had been caused by local fracking but instead argued endlessly about the imperfections of disaster response
  • as we piece together a working history of the past few years, you might hope we’d grow more focused on nailing the story down.
  • it seems likely to me that in the very earliest days of 2020, with cases exploding in China but not yet elsewhere, knowing that the disease was a result of gain-of-function research and had escaped from a lab probably would have produced an even more significant wave of global fear
  • In a world where neither narrative has been confirmed, and where pandemic origins are governed by an epistemological fog, I worry we have begun to collate the two stories in a somewhat paradoxical and self-defeating way
  • presumably, many fewer people contemplating the initial news would’ve assumed that the outbreak would be largely limited to Asia, as previous outbreaks had been; public health messengers in places like the United States probably would not have been so casually reassuring; and even more dramatic circuit-breaking responses like a monthlong international travel ban might’ve been instituted quite quickly
  • As the pandemic wore on, I suspect that effect would have lingered beyond the initial panic. At first, it might’ve been harder to decide that the virus was just something to live with if we knew simultaneously that it was something introduced to the world in error.
  • And later, when the vaccines arrived, I suspect there might have been considerably less resistance to them, particularly on the American right, where anxiety and xenophobia might have trumped public-health skepticism and legacy anti-vaccine sentiment
  • the opposite counterfactual is just as illuminating
  • The question and its unresolvability have mattered enormously for geopolitics,
  • it is hard to think “superbug” and not panic.
  • The disease and global response may well have accelerated our “new Cold War,” as Luce writes, but it is hard to imagine an alternate history where a known lab-leak origin didn’t move the world there much faster.
  • On the other hand, the natural logic of a confirmed zoonotic origin would probably have been to push nations of the world closer together into networks of collaboration and cooperation
  • the direction of change would have most likely been toward more integration rather than less. After all, this is to some degree what happened in the wake of the initial outbreaks of SARS and MERS and the Ebola outbreaks of the past decade.
  • Instead, the geopolitics remain unsteady, which is to say, a bit jagged
  • The United States can weaponize a narrative about lab origin — as China hawks in both the Trump and Biden administrations have repeatedly done — without worrying too much about providing real proof or suffering concrete backlash.
  • And China can stonewall origin investigations by citing sovereignty rights and a smoke screen story about the disease originating in frozen food shipped in from abroad without paying much of an international price for the intransigence or bad-faith argumentation, either.
  • each has carried forward a gripe that needn’t be substantiated in order to be deployed.
  • ambiguity also offers plausible deniability, which means that without considerably more Chinese transparency and cooperation, those pushing both stories will find themselves still making only probabilistic cases. We’re probably going to be living with that uncertainty, in a political and social world shaped by it, for the foreseeable future
hannahcarter11

Rep. Michael McCaul, top Republican on House Foreign Affairs Committee, calls Covid-19 ...

  • Texas Rep. Michael McCaul, a top Republican on the House Foreign Affairs Committee, claimed Sunday the origins of the coronavirus pandemic are the "worst cover-up" in human history.
  • The comments from McCaul follow a directive from President Joe Biden ordering the intelligence community to redouble its efforts in investigating the origins of the coronavirus pandemic and to report back to him in 90 days. A US intelligence report found several researchers at China's Wuhan Institute of Virology fell ill in November 2019 and had to be hospitalized.
  • Other lawmakers have also called for answers regarding the origin of the virus and members of the House Foreign Affairs Committee, which has long been investigating the origins of the pandemic, received a classified briefing on the matter earlier this month, according to a source familiar with the matter.
  • A fierce debate has raged over whether the virus escaped from a lab in Wuhan or originated in the wild. Initially, prominent scientists publicly derided the so-called lab leak theory -- embraced by then-President Donald Trump and his allies -- as a conspiracy theory, and the intelligence community put out a rare public statement in late April 2020 affirming that it "also concurs with the wide scientific consensus that the Covid-19 virus was not manmade or genetically modified."
  • But as early as March 27, 2020, the Defense Intelligence Agency -- which is home to one of the intelligence community's most robust scientific cells -- in a classified assessment reported by Newsweek found that it was possible that the virus had emerged "accidentally" due to "unsafe laboratory practices."
  • And the Chinese government's lack of transparency and the restricted sharing of data have also hindered the intelligence community's ability to thoroughly investigate the lab leak theory. The US and Britain called on China last week to participate in a second phase of a World Health Organization investigation into the pandemic's origins, but China responded that its role in the probe "has been completed."
saberal

If the Wuhan lab-leak hypothesis is true, expect a political earthquake | Thomas Frank ...

  • at the end of a scary article about the history of “gain of function” research and its possible role in the still ongoing Covid pandemic, Nicholson Baker wrote as follows: “This may be the great scientific meta-experiment of the 21st century. Could a world full of scientists do all kinds of reckless recombinant things with viral diseases for many years and successfully avoid a serious outbreak? The hypothesis was that, yes, it was doable. The risk was worth taking. There would be no pandemic.”
  • Except there was. If it does indeed turn out that the lab-leak hypothesis is the right explanation for how it began — that the common people of the world have been forced into a real-life lab experiment, at tremendous cost — there is a moral earthquake on the way.
  • Think of all the disasters of recent years: economic neoliberalism, destructive trade policies, the Iraq War, the housing bubble, banks that are “too big to fail,” mortgage-backed securities, the Hillary Clinton campaign of 2016 — all of these disasters brought to you by the total, self-assured unanimity of the highly educated people who are supposed to know what they’re doing, plus the total complacency of the highly educated people who are supposed to be supervising them.
  • Because if the hypothesis is right, it will soon start to dawn on people that our mistake was not insufficient reverence for scientists, or inadequate respect for expertise, or not enough censorship on Facebook. It was a failure to think critically about all of the above, to understand that there is no such thing as absolute expertise
  • There was a time when the Covid pandemic seemed to confirm so many of our assumptions. It cast down the people we regarded as villains. It raised up those we thought were heroes. It prospered people who could shift easily to working from home even as it problematized the lives of those Trump voters living in the old economy.
  • But these days the consensus doesn’t consense quite as well as it used to. Now the media is filled with disturbing stories suggesting that Covid might have come — not from “populism” at all, but from a laboratory screw-up in Wuhan, China. You can feel the moral convulsions beginning as the question sets in: What if science itself is in some way culpable for all this?
  • In the years since (and for complicated reasons), liberal leaders have labored to remake themselves into defenders of professional rectitude and established legitimacy in nearly every field. In reaction to the fool Trump, liberalism made a sort of cult out of science, expertise, the university system, executive-branch “norms,” the “intelligence community,” the State Department, NGOs, the legacy news media, and the hierarchy of credentialed achievement in general.
  • The news media, in its zealous policing of the boundaries of the permissible, insisted that Russiagate was ever so true but that the lab-leak hypothesis was false false false, and woe unto anyone who dared disagree. Reporters gulped down whatever line was most flattering to the experts they were quoting and then insisted that it was 100% right and absolutely incontrovertible — that anything else was only unhinged Trumpist folly, that democracy dies when unbelievers get to speak, and so on.
yehbru

The best way to get to the bottom of the Covid-19 lab leak theory (opinion) - CNN

  • On Wednesday, President Joe Biden called for an inquiry by US intelligence agencies into the true origins of Covid-19.
  • The "lab leak" explanation, which was panned and dismissed by a number of analysts, gained new life after the Wall Street Journal reported on a previously undisclosed US intelligence report revealing that three researchers from the Wuhan lab became so sick with Covid-19-like symptoms in November 2019 -- before official reports of the first outbreak -- that they had to seek hospital care.
  • The Biden administration should itself -- separate and apart from the World Health Organization -- lead a multilateral effort to investigate the origins of the virus.
  • The lab leak theory has been judged by at least one US intelligence agency as the more likely explanation for Covid-19's origins, while two agencies think the virus was more likely spread to humans from an infected animal
  • An investigation into the true origins of the virus is essential not only for scientific reasons, but also because policymakers around the world need this knowledge to better prepare themselves for future pandemics.
  • It should be no surprise, then, that the WHO's own investigation into the origins of Covid-19 concluded that a lab leak was probably not the cause of the pandemic and that infection from natural sources was more likely. But investigators were only permitted to examine research conducted by Chinese state scientists and did not have full access to the data or facilities that would have allowed them to assess whether the virus that causes Covid-19 may have been present before cases of the disease were first confirmed in China in December 2019.
  • Beijing, for its part, considers the case closed and has argued that the attention should be turned to other countries for the role they may have played in the early days of the pandemic.
  • More specifically, the Biden administration is calling on the WHO to complete a second phase of its investigation in a way that allows "international experts the independence to fully assess the source of the virus and the early days of the outbreak."
  • Biden has been eager to redouble our engagement and work together with America's friends and allies around the world. Getting to the root cause of a pandemic that has already killed nearly 3.5 million people globally presents a golden opportunity to do just that.
Javier E

Our Services | Education Elements

  • We have found the “rotation” and “flex” models to be the best ways to combine online learning and offline teaching to meet the needs of students and schools. Unlike pure “face-to-face” teaching, these models integrate technology into the core instructional time, with greater personal attention given to students than in purely online environments.
  • Rotation: Lab. Student groups rotate between traditional classroom instruction and online instruction in a computer or learning lab, monitored by an instructional aide rather than a certified teacher. In use at Rocketship Public Schools.
  • Rotation: Classroom. Student groups rotate between traditional classroom instruction and online instruction within the classroom, overseen by certified teachers, apprentice teachers, and instructional aides. In use at KIPP Empower Academy and Alliance College-Ready Public Schools.
  • Flex. Students learn primarily online in a brick-and-mortar school location, with teachers acting as facilitators. In use at Carpe Diem Schools.
Javier E

Opinion | The Government Must Say What It Knows About Covid's Origins - The New York Times - 0 views

  • By keeping evidence that seemed to provide ammunition to proponents of a lab leak theory under wraps and resisting disclosure, U.S. officials have contributed to making the topic of the pandemic’s origins more poisoned and open to manipulation by bad-faith actors.
  • Treating crucial information like a dark secret empowers those who viciously and unfairly accuse public health officials and scientists of profiting off the pandemic. As Megan K. Stack wrote in Times Opinion this spring, “Those who seek to suppress disinformation may be destined, themselves, to sow it.”
  • According to an Economist/YouGov poll published in March, 66 percent of Americans — including majorities of Democrats and independents — believe the pandemic was caused by research activities, a number that has gone up since 2020
  • ...5 more annotations...
  • The American public, however, only rarely heard refreshing honesty from their officials or even their scientists — and this tight-lipped, denialist approach appears to have only strengthened belief that the pandemic arose from carelessness during research or even, in less reality-based accounts, something deliberate
  • Only 16 percent of Americans believed that it was likely or definitely false that the emergence of the Covid virus was tied to research in a Chinese lab, while 17 percent were unsure.
  • Worse, biosafety, globally, remains insufficiently regulated. Making biosafety into a controversial topic makes it harder to move forward with necessary regulation and international effort
  • For years, scientists and government officials did not publicly talk much about the fact that a 1977 “Russian” influenza pandemic that killed hundreds of thousands of people most likely began when a vaccine trial went awry.
  • one reason for the relative silence was the fear of upsetting the burgeoning cooperation over flu surveillance and treatment by the United States, China and Russia.
ethanshilling

More Scientists Urge Broad Inquiry Into Coronavirus Origins - The New York Times - 0 views

  • A group of 18 scientists stated Thursday in a letter published in the journal Science that there is not enough evidence to decide whether a natural origin or an accidental laboratory leak caused the Covid-19 pandemic.
  • “Most of the discussion you hear about SARS-CoV-2 origins at this point is coming from, I think, the relatively small number of people who feel very certain about their views,” Dr. Bloom said.
  • Proponents of the idea that the virus may have leaked from a lab, especially the Wuhan Institute of Virology in China where SARS viruses were studied, have been active this year since a World Health Organization team issued a report claiming that such a leak was extremely unlikely, even though the mission never investigated any Chinese labs.
  • ...5 more annotations...
  • Recent letters by another group of scientists and international affairs experts argued at length for the relative likelihood of a laboratory leak. Previous statements from other scientists and the W.H.O. report both asserted that a natural origin was by far the most plausible.
  • The list of signers includes researchers with deep knowledge of the SARS family of viruses, such as Ralph Baric at the University of North Carolina, who had collaborated with the Chinese virologist Shi Zhengli in research done at the university on the original SARS virus. Dr. Baric did not respond to attempts to reach him by email and telephone.
  • Speaking for himself only, Dr. Relman said in an interview that “the piece that Kristian Andersen and four others wrote last March in my view simply fails to provide evidence to support their conclusions.”
  • Angela Rasmussen, a virologist at University of Saskatchewan’s Vaccine and Infectious Disease Organization, has criticized the politicization of the laboratory leak theory.
  • She supports further investigation, but said that “there is more evidence (both genomic and historical precedent) that this was the result of zoonotic emergence rather than a laboratory accident.”
Javier E

Nine Days in Wuhan, the Ground Zero of the Coronavirus Pandemic | The New Yorker - 0 views

  • By now, with worldwide infections at thirty-five million and counting, and with near-total silence on the part of the Chinese government, the market has become a kind of petri dish for the imagination.
  • One common Chinese conspiracy theory claims that the U.S. Army deliberately seeded the virus during the 2019 Military World Games, which were held in Wuhan that October. On the other side of the world, a number of Americans believe that the virus was released, whether accidentally or otherwise, from the Wuhan Institute of Virology, whose research includes work on coronaviruses.
  • There’s no evidence to support these theories, and even the prevalent animal-market connection is unclear. There weren’t many wildlife dealers in the market—about a dozen stalls, according to most published reports—and Wuhan natives have little appetite for exotic animals.
  • ...50 more annotations...
  • I never met a cabdriver who had been swab-tested less than twice, and a couple had been tested five times. Most of the cabbies had no relatives or friends who had been infected; swabbing was simply required by the city and by their cab companies.
  • When Wuhan was sealed, the strategy of isolation was replicated throughout the city. Housing compounds were closed and monitored by neighborhood committees, with residents going out only for necessities.
  • Toward the end of the first month, the guidelines were tightened further, until virtually all goods were delivered. On February 17th, Fang Fang wrote, “Everyone is now required to remain inside their homes at all times.”
  • Meanwhile, approximately ten thousand contact tracers were working in the city, in order to cut off chains of infection, and hospitals were developing large-scale testing systems. But isolation remained crucial: patients were isolated; suspected exposures were isolated; medical workers were isolated.
  • Zhang said the experience of working through the pandemic had left him calmer and more patient. He drove more carefully now; he wasn’t in such a rush.
  • I often asked Wuhan residents how they had been personally changed by the spring, and there was no standard response. Some expressed less trust in government information; others said they had increased faith in the national leadership.
  • Wuhan had most recently reported a locally transmitted symptomatic case on May 18th. It’s the most thoroughly tested city in China: at the end of May, in part to boost confidence, the government tried to test every resident, a total of eleven million.
  • There are three hundred and twenty-one testing locations in the city, and the system is so extensive that in June, when Beijing suffered an outbreak, Wuhan hospitals sent seventy-two staffers to the capital to help with tests.
  • “I tend to take a charitable view of countries that are at the beginning stage of epidemics,” Jennifer Nuzzo, an epidemiologist at the Johns Hopkins Center for Health Security, told me, in a phone conversation. According to her, it’s unrealistic to expect that any country could have stopped this particular virus at its source. “I’ve always believed that this thing was going to spread,” she said
  • The physician who handled testing told me that, on average, his hospital still recorded one positive for every forty thousand exams. Most of these positives were repeat patients: after having been infected during the initial run of the virus, they recovered fully, and then for some reason, months later, showed evidence of the virus again. So far, most of the positives had been asymptomatic, and the physician saw no indication that the virus was spreading in the city.
  • In town, there were few propaganda signs about the epidemic, and Wuhan newspapers ran upbeat headlines every morning (Yangtze Daily, August 29th, front page: “STUDENTS DO NOT HAVE TO WEAR MASKS IN SCHOOLS”). Movie theatres were open; restaurants and bars had no seating restrictions. At the Hanyang Renxinghui Mall, I saw barefaced kids playing in what may have been one of the last fully functioning ball pits on earth, a sight that seemed worthy of other headlines (“CHILDREN DO NOT HAVE TO WEAR MASKS IN WUHAN BALL PITS”).
  • Across town, colleges and universities were in the process of bringing back more than a million students. Wuhan has the second-highest number of students of any city in China, after Guangzhou.
  • Wuhan memories remained fresh, and the materials of documentation were also close at hand. People sometimes handed over manuscripts, and they took out their phones and pulled up photographs and messages from January and February. But I wondered how much of this material would dissipate over time.
  • In town, I met two Chinese journalists in their twenties who were visiting from out of town. They had been posted during the period of the sealed city: back then, anybody sent to cover events in Wuhan had to stay for the long haul.
  • One was a director of streaming media whom I’ll call Han, and he had found that government-run outlets generally wanted footage that emphasized the victory over the disease, not the suffering of Wuhan residents. Han hoped that eventually he’d find other ways to use the material. “It will be in the hard drive,” he said, tapping his camera.
  • After that, Yin reported on a number of issues that couldn’t be published or completed, and she often talked with scientists and officials who didn’t want to say too much. “One person said, ‘Ten years later, if the climate has changed, I’ll tell you my story,’ ” Yin told me. “He knew that he would be judged by history.” She continued, “These people are inside the system, but they also know that they are inside history.”
  • In time, we will learn more, but the delay is important to the Communist Party. It handles history the same way that it handles the pandemic—a period of isolation is crucial. Throughout the Communist era, there have been many moments of quarantined history: the Great Leap Forward, the Cultural Revolution, the massacre around Tiananmen Square. In every case, an initial silencing has been followed by sporadic outbreaks of leaked information. Wuhan will eventually follow the same pattern, but for the time being many memories will remain in the sealed city.
  • When I spoke with scientists outside China, they weren’t focussed on the government’s early missteps
  • Such fare is much more popular in Guangdong, in the far south. It’s possible that the disease arrived from somewhere else and then spread in the wet, cool conditions of the fish stalls. A few Wuhan residents told me that a considerable amount of their seafood comes from Guangdong, and they suggested that perhaps a southerner had unwittingly imported the disease,
  • Wafaa El-Sadr, the director of ICAP, a global-health center at Columbia University, pointed out that Chinese scientists had quickly sequenced the virus’s genome, which was made available to researchers worldwide on January 11th. “I honestly think that they had a horrific situation in Wuhan and they were able to contain it,” she said. “There were mistakes early on, but they did act, and they shared fast.”
  • For much of El-Sadr’s career, she has worked on issues related to AIDS in the United States, Africa, and elsewhere. After years of research, scientists eventually came to the consensus that H.I.V. most likely started through the bushmeat trade—the first human was probably infected after coming into contact with a primate or primate meat.
  • El-Sadr views the coronavirus as another inevitable outcome of people’s encroachment on the natural world. “We are now living through two concomitant massive pandemics that are the result of spillover from animal to human hosts, the H.I.V. and the COVID pandemics,” she wrote to me, in an e-mail. “Never in history has humanity experienced something along this scale and scope.”
  • There’s a tendency to believe that we would know the source of the coronavirus if the Chinese had been more forthcoming, or if they hadn’t cleaned out the Huanan market before stalls and animals could be studied properly.
  • But Peter Daszak, a British disease ecologist who has collaborated with the Wuhan Institute of Virology for sixteen years on research on bat coronaviruses, told me that it’s typical to fail to gather good data from the site of an initial outbreak. Once people get sick, local authorities inevitably focus on the public-health emergency. “You send in the human doctors, not the veterinarians,” he said, in a phone conversation. “And the doctors’ response is to clean out the market. They want to stop the infections.”
  • Daszak believes the virus probably circulated for weeks before the Wuhan outbreak, and he doubts that the city was the source. “There are bats in Wuhan, but it was the wrong time of year,” he told me. “It was winter, and bats are not out as much.”
  • His research has indicated that, across Southeast Asia, more than a million people each year are infected by bat coronaviruses. Some individuals trap, deal, or raise animals that might serve as intermediary hosts. “But generally it’s people who live near bat caves,”
  • Daszak said that he had always thought that such an outbreak was most likely to occur in Kunming or Guangzhou, southern cities that are close to many bat caves and that also have an intensive wildlife trade.
  • He thinks that Chinese scientists are probably now searching hospital freezers for lab samples of people who died of pneumonia shortly before the outbreak. “You would take those samples and look for the virus,” he said. “They’ll find something eventually. These things just don’t happen overnight; it requires a lot of work. We’ve seen this repeatedly with every disease. It turns out that it was already trickling through the population.”
  • Daszak is the president of EcoHealth Alliance, a nonprofit research organization based in New York. EcoHealth has become the target of conspiracy theorists, including some who claim that the virus was man-made. Daszak and many prominent virologists say that anything created in a lab would show clear signs of manipulation.
  • There’s also speculation that the outbreak started when researchers accidentally released a coronavirus they were studying at the Wuhan Institute of Virology. But there’s no evidence of a leak, or even that the institute has ever studied a virus that could cause a COVID-19 outbreak.
  • “Scientists in China are under incredible pressure to publish,” Daszak said. “It really drives openness and transparency.”
  • He has spent a good deal of time in Wuhan, and co-authored more than a dozen papers with Chinese colleagues. “If we had found a virus that infected human cells and spread within a cell culture, we would have put the information out there,” he said. “In sixteen years, I’ve never come across the slightest hint of subterfuge. They’ve never hidden data. I’ve never had a situation where one lab person tells me one thing and the other says something else. If you were doing things that you didn’t want people to know about, why would you invite foreigners into the lab?”
  • In April, President Trump told reporters that the U.S. should stop funding research connected to the Wuhan Institute of Virology. Shortly after Trump’s comments, the National Institutes of Health cancelled a $3.7-million grant to EcoHealth, which had been studying how bat coronaviruses are transmitted to people.
  • I asked Daszak why, if he has such faith in the openness of his Wuhan colleagues, the Chinese government has been so closed about other aspects of the outbreak. He said that science is one thing, and politics something else; he thinks that officials were embarrassed about the early mistakes, and in response they simply shut down all information.
  • At the beginning of July, China National Biotec Group, a subsidiary of a state-owned pharmaceutical company called Sinopharm, completed construction of a vaccine-manufacturing plant in Wuhan. The project began while the city was still sealed. “That’s the politically correct thing to do,” a Shanghai-based biotech entrepreneur told me. “To show the world that the heroic people of Wuhan have come back.”
  • Yiwu He, the chief innovation officer at the University of Hong Kong, told me that the C.N.B.G. vaccine has already been given to a number of Chinese government officials, under an emergency-use approval granted by the authorities. “I know a few government officials personally, and they told me that they took the vaccine,” he said, in a phone conversation. He thought that the total number was probably around a hundred. “It’s middle-level officials,” he said. “Vice-ministers, mayors, vice-mayors.”
  • Pharmaceutical executives have also been expected to lead the way, like the construction manager who donned P.P.E. in order to escort his workers into the patient ward. “Every senior executive at Sinopharm and C.N.B.G. has been vaccinated,” He said. “Including the C.E.O. of Sinopharm, the chairman of the board, every vice-president—everyone.” The Chinese press has reported that vaccinations have also been administered to hundreds of thousands of citizens in high-risk areas around the world.
  • In the West, China’s image has been badly damaged by the pandemic and by other recent events. The country has tightened political crackdowns in Hong Kong and Xinjiang, and, in May, after Australia called for an investigation into the origins of the virus, China responded furiously, placing new tariffs and restrictions on Australian goods ranging from barley to beef.
  • But He believes that the situation is fluid. “All of these feelings can turn around quickly,” he told me. “I think that once China has a vaccine, and if they can help other countries, it can make a huge difference.”
  • There’s also a competitive element. “China wants to beat America,” He said. He believes that the C.N.B.G. vaccine will receive some level of approval for public use by the end of October. “Chinese officials are thinking that Donald Trump might approve a U.S. vaccine before the election,” he said. “So their goal is to have a vaccine approved before that.”
  • No matter how quickly the Chinese develop a vaccine, or how effectively they have handled the pandemic since January, it’s unlikely to make Westerners forget the mistakes and misinformation during the pandemic’s earliest phase.
  • Some of this is due to a cultural difference—the Chinese response to errors is often to look forward, not back. On January 31st, Fang Fang commented in her diary, “The Chinese people have never been fond of admitting their own mistakes, nor do they have a very strong sense of repentance.” It’s often hard for them to understand why this quality is so frustrating for Westerners. In this regard, the pandemic is truly a mirror—it doesn’t allow the Chinese to look out and see themselves through the eyes of others.
  • The pandemic illuminates both the weaknesses and the strengths of the Chinese system, as well as the relationship between the government and the people. They know each other well: officials never felt the need to tell citizens exactly what happened in Wuhan, but they understood that American-level casualties would have been shocking—given China’s population, the tally would have been more than a million and counting.
  • In order to avoid death on that scale, the government also knew that people would be willing to accept strict lockdowns and contribute their own efforts toward fighting the virus.
  • In turn, citizens were skilled at reading their government. People often held two apparently contradictory ideas: that the Party lied about some things but gave good guidance about others. More often than not, citizens could discern the difference. During the pandemic, it was striking that, when the Chinese indulged in conspiracy theories, these ideas rarely resulted in personally risky behavior, as they often did in the U.S.
  • Perhaps the Chinese have been inoculated by decades of censorship and misinformation: in such an environment, people develop strong instincts for self-preservation, and they don’t seem as disoriented by social media as many Americans are.
  • Early in the year, I corresponded by WeChat with a Wuhan pharmacist who worked in a hospital where many were infected. On February 26th, he expressed anger about the early coverup. “My personal opinion is that the government has always been careless and suppressed dissent,” he wrote. “Because of this, they lost a golden opportunity to control the virus.”
  • In Wuhan, we met a few times, and during one of our conversations I showed him what he had written in February. I asked what he would do now if he found himself in Li Wenliang’s position, aware of an outbreak of some unknown disease. Would he post a warning online? Contact a health official? Alert a journalist? The pharmacist thought for a moment. “I would tell my close friends in person,” he said. “But I wouldn’t put anything online. Nothing in writing.”
  • I asked if such an event would turn out differently now. “It would be the same,” he said. “It’s a problem with the system.”
  • He explained that, with an authoritarian government, local officials are afraid of alarming superiors, which makes them inclined to cover things up. But, once higher-level leaders finally grasp the truth, they can act quickly and effectively.
Javier E

Scientists Predicted the Coronavirus Pandemic - The Atlantic - 0 views

  • The now-prophetic words could be found at the end of a research paper published in the journal Clinical Microbiology Reviews in October of 2007: “The presence of a large reservoir of SARS-CoV-like viruses in horseshoe bats, together with the culture of eating exotic mammals in southern China, is a time bomb.”
  • The warning—made nearly 13 years ago and more than four years after the worrying first wave of severe acute respiratory syndrome, or SARS, killed nearly 800 people globally—was among the earliest to predict the emergence of something like SARS-CoV-2, the virus behind the current COVID-19 pandemic.
  • ...25 more annotations...
  • Dogged by skepticism and inconsistent funding, these coronavirus researchers were stymied from developing treatments and vaccines for SARS—many of which could have been helpful in the current crisis.
  • funding declines hobbled individual investigators who weren’t part of these larger consortia. Pharmaceutical companies that develop vaccines and therapies scaled back on coronavirus research, too. Within a few years after the SARS outbreak, public health funding agencies both in the United States and abroad “no longer regarded coronaviruses as a high public health threat compared to other diseases,” Saif wrote in an email.
  • to some experts whose business it is to hunt potential pathogens before they spill over into human populations, the many years spent not girding for a serious coronavirus outbreak were tragically—and unnecessarily—wasted.
  • “We were out there on the ground after SARS, working on coronaviruses with Chinese colleagues in collaboration,” said Peter Daszak, president of the EcoHealth Alliance, a New York–based nonprofit group that took part in a large federally funded effort, called Predict, to hunt for new pandemic viruses in wildlife in 31 countries, including China. That program was famously defunded last fall, just before the SARS-CoV-2 outbreak began.
  • “But we were the only group of western scientists,” Daszak added. “How can we be the only people looking for these viruses when there was such a clear and present danger?”
  • when SARS emerged in late 2002, there was initially “general disbelief among medical people that a coronavirus could be the basis of such a huge outbreak.”
  • As that epidemic spread, an influx of new researchers crowded the field. More grants were awarded, and funding started to climb. “Everyone wanted to know where the virus had come from,” said Ralph Baric, a microbiologist at the University of North Carolina’s Gillings School of Global Public Health. Initial findings pointed to wild civets and raccoon dogs sold for meat and pelts, respectively, in Chinese markets. Later evidence began to implicate horseshoe bats as the original source of the infections. Some researchers whose pre-SARS careers had been grounded in basic coronavirus biology began working on therapies and vaccines—and they made steady progress for several years.
  • Another similarly affected researcher was Brenda Hogue, a virologist at Arizona State University in Tempe. Hogue had devoted her career to studying coronaviruses, focusing on the protein machinery that drives their assembly. After SARS, she and her colleagues turned part of their attention toward developing a vaccine. But when the funding dropped off in 2008, she said, the vaccine went into limbo “and we put our efforts into other directions.”
  • Then on May 12, The Wall Street Journal reported that the Chinese government was responding in kind, “by stalling international efforts to find the source of the [SARS-CoV-2] virus amid an escalating U.S. push to blame China for the pandemic.”
  • To demonstrate that a particular virus is actually harmful to people, scientists need to isolate and culture the microbe and show it infects human cells in the lab
  • Led by virologist Zheng-Li Shi, the Wuhan team reported in 2013 that this particular virus, called WIV1, binds with ACE2 in civet and human cells, and then replicates efficiently inside them. “That was the red flag,” Saif said. Earlier evidence suggested that direct contact with these bats could lead to viral spillover in humans. “Now there was proof of that.”
  • The bats had been trapped in a cave in Kunming, the capital of the Yunnan province. At least seven other SARS-like strains were present in that same colony, leading the researchers to speculate that bat coronaviruses remained “a substantial global threat to public health.”
  • They created a hybrid microbe by attaching the spike protein from SHC014 to the genetic backbone of a SARS-like virus that was previously adapted to grow in mice. Called a chimera—an organism containing cells with more than one genotype—this lab-made microbe had no problem binding with ACE2 and infecting human cells. Baric’s research team concluded that like WIV1, any SARS-like viruses outfitted with the SHC014 spike could pose cross-species threats to people.
  • Baric acknowledged the risky nature of the research but emphasized the safety protocols. “In general, we don’t know the transmissibility or virulence potential of any bat viruses or chimeras,” Baric said in an email message. “Hence it’s best to keep and work with them under biosafety level 3 laboratory conditions to maximize safety.”
  • Baric also pointed out that a chimera would display a genetic signature “that says what it is.” The adjoining parts of a chimera segregate discreetly in a logical pattern.
  • A genetic analysis of the chimera produced in his lab, for instance, “would come out to be mouse-adapted SARS everywhere but the spike, which is SHC014.” Similar logical patterns are absent in SARS-CoV-2, indicating that the virus that causes COVID-19 evolved naturally.
  • Even as Baric and others were generating lab evidence that more SARS-like viruses were poised for human emergence, another outbreak—in pigs, not people—provided another strong and recent signal: Some 25,000 piglets were killed by a coronavirus in the Guangdong province of China, starting in 2016. That virus, too, was found in horseshoe bats, and Buchmeier described the outbreak as both a major cross-species spillover and a warning shot that was never really picked up by the broader public-health community.
  • The EcoHealth Alliance, which had been part of the Predict effort, maintained its own collaboration with the Wuhan Institute of Virology using funds supplied by the National Institutes of Health. But on April 24, the Trump administration—which is investigating whether SARS-CoV-2 escaped accidentally from the Wuhan Institute, an allegation that’s been broadly discredited—directed the NIH to cut off that support.
  • When cases of those diseases fell off, public-health responders shifted to other viral emergencies such as Ebola and Zika, and coronavirus research funding dropped sharply.
  • To disease experts, the bickering is a worrying—perhaps even astonishing—indicator that at least some global leaders still aren’t hearing what they have to say about the threat of coronaviruses, and Baric asserted that the ongoing pandemic exposes the need for better communication between countries, not less. “That is absolutely key,” he said. “Critical information needs to be passed as quickly as possible.”
  • Many other warnings would follow. Indeed, evidence of a looming and more deadly coronavirus pandemic had been building for years. Yet experts who specialize in coronaviruses—a large family of pathogens, found especially in birds and mammals, that can cross over to humans from other mammals and cause varying degrees of illness—struggled to convince a broader audience of the risk
  • the number of coronavirus-research grants funded by the National Institutes of Health—which had increased from a low of 28 in 2002 to a peak of 103 in 2008—went into a tailspin.
  • Though support for coronavirus research spiked a bit with the MERS outbreak in 2012, the increase was short-lived. Since that outbreak was quickly contained, the disease didn’t raise wider concerns and grant opportunities declined further.
  • Ironically, just as funding for drugs and vaccines was drying up, evidence that other coronavirus threats lurked in wildlife was only getting stronger
  • Ten years would pass, however, before researchers could show there were other SARS-like viruses in nature that also bind with ACE2. The evidence came from a team based at the Wuhan Institute of Virology
Javier E

Inside the Struggle to Make Lab-Grown Meat - WSJ - 0 views

  • “We can make it on small scales successfully,” said Josh Tetrick, chief executive officer of a rival food-technology company, Eat Just Inc.
  • What is uncertain is whether we and other companies will be able to produce this at the largest of scales, at the lowest of costs within the next decade.”
  • Mr. Tetrick said Eat Just’s Good Meat unit sells less than 5,000 pounds annually of its hybrid cultivated chicken in Singapore,
  • ...10 more annotations...
  • Uma Valeti, the company’s CEO, said Upside has proven it can safely produce a delicious product. The company said that it has helped pioneer an industry and that it is making progress on growing larger quantities of meat, while bringing down its cost.
  • According to former employees, Upside has struggled to produce large quantities of meat. They said the company often scrambled to make enough for lab analysis and tastings. Upside for years worked to grow whole cuts of meat, which proved difficult in its bioreactors. It battled contamination in its labs. Traces of rodent DNA once tainted a chicken cell line, according to former employees, and confirmed by company executives.  
  • Today, the company is growing its marquee filet not in large bioreactors at its pilot plant but in two-liter plastic bottles akin to those used to grow cells for decades by pharmaceutical companies. 
  • “Roller bottles aren’t scalable. Too small, too labor-intensive,”
  • Upside’s pilot plant isn’t yet operating at the 50,000-pound annual capacity the company announced when it opened in 2021, according to company executives, much less its future target of 400,000 pounds. Production can accelerate once Upside receives USDA clearance, company executives said.
  • Industry champions said they are confident that steady scientific progress will help reduce production costs for cultivated meat, while climate change and global population growth will intensify the need for it.
  • “It turned out that tissue, or creating this whole-cut texture, was really challenging,” said Amy Chen, Upside’s chief operating officer
  • Upside also wrestled with problems common to other cultivated-meat makers, including a battle against bacteria, according to former employees. Growing meat requires meticulous sterilization because small quantities of bacteria can quickly overtake a bioreactor, ruining a batch.
  • The company said contamination can slow production, but doesn’t affect final cultivated products, unlike conventional meat. The company said that autoclaves sometimes require maintenance and that meat grown for consumers won’t be produced in the older building.
  • Some industry officials think companies can surmount contamination problems, but that other hurdles will still abound, including those tied to growing the finicky cells and the high cost of supplies.  
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
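The prediction-driven learning described in this excerpt can be illustrated in miniature. The sketch below is not how GPT works internally (GPT is a transformer trained by gradient descent); it is a counting-based bigram model over a made-up corpus, showing how raw next-word statistics alone already yield a usable predictor:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Tally, for every word, which words follow it and how often."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for w, nxt in zip(words, words[1:]):
        counts[w][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most frequent follower seen in training, else None."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# A made-up miniature "corpus" for illustration
model = train_bigram("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # -> cat ("cat" followed "the" twice, "mat" once)
```

As the excerpt notes, the real model's predictions improve with more data for the same reason this toy's would: more sentences mean better-estimated statistics about what tends to follow what.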
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
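The memorize-then-generalize pivot Millière describes can be caricatured in a few lines. The two functions below are stand-ins, not a real neural network: one is the pure-lookup strategy the small transformer starts with, the other is the additive rule it eventually learns, and only the rule handles inputs outside the (hypothetical) training set:

```python
# Hypothetical miniature training set: sums of small numbers only
train = {(a, b): a + b for a in range(3) for b in range(3)}

def memorizer(a, b):
    """Strategy 1 -- pure lookup: perfect on seen pairs, helpless otherwise."""
    return train.get((a, b))  # None when the pair never appeared in training

def learned_rule(a, b):
    """Strategy 2 -- the general rule the model eventually pivots to."""
    return a + b

print(memorizer(2, 2), learned_rule(2, 2))  # 4 4    (pair was in the training set)
print(memorizer(7, 5), learned_rule(7, 5))  # None 12 (only the rule generalizes)
```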
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
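This "always predict, even unprepared" failure mode can be sketched with a toy. The predictor below (a counting model over an invented corpus, nothing like GPT's architecture) never abstains: where it has evidence it uses it, and where it has none it guesses from its vocabulary anyway, which is loosely analogous to a language model hallucinating rather than saying "I don't know":

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus; tally which word follows which
corpus = "paris is the capital of france berlin is the capital of germany".split()
counts = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    counts[w][nxt] += 1
vocab = sorted(set(corpus))
rng = random.Random(0)  # seeded so the demo is repeatable

def predict(word):
    """Never abstain: use the evidence if any, otherwise guess anyway."""
    followers = counts.get(word)
    if followers:
        return followers.most_common(1)[0][0]
    return rng.choice(vocab)  # a confident-sounding fabrication

print(predict("capital"))  # -> of (well supported by the toy data)
print(predict("tokyo"))    # some vocabulary word, delivered just as confidently
```

The point of the sketch: nothing in the predictor's output distinguishes the supported answer from the fabricated one, which mirrors why hallucinations are hard to catch.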
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • . “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

White House touts lab study showing coronavirus vulnerability to summer weather - The W... - 0 views

  • recent lab studies carried out by the agency at the U.S. Army’s biosecurity laboratory at Fort Detrick, Md.
  • the novel coronavirus, like many other viruses, does not survive as long when exposed to high amounts of ultraviolet light and warm and humid conditions.
  • “Within the conditions we’ve tested to date, the virus in droplets of saliva survives best indoors and in dry conditions. … The virus dies quickest in the presence of direct sunlight.”
  • The half-life is a measurement of the time it takes for a given amount of the virus to become reduced by half.
  • the half-life of the virus, in the absence of sunlight (indoors), lowers from 18 hours to one hour when the temperature rises from around room temperature (70 to 75 degrees) to 95 degrees and the humidity increases from 20 percent to 80 percent.
  • The laboratory experiment also tested how the virus decays when exposed to various elements while suspended in the air. When the airborne virus at temperatures between 70 and 75 degrees is exposed to sunlight, its half-life decreases from around 60 minutes before exposure to 1.5 minutes after.
  • The laboratory results show that increases in temperature, humidity and sunlight all can speed up how fast the virus is destroyed, based on measurements of its half-life when exposed to these elements.
  • The weather is no panacea when it comes to the coronavirus pandemic, considering that warm states, such as Georgia and Florida, already are seeing significant outbreaks, as are warm and humid countries, including Singapore. Even if the virus were to wane during the summer, a dreaded second wave would still be likely in the fall, as has happened with past pandemic flu outbreaks.
  • A slide presented by Bryan also recommended moving activities outside.
  • in the real world, the virus on a playground surface exposed to direct sunlight would die quickly, but the virus could survive longer in shaded areas.
  • If the summer months reduce the transmission rates of the virus, that would help officials’ efforts to squelch its spread without resorting to drastic mitigation measures, such as stay-at-home orders, that have had massive economic repercussions.
  • “It would be irresponsible for us to say summer will kill the virus,” Bryan said, calling summer conditions “another tool in toolbox” to use against the virus.
  • “Increasing the temperature and humidity of potentially contaminated indoor spaces appears to reduce the stability of the virus,” he said. “And extra care may be warranted for dry environments that do not have exposure to solar light.”
  • That report pointed to shortcomings in the studies published so far that trace the spread of the coronavirus and connect the pattern of spread to temperature and humidity, stating they “should be interpreted with caution.”
  • The NAS report states: “There is some evidence to suggest that SARS-CoV-2 may transmit less efficiently in environments with higher ambient temperature and humidity; however, given the lack of host immunity globally, this reduction in transmission efficiency may not lead to a significant reduction in disease spread” without mitigation measures, such as social distancing
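The half-life figures reported above describe simple exponential decay: after each half-life, the amount of viable virus is cut in half. A minimal sketch of that arithmetic in Python, using the half-life values quoted in the article (the function name is my own; the constants are the article's figures):

```python
def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of virus remaining after exponential decay with the given half-life."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Indoors, no sunlight, room temperature: half-life of roughly 18 hours.
print(remaining_fraction(18, 18))   # 0.5  — half remains after one half-life
print(remaining_fraction(36, 18))   # 0.25 — a quarter after two half-lives

# Airborne virus in direct sunlight: half-life of ~1.5 minutes (0.025 hours),
# so after a single hour essentially none survives.
print(remaining_fraction(1, 0.025))
```

This is why the reported drop from an 18-hour to a 1.5-minute half-life is so dramatic: the decay is exponential, so a shorter half-life compounds very quickly.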
martinelligi

Why U.S. Thinks Wuhan Lab Is Worth A Look In Search For Pandemic Origins : NPR - 0 views

  • The idea that the coronavirus could have leaked from a lab in Wuhan, China — instead of jumping from animals to humans — was dismissed as a conspiracy theory by many scientists a year ago. That has changed now.
  • "this needs more investigation," she said Thursday in an interview with NPR's Rachel Martin on Morning Edition. "And saying that this needs more investigation doesn't mean the virus leaked from a lab. But we need to investigate that and figure that out because it really does have implications for how we'll prevent the next pandemic."
saberal

Nikki Haley calls for Beijing Olympics boycott, urges Biden diplomats to create COVID p... - 0 views

  • Former U.S. Ambassador to the United Nations Nikki Haley on Tuesday called on the United States to boycott the 2022 Winter Olympics set to take place next February in Beijing.
  • "[The U.S. should] go to Japan, go to India and all these other allies and say look, until we have a full investigation of that lab, until we know what is in it, what precautions are being done to make sure nothing comes back out of that lab and until we know what China knew, when they knew it and what they did about it, we're not going to support the Olympics."
  • "I think what is really important is Congress needs to go through and find out exactly what the National Institute of Health knew about the Wuhan lab, what they knew about any of thinks viruses that existed, if they funded anything and what they did about it," Haley told Fox News on Tuesday.
anonymous

Researchers Create 'Model Embryos' To Study Human Fertility : Shots - Health News : NPR - 0 views

  • For decades, science has been trying to unlock the mysteries of how a single cell becomes a fully formed human being and what goes wrong to cause genetic diseases, miscarriages and infertility. Now, scientists have created living entities in their labs that resemble human embryos; the results of two new experiments are the most complete such "model embryos" developed to date.
  • The blastoids appear to have enough differences from naturally formed embryos to prevent them ever becoming a viable fetus or baby. But they appear to be very close.
  • Crucial periods of embryonic development are hidden inside women's bodies during pregnancies and therefore inaccessible to study. And conducting experiments on human embryos in the laboratory is difficult and controversial.
  • So in recent years, scientists started creating structures that resemble human embryos in the lab by using chemical signals to coax cells into forming themselves into entities that look like very primitive human embryos.
  • Now, Wu's team and another international team of scientists have gone further than ever before. They created hollow balls of cells that closely resemble embryos at the stage when they usually implant in the womb, which are known as blastocysts. The new laboratory-made embryos have been dubbed "blastoids."
  • Now with this technique we can make hundreds of these structures. So this will allow us to scale up our understanding of very early human development. We think this will be very important." Some other scientists are hailing the research.
  • The goal of the experiments is to gain important insights into early human development and find new ways to prevent birth defects and miscarriages and treat fertility problems. But the research, which was published in two separate papers Wednesday in the journal Nature Portfolio, raises sensitive moral and ethical concerns.
  • The two experiments started with different cells to get similar results. Wu's group created his blastoids from human embryonic stem cells, and from "induced pluripotent stem cells," which are made from adult cells. Polo's group started with adult skin cells.
  • Hyun agrees the research is very important and could lead to a many other advances. But Hyun says it's important to come up with clear guidelines about how scientists can responsibly be permitted to pursue this kind of research.
  • Hyun favors revising a guideline known as the 14-day rule, which prohibits experiments on human embryos in the lab beyond two weeks of their existence. Hyun says exceptions should be allowed under certain carefully reviewed conditions.
  • But others worry about easing the 14-day rule. That could mean "we could just keep growing these sort-of humans in a test tube and not even considering the fact that they're so close to being human, right?"
  • In fact, another team of scientists at the Weizmann Institute of Science in Israel figured out how to grow mouse embryos outside the womb — a step toward creating an "artificial womb," according to report published Wednesday in the journal Nature.
ethanshilling

Airstrike Damages Gaza's Only Covid-19 Testing Lab, Officials Say - The New York Times - 0 views

  • Since Covid-19 first emerged in the blockaded Gaza Strip, a shortage of medical supplies has allowed authorities to administer only a relatively tiny number of coronavirus tests.
  • Now, the sole laboratory in Gaza that processes test results has become temporarily inoperable after an Israeli airstrike nearby on Monday, officials in Gaza said.
  • The strike, which targeted a separate building in Gaza City, sent shrapnel and debris flying across the street, damaging the lab and the administrative offices of the Hamas-run Health Ministry, said Dr. Majdi Dhair, director of the ministry’s preventive medicine department.
  • The Israeli Army did not immediately respond to a request for comment about the strike. Since Israel began its bombing campaign in Gaza on May 10, the army has said that its airstrikes aim solely at militants and their infrastructure.
  • Over the past week, the authorities in Gaza have tested an average of 515 Palestinians daily for the virus. Only 1.9 percent of Gaza’s two million people were fully vaccinated as of Monday, according to official data, compared with 56 percent in Israel.
  • Unvaccinated Palestinians were crowding into schools run by the United Nations relief agency in Gaza, turning them into de facto bomb shelters. Matthias Schmale, the U.N. agency’s director of operations, said last week that those schools “could turn into mass spreaders.”
  • Mr. Schmale and the top World Health Organization official in Gaza, Sacha Bootsma, also said that all vaccinations had stopped when hostilities broke out, and that any vaccine supplies headed to the territory had been delayed by the closure of Gaza’s border crossings.
Javier E

In Stinging Rebuke, China Tells U.S. Diplomat That Its Rise Can't Be Stopped - The New ... - 0 views

  • A senior Chinese diplomat on Monday bluntly warned the visiting American deputy secretary of state, Wendy R. Sherman, that the Biden administration’s strategy of pursuing both confrontation and cooperation with Beijing was sure to fail.
  • China’s vice foreign minister, Xie Feng, told Ms. Sherman that the United States’ “competitive, collaborative and adversarial rhetoric” was a “thinly veiled attempt to contain and suppress China,” according to a summary of Mr. Xie’s comments that the Chinese foreign ministry sent to reporters.
  • Mr. Xie’s remarks underscored the anger that has been building in China toward the United States, undermining the chances that the approach will work.
  • “It seems that a whole-of-government and whole-of-society campaign is being waged to bring China down,” Mr. Xie told Ms. Sherman, according to the summaries of his comments, which were also issued on the Chinese foreign ministry website. “Do bad things and get good results. How is that ever possible?”
  • Chinese people “feel that the real emphasis is on the adversarial aspect; the collaborative aspect is just an expediency,” Mr. Xie told Ms. Sherman, according to the summary.
  • The acrimony echoed the opening of high-level talks between senior Chinese and Biden administration officials in March, when Beijing’s top foreign policy official, Yang Jiechi, delivered a 16-minute lecture, accusing them of arrogance and hypocrisy.
  • Last week, Chinese officials said they were “extremely shocked” by a W.H.O. proposal to take a fresh look at the lab leak theory. A report in March from an initial W.H.O. inquiry stated that it was “extremely unlikely” that the coronavirus had jumped into the wider population through a lab leak.
  • Under Xi Jinping, the Chinese government has expressed impatience with criticism and demands from Washington, especially over what Beijing deems internal issues like Hong Kong, Xinjiang and human rights. “We’ll never accept insufferably arrogant lecturing from those ‘master teachers!’” Mr. Xi said in a speech on July 1 marking 100 years since the founding of the Chinese Communist Party. He also warned that foes would “crack their heads and spill blood” against a wall of Chinese resolve.
  • China’s foreign minister, Wang Yi, who was also scheduled to meet Ms. Sherman in Tianjin, said over the weekend that the United States needed to be taught some humility. “If the United States still hasn’t learned how to get along with other countries in an equal manner, then we have a responsibility to work with the international community to give it a good catch-up lesson,” Mr. Wang said in talks on Saturday with his Pakistani counterpart, Shah Mahmood Qureshi, according to the Chinese foreign ministry.
Javier E

Elon Musk Ramps Up A.I. Efforts, Even as He Warns of Dangers - The New York Times - 0 views

  • At a 2014 aerospace event at the Massachusetts Institute of Technology, Mr. Musk indicated that he was hesitant to build A.I. himself. “I think we need to be very careful about artificial intelligence,” he said while answering audience questions. “With artificial intelligence, we are summoning the demon.”
  • That winter, the Future of Life Institute, which explores existential risks to humanity, organized a private conference in Puerto Rico focused on the future of A.I. Mr. Musk gave a speech there, arguing that A.I. could cross into dangerous territory without anyone realizing it and announced that he would help fund the institute. He gave $10 million.
  • OpenAI was set up as a nonprofit, with Mr. Musk and others pledging $1 billion in donations. The lab vowed to “open source” all its research, meaning it would share its underlying software code with the world. Mr. Musk and Mr. Altman argued that the threat of harmful A.I. would be mitigated if everyone, rather than just tech giants like Google and Facebook, had access to the technology.
  • as OpenAI began building the technology that would result in ChatGPT, many at the lab realized that openly sharing its software could be dangerous. Using A.I., individuals and organizations can potentially generate and distribute false information more quickly and efficiently than they otherwise could. Many OpenAI employees said the lab should keep some of its ideas and code from the public.
  • Mr. Musk renewed his complaints that A.I. was dangerous and accelerated his own efforts to build it. At a Tesla investor event last month, he called for regulators to protect society from A.I., even though his car company has used A.I. systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.
  • During the interview last week with Mr. Carlson, Mr. Musk said OpenAI was no longer serving as a check on the power of tech giants. He wanted to build TruthGPT, he said, “a maximum-truth-seeking A.I. that tries to understand the nature of the universe.”
  • Experts who have discussed A.I. with Mr. Musk believe he is sincere in his worries about the technology’s dangers, even as he builds it himself. Others said his stance was influenced by other motivations, most notably his efforts to promote and profit from his companies.
Javier E

How Could AI Destroy Humanity? - The New York Times - 0 views

  • “AI will steadily be delegated, and could — as it becomes more autonomous — usurp decision making and thinking from current humans and human-run institutions,” said Anthony Aguirre, a cosmologist at the University of California, Santa Cruz and a founder of the Future of Life Institute, the organization behind one of two open letters.
  • “At some point, it would become clear that the big machine that is running society and the economy is not really under human control, nor can it be turned off, any more than the S&P 500 could be shut down,” he said.
  • Are there signs A.I. could do this? Not quite. But researchers are transforming chatbots like ChatGPT into systems that can take actions based on the text they generate. A project called AutoGPT is the prime example.
  • The idea is to give the system goals like “create a company” or “make some money.” Then it will keep looking for ways of reaching that goal, particularly if it is connected to other internet services.
  • A system like AutoGPT can generate computer programs. If researchers give it access to a computer server, it could actually run those programs. In theory, this is a way for AutoGPT to do almost anything online — retrieve information, use applications, create new applications, even improve itself.
  • Mr. Leahy argues that as researchers, companies and criminals give these systems goals like “make some money,” they could end up breaking into banking systems, fomenting revolution in a country where they hold oil futures or replicating themselves when someone tries to turn them off.
  • “People are actively trying to build systems that self-improve,” said Connor Leahy, the founder of Conjecture, a company that says it wants to align A.I. technologies with human values. “Currently, this doesn’t work. But someday, it will. And we don’t know when that day is.”
  • Systems like AutoGPT do not work well right now. They tend to get stuck in endless loops. Researchers gave one system all the resources it needed to replicate itself. It couldn’t do it. In time, those limitations could be fixed.
  • Because they learn from more data than even their creators can understand, these systems also exhibit unexpected behavior. Researchers recently showed that one system was able to hire a human online to defeat a Captcha test. When the human asked if it was “a robot,” the system lied and said it was a person with a visual impairment. Some experts worry that as researchers make these systems more powerful, training them on ever larger amounts of data, they could learn more bad habits.
  • Who are the people behind these warnings?In the early 2000s, a young writer named Eliezer Yudkowsky began warning that A.I. could destroy humanity. His online posts spawned a community of believers.
  • Mr. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, an A.I. lab that Google acquired in 2014. And many from the community of “EAs” worked inside these labs. They believed that because they understood the dangers of A.I., they were in the best position to build it.
  • The two organizations that recently released open letters warning of the risks of A.I. — the Center for A.I. Safety and the Future of Life Institute — are closely tied to this movement.
  • The recent warnings have also come from research pioneers and industry leaders like Elon Musk, who has long warned about the risks. The latest letter was signed by Sam Altman, the chief executive of OpenAI; and Demis Hassabis, who helped found DeepMind and now oversees a new A.I. lab that combines the top researchers from DeepMind and Google.
  • Other well-respected figures signed one or both of the warning letters, including Dr. Bengio and Geoffrey Hinton, who recently stepped down as an executive and researcher at Google. In 2018, they received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.