History Readings / Group items tagged engineers

criscimagnael

China Eastern Pilots Were Experienced, Adding to Mystery of Crash - The New York Times - 0 views

  • The pilot of the China Eastern Airlines flight that crashed in southern China with 132 people aboard was an industry veteran with more than 6,000 hours of flying time.
  • His co-pilot was even more experienced, having flown since the early days of China’s post-Mao era, training on everything from Soviet-model biplanes to newer Boeing models.
  • How they piloted the Boeing 737 will be closely examined as investigators seek to explain what is probably China’s worst air disaster in more than a decade. Experts have said it is unlikely that anyone survived the crash.
  • On Thursday, rescuers said they had found engine components, part of a wing and other “important debris” as they searched the mountainside in a rural part of the Guangxi region for a fourth day.
  • Mr. Zhang, who was born in 1963, was one of China’s most experienced pilots.
  • A day earlier, the workers had found a black box, believed to be the cockpit voice recorder, which could provide investigators with crucial details. Officials said it was damaged but that its memory unit was relatively intact. The plane’s second black box, which records flight data, has yet to be recovered.
  • Their past performance was “very good,” Sun Shiying, the chairman of China Eastern Airlines’ Yunnan branch, said on Wednesday. When reached by phone, an airline representative declined to answer further questions about the crew.
  • At the main crash site, a state broadcaster showed the workers digging with shovels around a large piece of wreckage that the reporter described as a wing, which bore part of the China Eastern logo and was perched on a steep, barren slope fringed by dense thickets of now-flattened bamboo. Heavy rains had left the roads slick and inundated the earth with muddy pools.
  • Over his career as a commercial pilot with China Yunnan, which later merged with China Eastern, Mr. Zhang flew four different models of aircraft and accumulated 31,769 hours of flight experience.
  • The airline commonly paired young pilots with older pilots, and Mr. Zhang had mentored more than 100, CAAC News said. Mr. Yang was one of them.
  • Experts said that investigating the crash, which involved a sudden dive from cruising altitude in good weather, would require a close look at both the aircraft and the pilots, including the possibility that the plane was deliberately brought down. But they stressed that the cause was far from determined.
  • “Certainly an intentional downing is always a part of any investigation, and especially with this particular flight profile,” said Hassan Shahidi, chief executive of the Flight Safety Foundation, a nonprofit organization created after World War II to promote aviation safety. But he cautioned that it was “premature to jump onto any possibilities.”
  • “If the captain were intending to commit suicide, they’d have to overcome the other flight crew members,” Mr. Marks said.
criscimagnael

Methane Leaks Plague New Mexico Oil and Gas Wells - The New York Times - 0 views

  • Startlingly large amounts of methane are leaking from wells and pipelines in New Mexico, according to a new analysis of aerial data, suggesting that the oil and gas industry may be contributing more to climate change than was previously known.
  • The study, by researchers at Stanford University, estimates that oil and gas operations in New Mexico’s Permian Basin are releasing 194 metric tons per hour of methane, a planet-warming gas many times more potent than carbon dioxide. That is more than six times as much as the latest estimate from the Environmental Protection Agency.
  • He and Ms. Chen, a Ph.D. student in energy resources engineering, said they believed their results showed the necessity of surveying a large number of sites in order to accurately measure the environmental impact of oil and gas production.
  • The largest previous assessment of methane emissions from oil and gas in the United States, published in 2018, reviewed studies covering about 1,000 well sites, a tiny fraction of the more than one million active wells in the country. The new study, by contrast, used aerial data to examine nearly 27,000 sites from above: more than 90 percent of all wells in the New Mexico portion of the Permian Basin, which also extends into Texas.
  • Dr. Howarth estimated about a decade ago that the break-even point — the point above which natural gas would actually hurt the climate more than coal — was a 3.1 percent methane leakage rate. Based on more recent data from the Intergovernmental Panel on Climate Change, he now estimates that the threshold is closer to 2.8 or 2.9 percent. That makes the 9.4 percent leakage rate in the new study highly alarming (a back-of-envelope sketch of this break-even arithmetic follows this list).
  • They found that a small number of wells and pipelines accounted for “the vast majority” of methane leaks, Ms. Chen said, adding, “Comprehensive point source surveys find more high-consequence emission events, which drive total emissions.”
  • Natural gas accounts for about a third of American energy consumption, and because it is less costly than coal in terms of carbon dioxide emissions, many policymakers have promoted it as a “bridge” that could do less damage to the climate while society works on a longer-term transition to renewable energy. But compared to coal, natural gas results in much higher emissions of methane, which is a more potent greenhouse gas than carbon dioxide, but doesn’t last as long in the atmosphere.
  • Methane can be released by wells both on purpose, in a process known as venting, and through unintentional leaks from aging or faulty equipment.
  • If there was good news in the study, it was that a small number of oil and gas sites contributed disproportionately to emissions — suggesting that, if the worst offenders change their practices, it is possible for the industry to operate more cleanly.
  • The Stanford researchers emphasized that the same methodology they used to quantify methane emissions could be used to identify problem sites and target regulations accordingly.“Aerial technology found high methane emissions,” Ms. Chen said, “but can also help fix them cost effectively.”
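
A back-of-envelope sketch of the break-even arithmetic referenced above. Every constant below is an illustrative assumption (rounded IPCC-style emission factors and a 20-year warming potential for methane), and the function name is invented for this sketch; none of it comes from the Stanford study. The published analyses also weigh factors such as power-plant efficiency, which is why their thresholds land somewhat higher, at the cited 2.8 to 3.1 percent.

# Rough 20-year climate comparison of natural gas vs. coal as a function
# of the upstream methane leakage rate. Illustrative constants only.

GWP20_CH4 = 82.5   # 20-year global warming potential of methane (IPCC AR6, assumed)
CO2_GAS = 56.0     # g CO2 per MJ from burning natural gas (approximate)
CO2_COAL = 94.6    # g CO2 per MJ from burning coal (approximate)
CH4_ENERGY = 50.0  # MJ released per kg of methane burned (lower heating value)

def gas_co2e_per_mj(leak_fraction):
    """CO2-equivalent footprint of gas, in g per MJ delivered: combustion
    CO2 plus the methane leaked upstream for every MJ reaching the burner."""
    ch4_burned_g = 1000.0 / CH4_ENERGY  # grams of CH4 burned per MJ
    leaked_g = ch4_burned_g * leak_fraction / (1.0 - leak_fraction)
    return CO2_GAS + leaked_g * GWP20_CH4

for pct in (1.0, 2.0, 2.5, 3.0, 9.4):
    footprint = gas_co2e_per_mj(pct / 100.0)
    verdict = "worse than coal" if footprint > CO2_COAL else "better than coal"
    print(f"{pct:4.1f}% leakage: {footprint:6.1f} g CO2e/MJ ({verdict})")

Even this crude model crosses coal's roughly 95 g CO2e/MJ somewhere between a 2 and a 2.5 percent leak rate, in the same neighborhood as the thresholds Dr. Howarth cites, and it puts the study's 9.4 percent rate at more than double coal's footprint.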
Javier E

If We Knew Then What We Know Now About Covid, What Would We Have Done Differently? - WSJ - 0 views

  • A small cadre of aerosol scientists had a different theory. They suspected that Covid-19 was transmitted not so much by droplets but by smaller infectious aerosol particles that could travel on air currents way farther than 6 feet and linger in the air for hours. Some of the aerosol particles, they believed, were small enough to penetrate the cloth masks widely used at the time.
  • For much of 2020, doctors and public-health officials thought the virus was transmitted through droplets emitted from one person’s mouth and touched or inhaled by another person nearby. We were advised to stay at least 6 feet away from each other to avoid the droplets.
  • The group had a hard time getting public-health officials to embrace their theory. For one thing, many of them were engineers, not doctors.
  • “My first and biggest wish is that we had known early that Covid-19 was airborne,”
  • “Once you’ve realized that, it informs an entirely different strategy for protection.” Masking, ventilation and air cleaning become key, as well as avoiding high-risk encounters with strangers, he says.
  • Instead of washing our produce and wearing hand-sewn cloth masks, we could have made sure to avoid superspreader events and worn more-effective N95 masks or their equivalent. “We could have made more of an effort to develop and distribute N95s to everyone,” says Dr. Volckens. “We could have had an Operation Warp Speed for masks.”
  • We didn’t realize how important clear, straight talk would be to maintaining public trust. If we had, we could have explained the biological nature of a virus and warned that Covid-19 would change in unpredictable ways.  
  • In the face of a pandemic, he says, the public needs an early, basic and blunt lesson in virology.
  • “The science is really important, but if you don’t get the trust and communication right, it can only take you so far,”
  • A virus spreads and mutates, and since we’ve never seen this particular virus before, we will need to take unprecedented actions and we will make mistakes, he says.
  • Since the public wasn’t prepared, “people weren’t able to pivot when the knowledge changed,”
  • By the time the vaccines became available, public trust had been eroded by myriad contradictory messages—about the usefulness of masks, the ways in which the virus could be spread, and whether the virus would have an end date.
  • The absence of a single, trusted source of clear information meant that many people gave up on trying to stay current or dismissed the different points of advice as partisan and untrustworthy.
  • We didn’t know how difficult it would be to get the basic data needed to make good public-health and medical decisions. If we’d had the data, we could have more effectively allocated scarce resources.
  • For much of the pandemic, doctors, epidemiologists, and state and local governments had no way to find out in real time how many people were contracting Covid-19, getting hospitalized and dying.
  • Doctors didn’t know what medicines worked. Governors and mayors didn’t have the information they needed to know whether to require masks. School officials lacked the information needed to know whether it was safe to open schools.
  • People didn’t know whether it was OK to visit elderly relatives or go to a dinner party.
  • Just months before the outbreak of the pandemic, the Council of State and Territorial Epidemiologists released a white paper detailing the urgent need to modernize the nation’s public-health system, which was still reliant on manual data-collection methods—paper records, phone calls, spreadsheets and faxes.
  • While the U.K. and Israel were collecting and disseminating Covid case data promptly, in the U.S. the CDC couldn’t. It didn’t have a centralized health-data collection system like those countries did, but rather relied on voluntary reporting by underfunded state and local public-health systems and hospitals.
  • doctors and scientists say they had to depend on information from Israel, the U.K. and South Africa to understand the nature of new variants and the effectiveness of treatments and vaccines. They relied heavily on private data collection efforts such as a dashboard at Johns Hopkins University’s Coronavirus Resource Center that tallied cases, deaths and vaccine rates globally.
  • With good data, Dr. Ranney says, she could have better managed staffing and taken steps to alleviate the strain on doctors and nurses by arranging child care for them.
  • To solve the data problem, Dr. Ranney says, we need to build a public-health system that can collect and disseminate data and act like an electrical grid. The power company sees a storm coming and lines up repair crews.
  • If we’d known how damaging lockdowns would be to mental health, physical health and the economy, we could have taken a more strategic approach to closing businesses and keeping people at home.
  • But many doctors say they were crucial at the start of the pandemic to give doctors and hospitals a chance to figure out how to accommodate and treat the avalanche of very sick patients.
  • The measures reduced deaths, according to many studies—but at a steep cost.
  • The lockdowns didn’t have to be so harmful, some scientists say. They could have been more carefully tailored to protect the most vulnerable, such as those in nursing homes and retirement communities, and to minimize widespread disruption.
  • Lockdowns could, during Covid-19 surges, close places such as bars and restaurants where the virus is most likely to spread, while allowing other businesses to stay open with safety precautions like masking and ventilation in place.  
  • If England’s March 23, 2020, lockdown had begun one week earlier, the measure would have nearly halved the estimated 48,600 deaths in the first wave of England’s pandemic.
  • If the lockdown had begun a week later, deaths in the same period would have more than doubled.
  • The key isn’t for lockdowns to last a long time, but for them to be deployed early (a short doubling-time calculation after this list shows why a single week matters so much).
  • It is possible to avoid lockdowns altogether. Taiwan, South Korea and Hong Kong—all experienced at handling disease outbreaks such as SARS in 2003 and MERS—avoided lockdowns through widespread masking, tracking the spread of the virus with testing and contact tracing, and quarantining infected individuals.
  • Had we known that even a mild case of Covid-19 could result in long Covid and other serious chronic health problems, we might have calculated our own personal risk differently and taken more care.
  • Early in the pandemic, public-health officials were clear: The people at increased risk for severe Covid-19 illness were older or immunocompromised, or had chronic kidney disease, Type 2 diabetes or serious heart conditions.
  • But it had the unfortunate effect of giving a false sense of security to people who weren’t in those high-risk categories. Once case rates dropped, vaccines became available and fear of the virus wore off, many people let their guard down, ditching masks and spending time in crowded indoor places.
  • It has become clear that even people with mild cases of Covid-19 can develop long-term, serious and debilitating diseases. Long Covid, whose symptoms include months of persistent fatigue, shortness of breath, muscle aches and brain fog, hasn’t been the virus’s only nasty surprise.
  • In February 2022, a study found that, for at least a year, people who had Covid-19 had a substantially increased risk of heart disease—even people who were younger and had not been hospitalized—as well as respiratory conditions.
  • Some scientists now suspect that Covid-19 might be capable of affecting nearly every organ system in the body. It may play a role in the activation of dormant viruses and latent autoimmune conditions people didn’t know they had.
  • A blood test, he says, would tell people if they are at higher risk of long Covid and whether they should have antivirals on hand to take right away should they contract Covid-19.
  • If the risks of long Covid had been known, would people have reacted differently, especially given the confusion over masks and lockdowns and variants? Perhaps. At the least, many people might not have assumed they were out of the woods just because they didn’t have any of the risk factors.
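
A quick sanity check on the lockdown-timing figures above, using nothing but the assumption of unchecked exponential growth; the roughly one-week doubling time is an assumption consistent with early-2020 estimates, not a number from the article. If first-wave deaths scale with the size of the epidemic at the moment a lockdown begins, then with doubling time $T_d$:

D(t) \propto 2^{t/T_d} \quad\Longrightarrow\quad \frac{D(t+\Delta t)}{D(t)} = 2^{\Delta t/T_d}

With $T_d \approx 7$ days, starting the lockdown $\Delta t = 7$ days earlier scales deaths by $2^{-1}$ (roughly halving them), and 7 days later by $2^{+1}$ (roughly doubling them), which matches the England modeling cited above.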
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.
  • The rule that most people aware of these issues would have endorsed 50 years earlier was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss.
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems.
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth.
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs.
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms.
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests; make it clear that anyone talking of arms races is a fool.
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Javier E

Regular Old Intelligence is Sufficient--Even Lovely - 0 views

  • Ezra Klein has done some of the most dedicated reporting on the topic since he moved to the Bay Area a few years ago, talking with many of the people creating this new technology.
  • One is that the people building these systems have only a limited sense of what’s actually happening inside the black box—the bot is doing endless calculations instantaneously, but not in a way even their inventors can actually follow.
  • An obvious question, one Klein has asked: “If you think calamity so possible, why do this at all?”
  • Second, the people inventing them think they are potentially incredibly dangerous: ten percent of them, in fact, think they might extinguish the human species. They don’t know exactly how, but think Sorcerer’s Apprentice (or google ‘paper clip maximizer’).
  • But why? The sun won’t blow up for a few billion years, meaning that if we don’t manage to drive ourselves to extinction, we’ve got all the time in the world. If it takes a generation or two for normal intelligence to come up with the structure of all the proteins, some people may die because a drug isn’t developed in time for their particular disease, but erring on the side of avoiding extinction seems mathematically sound.
  • That is, it seems to me, a dumb answer from smart people—the answer not of people who have thought hard about ethics or even outcomes, but the answer that would be supplied by a kind of cultist.
  • (Probably the kind with stock options).
  • it does go, fairly neatly, with the default modern assumption that if we can do something we should do it, which is what I want to talk about. The question that I think very few have bothered to answer is, why?
  • One pundit after another explains that DeepMind’s AI program AlphaFold worked far faster than scientists doing experiments to uncover the basic structure of all the different proteins, which will allow quicker drug development. It’s regarded as ipso facto better because it’s faster, and hence—implicitly—worth taking the risks that come with AI.
  • Allowing that we’re already good enough—indeed that our limitations are intrinsic to us, define us, and make us human—should guide us towards trying to shut down this technology before it does deep damage.
  • “I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”
  • As it happens, regular old intelligence has already given us most of what we need: engineers have cut the cost of solar power and wind power and the batteries to store the energy they produce so dramatically that they’re now the cheapest power on earth.
  • We don’t actually need artificial intelligence in this case; we need natural compassion, so that we work with the necessary speed to deploy these technologies.
  • Beyond those, the cases become trivial, or worse.
  • All of this is a way of saying something we don’t say as often as we should: humans are good enough. We don’t require improvement. We can solve the challenges we face, as humans.
  • It may take us longer than if we can employ some “new form of intelligence,” but slow and steady is the whole point of the race.
  • Unless, of course, you’re trying to make money, in which case “first-mover advantage” is the point.
  • The other challenge that people cite, over and over again, to justify running the risks of AI is to “combat climate change.”
  • Here’s the thing: pausing, slowing down, stopping calls on the one human gift shared by no other creature, and perhaps by no machine. We are the animal that can, if we want to, decide not to do something we’re capable of doing.
  • In individual terms, that ability forms the core of our ethical and religious systems; in societal terms it’s been crucial as technology has developed over the last century. We’ve, so far, reined in nuclear and biological weapons, designer babies, and a few other maximally dangerous new inventions.
  • It’s time to say do it again, and fast—faster than the next iteration of this tech.
Javier E

He's Narrating Your New Audiobook. He's Also Been Dead for Nearly 10 Years. - WSJ - 0 views

  • AI’s reach into audiobook narration isn’t merely theoretical. Thousands of AI-narrated audiobooks are available on popular marketplaces including Alphabet Inc.’s Google Play Books and Apple Inc.’s Apple Books. Amazon.com Inc., whose Audible unit is the largest U.S. audiobook service, doesn’t offer any for now, but says it is evaluating its position.
  • The technology hasn’t been widely embraced by the largest U.S. book publishers, which mostly use it for marketing efforts and some foreign-language titles.
  • It is a boon, though, for smaller outfits and little-known authors, whose books might not have the sales potential to warrant the cost—traditionally at least $5,000—of recording an audio version.
  • Apple and Google said they allow users to create audiobooks free of charge that use digitally replicated human voices. The voices featured in audiobooks generated by Apple and Google come from real people, whose voices helped train their automated-narration engines.
  • Ms. Papel said there is still plenty of work for professional narrators because the new era of AI auto-narration is just getting under way, though she said that might not be the case in the future.
  • “From what I can see, human narrators are freaking out.”
  • Melissa Papel, a Paris-born actress who records from her home studio in Los Angeles, said she recorded eight hours of content for DeepZen, reading in French from different books. “One called for me to read in an angry way, another in a disgusted way, a humorous way, a dramatic way,” she said.
  • Charles Watkinson, director of the University of Michigan Press, said the publisher has made about 100 audiobooks using Google’s free auto-narrated audiobook platform since early last year. The new technology made those titles possible because it eliminated the costs associated with using a production studio, support staff and human narrators.
  • “I understood that they would use my voice to teach software how to speak more humanly,” Ms. Papel said. “I didn’t realize they could use my voice to pronounce words I didn’t say. That’s incredible.”
  • DeepZen pays its narrators a flat fee plus a royalty based on the revenue the company generates from different projects. The agreements span multiple years.
  • A representative of a national union that represents performers, including professional audiobook narrators, said he expects AI to eventually disrupt the industry.
  • Audiobook sales rose 7% last year, according to the Association of American Publishers, while print book sales declined by 5.8%, according to book tracker Circana BookScan.
  • DeepZen says it has signed deals with 35 publishers in the U.S. and abroad and is working with 25 authors.
  • Josiah Ziegler, a psychiatrist in Fort Collins, Colo., last year created Intellectual Classics, which focuses on nonfiction works that are out of copyright and don’t have an audiobook edition. 
  • He chose Mr. Herrmann as the narrator for “The War with Mexico,” a work by Justin H. Smith that won the 1920 Pulitzer Prize for history; Dr. Ziegler expects to publish the audiobook version later this year.
  • DeepZen, which has created nearly a hundred audiobooks featuring Mr. Herrmann’s voice, is pursuing the rights of other well-known stars who have died.
Javier E

What Does Peter Thiel Want? - Persuasion - 0 views

  • Of the many wealthy donors working to shape the future of the Republican Party, none has inspired greater fascination, confusion, and anxiety than billionaire venture capitalist Peter Thiel. 
  • Thiel’s current outlook may well make him a danger to American democracy. But assessing the precise nature of that threat requires coming to terms with his ultimate aims—which have little to do with politics at all. 
  • Thiel and others point out that when we lift our gaze from our phones and related consumer products to the wider vistas of human endeavor—breakthroughs in medicine, the development of new energy sources, advances in the speed and ease of transportation, and the exploration of space—progress has indeed slowed to a crawl.
  • It certainly informed his libertarianism, which inclined in the direction of an Ayn Rand-inspired valorization of entrepreneurial superman-geniuses whose great acts of capitalistic creativity benefit all of mankind. Thiel also tended to follow Rand in viewing the masses as moochers who empower Big Government to crush these superman-geniuses.
  • Thiel became something of an opportunistic populist inclined to view liberal elites and institutions as posing the greatest obstacle to building an economy and culture of dynamistic creativity—and eager to mobilize the anger and resentment of “the people” as a wrecking ball to knock them down. 
  • the failure of the Trump administration to break more decisively from the political status quo left Thiel uninterested in playing a big role in the 2020 election cycle.
  • Does Thiel personally believe that the 2020 election was stolen from Trump? I doubt it. It’s far more likely he supports the disruptive potential of encouraging election-denying candidates to run and helping them to win.
  • Thiel is moved to indignation by the fact that since 1958 no commercial aircraft (besides the long-decommissioned Concorde) has been developed that can fly faster than 977 kilometers per hour.
  • Thiel is, first and foremost, a dynamist—someone who cares above all about fostering innovation, exploration, growth, and discovery.
  • the present looks and feels pretty much the same as 1969, only “with faster computers and uglier cars.” 
  • Thiel’s approach to the problem is distinctive in that he sees the shortfall as evidence of a deeper and more profound moral, aesthetic, and even theological failure. Human beings are capable of great creativity and invention, and we once aspired to achieve it in every realm. But now that aspiration has been smothered by layer upon layer of regulation and risk-aversion. “Legal sclerosis,” Thiel claimed in that same book review, “is likely a bigger obstacle to the adoption of flying cars than any engineering problem.”
  • Progress in science and technology isn’t innate to human beings, Thiel believes. It’s an expression of a specific cultural or civilizational impulse that has its roots in Christianity and reached a high point during the Victorian era of Western imperialism.
  • Thiel aims to undermine the progressive liberalism that dominates the mainstream media, the federal bureaucracy, the Justice Department, and the commanding heights of culture (in universities, think tanks, and other nonprofits).
  • In Thiel’s view, recapturing civilizational greatness through scientific and technological achievement requires fostering a revival of a kind of Christian Prometheanism (a monotheistic variation on the rebellious creativity and innovation pursued by the demigod Prometheus in ancient Greek mythology).
  • Against those who portray modern scientific and technological progress as a rebellion against medieval Christianity, Thiel insists it is Christianity that encourages a metaphysical optimism about transforming and perfecting the world, with the ultimate goal of turning it into “a place where no accidents can happen,” where the achievement of “personal immortality” becomes possible.
  • All that’s required to reach this transhuman end is that we “remain open to an eschatological frame in which God works through us in building the kingdom of heaven today, here on Earth—in which the kingdom of heaven is both a future reality and something partially achievable in the present.” 
  • As Thiel put it last summer in a wide-ranging interview with the British website UnHerd, the Christian world “felt very expansive, both in terms of the literal empire and also in terms of the progress of knowledge, of science, of technology, and somehow that was naturally consonant with a certain Christian eschatology—a Christian vision of history.”
  • JD Vance is quoted on the subject of what this political disruption might look like during a Trump presidential restoration in 2025. Vance suggests that Trump should “fire every single midlevel bureaucrat, every civil servant in the administrative state, replace them with our people. And when the courts stop [him], stand before the country, and say, ‘the chief justice has made his ruling. Now let him enforce it.’”
  • Another Thiel friend and confidante discussed at length in Vanity Fair, neo-reactionary Curtis Yarvin, takes the idea of disrupting the liberal order even further, suggesting various ways a future right-wing president (Trump or someone else) could shake things up, shredding the smothering blanket of liberal moralism, conformity, rules, and regulations, thereby encouraging the creation of something approaching a scientific-technological wild west, where innovation and experimentation rule the day. Yarvin’s preferred path to tearing down what he calls the liberal “Cathedral,” laid out in detail on a two-hour Claremont Institute podcast from May 2021, involves a Trump-like figure seizing dictatorial power in part by using a specially designed phone app to direct throngs of staunch supporters (Jan. 6-style) to overpower law enforcement at key locations around the nation’s capital.  
  • this isn’t just an example of guilt-by-association. These are members of Thiel’s inner circle, speaking publicly about ways of achieving shared goals. Thiel funded Vance’s Senate campaign to the tune of at least $15 million. Is it likely the candidate veered into right-wing radicalism with a Vanity Fair reporter in defiance of his campaign’s most crucial donor?
  • As for Yarvin, Thiel continued to back his tech startup (Urbit) after it became widely known that he was the pseudonymous author behind the far-right blog “Unqualified Reservations,” and as others have shown, the political thinking of the two men has long overlapped in numerous other ways.
  • He’s deploying his considerable resources to empower as many people and groups as he can, first, to win elections by leveraging popular disgust at corrupt institutions—and second, to use the power they acquire to dismantle or even topple those institutions, hopefully allowing a revived culture of Christian scientific-technological dynamism to arise from out of the ruins.  
  • Far more than most big political donors, Thiel appears to care only about the extra-political goal of his spending. How we get to a world of greater dynamism—whether it will merely require selective acts of troublemaking disruption, or whether, instead, it will ultimately involve smashing the political order of the United States to bits—doesn’t really concern him. Democratic politics itself—the effort of people with competing interests and clashing outlooks to share rule for the sake of stability and common flourishing—almost seems like an irritant and an afterthought to Peter Thiel.
  • What we do have is the opportunity to enlighten ourselves about what these would-be Masters of the Universe hope to accomplish—and to organize politically to prevent them from making a complete mess of things in the process.
Javier E

Pause or panic: battle to tame the AI monster - 0 views

  • What exactly are they afraid of? How do you draw a line from a chatbot to global destruction?
  • This tribe feels we have made three crucial errors: giving the AI the capability to write code, connecting it to the internet and teaching it about human psychology. In those steps we have created a self-improving, potentially manipulative entity that can use the network to achieve its ends — which may not align with ours.
  • This is a technology that learns from our every interaction with it. In an eerie glimpse of AI’s single-mindedness, OpenAI revealed in a paper that GPT-4 was willing to lie, telling a human online it was a blind person, to get a task done.
  • For researchers concerned with more immediate AI risks, such as bias, disinformation and job displacement, the voices of doom are a distraction. Professor Brent Mittelstadt, director of research at the Oxford Internet Institute, said the warnings of “the existential risks community” are overblown. “The problem is you can’t disprove the future scenarios . . . in the same way you can’t disprove science fiction.” Emily Bender, a professor of linguistics at the University of Washington, believes the doomsters are propagating “unhinged AI hype, helping those building this stuff sell it”.
  • Those urging us to stop, pause and think again have a useful card up their sleeves: the people building these models do not fully understand them. AI like ChatGPT is made up of huge neural networks that can defy their creators by coming up with “emergent properties”.
  • Google’s PaLM model started translating Bengali despite not being trained to do so.
  • Let’s not forget the excitement, because that is also part of Moloch, driving us forward. The lure of AI’s promises for humanity has been hinted at by DeepMind’s AlphaFold breakthrough, which predicted the 3D structures of nearly all the proteins known to humanity.
  • Noam Shazeer, a former Google engineer credited with setting large language models such as ChatGPT on their present path, was asked by The Sunday Times how the models worked. He replied: “I don’t think anybody really understands how they work, just like nobody really understands how the brain works. It’s pretty much alchemy.”
  • The industry is turning itself to understanding what has been created, but some predict it will take years, decades even.
  • Alex Heath, deputy editor of The Verge, who recently attended an AI conference in San Francisco. “It’s clear the people working on generative AI are uneasy about the worst-case scenario of it destroying us all. These fears are much more pronounced in private than they are in public.” One figure building an AI product “said over lunch with a straight face that he is savoring the time before he is killed by AI”.
  • Greg Brockman, co-founder of OpenAI, told the TED2023 conference this week: “We hear from people who are excited, we hear from people who are concerned. We hear from people who feel both those emotions at once. And, honestly, that’s how we feel.”
  • A CBS interviewer challenged Sundar Pichai, Google’s chief executive, this week: “You don’t fully understand how it works, and yet you’ve turned it loose on society?”
  • In 2020 there wasn’t a single drug in clinical trials developed using an AI-first approach. Today there are 18.
  • Consider this from Bill Gates last month: “I think in the next five to ten years, AI-driven software will finally deliver on the promise of revolutionising the way people teach and learn.”
  • If the industry is aware of the risks, is it doing enough to mitigate them? Microsoft recently cut its ethics team, and researchers building AI outnumber those focused on safety by 30-to-1.
  • The concentration of AI power, which worries so many, also presents an opportunity to more easily develop some global rules. But there is little agreement on direction. Europe is proposing a centrally defined, top-down approach. Britain wants an innovation-friendly environment where rules are defined by each industry regulator. The US commerce department is consulting on whether risky AI models should be certified. China is proposing strict controls on generative AI that could upend social order.
  • Part of the drive to act now is to ensure we learn the lessons of social media. Twenty years after creating it, we are trying to put it back in a legal straitjacket after learning that its algorithms understand us only too well. “Social media was the first contact between AI and humanity, and humanity lost,” said Yuval Harari, the Sapiens author.
  • Others point to bioethics, especially international agreements on human cloning. Tegmark said last week: “You could make so much money on human cloning. Why aren’t we doing it? Because biologists thought hard about this and felt this is way too risky. They got together in the Seventies and decided, let’s not do this because it’s too unpredictable. We could lose control over what happens to our species. So they paused.” Even China signed up.
  • One voice urging calm is Yann LeCun, Meta’s chief AI scientist. He has labelled ChatGPT a “flashy demo” and “not a particularly interesting scientific advance”. He tweeted: “A GPT-4-powered robot couldn’t clear up the dinner table and fill up the dishwasher, which any ten-year-old can do. And it couldn’t drive a car, which any 18-year-old can learn to do in 20 hours of practice. We’re still missing something big for human-level AI.” If this is sour grapes and he’s wrong, Moloch already has us in its thrall.
Javier E

Ozempic or Bust - The Atlantic - 0 views

  • it is impossible to know, in the first few years of any novel intervention, whether its success will last.
  • The ordinary fixes—the kind that draw on people’s will, and require eating less and moving more—rarely have a large or lasting effect. Indeed, America itself has suffered through a long, maddening history of failed attempts to change its habits on a national scale: a yo-yo diet of well-intentioned treatments, policies, and other social interventions that only ever lead us back to where we started
  • Through it all, obesity rates keep going up; the diabetes epidemic keeps worsening.
  • The most recent miracle, for Barb as well as for the nation, has come in the form of injectable drugs. In early 2021, the Danish pharmaceutical company Novo Nordisk published a clinical trial showing remarkable results for semaglutide, now sold under the trade names Wegovy and Ozempic.
  • Patients in the study who’d had injections of the drug lost, on average, close to 15 percent of their body weight—more than had ever been achieved with any other drug in a study of that size. Wadden knew immediately that this would be “an incredible revolution in the treatment of obesity.”
  • Many more drugs are now racing through development: survodutide, pemvidutide, retatrutide. (Among specialists, that last one has produced the most excitement: An early trial found an average weight loss of 24 percent in one group of participants.)
  • In the United States, an estimated 189 million adults are classified as having obesity or being overweight.
  • The drugs don’t work for everyone. Their major side effects—nausea, vomiting, and diarrhea—can be too intense for many patients. Others don’t end up losing any weight
  • For the time being, just 25 percent of private insurers offer the relevant coverage, and the cost of treatment—about $1,000 a month—has been prohibitive for many Americans.
  • The drugs have already been approved not just for people with diabetes or obesity, but for anyone who has a BMI of more than 27 and an associated health condition, such as high blood pressure or cholesterol. By those criteria, more than 140 million American adults already qualify.
  • if this story goes the way it’s gone for other “risk factor” drugs such as statins and antihypertensives, then the threshold for prescriptions will be lowered over time, inching further toward the weight range we now describe as “normal.”
  • How you view that prospect will depend on your attitudes about obesity, and your tolerance for risk
  • The first GLP-1 drug to receive FDA approval, exenatide, has been used as a diabetes treatment for more than 20 years. No long-term harms have been identified—but then again, that drug’s long-term effects have been studied carefully only across a span of seven years
  • the data so far look very good. “These are now being used, literally, in hundreds of thousands of people across the world,” she told me, and although some studies have suggested that GLP-1 drugs may cause inflammation of the pancreas, or even tumor growth, these concerns have not borne out.
  • adolescents are injecting newer versions of these drugs, and may continue to do so every week for 50 years or more. What might happen over all that time?
  • “All of us, in the back of our minds, always wonder, Will something show up?  ” Although no serious problems have yet emerged, she said, “you wonder, and you worry.”
  • in light of what we’ve been through, it’s hard to see what other choices still remain. For 40 years, we’ve tried to curb the spread of obesity and its related ailments, and for 40 years, we’ve failed. We don’t know how to fix the problem. We don’t even understand what’s really causing it. Now, again, we have a new approach. This time around, the fix had better work.
  • The fen-phen revolution arrived at a crucial turning point for Wadden’s field, and indeed for his career. By then he’d spent almost 15 years at the leading edge of research into dietary interventions, seeing how much weight a person might lose through careful cutting of their calories.
  • But that sort of diet science—and the diet culture that it helped support—had lately come into a state of ruin. Americans were fatter than they’d ever been, and they were giving up on losing weight. According to one industry group, the total number of dieters in the country declined by more than 25 percent from 1986 to 1991.
  • Rejecting diet culture became something of a feminist cause. “A growing number of women are joining in an anti-diet movement,” The New York Times reported in 1992. “They are forming support groups and ceasing to diet with a resolve similar to that of secretaries who 20 years ago stopped getting coffee for their bosses.”
  • Now Wadden and other obesity researchers were reaching a consensus that behavioral interventions might produce in the very best scenario an average lasting weight loss of just 5 to 10 percent.
  • National surveys completed in 1994 showed that the adult obesity rate had surged by more than half since 1980, while the proportion of children classified as overweight had doubled. The need for weight control in America had never seemed so great, even as the chances of achieving it were never perceived to be so small.
  • Wadden wasn’t terribly concerned, because no one in his study had reported any heart symptoms. But ultrasounds revealed that nearly one-third of them had some degree of leakage in their heart valves. His “cure for obesity” was in fact a source of harm.
  • In December 1994, the Times ran an editorial on what was understood to be a pivotal discovery: A genetic basis for obesity had finally been found. Researchers at Rockefeller University were investigating a molecule, later named leptin, that gets secreted from fat cells and travels to the brain, and that causes feelings of satiety. Lab mice with mutations in the leptin gene—importantly, a gene also found in humans—overeat until they’re three times the size of other mice. “The finding holds out the dazzling hope,”
  • In April 1996, the doctors recommended yes: Dexfenfluramine was approved—and became an instant blockbuster. Patients received prescriptions by the hundreds of thousands every month. Sketchy wellness clinics—call toll-free, 1-888-4FEN-FEN—helped meet demand. Then, as now, experts voiced concerns about access. Then, as now, they worried that people who didn’t really need the drugs were lining up to take them. By the end of the year, sales of “fen” alone had surpassed $300 million.
  • It was nothing less than an awakening, for doctors and their patients alike. Now a patient could be treated for excess weight in the same way they might be treated for diabetes or hypertension—with a drug they’d have to take for the rest of their life.
  • the article heralded a “new understanding of obesity as a chronic disease rather than a failure of willpower.”
  • News had just come out that, at the Mayo Clinic in Minnesota, two dozen women taking fen-phen—including six who were, like Barb, in their 30s—had developed cardiac conditions. A few had needed surgery, and on the operating table, doctors discovered that their heart valves were covered with a waxy plaque.
  • Americans had been prescribed regular fenfluramine since 1973, and the newer drug, dexfenfluramine, had been available in France since 1985. Experts took comfort in this history. Using language that is familiar from today’s assurances regarding semaglutide and other GLP-1 drugs, they pointed out that millions were already on the medication. “It is highly unlikely that there is anything significant in toxicity to the drug that hasn’t been picked up with this kind of experience,” an FDA official named James Bilstad would later say in a Time cover story headlined “The Hot New Diet Pill.”
  • “I know I can’t get any more,” she told Williams. “I have to use up what I have. And then I don’t know what I’m going to do after that. That’s the problem—and that is what scares me to death.” Telling people to lose weight the “natural way,” she told another guest, who was suggesting that people with obesity need only go on low-carb diets, is like “asking a person with a thyroid condition to just stop their medication.”
  • She’d gone off the fen-phen and had rapidly regained weight. “The voices returned and came back in a furor I’d never heard before,” Barb later wrote on her blog. “It was as if they were so angry at being silenced for so long, they were going to tell me 19 months’ worth of what they wanted me to hear. I was forced to listen. And I ate. And I ate. And ate.”
  • For Barb, rapid weight loss has brought on a different metaphysical confusion. When she looks in the mirror, she sometimes sees her shape as it was two years ago. In certain corners of the internet, this is known as “phantom fat syndrome,” but Barb dislikes that term. She thinks it should be called “body integration syndrome,” stemming from a disconnect between your “larger-body memory” and “smaller-body reality.”
  • In 2003, the U.S. surgeon general declared obesity “the terror within, a threat that is every bit as real to America as the weapons of mass destruction”; a few months later, Eric Finkelstein, an economist who studies the social costs of obesity, put out an influential paper finding that excess weight was associated with up to $79 billion in health-care spending in 1998, of which roughly half was paid by Medicare and Medicaid. (Later he’d conclude that the number had nearly doubled in a decade.)
  • In 2004, Finkelstein attended an Action on Obesity summit hosted by the Mayo Clinic, at which numerous social interventions were proposed, including calorie labeling in workplace cafeterias and mandatory gym class for children of all grades.
  • The message at their core, that soda was a form of poison like tobacco, spread. In San Francisco and New York, public-service campaigns showed images of soda bottles pouring out a stream of glistening, blood-streaked fat. Michelle Obama led an effort to depict water—plain old water—as something “cool” to drink.
  • Soon, the federal government took up many of the ideas that Brownell had helped popularize. Barack Obama had promised while campaigning for president that if America’s obesity trends could be reversed, the Medicare system alone would save “a trillion dollars.” By fighting fat, he implied, his ambitious plan for health-care reform would pay for itself. Once he was in office, his administration pulled every policy lever it could.
  • Michelle Obama helped guide these efforts, working with marketing experts to develop ways of nudging kids toward better diets and pledging to eliminate “food deserts,” or neighborhoods that lacked convenient access to healthy, affordable food. She was relentless in her public messaging; she planted an organic garden at the White House and promoted her signature “Let’s Move!” campaign around the country.
  • An all-out war on soda would come to stand in for these broad efforts. Nutrition studies found that half of all Americans were drinking sugar-sweetened beverages every day, and that consumption of these accounted for one-third of the added sugar in adults’ diets. Studies turned up links between people’s soft-drink consumption and their risks for type 2 diabetes and obesity. A new strand of research hinted that “liquid calories” in particular were dangerous to health.
  • when their field lost faith in low-calorie diets as a source of lasting weight loss, the two friends went in opposite directions. Wadden looked for ways to fix a person’s chemistry, so he turned to pharmaceuticals. Brownell had come to see obesity as a product of our toxic food environment: He meant to fix the world to which a person’s chemistry responded, so he started getting into policy.
  • The social engineering worked. Slowly but surely, Americans’ lamented lifestyle began to shift. From 2001 to 2018, added-sugar intake dropped by about one-fifth among children, teens, and young adults. From the late 1970s through the early 2000s, the obesity rate among American children had roughly tripled; then, suddenly, it flattened out.
  • although the obesity rate among adults was still increasing, its climb seemed slower than before. Americans’ long-standing tendency to eat ever-bigger portions also seemed to be abating.
  • Sugary drinks—liquid candy, pretty much—were always going to be a soft target for the nanny state. Fixing the food environment in deeper ways proved much harder. “The tobacco playbook pretty much only works for soda, because that’s the closest analogy we have as a food item,”
  • “That tobacco playbook doesn’t work to increase consumption of fruits and vegetables,” he said. “It doesn’t work to increase consumption of beans. It doesn’t work to make people eat more nuts or seeds or extra-virgin olive oil.”
  • Careful research in the past decade has shown that many of the Obama-era social fixes did little to alter behavior or improve our health. Putting calorie labels on menus seemed to prompt at most a small decline in the amount of food people ate. Employer-based wellness programs (which are still offered by 80 percent of large companies) were shown to have zero tangible effects. Health-care spending, in general, kept going up.
  • From the mid-1990s to the mid-2000s, the proportion of adults who said they’d experienced discrimination on account of their height or weight increased by two-thirds, going up to 12 percent. Puhl and others started citing evidence that this form of discrimination wasn’t merely a source of psychic harm, but also of obesity itself. Studies found that the experience of weight discrimination is associated with overeating, and with the risk of weight gain over time.
  • Obesity rates resumed their ascent. Today, 20 percent of American children have obesity. For all the policy nudges and the sensible revisions to nutrition standards, food companies remain as unfettered as they were in the 1990s, Kelly Brownell told me. “Is there anything the industry can’t do now that it was doing then?” he asked. “The answer really is no. And so we have a very predictable set of outcomes.”
  • She started to rebound. The openings into her gastric pouch—the section of her stomach that wasn’t bypassed—stretched back to something like their former size. And Barb found ways to “eat around” the surgery, as doctors say, by taking food throughout the day in smaller portions.
  • Bariatric surgeries can be highly effective for some people and nearly useless for others. Long-term studies have found that 30 percent of those who receive the same procedure Barb did regain at least one-quarter of what they lost within two years of reaching their weight nadir; more than half regain that much within five years.
  • If the effects of Barb’s surgery were quickly wearing off, its side effects were not: She now had iron, calcium, and B12 deficiencies resulting from the changes to her gut. She looked into getting a revision of the surgery—a redo, more or less—but insurance wouldn’t cover it.
  • She found that every health concern she brought to doctors might be taken as a referendum, in some way, on her body size. “If I stubbed my toe or whatever, they’d just say ‘Lose weight.’ ” She began to notice all the times she’d be in a waiting room and find that every chair had arms. She realized that if she was having a surgical procedure, she’d need to buy herself a plus-size gown—or else submit to being covered with a bedsheet when the nurses realized that nothing else would fit.
  • Barb grew angrier and more direct about her needs—You’ll have to find me a different chair, she started saying to receptionists. Many others shared her rage. Activists had long decried the cruel treatment of people with obesity: The National Association to Advance Fat Acceptance had existed, for example, in one form or another, since 1969; the Council on Size & Weight Discrimination had been incorporated in 1991. But in the early 2000s, the ideas behind this movement began to wend their way deeper into academia, and they soon gained some purchase with the public.
  • “Our public-health efforts to address obesity have failed,” Eric Finkelstein, the economist, told me.
  • Others attacked the very premise of a “healthy weight”: People do not have any fundamental need, they argued, morally or medically, to strive for smaller bodies as an end in itself. They called for resistance to the ideology of anti-fatness, with its profit-making arms in health care and consumer goods. The Association for Size Diversity and Health formed in 2003; a year later, dozens of scholars working on weight-related topics joined together to create the academic field of fat studies.
  • As the size-diversity movement grew, its values were taken up—or co-opted—by Big Business. Dove had recently launched its “Campaign for Real Beauty,” which included plus-size women. (Ad Age later named it the best ad campaign of the 21st century.) People started talking about “fat shaming” as something to avoid
  • Some experts were rethinking their advice on food and diet. At UC Davis, a physiologist named Lindo Bacon who had struggled to overcome an eating disorder had been studying the effects of “intuitive eating,” which aims to promote healthy, sustainable behavior without fixating on what you weigh or how you look.
  • By 2001, Bacon, who uses they/them pronouns, had received their Ph.D. and finished a rough draft of a book, Health at Every Size, which drew inspiration from a broader movement by that name among health-care practitioners.
  • But something shifted in the ensuing years. In 2007, Bacon got a different response, and the book was published. Health at Every Size became a point of entry for a generation of young activists and, for a time, helped shape Americans’ understanding of obesity.
  • The heightened sensitivity started showing up in survey data, too. In 2010, fewer than half of U.S. adults expressed support for giving people with obesity the same legal protections from discrimination offered to people with disabilities. In 2015, that rate had risen to three-quarters.
  • In Bacon’s view, the 2000s and 2010s were glory years. “People came together and they realized that they’re not alone, and they can start to be critical of the ideas that they’ve been taught,” Bacon told me. “We were on this marvelous path of gaining more credibility for the whole Health at Every Size movement, and more awareness.”
  • That sense of unity proved short-lived; the movement soon began to splinter. Black women have the highest rates of obesity, and disproportionately high rates of associated health conditions. Yet according to Fatima Cody Stanford, an obesity-medicine physician at Harvard Medical School, Black patients with obesity get lower-quality care than white patients with obesity.
  • That system was exactly what Bacon and the Health at Every Size movement had set out to reform. The problem, as they saw it, was not so much that Black people lacked access to obesity medicine, but that, as Bacon and the Black sociologist Sabrina Strings argued in a 2020 article, Black women have been “specifically targeted” for weight loss, which Bacon and Strings saw as a form of racism
  • But members of the fat-acceptance movement pointed out that their own most visible leaders, including Bacon, were overwhelmingly white. “White female dietitians have helped steal and monetize the body positive movement,” Marquisele Mercedes, a Black activist and public-health Ph.D. student, wrote in September 2020. “And I’m sick of it.”
  • Tensions over who had the standing to speak, and on which topics, boiled over. In 2022, following allegations that Bacon had been exploitative and condescending toward Black colleagues, the Association for Size Diversity and Health expelled them from its ranks and barred them from attending its events.
  • As the movement succumbed to in-fighting, its momentum with the public stalled. If attitudes about fatness among the general public had changed during the 2000s and 2010s, it was only to a point. The idea that some people can indeed be “fit but fat,” though backed up by research, has always been a tough sell.
  • Although Americans had become less inclined to say they valued thinness, measures of their implicit attitudes seemed fairly stable. Outside of a few cities such as San Francisco and Madison, Wisconsin, new body-size-discrimination laws were never passed.
  • In the meantime, thinness was coming back into fashion
  • In the spring of 2022, Kim Kardashian—whose “curvy” physique has been a media and popular obsession—boasted about crash-dieting in advance of the Met Gala. A year later, the model and influencer Felicity Hayward warned Vogue Business that “plus-size representation has gone backwards.” In March of this year, the singer Lizzo, whose body pride has long been central to her public persona, told The New York Times that she’s been trying to lose weight. “I’m not going to lie and say I love my body every day,” she said.
  • Among the many other dramatic effects of the GLP-1 drugs, they may well have released a store of pent-up social pressure to lose weight.
  • If ever there was a time to debate that impulse, and to question its origins and effects, it would be now. But Puhl told me that no one can even agree on which words are inoffensive. The medical field still uses obesity, as a description of a diagnosable disease. But many activists despise that phrase—some spell it with an asterisk in place of the e—and propose instead to reclaim fat.
  • Everyone seems to agree on the most important, central fact: that we should be doing everything we can to limit weight stigma. But that hasn’t been enough to stop the arguing.
  • Things feel surreal these days to just about anyone who has spent years thinking about obesity. At 71, after more than four decades in the field, Thomas Wadden now works part-time, seeing patients just a few days a week. But the arrival of the GLP-1 drugs has kept him hanging on for a few more years, he said. “It’s too much of an exciting period to leave obesity research right now.”
  • When everyone is on semaglutide or tirzepatide, will the soft-drink companies—Brownell’s nemeses for so many years—feel as if a burden has been lifted? “My guess is the food industry is probably really happy to see these drugs come along,” he said. They’ll find a way to reach the people who are taking GLP‑1s, with foods and beverages in smaller portions, maybe. At the same time, the pressures to cut back on where and how they sell their products will abate.
  • The triumph in obesity treatment only highlights the abiding mystery of why Americans are still getting fatter, even now.
  • Perhaps one can lay the blame on “ultraprocessed” foods, he said. Maybe it’s a related problem with our microbiomes. Or it could be that obesity, once it takes hold within a population, tends to reproduce itself through interactions between a mother and a fetus. Others have pointed to increasing screen time, how much sleep we get, which chemicals are in the products that we use, and which pills we happen to take for our many other maladies.
  • “The GLP-1s are just a perfect example of how poorly we understand obesity,” Mozaffarian told me. “Any explanation of why they cause weight loss is all post-hoc hand-waving now, because we have no idea. We have no idea why they really work and people are losing weight.”
  • The new drugs—and the “new understanding of obesity” that they have supposedly occasioned—could end up changing people’s attitudes toward body size. But in what ways?
  • When the American Medical Association declared obesity a disease in 2013, Rebecca Puhl told me, some thought “it might reduce stigma, because it was putting more emphasis on the uncontrollable factors that contribute to obesity.” Others guessed that it would do the opposite, because no one likes to be “diseased.”
  • Why wasn’t there another kind of nagging voice that wouldn’t stop—a sense of worry over what the future holds? And if she wasn’t worried for herself, then what about for Meghann or for Tristan, who are barely in their 40s? Wouldn’t they be on these drugs for another 40 years, or even longer? But Barb said she wasn’t worried—not at all. “The technology is so much better now.” If any problems come up, the scientists will find solutions.
Javier E

Order and Calm Eased Evacuation from Burning Japan Airlines Jet - The New York Times - 0 views

  • While a number of factors aided what many have called a miracle at Haneda Airport — a well-trained crew of 12; a veteran pilot with 12,000 hours of flight experience; advanced aircraft design and materials — the relative absence of panic onboard during the emergency procedure perhaps helped the most.
  • “Even though I heard screams, mostly people were calm and didn’t stand up from their seats but kept sitting and waiting,” said Aruto Iwama, a passenger who gave a video interview to the newspaper The Guardian. “That’s why I think we were able to escape smoothly.”
  • Experts said that while crews are trained — and passenger jets are tested — for cabin evacuations within 90 seconds in an emergency landing, technical specifications on the 2-year-old Airbus A350-900 most likely gave those on the flight a bit more time to escape.
  • ...5 more annotations...
  • Firewalls around the engines, nitrogen pumps in fuel tanks that help prevent immediate burning, and fire-resistant materials on seats and flooring most likely helped to keep the rising flames at bay, said Sonya A. Brown, a senior lecturer in aerospace design at the University of New South Wales in Sydney, Australia.
  • “Really, the Japan Airlines crew in this case performed extremely well,” Dr. Brown said. The fact that passengers did not stop to retrieve carry-on luggage or otherwise slow down the exit was “really critical,” she added.
  • Tadayuki Tsutsumi, an official at Japan Airlines, said the most important component of crew performance during an emergency was “panic control” and determining which exit doors were safe to use.
  • Former flight attendants described the rigorous training and drills that crew members undergo to prepare for emergencies. “When training for evacuation procedures, we repeatedly used smoke/fire simulation to make sure we could be mentally ready when situations like those occurred in reality,” Yoko Chang, a former cabin attendant and an instructor of aspiring crew members, wrote in an Instagram message.
  • Ms. Chang, who did not work for JAL, added that airlines require cabin crew members to pass evacuation exams every six months.
Javier E

He Turned 55. Then He Started the World's Most Important Company. - WSJ - 0 views

  • You probably use a device with a chip made by TSMC every day, but TSMC does not actually design or market those chips. That would have sounded completely absurd before the existence of TSMC. Back then, companies designed chips that they manufactured themselves. Chang’s radical idea for a great semiconductor company was one that would exclusively manufacture chips that its customers designed. By not designing or selling its own chips, TSMC never competed with its own clients. In exchange, they wouldn’t have to bother running their own fabrication plants, or fabs, the expensive and dizzyingly sophisticated facilities where circuits are carved on silicon wafers.
  • The innovative business model behind his chip foundry would transform the industry and make TSMC indispensable to the global economy. Now it’s the company that Americans rely on the most but know the least about
  • I wanted to know more about his decision to start a new company when he could have stopped working altogether. What I discovered was that his age was one of his assets. Only someone with his experience and expertise could have possibly executed his plan for TSMC. 
  • ...30 more annotations...
  • “I could not have done it sooner,” he says. “I don’t think anybody could have done it sooner. Because I was the first one.” 
  • Chang grew up dreaming of being a writer—a novelist, maybe a journalist—and he planned to major in English literature at Harvard University. But after his freshman year, he decided that what he actually wanted was a good job
  • He transferred to the Massachusetts Institute of Technology, where he studied mechanical engineering, earned his master’s degree and would have stayed for his Ph.D. if he hadn’t failed the qualifying exam. Instead, he got his first job in semiconductors and moved to Texas Instruments in 1958
  • He came along as the integrated circuit was being invented, and his timing couldn’t have been any better, as Chang belonged to the first generation of semiconductor geeks. He developed a reputation as a tenacious manager who could wring every possible improvement out of production lines, which put his career on the fast track.
  • By the late 1960s, he was managing TI’s integrated-circuit division. Before long, he was running the entire semiconductor group. 
  • “They talk about life-work balance,” he says. “That’s a term I didn’t even know when I was their age. Work-life balance. When I was their age, if there was no work, there was no life.” 
  • These days, TSMC is investing $40 billion to build plants in Arizona, but the project has been stymied by delays, setbacks and labor shortages, and Chang told me that some of TSMC’s young employees in the U.S. have attitudes toward work that he struggles to understand. 
  • Chang says he wouldn’t have taken the risk of moving to Taiwan if he weren’t financially secure. In fact, he didn’t take that same risk the first time he could have.
  • “The closer the industry match,” they wrote, “the greater the success rate.” 
  • By then, Chang knew that he wasn’t long for Texas Instruments. But his stock options hadn’t vested, so he turned down the invitation to Taiwan. “I was not financially secure yet,” he says. “I was never after great wealth. I was only after financial security.” For this corporate executive in the middle of the 1980s, financial security equated to $200,000 a year. “After tax, of course,” he says. 
  • Chang’s situation had changed by the time Li called again three years later. He’d exercised a few million dollars of stock options and bought tax-exempt municipal bonds that paid enough for him to be financially secure by his living standards. Once he’d achieved that goal, he was ready to pursue another one. 
  • “There was no certainty at all that Taiwan would give me the chance to build a great semiconductor company, but the possibility existed, and it was the only possibility for me,” Chang says. “That’s why I went to Taiwan.” 
  • Not long ago, a team of economists investigated whether older entrepreneurs are more successful than younger ones. By scrutinizing Census Bureau records and freshly available Internal Revenue Service data, they were able to identify 2.7 million founders in the U.S. who started companies between 2007 and 2014. Then they looked at their ages.
  • The average age of those entrepreneurs at the founding of their companies was 41.9. For the fastest-growing companies, that number was 45. The economists also determined that 50-year-old founders were almost twice as likely to achieve major success as 30-year-old founders, while the founders with the lowest chance of success were the ones in their early 20s
  • “Successful entrepreneurs are middle-aged, not young,” they wrote in their 2020 paper.  
  • Silicon Valley’s venture capitalists throw money at talented young entrepreneurs in the hopes they will start the next trillion-dollar company. They have plentiful energy, insatiable ambition and the vision to peek around corners and see the future. What they don’t typically have are mortgages, family obligations and other adult responsibilities to distract them or diminish their appetite for risk. Chang himself says that younger people are more innovative when it comes to science and technical subjects. 
  • But in business, older is better. Entrepreneurs in their 40s and 50s may not have the exuberance to believe they will change the world, but they have the experience to know how they actually can. Some need years of specialized training before they can start a company. In biotechnology, for example, founders are more likely to be college professors than college dropouts. Others require the lessons and connections they accumulate over the course of their careers. 
  • One more finding from their study of U.S. companies helps explain the success of a chip maker in Taiwan: Prior employment in the area of a founder’s startup—both the general sector and the specific industry—predicted “a vastly higher probability” of success.
  • Chang was such a workaholic that he made sales calls on his honeymoon and had no patience for those who didn’t share his drive
  • Morris Chang had 30 years of experience in his industry when he decided to uproot his life and move to another continent. He knew more about semiconductors than just about anyone on earth—and certainly more than anyone in Taiwan. As soon as he started his job at the Industrial Technology Research Institute, Chang was summoned to K.T. Li’s office and given a second job. “He felt I should start a semiconductor company in Taiwan,”
  • “I decided right away that this could not be the kind of great company that I wanted to build at either Texas Instruments or General Instrument,”
  • TI handled every part of chip production, but what worked in Texas would not translate to Taiwan. The only way that he could build a great company in his new home was to make a new sort of company altogether, one with a business model that would exploit the country’s strengths and mitigate its many weaknesses.
  • Chang determined that Taiwan had precisely one strength in the chip supply chain. The research firm that he was now running had been experimenting with semiconductors for the previous 10 years. When he studied that decade of data, Chang was pleasantly surprised by Taiwan’s yields, the percentage of working chips on silicon wafers. They were almost twice as high in Taiwan as they were in the U.S., he said. 
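Yield, the share of chips on a wafer that work, is commonly reasoned about with simple defect-density models. Below is a minimal sketch only, using the textbook first-order Poisson yield model with made-up defect densities; this is not the specific method Chang or ITRI used, and the numbers are illustrative assumptions:

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """Textbook first-order yield model: Y = exp(-A * D),
    where D is fatal defects per cm^2 and A is die area in cm^2."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Hypothetical numbers: the same 0.5 cm^2 die built on two fab lines whose
# defect densities differ by a factor of two.
for label, d in [("line A (D = 1.0/cm^2)", 1.0), ("line B (D = 2.0/cm^2)", 2.0)]:
    print(f"{label}: yield = {poisson_yield(d, 0.5):.1%}")
# line A (D = 1.0/cm^2): yield = 60.7%
# line B (D = 2.0/cm^2): yield = 36.8%
```

At a fixed die size, halving the fatal-defect density nearly doubles yield, which is the scale of the Taiwan-versus-U.S. gap Chang describes.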
  • “People were ingrained in thinking the secret sauce of a successful semiconductor company was in the wafer fab,” Campbell told me. “The transition to the fabless semiconductor model was actually pretty obvious when you thought about it. But it was so against the prevailing wisdom that many people didn’t think about it.” 
  • Taiwan’s government took a 48% stake, with the rest of the funding coming from the Dutch electronics giant Philips and Taiwan’s private sector, but Chang was the driving force behind the company. The insight to build TSMC around such an unconventional business model was born from his experience, contacts and expertise. He understood his industry deeply enough to disrupt it. 
  • “TSMC was a business-model innovation,” Chang says. “For innovations of that kind, I think people of a more advanced age are perhaps even more capable than people of a younger age.”
  • The company was built on the personal philosophy that he’d developed over the course of his long career: “To be a partner to our customers,” he says. That founding principle from 1987 is the bedrock of the foundry business to this day, as TSMC says the key to its success has always been enabling the success of its customers.
  • TSMC manufactures chips in iPhones, iPads and Mac computers for Apple, which accounts for a quarter of TSMC’s net revenue. Nvidia is often called a chip maker, which is curious, because it doesn’t make chips. TSMC does. 
  • Churning out identical copies of a single chip for an iPhone requires one TSMC fab to produce more than a quintillion transistors—that is, one million trillions—every few months. In a year, the entire semiconductor industry produces “more transistors than the combined quantity of all goods produced by all other companies, in all other industries, in all human history,” Miller writes. 
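Miller’s quintillion figure survives a rough sanity check. Both numbers below are ballpark assumptions for illustration, not reported data:

```python
# Assumed, order-of-magnitude figures (hypothetical, for the sanity check only):
transistors_per_chip = 15e9   # a recent flagship phone SoC
chips_per_quarter = 70e6      # rough iPhone unit sales in one quarter

total = transistors_per_chip * chips_per_quarter
print(f"{total:.2e} transistors per quarter")  # ~1.05e+18, about a quintillion
```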
  • I asked how he thought about success when he moved to Taiwan. “The highest degree of success in 1985, according to me, was to build a great company. A lower degree of success was at least to do something that I liked to do and I wanted to do,” he says. “I happened to achieve the highest degree of success that I had in mind.” 
Javier E

New York Times Bosses Seek to Quash Rebellion in the Newsroom - WSJ - 0 views

  • The internal probe was meant to find out who leaked information related to a planned podcast episode about that article. But its intensity and scope suggests the Times’s leadership, after years of fights with its workforce over a variety of issues involving journalistic integrity, is sending a signal: Enough.
  • “The idea that someone dips into that process in the middle, and finds something that they considered might be interesting or damaging to the story under way, and then provides that to people outside, felt to me and my colleagues like a breakdown in the sort of trust and collaboration that’s necessary in the editorial process,” Executive Editor Joe Kahn said in an interview. “I haven’t seen that happen before.”
  • While its business hums along, the Times’s culture has been under strain.
  • ...17 more annotations...
  • Newsroom leaders, concerned that some Times journalists are compromising their neutrality and applying ideological purity tests to coverage decisions, are seeking to draw a line. 
  • Kahn noted that the organization has added a lot of digital-savvy workers who are skilled in areas like data analytics, design and product engineering but who weren’t trained in independent journalism. He also suggested that colleges aren’t preparing new hires to be tolerant of dissenting views
  • International editor Philip Pan later intervened, saying the WhatsApp thread—at its worst a “tense forum where the questions and comments can feel accusatory”—should be for sharing information, not for hosting debates, according to messages reviewed by the Journal. 
  • Coverage of the Israel-Hamas war has become particularly fraught at the Times, with some reporters saying the Times’s work is tilting in favor of Israel and others pushing back forcefully, say people familiar with the situation. That has led to dueling charges of bias and journalistic malpractice among reporters and editors, forcing management to referee disputes.
  • “Just like our readers at the moment, there are really really strong passions about that issue and not that much willingness to really explore the perspectives of people who are on the other side of that divide,” Kahn said, adding that it’s hard work for staffers “to put their commitment to the journalism often ahead of their own personal views.”
  • Last fall, Times staffers covering the war got into a heated dispute in a WhatsApp group chat over the publication’s reporting on Al-Shifa hospital in Gaza, which Israel alleged was a command-and-control center for Hamas.
  • While the Guild represents staffers across many major U.S. news outlets, its members also include employees from non-news advocacy organizations such as the pro-Palestinian group Jewish Voice for Peace, the Democratic Socialists of America and divisions of the ACLU. 
  • “Young adults who are coming up through the education system are less accustomed to this sort of open debate, this sort of robust exchange of views around issues they feel strongly about than may have been the case in the past,” he said, adding that the onus is on the Times to instill values like independence in its employees.
  • The publisher of the Times, 43-year-old A.G. Sulzberger, says readers’ trust is at risk, however. Some journalists, including at the Times, are criticizing journalistic traditions like impartiality, while embracing “a different model of journalism, one guided by personal perspective and animated by personal conviction,” Sulzberger wrote in a 12,000-word essay last year in Columbia Journalism Review. 
  • Despite such moves, NewsGuard, an organization that rates credibility of news sites, in February reduced the Times’s score from the maximum of 100 to 87.5, saying it doesn’t have a clear enough delineation between news and opinion.
  • Emboldened by their show of strength on Bennet, employees would flex their muscles again on multiple occasions, pushing to oust colleagues they felt had engaged in journalistic or workplace misconduct. 
  • One thing Powell noticed, he said, was that coverage that challenged popular political and cultural beliefs was being neglected. Powell’s work includes a story on MIT’s canceling of a lecture by an academic who had criticized affirmative action, and another examining whether the ACLU is more willing to defend the First Amendment rights of progressives than far-right groups.
  • Kahn, who succeeded Baquet as executive editor in June 2022, and Opinion Editor Kathleen Kingsbury said in a letter to staff that they wouldn’t tolerate participation by Times journalists in protests or attacks on colleagues.
  • Divisions have formed in the newsroom over the role of the union that represents Times staffers, the NewsGuild-CWA. Some staffers say it has inappropriately inserted itself into debates with management, including over coverage of the trans community and the war. 
  • The Times isn’t the only news organization where employees have become more vocal in complaints about coverage and workplace practices. War coverage has also fueled tensions at The Wall Street Journal, with some reporters in meetings and internal chat groups complaining that coverage is skewed—either favoring Israel or Palestinians.  
  • When Times staffers logged on to a union virtual meeting last fall to discuss whether to call for a cease-fire in Gaza, some attendees from other organizations had virtual backgrounds displaying Palestinian flags. The meeting, where a variety of members were given around two minutes to share their views on the matter, felt like the kind of rally the Times’ policy prohibits, according to attendees. 
  • In January, Sulzberger shared his thoughts on covering Trump during a visit to the Washington bureau. It was imperative to keep Trump coverage emotion-free, he told staffers, according to people who attended. He referenced the Times story, “Why a Second Trump Presidency May Be More Radical Than His First,” by Charlie Savage, Jonathan Swan and Maggie Haberman, as a good example of fact-based and fair coverage. 
Javier E

Mistral, the 9-Month-Old AI Startup Challenging Silicon Valley's Giants - WSJ - 0 views

  • Mensch, who started in academia, has spent much of his life figuring out how to make AI and machine-learning systems more efficient. Early last year, he joined forces with co-founders Timothée Lacroix, 32, and Guillaume Lample, 33, who were then at Meta Platforms’ artificial-intelligence lab in Paris. 
  • They are betting that their small team can outmaneuver Silicon Valley titans by finding more efficient ways to build and deploy AI systems. And they want to do it in part by giving away many of their AI systems as open-source software.
  • Eric Boyd, corporate vice president of Microsoft’s AI platform, said Mistral presents an intriguing test of how far clever engineering can push AI systems. “So where else can you go?” he asked. “That remains to be seen.”
  • ...7 more annotations...
  • Mensch said his new model cost less than €20 million, the equivalent of roughly $22 million, to train. By contrast OpenAI Chief Executive Sam Altman said last year after the release of GPT-4 that training his company’s biggest models cost “much more than” $50 million to $100 million.
  • Brave Software made a free, open-source model from Mistral the default to power its web-browser chatbot, said Brian Bondy, Brave’s co-founder and chief technology officer. He said that the company finds the quality comparable with proprietary models, and Mistral’s open-source approach also lets Brave control the model locally.
  • “We want to be the most capital-efficient company in the world of AI,” Mensch said. “That’s the reason we exist.” 
  • Mensch joined the Google AI unit then called DeepMind in late 2020, where he worked on the team building so-called large language models, the type of AI system that would later power ChatGPT. By 2022, he was one of the lead authors of a paper about a new AI model called Chinchilla, which changed the field’s understanding of the relationship among the size of an AI model, how much data is used to build it and how well it performs, known as AI scaling laws.
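For readers who want the result itself: the Chinchilla paper modeled loss as a function of parameter count N and training tokens D, then solved for the compute-optimal split. The sketch below is reconstructed from memory of Hoffmann et al. (2022), and the fitted exponents are approximate:

```latex
% Parametric loss fit (constants approximate):
\[
  L(N, D) \;=\; E \;+\; \frac{A}{N^{\alpha}} \;+\; \frac{B}{D^{\beta}},
  \qquad \alpha \approx 0.34,\ \beta \approx 0.28.
\]
% Minimizing L under a fixed compute budget C \approx 6ND gives
\[
  N^{*} \propto C^{1/2}, \qquad D^{*} \propto C^{1/2},
\]
% i.e., parameters and training tokens should grow together,
% roughly 20 tokens per parameter at the optimum.
```

The upshot that changed the field: most models of the time were oversized relative to their training data, and the same compute spent on a smaller model and more tokens yields lower loss.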
  • Mensch took a role lobbying French policymakers, including French President Emmanuel Macron, against certain elements of the European Union’s new AI Act, which Mensch warned could slow down companies and would, in his view, do nothing to make AI safer. After changes to the text in Brussels, it will be a manageable burden for Mistral, Mensch says, even if he thinks the law should have remained focused on how AI is used rather than also regulating the underlying technology.  
  • For Mensch and his co-founders, releasing their initial AI systems as open source that anyone could use or adapt free of charge was an important principle. It was also a way to get noticed by developers and potential clients eager for more control over the AI they use
  • Mistral’s most advanced models, including the one unveiled Monday, aren’t available open source. 
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times - 0 views

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • ...28 more annotations...
  • A major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has believed it would be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • Today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

How We Can Control AI - WSJ - 0 views

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
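Concretely, the human-feedback step usually begins by fitting a reward model to pairwise preference labels. Below is a minimal, self-contained sketch with a toy linear reward model trained on synthetic preference pairs via the standard Bradley–Terry loss; the random feature vectors stand in for response embeddings, which in real systems come from the language model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: each response is summarized by a feature vector, and humans have
# labeled which of two responses they preferred. Everything here is synthetic.
dim, n_pairs = 8, 500
true_w = rng.normal(size=dim)                     # hidden "human preference" direction
chosen = rng.normal(size=(n_pairs, dim)) + 0.5 * true_w
rejected = rng.normal(size=(n_pairs, dim))

w = np.zeros(dim)                                 # reward-model parameters
lr = 0.05
for _ in range(200):
    # Bradley-Terry loss: -log sigmoid(r(chosen) - r(rejected))
    margin = (chosen - rejected) @ w
    p = 1.0 / (1.0 + np.exp(-margin))
    grad = -((1.0 - p)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad

acc = ((chosen - rejected) @ w > 0).mean()
print(f"reward model prefers the human-chosen response {acc:.0%} of the time")
```

Full RLHF then fine-tunes the language model against the learned reward with a policy-gradient method such as PPO; this sketch stops at the reward-model stage.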
  • ...22 more annotations...
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act,
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • Both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • At a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
Javier E

Neal Stephenson's Most Stunning Prediction - The Atlantic - 0 views

  • Think about any concept that we might want to teach somebody—for instance, the Pythagorean theorem. There must be thousands of old and new explanations of the Pythagorean theorem online. The real thing we need is to understand each child’s learning style so we can immediately connect them to the one out of those thousands that is the best fit for how they learn. That to me sounds like an AI kind of project, but it’s a different kind of AI application from DALL-E or large language models.
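Stephenson’s matching problem is, at bottom, retrieval: score thousands of candidate explanations against a profile of how a particular student learns. Here is a toy sketch using hand-assigned style tags and cosine similarity; every tag, profile, and explanation name is hypothetical, and a real system would use learned embeddings:

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical style tags for three explanations of the Pythagorean theorem.
explanations = {
    "rearrangement proof": Counter(["visual", "geometric", "hands-on"]),
    "algebraic derivation": Counter(["symbolic", "formal", "stepwise"]),
    "rope-stretcher story": Counter(["narrative", "historical", "concrete"]),
}

learner = Counter(["visual", "hands-on", "concrete"])  # assumed learner profile
best = max(explanations, key=lambda name: cosine(learner, explanations[name]))
print("best match:", best)  # -> rearrangement proof
```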
  • Right now a lot of generative AI is free, but the technology is also very expensive to run. How do you think access to generative AI might play out?
  • Stephenson: There was a bit of early internet utopianism in the book, which was written during that era in the mid-’90s when the internet was coming online. There was a tendency to assume that when all the world’s knowledge comes online, everyone will flock to it
  • ...3 more annotations...
  • It turns out that if you give everyone access to the Library of Congress, what they do is watch videos on TikTok
  • A chatbot is not an oracle; it’s a statistics engine that creates sentences that sound accurate. Right now my sense is that it’s like we’ve just invented transistors. We’ve got a couple of consumer products that people are starting to adopt, like the transistor radio, but we don’t yet know how the transistor will transform society
  • We’re in the transistor-radio stage of AI. I think a lot of the ferment that’s happening right now in the industry is venture capitalists putting money into business plans, and teams that are rapidly evaluating a whole lot of different things that could be done well. I’m sure that some things are going to emerge that I wouldn’t dare try to predict, because the results of the creative frenzy of millions of people is always more interesting than what a single person can think of.
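The “statistics engine” description above can be made literal with the smallest possible language model, a bigram sampler: pick each next word in proportion to how often it followed the previous word in the training text. The corpus below is a toy assumption; production models are vastly larger neural networks, but the generate-by-sampling loop has the same shape:

```python
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count, for every word, which words followed it and how often.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

random.seed(1)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(following[word])  # sample proportional to frequency
    out.append(word)
print(" ".join(out))  # fluent-looking, statistically plausible nonsense
```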
Javier E

OpenAI Just Gave Away the Entire Game - The Atlantic - 0 views

  • If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle.
  • The situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data.
  • At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • ...7 more annotations...
  • Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary.
  • As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build AI and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.”
  • Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he said. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state.
  • Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
  • This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition
  • Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.
Javier E

The AI Revolution Is Already Losing Steam - WSJ - 0 views

  • Most of the measurable and qualitative improvements in today’s large language model AIs like OpenAI’s ChatGPT and Google’s Gemini—including their talents for writing and analysis—come down to shoving ever more data into them. 
  • AI could become a commodity
  • To train next-generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models,
  • ...25 more annotations...
  • AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains, says Marcus. “The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement,” he adds.
  • The gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.
  • Models work by digesting huge volumes of text, and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.
  • A mature technology is one where everyone knows how to build it. Absent profound breakthroughs—which become exceedingly rare—no one has an edge in performance
  • As a technology matures, companies look for efficiencies, and the contest shifts from who is in the lead to who can cut costs to the bone. The last major technology this happened with was electric vehicles, and now it appears to be happening to AI.
  • The future for AI startups—like OpenAI and Anthropic—could be dim.
  • Even if Microsoft and Google are able to entice enough users to make their AI investments worthwhile, doing so will require spending vast amounts of money over a long period of time, leaving even the best-funded AI startups—with their comparatively paltry war chests—unable to compete.
  • Many other AI startups, even well-funded ones, are apparently in talks to sell themselves.
  • The bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it.
  • That difference is alarming, but what really matters to the long-term health of the industry is how much it costs to run AIs. 
  • Changing people’s mindsets and habits will be among the biggest barriers to swift adoption of AI. That is a remarkably consistent pattern across the rollout of all new technologies.
  • The industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue.
  • For an almost entirely ad-supported company like Google, which is now offering AI-generated summaries across billions of search results, analysts believe delivering AI answers on those searches will eat into the company’s margins
  • Google, Microsoft and others said their revenue from cloud services went up, which they attributed in part to those services powering other companies’ AIs. But sustaining that revenue depends on other companies and startups getting enough value out of AI to justify continuing to fork over billions of dollars to train and run those systems
  • Three in four white-collar workers now use AI at work. Another survey, from corporate expense-management and tracking company Ramp, shows about a third of companies pay for at least one AI tool, up from 21% a year ago.
  • OpenAI doesn’t disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and that the company thought it could double that amount by 2025. 
  • That is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation
  • The company excels at generating interest and attention, but it’s unclear how many of those users will stick around. 
  • AI isn’t nearly the productivity booster it has been touted as
  • While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll. He compares it to the way that self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.
  • Add in the myriad challenges of using AI at work. For example, AIs still make up fake information,
  • Getting the most out of open-ended chatbots isn’t intuitive, and workers will need significant training and time to adjust.
  • That’s because AI has to think anew every single time something is asked of it, and the resources that AI uses when it generates an answer are far larger than what it takes to, say, return a conventional search result
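A back-of-the-envelope version of that comparison uses the common rule of thumb of about 2 FLOPs per model parameter per generated token. The model size, answer length, and search-compute baseline below are all assumed for illustration, not measured figures:

```python
params = 70e9        # assumed model size (parameters)
tokens_out = 500     # assumed length of one generated answer
flops_per_param_token = 2  # common rule of thumb for transformer inference

answer_flops = flops_per_param_token * params * tokens_out
print(f"one AI answer: ~{answer_flops:.1e} FLOPs")   # ~7.0e+13

# A conventional search query is typically estimated at orders of magnitude
# less compute; even a generous 1e10-FLOP baseline leaves a ~7,000x gap here.
print(f"ratio vs. 1e10-FLOP search baseline: {answer_flops / 1e10:,.0f}x")
```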
  • None of this is to say that today’s AI won’t, in the long run, transform all sorts of jobs and industries. The problem is that the current level of investment—in startups and by big companies—seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. 
  • Mounting evidence suggests that won’t be the case.
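The running-cost point above can be made concrete with a back-of-envelope sketch. This is an editorial illustration, not a figure from the article: it assumes the standard approximation that a transformer forward pass costs about two FLOPs per parameter per generated token, a hypothetical GPT-3-class model of 175 billion parameters, a 500-token answer, and the widely cited IEA per-query energy estimates (roughly 0.3 Wh for a conventional search versus 2.9 Wh for a chatbot request).

```python
# Back-of-envelope estimate of why generating an answer costs far more
# than returning a conventional search result. All constants are rough,
# publicly cited estimates or outright assumptions -- not article figures.

PARAMS = 175e9                # GPT-3-class parameter count (assumption)
FLOPS_PER_TOKEN = 2 * PARAMS  # ~2 FLOPs per parameter per generated token
ANSWER_TOKENS = 500           # a typical multi-paragraph answer (assumption)

total_flops = FLOPS_PER_TOKEN * ANSWER_TOKENS
print(f"~{total_flops:.2e} FLOPs to generate one answer")  # ~1.75e+14

# Widely cited per-query energy estimates (IEA, 2024):
SEARCH_WH = 0.3   # conventional web search, watt-hours
CHATBOT_WH = 2.9  # chatbot request, watt-hours
print(f"one chatbot answer uses ~{CHATBOT_WH / SEARCH_WH:.0f}x "
      f"the energy of a conventional search")
```

Even if these constants are off by a factor of a few, the gap is wide enough to explain why per-query inference, rather than one-time training, dominates the economics of a popular AI service.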
Javier E

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times - 0 views

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • ...21 more annotations...
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company.
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic. In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • Mr. Kokotajlo said he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist.
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”