
Stimulus negotiations: A deal is within reach. Can Hill leaders finally strike one? - C...

  • With government funding running out Friday night, lawmakers have to release a massive, $1.4 trillion package as soon as Tuesday if it has any chance of passing Congress and keeping agencies from shutting down by the weekend.
  • struggling Americans could once again be disappointed if there's no agreement and they're forced to wait even longer as lawmakers continue to haggle.
  • House Speaker Nancy Pelosi has invited Senate Minority Leader Chuck Schumer, Senate Majority Leader Mitch McConnell, and House Minority Leader Kevin McCarthy to her office for a meeting on Covid and government funding. The meeting is scheduled to occur at 4 p.m. ET.
  • Congress may have to pass yet another short-term stopgap resolution to give them more time to find an agreement.
  • If a sweeping government funding bill is released without pandemic relief, that would spell serious trouble for the effort to pass Covid aid before Congress breaks for the holidays and could signal the impending demise of the last-ditch effort to secure a stimulus deal.
  • As of late Monday night, there still was no final consensus, with familiar sticking points: Democrats want state and local money to help ensure workers who provide vital services are not laid off. Republicans believe much of that money will be wasted. And the GOP lawmakers who are open to more state and local aid say there also need to be lawsuit protections for businesses and other entities, but Democrats argue that the GOP proposals on that idea go too far.
  • House and Senate appropriators are planning to unveil a $1.4 trillion spending bill Tuesday to fund federal agencies until the end of September 2021, which leaves little time before the Friday deadline for what's expected to be a massive package to pass both chambers.
  • It's clear to virtually everyone in Washington that a deal is within reach that includes several key provisions: An extension of jobless benefits, money for vaccine distribution, funds for schools, small business loans -- among a handful of other issues.
  • Self-imposed deadlines have a way of slipping in Congress and it's always possible lawmakers won't release a massive funding deal Tuesday despite their intention to do so. If that happens, it could mean that talks over both stimulus and government spending are breaking down and lawmakers may be forced to punt the issue further down the road by walking away from a pandemic stimulus deal during the lame duck session of Congress and passing a short-term funding patch rather than a far broader, comprehensive spending deal.
  • "Either 100 senators will be here shaking our heads, slinging blame and offering excuses about why we still have not been able to make a law -- or we will break for the holidays having sent another huge dose of relief out the door for the people who need it."
  • There were clear signs on Monday that Democrats could be forced to abandon a push for at least $160 billion in aid to cash-strapped states and cities in order to get a bipartisan agreement on some relief provisions.
  • during a 22-minute phone call Monday evening, the speaker told Mnuchin that the GOP insistence to include lawsuit protections for businesses and other entities "remain an obstacle" to getting an agreement on state and local aid -- since Republicans have demanded the two be tied together.
  • A bipartisan group of lawmakers unveiled the legislative text of a $908 billion compromise Covid relief plan on Monday
  • If the aid is ultimately dropped from the plan, it would amount to a major concession from Democrats, who had advanced roughly $1 trillion for aid to states and cities as part of a $3 trillion-plus plan that passed the House in May and that the Senate never considered. Democrats had argued the money was paramount to ensure that workers performing vital services -- ranging from first responders to health care workers -- could continue to stay on the job.
  • If Democrats do drop their demand for state and local aid, the consensus bill put forward by the bipartisan coalition on Monday that sidesteps that issue as well as liability protections could serve as a ready-made starting point for what could be agreed to more widely on Covid relief. That bill has a price tag of $748 billion and includes policy ideas that have proven popular across party lines such as a boost to the Paycheck Protection Program
  • "I am convinced the majority leader will actually bring legislation to the floor that will either take up our $748 billion bill or the total of $908 billion, or perhaps he will pick and choose from what we put together in a bill of his own and attach it to the omnibus spending bill."
  • According to a summary released on Monday, the bill would provide $300 billion for the Small Business Administration and funds that would give small businesses the chance to benefit from another loan through the PPP with certain eligibility restrictions. There would be $2.58 billion for CDC vaccine distribution and infrastructure and an extension of pandemic unemployment insurance programs for 16 weeks along with a $300 per week expansion of federal supplemental unemployment insurance benefits

Opinion | I Was the Homeland Security Adviser to Trump. We're Being Hacked. - The New Y...

  • At the worst possible time, when the United States is at its most vulnerable — during a presidential transition and a devastating public health crisis — the networks of the federal government and much of corporate America are compromised by a foreign nation.
  • Last week, the cybersecurity firm FireEye said it had been hacked and that its clients, which include the United States government, had been placed at risk
  • The attackers gained access to SolarWinds software before updates of that software were made available to its customers. Unsuspecting customers then downloaded a corrupted version of the software, which included a hidden back door that gave hackers access to the victim’s network.
  • supply-chain attack
  • According to SolarWinds S.E.C. filings, the malware was on the software from March to June. The number of organizations that downloaded the corrupted update could be as many as 18,000, which includes most federal government unclassified networks and more than 425 Fortune 500 companies.
  • The magnitude of this ongoing attack is hard to overstate.
  • The Russians have had access to a considerable number of important and sensitive networks for six to nine months.
  • While the Russians did not have the time to gain complete control over every network they hacked, they most certainly did gain it over hundreds of them.
  • The National Defense Authorization Act, which each year provides the Defense Department and other agencies the authority to perform their work, is caught up in partisan wrangling. Among other important provisions, the act would authorize the Department of Homeland Security to perform network hunting in federal networks.
  • The actual and perceived control of so many important networks could easily be used to undermine public and consumer trust in data, written communications and services.
  • What should be done? On Dec. 13, the Cybersecurity and Infrastructure Security Agency, a division of the Department of Homeland Security — itself a victim — issued an emergency directive ordering federal civilian agencies to remove SolarWinds software from their networks.
  • It also is impractical. In 2017, the federal government was ordered to remove from its networks software from a Russian company, Kaspersky Lab, that was deemed too risky. It took over a year to get it off the networks.
  • The remediation effort alone will be staggering
  • Cyber threat hunters that are stealthier than the Russians must be unleashed on these networks to look for the hidden, persistent access controls.
  • The logical conclusion is that we must act as if the Russian government has control of all the networks it has penetrated
  • The response must be broader than patching networks. While all indicators point to the Russian government, the United States, and ideally its allies, must publicly and formally attribute responsibility for these hacks. If it is Russia, President Trump must make it clear to Vladimir Putin that these actions are unacceptable. The U.S. military and intelligence community must be placed on increased alert; all elements of national power must be placed on the table.
  • President Trump is on the verge of leaving behind a federal government, and perhaps a large number of major industries, compromised by the Russian government. He must use whatever leverage he can muster to protect the United States and severely punish the Russians. President-elect Joe Biden must begin his planning to take charge of this crisis. He has to assume that communications about this matter are being read by Russia, and assume that any government data or email could be falsified.

How the game of Go explains China's aggression towards India | The Economist

  • IN THE ANCIENT Chinese game of weiqi, better known in the West as Go
  • build the largest, strongest structures, and only secondly to weaken and stifle enemy ones. Better players shun contact, preferring to parry threats with counter-threats.
  • mostly avoided contact
  • The two have lately engaged in sabre-rattling and name-calling. But such tension has been rare during their seven-decade rivalry as modern nations.
  • the Asian giants’ 3,500km-long border region remained an empty section of the board
  • long as India and China were focused on building their own core structures, each largely ignored the other.
  • India and China maintained overlapping claims, and their forces sometimes clashed, as in a brief war in 1962. But they both also judged that there was not enough at stake to fight a big war over.
  • territorial limits continued to be defined in many areas by a “Line of Actual Control” rather than an internationally recognised boundary
  • border patrols went lightly armed
  • unresolved challenges multiply, the advantage shifting to whoever poses the sharpest ones
  • China has repeatedly rebuffed such efforts
  • a democracy bound by rules, India has repeatedly sought to end the ambiguity by negotiating a permanent border
  • why foreclose on potential pressure points? Better to leave them open for use in the future, when you have more leverage and your opponent has more reason to fear you
  • China appears to have decided that this future is now
  • several strategic spots along the border in the spring of 2020, Chinese troops marched into long-established patches of no-man’s-land, setting up permanent forward positions. When India sent in soldiers to challenge the intrusions, fisticuffs ensued
  • China extends strength by tightening its alliance with India’s arch-enemy Pakistan, Mr Modi dithers
  • This leaves it in control of lands India regarded as its own and, more seriously, in control of vantage points from which to threaten crucial roads and other Indian infrastructure.
  • From a weiqi perspective China’s boldness is understandable
  • In the 1980s its economy was roughly equal to India’s. It is now five times bigger, and churns out ever-more sophisticated weaponry while India relies on imports
  • China’s infrastructure has expanded towards its peripheries at a speed India has been unable to match
  • China’s southern neighbour looks weak in other ways
  • Its democracy is messy and inefficient
  • 20 Indians and at least four Chinese dead
  • In his dream of a Hindu golden age India needs no allies, only weaker satellites or rich friends.
  • India’s army has little functional interoperability with any other.
  • the board fills up and one player emerges dominant, it should be no surprise when that player presses the advantage
  • Even if his opponent is erratic, the global gameboard may prove wider, and India may turn out to have better-placed assets than Mr Xi realises.
  • India retains a big reserve of goodwill as a democracy and a decent global citizen; it would gain fast allies if it really tried to win them
  • India’s core strength may run deeper, too. Its relative smallness is deceptive: the eastern third of China, where 95% of Chinese actually live, is no bigger than India.
  • India’s remains packed with upward potential.

Power Outages Plague Puerto Rico Despite LUMA Takeover - The New York Times

  • Four years after Hurricane Maria left Puerto Rico’s electrical grid a shambles and the entire island in the dark, residents had expected their fragile power system to be stronger now. Instead, unreliable electricity remains frustratingly common, hindering economic development and daily life.
  • Surging demand in August and September led to rolling blackouts affecting a majority of the island’s 1.5 million electrical customers.
  • Last week, several thousand people marched along a main highway in San Juan, the capital, blocking traffic with the latest in a series of protests over the seemingly unending electricity problems plaguing the island.
  • aging equipment, lack of maintenance and past mismanagement and corruption of an inefficient system.
  • We’re in 2021. We have internet on our TV. Why don’t we have electricity?”
  • Many Puerto Ricans are diabetic and need refrigerated insulin to survive. The coronavirus pandemic has also put some people on respiratory therapies requiring electrical power at home for oxygen machines. Some Puerto Ricans are still studying or working at home.
  • The system is so frail that a power plant recently went offline because sargassum — seaweed — blocked its filters.
  • Crews patched Puerto Rico’s grid with $3.2 billion in emergency repairs after Hurricane Maria, which shredded the island’s power lines as a Category 4 storm in September 2017. Congress earmarked about $10 billion through the Federal Emergency Management Agency to rebuild the system. Those projects will be contracted out by the new consortium, with the aim of restoring the grid to how it was before the storm, with some modernization.
  • LUMA took over in June, with its top officials saying they were prepared to handle a Category 2 hurricane. (None have hit the island this year.) Almost immediately, huge outages began.
  • “The Puerto Rico electric system is arguably the worst in the United States and has been for a very long time, even prior to the devastating hurricanes in 2017,” Mr. Stensby said.

Opinion | Everyone's Moving to Texas. Here's Why. - The New York Times - 0 views

  • Texas’ climate risks. Houston will not do well on a warming planet — it is economically dependent on the oil and gas industry and is threatened by hurricanes and a surge in sea levels. But other big cities, including Dallas and Fort Worth, face more moderate risks, especially compared to many cities in California. Yes, Texas is very hot and likely to get hotter; but if a lot of other American cities also begin to get very hot, Texas cities might not feel as overheated by comparison. In addition to the risk of heat stress, Texas also faces the possibility of water shortages, but that will be true across much of the West, including California’s population centers.
  • living through California’s tinderbox years has convinced me to keep an eye on climate dangers; while forecasts on climate risk are inexact, making some effort to anticipate its danger when deciding where to live feels more responsible than ignoring it. And when people in California are paying a million dollars above asking price for homes in areas of high and increasing wildfire risk, isn’t that something like ignoring it?
  • You might argue that it’s too speculative to take into account something as broad and complex as climate change when deciding where to live. And more important, there’s no real escape from a long-term planetary disaster — even if you move to some place with lovely weather, your life is bound to be altered in significant ways as habitability shifts elsewhere on the globe.
  • What Texans will not have to worry about as much are wildfires, the scourge of so much of California, and the attendant air pollution, though experts predict increases in wildfires in Texas. It’s true that Texas’ less extreme fire risk is related to something precious about California that Texas lacks — abundant trees and mountains in major metro areas, or really any of California’s striking natural beauty. But nobody said living through climate change would be pretty.
  • There is a concept in behavioral economics known as a “Minsky moment,” which describes when a bull market suddenly wises up to its own unsustainability, causing a collapse in prices.
  • Jesse Keenan, an associate professor at the Tulane University School of Architecture who studies how climate change affects housing markets, told me that a Minsky moment could be coming for high-priced homes in at-risk coastal cities. As home lenders, insurance companies and other players in the real estate business begin to better understand their exposure to climate risks, they may raise premiums or force disclosure requirements that could lower home values.
  • At the moment, buying a home in the San Francisco Bay Area, where I live, looks like a safe investment. But lately I have begun to obsess about the uncertainty built into the changing weather. What if three fire seasons from now proves to be one fire season too many — and, in a blink, the housing market into which we’ve invested so much of our future implodes? “In a way, climate change could begin to look like a foreclosure crisis,” Keenan told me.

Ocean Currents in the Atlantic Could Slow by Century's End, Research Shows - The New Yo...

  • The last time there was a major slowdown in the mighty network of ocean currents that shapes the climate around the North Atlantic, it seems to have plunged Europe into a deep cold for over a millennium.
  • That was roughly 12,800 years ago, when not many people were around to experience it. But in recent decades, human-driven warming could be causing the currents to slow once more, and scientists have been working to determine whether and when they might undergo another great weakening, which would have ripple effects for weather patterns across a swath of the globe.
  • A pair of researchers in Denmark this week put forth a bold answer: A sharp weakening of the currents, or even a shutdown, could be upon us by century’s end.
  • Climate scientists generally agree that the Atlantic circulation will decline this century, but there’s no consensus on whether it will stall out before 2100.
  • the new findings were reason enough not to regard a shutdown as an abstract, far-off concern. “It’s now,” she said.
  • As humans warm the atmosphere, however, the melting of the Greenland ice sheet is adding large amounts of fresh water to the North Atlantic, which could be disrupting the balance of heat and salinity that keeps the overturning moving. A patch of the Atlantic south of Greenland has cooled conspicuously in recent years, creating a “cold blob” that some scientists see as a sign that the system is slowing.
  • Abrupt thawing of the Arctic permafrost. Loss of the Amazon rain forest. Collapse of the Greenland and West Antarctic ice sheets. Once the world warms past a certain point, these and other events could be set into swift motion, scientists warn, though the exact thresholds at which this would occur are still highly uncertain.
  • In the Atlantic, researchers have been searching for harbingers of tipping-point-like change in a tangle of ocean currents that goes by an unlovely name: the Atlantic Meridional Overturning Circulation, or AMOC (pronounced “AY-mock”).
  • These currents carry warm waters from the tropics through the Gulf Stream, past the southeastern United States, before bending toward northern Europe. When this water releases its heat into the air farther north, it becomes colder and denser, causing it to sink to the deep ocean and move back toward the Equator. This sinking effect, or “overturning,” allows the currents to transfer enormous amounts of heat around the planet, making them hugely influential for the climate around the Atlantic and beyond.
  • adds to a growing body of scientific work that describes how humankind’s continued emissions of heat-trapping gases could set off climate “tipping points,” or rapid and hard-to-reverse changes in the environment.
  • Much of the Northern Hemisphere could cool. The coastlines of North America and Europe could see faster sea-level rise. Northern Europe could experience stormier winters, while the Sahel in Africa and the monsoon regions of Asia would most likely get less rain.
  • Scientists’ uncertainty about the timing of an AMOC collapse shouldn’t be taken as an excuse for not reducing greenhouse-gas emissions to try to avoid it, said Hali Kilbourne, an associate research professor at the University of Maryland Center for Environmental Science.
  • Were the circulation to tip into a much weaker state, the effects on the climate would be far-reaching, though scientists are still examining their potential magnitude.
  • Dr. Ditlevsen’s new analysis focused on a simple metric, based on sea-surface temperatures, that is similar to ones other scientists have used as proxies for the strength of the Atlantic circulation. She conducted the analysis with Peter Ditlevsen, her brother, who is a climate scientist at the University of Copenhagen’s Niels Bohr Institute. They used data on their proxy measure from 1870 to 2020 to calculate statistical indicators that presage changes in the overturning.
  • “Not only do we see an increase in these indicators,” Peter Ditlevsen said, “but we see an increase which is consistent with this approaching a tipping point.”
  • They then used the mathematical properties of a tipping-point-like system to extrapolate from these trends. That led them to predict that the Atlantic circulation could collapse around midcentury, though it could potentially occur as soon as 2025 and as late as 2095.
  • Their analysis included no specific assumptions about how much greenhouse-gas emissions will rise in this century. It assumed only that the forces bringing about an AMOC collapse would continue at an unchanging pace — essentially, that atmospheric carbon dioxide concentrations would keep rising as they have since the Industrial Revolution.
  • they voiced reservations about some of its methods, and said more work was still needed to nail down the timing with greater certainty.
  • Susan Lozier, a physical oceanographer at Georgia Tech, said sea-surface temperatures in the North Atlantic near Greenland weren’t necessarily influenced by changes in the overturning alone, making them a questionable proxy for inferring those changes. She pointed to a study published last year showing that much of the cold blob’s development could be explained by shifts in wind and atmospheric patterns.
  • Scientists are now using sensors slung across the Atlantic to directly measure the overturning. Dr. Lozier is involved in one of these measurement efforts. The aim is to better understand what’s driving the changes beneath the waves, and to improve projections of future changes.
  • Still, the new study sent an urgent message about the need to keep collecting data on the changing ocean currents,
  • scientists’ most advanced computer models of the global climate have produced a wide range of predictions for how the currents might behave in the coming decades, in part because the mix of factors that shape them is so complex.
  • “It is very plausible that we’ve fallen off a cliff already and don’t know it,” Dr. Kilbourne said. “I fear, honestly, that by the time any of this is settled science, it’s way too late to act.”
  • the projects began collecting data in 2004 at the earliest, which isn’t enough time to draw firm long-term conclusions. “It is extremely difficult to look at a short record for the ocean overturning and say what it is going to do over 30, 40 or 50 years,”

Opinion | The Right Is All Wrong About Masculinity - The New York Times

  • Indeed, the very definition of “masculinity” is up for grabs
  • In 2019, the American Psychological Association published guidelines that took direct aim at what it called “traditional masculinity — marked by stoicism, competitiveness, dominance and aggression” — declaring it to be, “on the whole, harmful.”
  • Aside from “dominance,” a concept with precious few virtuous uses, the other aspects of traditional masculinity the A.P.A. cited have important roles to play. Competitiveness, aggression and stoicism surely have their abuses, but they also can be indispensable in the right contexts. Thus, part of the challenge isn’t so much rejecting those characteristics as it is channeling and shaping them for virtuous purposes.
  • traditionally “masculine” virtues are not exclusively male. Women who successfully model these attributes are all around us
  • Rudyard Kipling’s famous poem “If—” is one of the purest distillations of restraint as a traditional manly virtue. It begins with the words “If you can keep your head when all about you / Are losing theirs and blaming it on you.” The entire work speaks of the necessity of calmness and courage.
  • Stoicism carried to excess can become a dangerous form of emotional repression, a stifling of necessary feelings. But the fact that the kind of patience and perseverance that marks stoicism can be taken too far is not to say that we should shun it. In times of conflict and crisis, it is the calm man or woman who can see clearly.
  • Hysteria plus cruelty is a recipe for violence. And that brings us back to Mr. Hawley. For all of its faults when taken to excess, the traditional masculinity of which he claims to be a champion would demand that he stand firm against a howling mob. Rather, he saluted it with a raised fist — and then ran from it when it got too close and too unruly.
  • Catastrophic rhetoric is omnipresent on the right. Let’s go back to the “groomer” smear. It’s a hallmark of right-wing rhetoric that if you disagree with the new right on any matter relating to sex or sexuality, you’re not just wrong; you’re a “groomer” or “soft on pedos.”
  • But conservative catastrophism is only one part of the equation. The other is meanspirited pettiness
  • Traditional masculinity says that people should meet a challenge with a level head and firm convictions. Right-wing culture says that everything is an emergency, and is to be combated with relentless trolling and hyperbolic insults.
  • Jonah Goldberg wrote an important piece cataloging the sheer pettiness of the young online right. “Everywhere I look these days,” he wrote, “I see young conservatives believing they should behave like jerks.” As Jonah noted, there are those who now believe it shows “courage and strength to be coarse or bigoted.”
  • If you spend much time at all on right-wing social media — especially Twitter these days — or listening to right-wing news outlets, you’ll be struck by the sheer hysteria of the rhetoric, the hair-on-fire sense of emergency that seems to dominate all discourse.
  • American men are in desperate need of virtuous purpose.
  • I reject the idea that traditional masculinity, properly understood, is, “on the whole, harmful.” I recognize that it can be abused, but it is good to confront life with a sense of proportion, with calm courage and conviction.
  • One of the best pieces of advice I’ve ever received reflects that wisdom. Early in my legal career, a retired federal judge read a brief that I’d drafted and admonished me to “write with regret, not outrage.”
  • Husband your anger, he told me. Have patience. Gain perspective. So then, when something truly is terrible, your outrage will mean something. It was the legal admonition against crying wolf.

Opinion | The Last Thatcherite - The New York Times

  • The scientists at the bench discovered that the money markets would punish not only left-wing experiments in changing the balance between states and markets but also experiments that pushed too far to the right. A cowed Ms. Truss apologized, and Mr. Kwarteng’s successor has reversed almost all of the planned cuts and limited the term for energy supports.
  • The mini-budget subjected the entire economy to experimental treatment. This was put in explicit terms in a celebratory post by a Tory journalist and think tanker claiming that Ms. Truss and Mr. Kwarteng had been “incubated” by the Institute of Economic Affairs in their early years and “Britain is now their laboratory.”
  • Since the 1970s, the world of think tanks had embraced a framing of the world in terms of discrete spaces that could become what they called laboratories for new policies
  • the money markets were not waiting for an act of faith in Laffer Curve fundamentalism after all. This was “Reaganism without the dollar.” Without the confidence afforded to the global reserve currency, the pound went into free fall.
  • Ms. Truss and Mr. Kwarteng seemed to have believed that by patching together all of the most radical policies of Thatcherism (while conveniently dropping the need for spending cuts), they would be incanting a kind of magic spell, an “Open sesame” for “global Britain.” This was their Reagan moment, their moment when, as their favorite metaphors put it, a primordial repressed force would be “unchained,” “unleashed” or “unshackled.” But as a leap of faith, it broke the diver’s neck.
  • As Thatcher herself put it, “Economics are the method; the object is to change the heart and soul.” Britain needed a leap of faith to restore itself.
  • While the Gen X Thatcherites didn’t scrimp on data, they also saw something ineffable at the root of British malaise. “Beyond the statistics and economic theories,” they wrote, “there remains a sense in which many of Britain’s problems lie in the sphere of cultural values and mind-set.”
  • “Britannia Unchained” expressed a desire to go back to the future by restoring Victorian values of hard work, self-improvement and bootstrapping.
  • They followed their idol not only in her antagonism to organized labor but also in her less-known fascination with Asian capitalism. In 2012’s “Britannia Unchained,” a book co-written by the group that remains a Rosetta Stone for the policy surprises of the last month, they slammed the Britons for their eroded work ethic and “culture of excuses” and the “cosseted” public sector unions. They praised China, South Korea, Singapore and Hong Kong
  • Thatcherites, known collectively as the ultras, gained fresh blood in the 2010s as a group of Gen Xers too young to experience Thatcherism in its insurgent early years — including the former home secretary Priti Patel, the former foreign secretary Dominic Raab, the former minister of state for universities Chris Skidmore, Mr. Kwarteng and Ms. Truss — attempted to reboot her ideology for the new millennium.
  • Over the subsequent four decades, Thatcherites at think tanks like the Institute of Economic Affairs and the Centre for Policy Studies (which Margaret Thatcher helped set up) described the struggle against both the Labour Party and the broader persistence of Socialism in the Communist and non-Communist world as a “war of ideas.”
  • Thatcherism began in the 1970s. Defined early as the belief in “the free economy and the strong state,” Thatcherism condemned the postwar British welfare economy and sought to replace it with virtues of individual enterprise and religious morality.
  • There’s something tragicomic, if not tragic, about capitalist revolutionaries Ms. Truss and Mr. Kwarteng laid low by the mechanisms of capitalism itself. Ms. Truss and Mr. Kwarteng may be the last of the Thatcherites, defeated by the very system they believed they were acting in fidelity to.
  • The world has just witnessed one of the most extraordinary political immolations of recent times. Animated by faith in a fantasy version of the free market, Prime Minister Liz Truss of Britain set off a sequence of events that has forced her to fire her chancellor of the Exchequer, Kwasi Kwarteng, and led her to the brink of being ousted by her own party.
168More

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
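The prediction-driven learning sketched in the passages above can be illustrated with a toy next-word predictor. This is only a bigram frequency counter, not the neural-network training the article describes; the corpus and function names here are invented for illustration:

```python
from collections import Counter, defaultdict

def train_bigram(corpus):
    # Count, for each word, which words follow it and how often.
    # Crude stand-in for prediction-driven learning: more text,
    # better-calibrated successor counts, better predictions.
    follows = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(model, word):
    # Return the most frequently observed successor, or None if unseen.
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat saw the cat"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # "cat" (seen 3 times vs "mat" once)
```

A real language model replaces the counting table with billions of adjustable weights, but the training signal is the same: predict the next token, then correct yourself.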
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
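The Othello probe described above can be echoed in miniature: a board state is recoverable from move text alone. A minimal sketch, with real Othello's disc-flipping rules deliberately omitted and every name my own invention:

```python
def board_from_moves(moves):
    # Infer square occupancy purely from the move sequence, with the
    # two players alternating. (Simplified: real Othello also flips
    # captured discs; the point is only that a board state is latent
    # in the move text, as the probe of Li's model found.)
    board = {}
    for i, square in enumerate(moves):
        board[square] = "black" if i % 2 == 0 else "white"
    return board

state = board_from_moves(["d3", "c5", "f6"])
print(state)  # {'d3': 'black', 'c5': 'white', 'f6': 'black'}
```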
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
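Millière's memorization-versus-rule distinction can be made concrete with a toy contrast (this is not the transformer experiment itself; the setup is invented): a lookup table handles problems seen in training but fails on unseen ones, while the rule the model eventually pivots to generalizes.

```python
def make_memorizer(training_pairs):
    # Pure memorization: a lookup table of problems seen in training.
    table = dict(training_pairs)
    return lambda a, b: table.get((a, b))  # None for anything unseen

# "Train" on small sums only, echoing the arithmetic model above.
train = {(a, b): a + b for a in range(5) for b in range(5)}
memorized_add = make_memorizer(train.items())
learned_add = lambda a, b: a + b  # the rule, once actually learned

print(memorized_add(2, 2))   # 4: seen in training, memorization suffices
print(memorized_add(40, 2))  # None: memorization breaks down
print(learned_add(40, 2))    # 42: the rule covers every case
```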
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
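The arithmetic of the speculative scheme above is simple to sketch. The annual compute total below is a hypothetical figure chosen for illustration, not anything OpenAI has stated:

```python
WORLD_POPULATION = 8_000_000_000  # the "eight billion" in Altman's sketch

def annual_share(total_compute_hours):
    # Each person's equal slice of a year's AI compute under the
    # speculative redistribution scheme described above.
    return total_compute_hours / WORLD_POPULATION

# Hypothetical total of 80 billion compute-hours per year:
print(annual_share(80_000_000_000))  # 10.0 hours per person
```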
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
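The compute-redistribution idea floated earlier in this section — everyone on Earth gets one eight-billionth of total annual AI capacity, which they can use, sell, or pool — is, as Altman himself stresses, highly speculative. Purely as arithmetic, it can be sketched in a few lines; the capacity figure here is invented for illustration and has no basis in the article.

```python
# Toy sketch of the "one eight-billionth each" scheme Altman describes
# as highly speculative and "probably bad". The total-capacity number
# is made up; only the division logic mirrors the idea in the text.

TOTAL_COMPUTE_HOURS = 8_000_000_000  # hypothetical annual AI capacity
POPULATION = 8_000_000_000           # roughly everyone on Earth

per_person = TOTAL_COMPUTE_HOURS / POPULATION
print(per_person)  # 1.0 hour each, under these made-up numbers

# Pooling shares toward a joint project ("a big cancer-curing run"):
pool = [per_person] * 1_000_000  # a million people contribute a share
print(sum(pool))                 # 1000000.0 pooled hours
```

The point of the sketch is only that the scheme redistributes access, not money: a share is a claim on machine time, tradable or poolable like any other commodity.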

Lesson of the Day: 'In Wisconsin: Stowing Mowers, Pleasing Bees' - The New York Times - 0 views

  • Do you have a front lawn? If not, have you ever fantasized about having one? Why do you think a lush, perfectly manicured lawn is a dream for so many Americans? Did you know that kind of lawn can hurt the environment and contribute to the decline of bee populations?
  • Do you have a front lawn? If not, think of a familiar field or patch of grass that you pass by or visit regularly, such as a schoolyard, park or neighbor’s backyard. What plant and animal species do you imagine live there?
  • What stood out from your observations? Were you surprised by the variety of life you found? What did you learn from looking closely at something you may have passed by without much thought before? What did you wonder? What questions do you have about the life you observed?
  • ...7 more annotations...
  • Why are these tiny pollinators so important to the world’s food supply? What would happen if all bees disappeared? What are some possible solutions to help prevent the decline of bees, according to the video? What remaining questions do you have about bees?
  • 3. Look closely at the photos in the article: What story do they tell about Appleton or the No Mow May movement? Which image stands out to you most? Why? 4. What animal and plant species have flourished since Appleton adopted the No Mow plan? How do these species compare with the kinds you observed in the warm-up activity? 5. Why are some residents and communities not so happy about the initiative?
  • What moments in this film stood out for you? Why? What did you learn about the history of lawns, lawn mowers and how the dream of the ideal front lawn was created? Were there any surprises? Anything that challenged what you know — or thought you knew? What messages, emotions or ideas will you take away from this film? Why? What questions do you still have about the topic? Option 3: Learn more about bees — and contribute as a citizen scientist
  • Imagine that your town or city is considering adopting a No Mow May plan and that you have been invited to speak at an upcoming community meeting. Make a passionate and reasoned case for or against the proposal. Be sure to present evidence to support your arguments. Anticipate possible counterarguments to your claims. Inform listeners why they should care about the issue. And consider how you can draw upon your own experiences with lawns as well as your distinct point of view as a teenager.
  • 80,000 Honey Bees Found in Wall of Shower (Also, 100 Pounds of Honey); Why Do Bees Buzz? (ScienceTake Video); How Bees Freshen Up (ScienceTake Video); Rise of the Worker Bees (ScienceTake Video); Bees Buzz for Their Supper (ScienceTake Video)
  • Still interested in bees? Want to help efforts to prevent the decline of bee populations in North America? Become a citizen scientist and learn how to help efforts to collect better data on native bee populations and to build more bee-friendly environments with collaborative projects like The Great American Bee Count, Bumble Bee Watch, the Beecology Project or the Great Sunflower Project.
  • artist’s statement that explains why you chose them and what they reveal about the lawns in your community. Additionally, where possible, include identifications for each plant and animal species you documented. (Free apps like Leafsnap, Picture Insect or iNaturalist could help.)

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 1 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • ...22 more annotations...
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.”
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.

Videos of Tesla's Full Self-Driving beta software reveal flaws in system - The Washingt... - 0 views

  • Each of these moments — captured on video by a Tesla owner and posted online — reveals a fundamental weakness in Tesla’s “Full Self-Driving” technology, according to a panel of experts assembled by The Washington Post and asked to examine the videos. These are problems with no easy fix, the experts said, where patching one issue might introduce new complications, or where the nearly infinite array of possible real-life scenarios is simply too much for Tesla’s algorithms to master.
  • The Post selected six videos from a large array posted on YouTube and contacted the people who shot them to confirm their authenticity. The Post then recruited a half-dozen experts to conduct a frame-by-frame analysis.
  • The experts include academics who study self-driving vehicles; industry executives and technical staff who work in autonomous-vehicle safety analysis; and self-driving vehicle developers. None work in capacities that put them in competition with Tesla, and several said they did not fault Tesla for its approach. Two spoke on condition of anonymity to avoid angering Tesla, its fans or future clients.
  • ...4 more annotations...
  • Their analysis suggests that, as currently designed, “Full Self-Driving” (FSD) could be dangerous on public roadways, according to several of the experts.
  • That the Tesla keeps going after seeing a pedestrian near a crosswalk offers insight into the type of software Tesla uses, known as “machine learning.” This type of software is capable of deciphering large sets of data and forming correlations that allow it, in essence, to learn on its own.
  • Tesla’s software uses a combination of machine-learning software and simpler software “rules,” such as “always stop at stop signs and red lights.” But as one researcher pointed out, machine-learning algorithms invariably learn lessons they shouldn’t. It’s possible that if the software were told to “never hit pedestrians,” it could take away the wrong lesson: that pedestrians will move out of the way if they are about to be hit, one expert said
  • Software developers could create a “rule” that the car must slow down or stop for pedestrians. But that fix could paralyze the software in urban environments, where pedestrians are everywhere.
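The architecture the experts describe — a learned driving policy with hard-coded safety "rules" layered on top — can be sketched in miniature. Everything here is hypothetical (function names, thresholds, the scene representation are invented); it illustrates only the general rule-override pattern and the urban-paralysis trade-off the last bullet points out, not Tesla's actual software.

```python
# Sketch of a "machine learning plus rules" planner: a learned policy
# proposes a speed, then hard rules override it. All names and
# thresholds are hypothetical, not Tesla's real system.

def learned_policy(scene):
    # Stand-in for the neural-network planner; here it simply
    # proposes keeping the current speed.
    return scene["current_speed_mph"]

def apply_safety_rules(scene, proposed_speed):
    # Hard rule: always stop at stop signs and red lights.
    if scene["red_light"] or scene["stop_sign"]:
        return 0.0
    # Hard rule: slow to a crawl near pedestrians. As the excerpt
    # notes, a blanket rule like this can paralyze the car in urban
    # settings where pedestrians are everywhere.
    dist = scene["pedestrian_distance_m"]
    if dist is not None and dist < 10:
        return min(proposed_speed, 5.0)
    return proposed_speed

def plan_speed(scene):
    return apply_safety_rules(scene, learned_policy(scene))

scene = {"current_speed_mph": 25.0, "red_light": False,
         "stop_sign": False, "pedestrian_distance_m": 6.0}
print(plan_speed(scene))  # 5.0 — the pedestrian rule caps the speed
```

The tension in the article lives in that second rule: make it strict and the car freezes downtown; relax it and the learned policy may "keep going after seeing a pedestrian near a crosswalk."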

French Food Giant Danone Sued Over Plastic Use Under Landmark Law - The New York Times - 0 views

  • Throughout their life cycle, plastics, which are manufactured from fossil fuels, release air pollutants, harm human health and kill marine life. In 2015, they were responsible for 4.5 percent of global greenhouse gas emissions, one recent study found, more than all of the world’s airplanes combined.
  • Figures from the Organization for Economic Cooperation and Development show that, over the past seven decades, plastics production has soared from two million metric tons (there are about 2,200 pounds per metric ton) to more than 400 million — and is expected to almost triple by 2060.
  • Danone alone used more than 750,000 metric tons of plastic — about 74 times the weight of the Eiffel Tower — in water bottles, yogurt containers and other packaging in 2021, according to its 2021 financial report.
  • ...9 more annotations...
  • “We’re not going to recycle our way out of this,” Mr. Weiss of ClientEarth said.
  • Environmental groups also say that recycling has not proved effective at the scale necessary: Only 9 percent of all plastics ever made have been recycled, according to the United Nations, with most of the rest ending up in landfills and dumps.
  • The company said that it reduced its plastic consumption by 12 percent from 2018 to 2021, and that it has committed to use only reusable, recyclable or compostable plastic packaging by 2025. But Danone is not on track to reach that target, according to a report by the Ellen MacArthur Foundation, which set up a voluntary program with the United Nations for big companies to address plastic pollution.
  • To sue Danone, the environmental groups have relied on the so-called duty of vigilance law, a groundbreaking piece of legislation that France passed in 2017. It requires large companies to take effective measures to identify and prevent human rights violations and environmental damages throughout their chain of activity.
  • The French duty of vigilance law, the first of its kind in Europe, has since inspired similar legislation in Germany and the Netherlands, as well as a proposed European Union directive.
  • There is nothing like a duty of vigilance law in the United States. The Break Free From Plastic Pollution Act, which would require plastic producers to finance waste and recycling programs, and ban single-use plastic bags and the exporting of plastic waste to developing countries, is currently in committee.
  • “It’s often about streamlining existing practices,” said Pauline Barraud de Lagerie, a sociologist at University Paris Dauphine who published a book on corporate responsibility. She added that by suing companies, “N.G.O.s are trying to somehow bring back an obligation of result.” So far, around 15 legal cases based on the French law have been reported. Half of them have gone to court and are still awaiting judgment, which could take years.
  • The lawsuit is part of a wider trend of climate litigation that has gained momentum in recent years, expanding the climate fight beyond traditional demonstrations and civil disobedience initiatives.
  • The number of climate change lawsuits globally has more than doubled from 2017 to 2022, from about 900 to more than 2,000 ongoing or concluded cases, according to data from the Grantham Research Institute and the Sabin Center for Climate Change Law.
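The headline figures in the excerpts above are easy to sanity-check. The only assumption not in the article is the Eiffel Tower's mass, taken here as roughly 10,100 metric tons (a commonly cited value; the article gives only the 74x ratio).

```python
# Back-of-the-envelope check of the figures quoted above.
# Assumption: the Eiffel Tower weighs ~10,100 metric tons (commonly
# cited; not stated in the article itself).

EIFFEL_TOWER_T = 10_100
danone_plastic_t = 750_000
print(round(danone_plastic_t / EIFFEL_TOWER_T))  # 74, matching the article

# OECD figures: ~2 million t to 400+ million t over ~70 years implies
# an average compound growth rate of roughly 8 percent a year.
growth = (400 / 2) ** (1 / 70) - 1
print(f"{growth:.1%}")
```

At that historical pace, the projected near-tripling by 2060 actually implies a marked slowdown in growth, not an acceleration.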