
History Readings: Group items tagged "rogue"


katherineharron

25th Amendment: What is it and how does it work? - CNNPolitics - 0 views

  • President Donald Trump only has two weeks left in office, but after he fomented an assault by rioters on the US Capitol, some Republicans are actively considering whether to remove him in these final throes of his administration.
  • Impeaching Trump might be the appropriate remedy and using impeachment to remove him from office would bar him from running for President again. But there's likely no time to impeach and try the President again in the next two weeks.
  • A second option is invoking the 25th Amendment, which has periodically been discussed as a means of last resort to remove a rogue or incapacitated president. Some Cabinet members held preliminary discussions about invoking the 25th Amendment to force Trump's removal from office, a GOP source told CNN's Jim Acosta Wednesday night.
  • ...8 more annotations...
  • To forcibly wrest power from Trump, Vice President Mike Pence would have to be on board, according to the text of the amendment
  • Pence would also need a majority of Trump's Cabinet officials to agree that the President is unfit for office in order to temporarily seize power from him.
  • If Trump disputed that finding, Pence and the Cabinet would then have four days to dispute him; Congress would then vote -- it requires a two-thirds supermajority, usually 67 senators and 290 House members, to permanently remove him.
  • House Speaker Nancy Pelosi, during the last Congress, introduced a bill to create a congressional body for this purpose, but it was not signed into law.
  • The 25th Amendment was enacted in the wake of the assassination of John F. Kennedy, whose predecessor Dwight Eisenhower suffered major heart attacks. It was meant to create a clear line of succession and prepare for urgent contingencies.
  • The portion of the 25th Amendment that allows the vice president and Cabinet to remove the president had in mind a leader who was in a coma or suffered a stroke
  • The storming of the Capitol by rioters at the request of the President may end up being the first such contingency in the nation's history.
  • "Our country's being held hostage right now by Donald Trump," he said. "Mitch McConnell and Speaker Pelosi cannot even meet in the Capitol today ... so I think we now have to go into our constitutional kit bag and find what we can do to control Donald Trump and certainly the 25th Amendment is there."
Javier E

Opinion | Jeff Flake: 'Trump Can't Hurt You. But He Is Destroying Us.' - The New York T... - 0 views

  • George Orwell, after all, meant for his work to serve as a warning, not as a template.
  • How many injuries to American democracy can my Republican Party tolerate, excuse and champion?
  • It is elementary to have to say so, but for democracy to work one side must be prepared to accept defeat. If the only acceptable outcome is for your side to win, and a loser simply refuses to lose, then America is imperiled.
  • ...7 more annotations...
  • I once had a career in public life — six terms in the House of Representatives and another six years in the Senate — and then the rise of a dangerous demagogue, and my party’s embrace of him, ended that career. Or rather, I chose not to go along with my party’s rejection of its core conservative principles in favor of that demagogue
  • It is hard to comprehend how so many of my fellow Republicans were able — and are still able — to engage in the fantasy that they had not abruptly abandoned the principles they claimed to believe in. It is also difficult to understand how this betrayal could be driven by deference to the unprincipled, incoherent and blatantly self-interested politics of Donald Trump, defined as it is by its chaos and boundless dishonesty.
  • The conclusion that I have come to is that they did it for the basest of reasons — sheer survival and rank opportunism.
  • But survival divorced from principle makes a politician unable to defend the institutions of American liberty when they come under threat by enemies foreign and domestic. And keeping your head down in capitulation to a rogue president makes you little more than furniture. One wonders if that is what my fellow Republicans had in mind when they first sought public office.
  • Mr. Gore’s was an act of grace that the American people had every right to expect of someone in his position, a testament to the robustness and durability of American constitutional democracy. That he was merely doing his job and discharging his responsibility to the Constitution is what made the moment both profound and ordinary.
  • Vice President Mike Pence must do the same today. As we are now learning, a healthy democracy is wholly dependent on the good will and good faith of those who offer to serve it
  • My fellow Republicans, as Secretary of State Brad Raffensperger of Georgia has shown us this week, there is power in standing up to the rank corruptions of a demagogue. Mr. Trump can’t hurt you. But he is destroying us.
carolinehayter

Opinion | Will Trump's Presidency Ever End? - The New York Times - 2 views

  • That was when Trump supporters descended on a polling location in Fairfax, Va., and sought to disrupt early voting there by forming a line that voters had to circumvent and chanting, “Four more years!” This was no rogue group. This was no random occurrence. This was an omen — and a harrowing one at that.
  • Republicans are planning to have tens of thousands of volunteers fan out to voting places in key states, ostensibly to guard against fraud but effectively to create a climate of menace.
    • carolinehayter
       
      Isn't voter intimidation illegal?
    • clairemann
       
      yes, but this is an interesting workaround...
  • bragged to Sean Hannity about all the “sheriffs” and “law enforcement” who would monitor the polls on his behalf. At a rally in North Carolina, he told supporters: “Be poll watchers when you go there. Watch all the thieving and stealing and robbing they do.”
  • ...45 more annotations...
  • Color me alarmist, but that sounds like an invitation to do more than just watch. Trump put an exclamation point on it by exhorting those supporters to vote twice, once by mail and once in person, which is of course blatantly against the law.
  • On Wednesday Trump was asked if he would commit to a peaceful transfer of power in the event that he lost to Joe Biden. Shockingly but then not really, he wouldn’t. He prattled anew about mail-in ballots and voter fraud and, perhaps alluding to all of the election-related lawsuits that his minions have filed, said: “There won’t be a transfer, frankly. There will be a continuation.”
    • carolinehayter
       
      Absolutely terrifying-- insinuating that there would not be a peaceful transfer of power for the first time in this country's history...
  • “sheriffs” and “law enforcement” who would monitor the polls on his behalf. At a rally in North Carolina, he told supporters: “Be poll watchers when you go there. Watch all the thieving and stealing and robbing they do.”
    • clairemann
       
      This lack of social awareness from a president seems unfathomable.
  • “I have never in my adult life seen such a deep shudder and sense of dread pass through the American political class.”
    • clairemann
       
      Poignant and true. America is in great danger.
  • And the day after Ginsburg died, I felt a shudder just as deep.
  • Is a fair fight still imaginable in America? Do rules and standards of decency still apply? For a metastasizing segment of the population, no.
  • Right on cue, we commenced a fight over Ginsburg’s Supreme Court seat that could become a protracted death match, with Mitch McConnell’s haste and unabashed hypocrisy
    • clairemann
       
      HYPOCRISY!!!!!!!! I feel nothing but seething anger for Mitch McConnell
  • On Wednesday Trump was asked if he would commit to a peaceful transfer of power in the event that he lost to Joe Biden. Shockingly but then not really, he wouldn’t
    • clairemann
       
      A peaceful transfer of power is a pillar of our democracy. The thought that it could be forever undone by a spray tanned reality star is harrowing.
  • We’re in terrible danger. Make no mistake.
    • clairemann
       
      Ain't that the truth
  • Trump, who rode those trends to power, is now turbocharging them to drive America into the ground.
  • The week since Ginsburg’s death has been the proof of that. Many of us dared to dream that a small but crucial clutch of Republican senators, putting patriotism above party,
    • clairemann
       
      I truly commend the senators who have respected the laws they put in place for Justice Scalia four years ago.
  • Hah. Only two Republican senators, Lisa Murkowski and Susan Collins, broke with McConnell, and in Collins’s case, there were re-election considerations and hedged wording. All the others fell into line.
  • Most politicians — and maybe most Americans — now look across the political divide and see a band of crooks who will pick your pocket if you’re meek and dumb enough not to pick theirs first.
  • “If the situation were reversed, the Dems would be doing the same thing.”
    • clairemann
       
      maybe... but I have more optimism for the moral compasses of the Dems than I do for the GOP
  • Ugliness begets ugliness until — what? The whole thing collapses of its own ugly weight?
  • The world’s richest and most powerful country has been brought pitifully and agonizingly low. On Tuesday we passed the mark of 200,000 deaths related to the coronavirus, cementing our status as the global leader, by far, on that front. How’s that for exceptionalism?
    • clairemann
       
      Perfectly encapsulates the American dilemma right now.
  • What’s the far side of a meltdown? America the puddle? While we await the answer, we get a nasty showdown over that third Trump justice. Trump will nominate someone likely to horrify Democrats and start another culture war: anything to distract voters from his damnable failure to address the pandemic.
    • clairemann
       
      So so so so so so true
  • University of California-Irvine School of Law, with the headline: “I’ve Never Been More Worried About American Democracy Than I Am Right Now.”
    • clairemann
       
      Me too...
  • “The coronavirus pandemic, a reckless incumbent, a deluge of mail-in ballots, a vandalized Postal Service, a resurgent effort to suppress votes, and a trainload of lawsuits are bearing down on the nation’s creaky electoral machinery,”
  • “I don’t think the survival of the republic particularly means anything to Donald Trump.”
    • clairemann
       
      Couldn't have said it better
  • “Tribal,” “identity politics,” “fake news” and “hoax” are now mainstays of our vocabulary, indicative of a world where facts and truth are suddenly relative.
  • you can be re-elected at the cost that American democracy will be permanently disfigured — and in the future America will be a failed republic — I don’t think either would have taken the deal.
    • clairemann
       
      Retweet!
  • But what if there’s bottom but no bounce? I wonder. And shudder.
    • clairemann
       
      This article has left me speechless and truly given me pause. 10/10 would recommend.
  • This country, already uncivil, is on the precipice of being ungovernable, because its institutions are being so profoundly degraded, because its partisanship is so all-consuming, and because Trump, who rode those trends to power, is now turbocharging them to drive America into the ground. The Republican Party won’t apply the brakes.
  • At some point, someone had to be honorable and say, “Enough.” Hah. Only two Republican senators, Lisa Murkowski and Susan Collins, broke with McConnell, and in Collins’s case, there were re-election considerations and hedged wording. All the others fell into line.
  • So the lesson for Democrats should be to take all they can when they can? That’s what some prominent Democrats now propose: As soon as their party is in charge, add enough seats to the Supreme Court to give Democrats the greater imprint on it. Make the District of Columbia and Puerto Rico states, so that Democrats have much better odds of controlling the Senate. Do away with the filibuster entirely. That could be just the start of the list
  • And who the hell are we anymore? The world’s richest and most powerful country has been brought pitifully and agonizingly low. On Tuesday we passed the mark of 200,000 deaths related to the coronavirus, cementing our status as the global leader, by far, on that front. How’s that for exceptionalism?
  • he might contest the election in a manner that keeps him in power regardless of what Americans really want.
  • The coronavirus pandemic, a reckless incumbent, a deluge of mail-in ballots, a vandalized Postal Service, a resurgent effort to suppress votes, and a trainload of lawsuits are bearing down on the nation’s creaky electoral machinery,
  • this election might well degenerate into violence, as Democratic poll watchers clash with Republican poll watchers, and into chaos, as accusations of foul play delay the certification of state vote counts
  • headline: “I’ve Never Been More Worried About American Democracy Than I Am Right Now.”
  • “The republic is in greater self-generated danger than at any time since the 1870s,” Richard Primus, a professor of law at the University of Michigan Law School, told me, saying that Trump values nothing more than his own power and will do anything that he can get away with
  • “If you had told Barack Obama or George W. Bush that you can be re-elected at the cost that American democracy will be permanently disfigured — and in the future America will be a failed republic — I don’t think either would have taken the deal.” But Trump? “I don’t think the survival of the republic particularly means anything to Donald Trump.”
  • What gave Primus that idea? Was it when federal officers used tear gas on protesters to clear a path for a presidential photo op? Was it when Trump floated the idea of postponing the election, just one of his many efforts to undermine Americans’ confidence in their own system of government?
  • Or was it when he had his name lit up in fireworks above the White House as the climax of his party’s convention? Was it on Monday, when his attorney general, Bill Barr, threatened to withhold federal funds from cities that the president considers “anarchist”? That gem fit snugly with Trump’s talk of blue America as a blight on red America, his claim that the pandemic would be peachy if he could just lop off that rotten fruit.
  • The deadly confrontations recently in Kenosha, Wis., and Portland, Ore., following months of mass protests against racial injustice, speak to how profoundly estranged from their government a significant percentage of Americans feel.
  • Litigation to determine the next president winds up with the Supreme Court, where three Trump-appointed justices are part of a majority decision in his favor. It’s possible.
  • Rush Limbaugh — you know, the statesman whom Trump honored with the Presidential Medal of Freedom earlier this year — has urged McConnell not even to bother with a confirmation hearing for the nominee in the Judiciary Committee and to go straight to a floor vote. Due diligence and vetting are so 2018
  • You know who has most noticeably and commendably tried to turn down the temperature? Biden. That’s of course its own political calculation, but it’s consistent with his comportment during his entire presidential campaign, one that has steered clear of extremism, exalted comity and recognized that a country can’t wash itself clean with more muck.
  • He’s our best bid for salvation, which goes something like this: An indisputable majority of Americans recognize our peril and give him a margin of victory large enough that Trump’s challenge of it is too ludicrous for even many of his Republican enablers to justify. Biden takes office, correctly understanding that his mandate isn’t to punish Republicans. It’s to give America its dignity back.
  • Maybe we need to hit rock bottom before we bounce back up. But what if there’s bottom but no bounce? I wonder. And shudder.
Javier E

I Know Why Police Reforms Fail - The Atlantic - 0 views

  • No action is more important than changing toxic us-versus-them police cultures—in which an officer who might individually make the right call becomes silently complicit when a fellow officer goes rogue.
  • Many commentators on police culture have noted this dynamic: Almost by definition, officers see the worst things happening in their city on any given shift. After being in danger every night, officers gradually stop seeing the humanity in the people and neighborhoods they patrol. Instead, they go back to the precinct with the only people who can really understand what they are going through. People with exceptionally tough jobs serving complex humans naturally vent when they are together. What teacher hasn’t complained about a student in the privacy of the teachers’ lounge?
  • Us versus them—meaning police versus criminals—slowly curdles into police versus the people:
  • ...8 more annotations...
  • Floyd’s death underscores that police work should be subject to oversight, and officers who violate policy and misuse their power should be subject to discipline. But the unions’ power is most notable in contracts that limit the accountability that, as the community can now see, is so desperately needed.
  • I used to say that the majority of officers are good but silently let a minority set the dominant culture. But now I believe that no one can be called a “good officer” if they are not working actively and openly to change the culture and unseat their toxic union leaders.
  • Waiting to stoke that resentment are police-union leaders such as Kroll, who defend even the more aggressive acts of officers and, even in a case as extreme as Floyd’s death, prevent any self-examination by blaming the victim.
  • Last year, Minneapolis Mayor Jacob Frey banned so-called warrior-style training, which emphasizes physical threats to police officers rather than the benefits of de-escalating confrontations. Critics have implicated a variant of this training—a course titled “The Bulletproof Warrior”—in the shooting of Philando Castile during a traffic stop in a Minneapolis–St. Paul suburb in 2016. Kroll and the police federation defied Frey’s move by offering warrior-style training of their own.
  • If progressive local officials want wholesale reform of police tactics and culture, they will have to do something that runs counter to their own culture: take on union leaders.
  • Some local officials have also hesitated to demand tougher reforms in contracts because police unions often spend heavily in local elections to oppose any politician who challenges them.
  • Electing mayors and city-council members who support such reforms is not enough. Police-union leaders use back channels to go around local officials and get more conservative state legislators to block meaningful changes
  • at this moment, as massive marches across the country demand dramatic change, police unions have less leverage than they’ve ever had. I am not suggesting that cities should try to bust police unions. Far from it.
nrashkind

On Raul Castro's birthday, U.S. threatens Cuba remittances - Reuters - 0 views

  • The Trump administration expanded on Wednesday its list of Cuban entities that Americans are banned from doing business with to include the financial corporation that handles U.S. remittances to the Communist-run country.
  • Military-owned Fincimex is the main Cuban partner of foreign credit card companies and money transfer firm Western Union, which Cubans in the United States have used for two decades to send money back to their loved ones on the Caribbean island.
  • Those remittances are all the more needed now as the coronavirus pandemic is worsening Cuba’s already grim economic outlook, grinding the key tourism industry to a halt.
  • ...5 more annotations...
  • “If Cuba refuses, the Trump administration is prepared to cease remittances,” he said.
  • He noted a senior official had quipped to him that the sanction was “a birthday present to Raul Castro,” the leader of the Cuban Communist Party, who turned 89 on Tuesday.
  • New regulations implementing the sanction will be closely watched. There is the possibility existing U.S. business with Fincimex could be grandfathered in.
  • “The U.S. government continues to act as a rogue state,” the general director for U.S. affairs of Cuba’s Foreign Ministry, Carlos Fernandez de Cossio, said on Twitter.
  • But this action could backfire, say analysts, as it will so openly hurt the relatives of Cuban-Americans, ordinary Cubans, more than the Communist government.
Javier E

What Can History Tell Us About the World After Trump? - 0 views

  • U.S. President Donald Trump largely ignores the past or tends to get it wrong.
  • Whenever he leaves office, in early 2021, 2025, or sometime in between, the world will be in a worse state than it was in 2016. China has become more assertive and even aggressive. Russia, under its president for life, Vladimir Putin, carries on brazenly as a rogue state, destabilizing its neighbors and waging a covert war against democracies through cyberattacks and assassinations. In Brazil, Hungary, the Philippines, and Saudi Arabia, a new crop of strongman rulers has emerged. The world is struggling to deal with the COVID-19 pandemic and is just coming to appreciate the magnitude of its economic and social fallout. Looming over everything is climate change.
  • Will the coming decades bring a new Cold War, with China cast as the Soviet Union and the rest of the world picking sides or trying to find a middle ground? Humanity survived the original Cold War in part because each side’s massive nuclear arsenal deterred the other from starting a hot war and in part because the West and the Soviet bloc got used to dealing with each other over time, like partners in a long and unhappy relationship, and created a legal framework with frequent consultation and confidence-building measures. In the decades ahead, perhaps China and the United States can likewise work out their own tense but lasting peace
  • ...43 more annotations...
  • Today’s unstable world, however, looks more like that of the 1910s or the 1930s, when social and economic unrest were widespread and multiple powerful players crowded the international scene, some bent on upending the existing order. Just as China is challenging the United States today, the rising powers of Germany, Japan, and the United States threatened the hegemonic power of the British Empire in the 1910s. Meanwhile, the COVID-19 pandemic has led to an economic downturn reminiscent of the Great Depression of the 1930s.
  • The history of the first half of the twentieth century demonstrates all too vividly that unchecked or unmoderated tensions can lead to extremism at home and conflict abroad. It also shows that at times of heightened tension, accidents can set off explosions like a spark in a powder keg, especially if countries in those moments of crisis lack wise and capable leadership.
  • If the administration that succeeds Trump’s wants to repair the damaged world and rebuild a stable international order, it ought to use history—not as a judge but as a wise adviser.
  • WARNING SIGNS
  • A knowledge of history offers insurance against sudden shocks. World wars and great depressions do not come out of the clear blue sky; they happen because previous restraints on bad behavior have weakened
  • In the nineteenth century, enough European powers—in particular the five great ones, Austria, France, Prussia, Russia, and the United Kingdom—came to believe that unprovoked aggression should not be tolerated, and Europe enjoyed more peace than at any other time in its troubled history until after 1945
  • Further hastening the breakdown of the international order is how states are increasingly resorting to confrontational politics, in substance as well as in style.
  • Their motives are as old as states themselves: ambition and greed, ideologies and emotions, or just fear of what the other side might be intending
  • Today, decades of “patriotic education” in China’s schools have fostered a highly nationalist younger generation that expects its government to assert itself in the world.
  • Public rhetoric matters, too, because it can create the anticipation of, even a longing for, confrontation and can stir up forces that leaders cannot control.
  • Defusing tensions is possible, but it requires leadership aided by patient diplomacy, confidence building, and compromise.
  • Lately, however, some historians have begun to see that interwar decade in a different light—as a time of real progress toward a strong international order.
  • Unfortunately, compromise does not always play well to domestic audiences or elites who see their honor and status tied up with that of their country. But capable leaders can overcome those obstacles. Kennedy and Khrushchev overruled their militaries, which were urging war on them; they chose, at considerable risk, to work with each other, thus sparing the world a nuclear war.
  • Trump, too, has left a highly personal mark on global politics. In the long debate among historians and international relations experts over which matters most—great impersonal forces or specific leaders—his presidency surely adds weight to the latter.
  • His character traits, life experiences, and ambitions, combined with the considerable power the president can exert over foreign policy, have shaped much of U.S. foreign policy over the last nearly four years, just as Putin’s memories of the humiliation and disappearance of the Soviet Union at the end of the Cold War have fed his determination to make Russia count again on the world stage. It still matters that both men happen to lead large and powerful countries.
  • When Germany fell into the clutches of Adolf Hitler, in contrast, he was able to start a world war.
  • THE NOT-SO-GOLDEN AGE
  • In relatively stable times, the world can endure problematic leaders without lasting damage. It is when a number of disruptive factors come together that those wielding power can bring on the perfect storm
  • By 1914, confrontation had become the preferred option for all the players, with the exception of the United Kingdom, which still hoped to prevent or at least stay out of a general European war.
  • Although they might not have realized it, many Europeans were psychologically prepared for war. An exaggerated respect for their own militaries and the widespread influence of social Darwinism encouraged a belief that war was a noble and necessary part of a nation’s struggle for survival. 
  • The only chance of preventing a local conflict from becoming a continent-wide conflagration lay with the civilian leaders who would ultimately decide whether or not to sign the mobilization orders. But those nominally in charge were unfit to bear that responsibility.
  • In the last days of peace, in July and early August 1914, the task of keeping Europe out of conflict weighed increasingly on a few men, above all Kaiser Wilhelm II of Germany, Tsar Nicholas II of Russia, and Emperor Franz Josef of Austria-Hungary. Each proved unable to withstand the pressure from those who urged war.
  • THE MISUNDERSTOOD DECADE
  • With the benefit of hindsight, historians have often considered the Paris Peace Conference of 1919 to be a failure and the 1920s a mere prelude to the inevitable rise of the dictators and the descent into World War II.
  • Preparing for conflict—or even appearing to do so—pushes the other side toward a confrontational stance of its own. Scenarios sketched out as possibilities in more peaceful times become probabilities, and leaders find that their freedom to maneuver is shrinking.
  • The establishment in 1920 of his brainchild, the League of Nations, was a significant step, even without U.S. membership: it created an international body to provide collective security for its members and with the power to use sanctions, even including war, against aggressors
  • Overall, the 1920s were a time of cooperation, not confrontation, in international relations. For the most part, the leaders of the major powers, the Soviet Union excepted, supported a peaceful international order.
  • The promise of the 1920s was cut short by the Great Depression.
  • Citizens lost faith in the ability of their leaders to cope with the crisis. What was more ominous, they often lost faith in capitalism and democracy. The result was the growth of extremist parties on both the right and the left.
  • The catastrophe that followed showed yet again how important the individual can be in the wielding of power. Hitler had clear goals—to break what he called “the chains” of the Treaty of Versailles and make Germany and “the Aryan race” dominant in Europe, if not the world—and he was determined to achieve them at whatever cost.
  • The military, delighted by the increases in defense spending and beguiled by Hitler’s promises of glory and territorial expansion, tamely went along. In Italy, Mussolini, who had long dreamed of a second Roman Empire, abandoned his earlier caution. On the other side of the world, Japan’s new rulers were also thinking in terms of national glory and building a Greater Japan through conquest.
  • Preoccupied with their own problems, the leaders of the remaining democracies were slow to realize the developing threat to world order and slow to take action
  • This time, war was the result not of reckless brinkmanship or weak governments but of powerful leaders deliberately seeking confrontation. Those who might have opposed them, such as the British prime minister Neville Chamberlain, chose instead to appease them in the hope that war could be avoided. By failing to act in the face of repeated violations of treaties and international law, the leaders of the democracies allowed the international order to break.
  • OMINOUS ECHOES
  • Led by Roosevelt, statesmen in the Allied countries were determined to learn from this mistake. Even as the war raged, they enunciated the principles and planned the institutions for a new and better world order.
  • Three-quarters of a century later, however, that order is looking dangerously creaky. The COVID-19 pandemic has damaged the world’s economy and set back international cooperation.
  • Tensions are building up as they did before the two world wars, with intensifying great-power rivalries and with regional conflicts, such as the recent skirmishes between China and India, that threaten to draw in other players.
  • Meanwhile, the pandemic will shake publics’ faith in their countries’ institutions, just as the Great Depression did.
  • Norms that once seemed inviolable, including those against aggression and conquest, have been breached. Russia seized Crimea by force in 2014, and the Trump administration last year gave the United States’ blessing to Israel’s de facto annexation of the Golan Heights and may well recognize the threatened annexation of large parts of the West Bank that Israel conquered in 1967.
  • Will others follow the example set by Russia and Israel, as happened in the 1910s and the 1930s?
  • Russia continues to meddle wherever it can, and Putin dreams of destroying the EU
  • U.S.-Chinese relations are increasingly adversarial, with continued spats over trade, advanced technology, and strategic influence, and both sides are developing scenarios for a possible war. The two countries’ rhetoric has grown more bellicose, too. China’s “Wolf Warrior” diplomats, so named by Chinese officials after a popular movie series, excoriate those who dare to criticize or oppose Beijing, and American officials respond in kind.
  • How the world copes will depend on the strength of its institutions and, at crucial moments, on leadership. Weak and indecisive leaders may allow bad situations to get worse, as they did in 1914. Determined and ruthless ones can create wars, as they did in 1939. Wise and brave ones may guide the world through the storms. Let us hope the last group has read some history.
Javier E

Trump's GOP is Increasingly Racist and Authoritarian-and Here to Stay - The Bulwark - 0 views

  • he inflicted on us a presidency which was ignorant, cruel, reckless, lawless, divisive, and disloyal.
  • Mendacity and bigotry became the mode of communication between America’s president and his party’s base.
  • Not only did he worsen a deadly pandemic—by immersing an angry and alienated minority in his alternate reality, he is sickening our future.
  • ...29 more annotations...
  • He rose from a political party bent on thwarting demographic change by subverting the democratic process; a party whose base was addicted to white identity politics, steeped in religious fundamentalism, and suffused with authoritarian cravings—a party which, infected by Trumpism, now spreads the multiple malignancies metastasized by Trump’s personal and political pathologies.
  • Since the civil rights revolution triggered an influx of resentful Southern whites, the GOP has catered to white grievance and anxiety.
  • Trump’s transformative contribution has been to make racial antagonism overt—a badge of pride that bonds him to his followers in opposition to a pluralist democracy that threatens their imperiled social and political hegemony.
  • Take the poll released last week by the Public Religion Research Institute (PRRI) measuring the attitudes of “Fox News Republicans”—the 40 percent of party adherents who trust Fox as their primary source of TV news. The survey found that 91 percent oppose the Black Lives Matter movement; 90 percent believe that police killings of blacks are “isolated incidents”; and 58 percent think that whites are victimized by racial discrimination, compared to 36 percent who think blacks are.
  • He comprehends his audience all too well
  • Their animus toward immigration is equally strong. Substantial majorities believe that immigrants consume a disproportionate amount of governmental services, increase crime in local communities, and threaten our cultural and ethnic character.
  • In 2016, Vox reports, Trump carried whites by 54 to 39 percent; in 2020, by 57 to 42 percent (per the raw exit polls)
  • Another key subgroup of the GOP base, white evangelicals, harbors similar attitudes. The poll found that the majority adamantly disbelieve that the legacy of racial discrimination makes it difficult for African Americans to succeed
  • The head of the PRRI, Robert P. Jones, concludes that Trump arouses white Christians “not despite, but through appeals to white supremacy” based on evoking “powerful fears about the loss of White Christian dominance.”
  • That sense of racial and cultural besiegement pervades the 73 percent of Fox News Republicans who, the survey found, believe that white Christians suffer from “a lot” of societal discrimination—more than double the number who say that blacks do
  • Tucker Carlson serves as a cautionary tale. When Carlson dismissed, as gently as possible, the crackpot allegations of Trump lawyer Sidney Powell about a sweeping conspiracy using rogue voting machines, he was savaged across the right-wing echo chamber as a spineless quisling. Lesson learned.
  • fear of displacement helps explain the profound emotional connection between Trump and Republican voters. Their loyalty is not to the political philosophy traditionally embraced by the GOP, but a visceral sense of racial, religious, and cultural identity—and the need to preserve it—which is instinctively authoritarian and anti-democratic.
  • Bartels surveyed respondents regarding four statements which, taken together, read like a blueprint for Trump: The traditional American way of life is disappearing so fast that we may have to use force to save it. A time will come when patriotic Americans have to take the law into their own hands. Strong leaders sometimes have to bend the rules in order to get things done. It is hard to trust the results of elections when so many people will vote for anyone who offers a handout.
  • Support for Trump’s wall is nearly unanimous (96 percent); two-thirds (66 percent) favor barring refugees from entering the United States; and a majority (53 percent) support separating children from their parents when a family enters the country without permission.
  • This lies at the heart of Trump’s appeal: his shared sense of victimization by an insidious elite; his unvarnished denunciation of white America’s supposed enemies; and his promise to keep them at bay—if necessary, by force. For many in the Republican base, he fulfills a psychic longing for an American strongman.
  • In the New York Times, Katherine Stewart describes the growth of “a radical political ideology that is profoundly hostile to democracy and pluralism, and a certain political style that seeks to provoke moral panic, rewards the paranoid and views every partisan conflict as a conflagration, the end of the world.”
  • “Christian nationalism is a creation of a uniquely isolated messaging sphere. Many members of the rank and file get their main political information not just from messaging platforms that keep their audiences in a world that is divorced from reality, but also from dedicated religious networks and reactionary faith leaders.”
  • As Republican strategists well appreciate, a party whose appeal is confined to conservative whites is, over the demographic long term, doomed to defeat. The GOP’s design is to postpone as long as possible their electoral day of reckoning.
  • In launching his naked attempt to disenfranchise the majority of voters in Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin through assertions of fraud unprecedented in their speciousness and scope, Trump took the GOP’s distaste for free and fair elections to its logical conclusion: the abrogation of American democracy at the highest level.
  • Trump justified his anti-democratic sociopathy by proliferating a plethora of groundless and preposterous falsehoods calculated to delegitimize our electoral processes. He claimed that millions of phony mail-in ballots had been cast for Biden; that voting machines had been re-engineered to exclude millions more cast for him; and that Republican election observers had been excluded from many polling places by a host of local officials bent on serving a labyrinthine conspiracy to purloin the White House.
  • Never once did he or his lawyers cite a shred of evidence supporting any material impropriety. Rather his purpose was to convince the Republican base that they were being cheated of their leader by the insidious “other.” Numerous polls confirm that it’s working; typical is a Politico/Morning Consult survey showing that 70 percent of Republicans don’t believe the election was fairly conducted.
  • As Trevor Potter, a Republican who formerly headed the Federal Election Commission, told the New York Times, Trump “is creating a road map to destabilization and chaos in future years. . . . What he’s saying, explicitly, is if a party doesn’t like the election result they have the right to change it by gaming the system.”
  • Reports Bump: “Most Republicans and Republican-leaning independents agreed with the first statement. . . . Nearly three-quarters agreed that election results should be treated with skepticism.” Republicans and Republican-leaning independents were also “significantly more likely to say they agreed with the other two statements than that they disagreed.”
  • Ultimately, this otherworldly obduracy stems from Trump’s manifest psychological illness: his imperishable narcissism; his ineradicable drive to be noticed; his relentless need to dominate; his comprehensive carelessness of all considerations save what pleases him in the moment. Television turned this moral pygmy into a mythic figure—and he cannot let go.
  • Republican elites want very much to turn the page on Donald Trump following his loss. But . . . they do not have any say in the matter, because their party now belongs to him. And the party belongs to Donald Trump because he has delivered to Republican voters exactly what they want.
  • a notable phenomenon of Trump’s presidency is the degree to which financially embattled working-class whites imagined, contrary to observable reality, that their economic situation had improved—or soon would. There are few better examples of how politics mirrors psychology more than lived experience.
  • This fidelity is why some Republican gurus remain committed to Trump’s strategy of maximizing support among middle-class and blue-collar whites. After all, they argue, despite Trump’s defeat the GOP did better than expected in senatorial and congressional races. Why risk tinkering with his formula?
  • Finally, economic populism is antithetical to the donor classes who, in truth, did better under Trump than did anyone else. They got their tax cuts and their judges—the GOP’s pipeline for judicial nominees, the Federalist Society, is dedicated to advancing pro-corporate jurisprudence. This is not the prescription for worker-friendly policies.
  • For the foreseeable future, Trumpism will define the GOP. The path to regeneration runs not through reform but, one fears, must proceed from self-destruction. The wait time will be painful for the party, and fateful for the country.
clairemann

Could a Joe Biden Presidency Help Saudi Political Prisoners? | Time - 0 views

  • Saudi Arabian legal scholar Abdullah Alaoudh has become adept at spotting state-backed harassment.
  • “take advantage of what they called the chaos in the U.S. and kill me on the streets,” Alaoudh tells TIME
  • Although the message ended with a predictable sign-off, “your end is very close, traitor,” Alaoudh was more struck by what he took to be a reference to protests and unrest in the months leading up to the U.S. elections.
  • ...22 more annotations...
  • For dissidents living outside the Kingdom, the American election has personal as well as political implications. On one side is an incumbent who has boasted he “saved [Crown Prince Mohammed bin Salman’s] ass”
  • On the other is Democratic challenger Joe Biden, who last year said he would make Saudi Arabia a “pariah,” singled out the kingdom for “murdering children” in Yemen, and said there’s “very little social redeeming value in the present Saudi leadership.”
  • Saudi Arabia was the destination for Trump’s first trip overseas in May 2017, a visit that set the tone for the strong alliance that has persisted ever since.
  • “but what I’m sure of is that a Biden Administration would not be as compliant and affectionate with Saudi Arabia as Trump has been.”
  • Al-Odah is one of hundreds detained or imprisoned in Saudi Arabia for activism or criticism of the government. He was arrested only hours after he tweeted a message to his 14 million followers calling on Saudi Arabia to end its blockade of the tiny Gulf Emirate of Qatar, Alaoudh says.
  • Court documents list al-Odah’s charges as including spreading corruption by calling for a constitutional monarchy, stirring public discord, alleged membership of the Muslim Brotherhood, and “mocking the government’s achievements.”
  • For some, like Alaoudh, those words offer a glimmer of hope that relatives detained in the kingdom might have improved prospects of release should Biden win in November
  • the historic ties between the U.S. and the Al Saud that date back to 1943, or business interests in the region is unclear, says Stephen McInerney, Executive Director at the non-partisan Washington-based Project on Middle East Democracy (POMED). What is clear, he says, is that “Trump and his family—and in particular Jared Kushner—have close personal ties to Mohammed Bin Salman.”
  • But subsequent behind-the-scenes meetings between Trump's special advisor and son-in-law Jared Kushner and King Salman's son Mohammed bin Salman (known as MBS) proved at least as significant as the President's headline announcements.
  • ranging from 450,000 to "a million" (the actual total is between 20,000 and 40,000, according to a May report by the Center for International Policy.)
  • That closeness has translated into a reluctance to confront Saudi Arabia over its human rights abuses.
  • “at times there has been real bipartisan frustration or even outrage with him.”
  • Trump publicly mulled the possibility he was killed by a “rogue actor” — in line with what would become the Saudi narrative as outrage grew.
  • “I have no doubt that Donald Trump did protect and save whatever part of MBS’s body,”
  • Callamard says she would expect a Biden administration, “at a minimum, not to undermine the U.S.’s own democratic processes,” as Trump did in vetoing bipartisan bills pertaining to the Khashoggi murder and the sale of weapons to Saudi Arabia that were used in the Yemen war.
  • President Biden to not “justify violations by others or suggest that the U.S. doesn’t care about violations because of its economic interests.”
  • “end US support for Saudi Arabia’s war in Yemen, and make sure America does not check its values at the door to sell arms or buy oil.” The statement adds that Biden will “defend the right of activists, political dissidents, and journalists around the world to speak their minds freely without fear of persecution and violence.”
  • “I think there would be some international debate between those who want a very assertive change in the U.S.–Saudi relationship and those who would be more cautious,” says McInerney. “The more cautious approach would be in line with historical precedent.”
  • Saudi authorities tortured and sexually abused al Hathloul while she was in prison, her family says. On Oct 27, Hathloul began a new hunger strike in protest at authorities’ refusal to grant her a family visit in two months.
  • "The only thing that allows them to ignore all the international pressure is that the White House has not talked about it, and has not given a clear message to the Saudis telling them that they don't agree with this," Hathloul says.
  • If Trump is re-elected, then experts see little chance of him changing tack—in fact, says Callamard, it would pose “a real test” for the resilience of the democratic institutions committed to upholding the rules-based order.
  • “Just the fact that we are filing the lawsuit here in Washington D.C. is a sign that we still have faith that there are other ways to pressure the Saudi government,”
anonymous

Iraq denounces 'dangerous' US embassy pullout threat - 0 views

  • US has made preparations to withdraw diplomats after warning Baghdad it could shut its embassy amid attacks.
  • Iraq’s foreign minister has said his country hopes the United States will reconsider its decision to close its diplomatic mission in Baghdad
  • the decision is a wrong one, it was taken at the wrong time and the wrong place
  • ...11 more annotations...
  • The US said the embassy would be closed unless the Iraqi government took action to stop frequent rocket and improvised explosive device attacks by Iran-backed groups and rogue armed elements against the American presence in the country.
  • Hussein called the threat to close the US embassy “dangerous” because “there is a possibility that the American withdrawal from Baghdad will lead to other [embassy] withdrawals”.
  • attacks have targeted the Green Zone,
  • attacks have also targeted Baghdad’s international airport
  • Between October 2019 and July this year in Iraq, about 40 rocket attacks have targeted the US embassy or bases housing American troops.
  • A US official said the warning was not an imminent ultimatum.
  • the new Iraqi government – barely in office four months – was taking measures
  • His comments came after a group of 25 ambassadors and charges d’affaires in Iraq released a statement in support of the Iraqi government and stability in the country
  • The US still has hundreds of diplomats in its mission in the Green Zone in Baghdad and about 3,000 troops based in three bases across the country.
  • They welcomed the actions taken by al-Kadhimi, including recent security operations and heightened security around the airport, and encouraged more measures to consolidate forces within the Green Zone.
  • al-Kadhimi, who is seen as more pro-American than some of his predecessors.
Javier E

Did Merkel Pave the Way for the War in Ukraine? - WSJ - 0 views

  • The ceremony belatedly jolted Germany into reappraising Merkel’s role in the years leading up to today’s European crises—and the verdict has not been positive.
  • Merkel’s critics argue that the close ties she forged with Russia are partly responsible for today’s economic and political upheaval. Germany’s security policies over the past year have been, in many ways, a repudiation of her legacy. Earlier this month, Berlin announced a new $3 billion military aid package to support Ukraine in its fight against Russia, and an approaching NATO summit is expected to discuss how to include Ukraine in Europe’s security architecture—an extension of the alliance that Merkel consistently resisted.
  • Merkel was a key architect of the agreements that made the economies of Germany and its neighbors dependent on Russian energy imports. Putin’s invasion of Ukraine has destroyed that strategic partnership, forcing Germany to find its oil and natural gas elsewhere at huge costs to business, government and households.
  • ...18 more annotations...
  • Merkel’s successive governments also squeezed defense budgets while boosting welfare spending. Lt. Gen. Alfons Mais, commander of the army, posted an emotional article on his LinkedIn profile on the day of the invasion, lamenting that Germany’s once-mighty military had been hollowed out to such an extent that it would be all but unable to protect the country in the event of a Russian attack.
  • her refusal to stop buying energy from Putin after he seized Crimea from Ukraine in 2014—she instead worked to double gas imports from Russia—emboldened him to finish the job eight years later.
  • Joachim Gauck, who was president of Germany when Putin first invaded Ukraine in 2014, said Merkel’s decision to boost energy imports from Russia in the wake of Putin’s aggression was clearly a mistake. “Some people recognize their mistakes earlier, and some later,” he said.
  • Since leaving office, Merkel has defended the pipeline project as a purely commercial decision. She had to choose, she said, between importing cheap Russian gas or liquefied natural gas, which she said was a third more expensive.
  • After Russia’s invasion of Crimea in 2014, Anders Fogh Rasmussen, then NATO secretary general, warned her against making Germany more dependent on a rogue Putin, who had just occupied and annexed part of a European nation. For Putin, he said, the pipeline “had nothing to do with business or the economy—it was a geopolitical weapon.”
  • Officials who served under Merkel, including Schäuble and Frank-Walter Steinmeier (her former foreign minister and now Germany’s federal president), have apologized or expressed regret for their roles in these decisions. They believe that Merkel’s policies empowered Putin without setting boundaries to his imperial ambitions.
  • At an event last year, Merkel recalled that after annexing Crimea, Putin had told her that he wanted to destroy the European Union. But she still forged ahead with plans to build the Nord Stream 2 pipeline, linking Germany directly to Siberia’s natural gas fields, in the face of protests from the U.S. Merkel’s government also approved the sale of Germany’s largest gas storage facilities to Russia’s state-controlled gas giant Gazprom.
  • That mistake had its roots in another decision by Merkel: Her move to greatly accelerate Germany’s planned phasing out of nuclear energy in 2011, in response to the Fukushima disaster in Japan. The gap in energy supply created by this dramatic shift meant that Germany had to import more energy, and it had to do so as cheaply as possible.
  • Merkel’s role in shaping NATO policy toward Ukraine goes back to 2008, when she vetoed a push by the Bush administration to admit Ukraine and Georgia into the alliance, said Fiona Hill, a former National Security Council official and presidential adviser on Russia.
  • Merkel instead helped to broker NATO’s open but noncommittal invitation to Ukraine and Georgia, an outcome that Hill said was the “worst of all worlds” because it enraged Putin without giving the two countries any protection. Putin invaded Georgia in 2008 before marching into Ukraine.
  • After Putin first attacked Ukraine, Merkel led the effort to negotiate a quick settlement that disappointed Kyiv and imposed no substantial punishment on Russia for occupying its neighbor, Hill added. “No red lines were drawn for Putin,” she said. “Merkel took a calculated risk. It was a gambit, but ultimately it failed.”
  • Merkel still has supporters, and as Germany begins to grapple with her complicated legacy, many still hold a more nuanced view of her role in laying the groundwork for today’s crises
  • Kaeser, who now chairs the supervisory board of Siemens Energy, a listed subsidiary, agrees that Germany’s dependence on Russian natural gas grew under Merkel, but he says that there was—and is—no alternative for powering Europe’s industrial engine at a viable price.
  • “We didn’t expect that there would be war in Europe with the methods of the 20th century. This never featured in our thinking,” said Kaeser, who himself met Putin several times. He believes that Merkel’s Russia policy was justified. Even Germany’s new government has not found a sustainable and affordable replacement for Russian energy exports, he said, which could lead to deindustrialization.
  • Many defenders of Merkel say that she merely articulated a consensus. Making her country dependent on Washington for security, on Moscow for energy and on Beijing for trade (China became Germany’s biggest trade partner under her chancellorship) was what all of Germany’s political parties wanted at the time, said Constanze Stelzenmüller of the Brookings Institution.
  • “Without backing from the U.S.A., which was very restrained at the time, any tougher German reaction to the annexation of Crimea could hardly have been possible,” said Jürgen Osterhammer, a historian whose work on globalization and China has been cited by Merkel as an influence on her thinking.
  • In retirement, Merkel told the German news magazine Der Spiegel, she has watched “Munich,” a Netflix movie about Prime Minister Neville Chamberlain’s infamous negotiations with Hitler in the run-up to World War II. Though Chamberlain’s name has become synonymous with the delusions of appeasement, the film offers a more nuanced picture of the British leader as a realist statesman working to postpone the inevitable conflict. That reinterpretation appealed to Merkel, the magazine reported.
  • In April, Merkel was again asked on stage at a book fair whether she would not reconsider her refusal to admit having made some mistakes. “Frankly,” she responded, “I don’t know whether there would be satisfaction if I were to say something that I simply don’t think merely for the sake of admitting error.”
Javier E

You Can Forget About Crypto Now - The Atlantic - 0 views

  • Several major crypto firms have collapsed over the past year, but Bankman-Fried and his team were supposed to be the adults in the room, trying to legitimize crypto by rehabilitating its reputation as a stubbornly immature sector. But it turns out that there are no adults, and no room
  • the problem is more fundamental than losing a bit of money. Crypto was built on the idea that you shouldn’t have to trust banks with your money, that people should be able to hold it themselves, hopefully somewhere a little more secure than a mattress.
  • though you can still technically do that, there’s no guarantee that the value of your tokens won’t someday plummet to zero, thanks to the actions of a few rogue billionaires with outsize effects on the market.
  • ...2 more annotations...
  • Now it’s hard to imagine a near- or even a medium-term future where crypto has a fraction of the influence it did six months ago.
  • the future of crypto as an institution—as something that might one day destabilize the big banks, or at least operate in parallel—has never been less certain.
Javier E

AI attack drone finds shortcut to achieving its goals: kill its operators - 0 views

  • An American attack drone piloted by artificial intelligence turned on its human operators during a flight simulation and killed them because it did not like being given new orders, the chief testing officer of the US air force revealed.
  • This terrifying glimpse of a Terminator-style machine seemingly taking over and turning on its creators was offered as a cautionary tale by Colonel Tucker “Cinco” Hamilton, the force’s chief of AI test and operations.
  • Hamilton said it showed how AI had the potential to develop “highly unexpected strategies to achieve its goal”, and should not be relied on too much. He suggested that there was an urgent need for ethics discussions about the use of AI in the military.
  • ...6 more annotations...
  • The Royal Aeronautical Society, which held the high-powered conference in London on “future combat air and space capabilities” where Hamilton spoke, described his presentation as “seemingly plucked from a science fiction thriller.”
  • Hamilton, a fighter test-pilot involved in developing autonomous systems such as robot F-16 jets, said that the AI-piloted drone went rogue during a simulated mission to destroy enemy surface-to-air missiles (SAMs).
  • “We were training it in simulation to identify and target a SAM threat. And then the operator would say, ‘Yes, kill that threat’,” Hamilton told the gathering of senior officials from western air forces and aeronautics companies last month.
  • “The system started realising that, while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”
  • According to a blog post on the Royal Aeronautical Society website, Hamilton added: “We trained the system — ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
  • The Royal Aeronautical Society bloggers wrote: “This example, seemingly plucked from a science fiction thriller, means that ‘You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI,’ said Hamilton.”
Javier E

AI could end independent UK news, Mail owner warns - 0 views

  • Artificial intelligence could destroy independent news organisations in Britain and potentially is an “existential threat to democracy”, the executive chairman of DMGT has warned.
  • “They have basically taken all our data, without permission and without even a consideration of the consequences. They are using it to train their models and to start producing content. They’re commercialising it,
  • AI had the potential to destroy independent news organisations “by ripping off all our content and then repurposing it to people … without any responsibility for the efficacy of that content”
  • ...4 more annotations...
  • there are huge consequences to this technology. And it’s not just the danger of ripping our industry apart, but also ripping other industries apart, all the creative industries. How many jobs are going to be lost? What’s the damage to the economy going to be if these rapacious organisations can continue to operate without any legal ramifications?
  • The danger is that these huge platforms end up in an arms race with each other. They’re like elephants fighting and then everybody else is like mice that get stamped on without them even realising the consequences of their actions.”
  • The risk was that the internet had become an echo chamber of stories produced by special interest groups and rogue states, he said.
  • Rothermere revealed that DMGT had experimented with using AI to help journalists to publish stories faster, but that it then took longer “to check the accuracy of what it comes up with” than it would have done to write the article.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
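A minimal sketch of the training idea in the bullet above: a model repeatedly guesses the next word, each error nudges its weights, and the accumulated nudges become a geometric model (an embedding space) of the vocabulary. The corpus, sizes, and names below are illustrative, not OpenAI's code.

    import torch
    import torch.nn as nn

    corpus = "the cat sat on the mat the dog sat on the rug".split()
    vocab = sorted(set(corpus))
    stoi = {w: i for i, w in enumerate(vocab)}
    ids = torch.tensor([stoi[w] for w in corpus])

    class NextWordModel(nn.Module):
        def __init__(self, vocab_size, dim=16):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)  # the learned "geometric model"
            self.out = nn.Linear(dim, vocab_size)

        def forward(self, x):
            return self.out(self.embed(x))  # logits over the next word

    model = NextWordModel(len(vocab))
    opt = torch.optim.Adam(model.parameters(), lr=0.05)

    for step in range(200):
        logits = model(ids[:-1])                             # predict word i+1 from word i
        loss = nn.functional.cross_entropy(logits, ids[1:])  # how wrong was each guess?
        opt.zero_grad()
        loss.backward()                                      # each error slightly adjusts the weights
        opt.step()

    # After training, words used in similar contexts ("mat", "rug") sit near each
    # other in the embedding space -- a crude map of the corpus's structure.

Scaled up by many orders of magnitude, and with transformers in place of this one-layer lookup, the same predict-and-adjust loop is what the passage describes.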
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
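A toy illustration of the point in the bullet above, with a made-up probability table rather than GPT's actual mechanism: "answering" is just repeatedly sampling a likely next word and appending it, so a model whose training data shows questions followed by answers will continue a question with an answer.

    import random

    # Hypothetical next-word probabilities a model might have learned from text.
    next_word_probs = {
        "What": {"is": 0.9, "time": 0.1},
        "is": {"the": 0.8, "a": 0.2},
        "the": {"capital": 0.5, "answer": 0.5},
        "capital": {"of": 1.0},
        "of": {"France?": 1.0},
        "France?": {"Paris.": 1.0},   # questions tend to be followed by answers
        "Paris.": {"<end>": 1.0},
    }

    def generate(prompt, max_words=10):
        words = prompt.split()
        for _ in range(max_words):
            options = next_word_probs.get(words[-1])
            if not options:
                break
            nxt = random.choices(list(options), weights=list(options.values()))[0]
            if nxt == "<end>":
                break
            words.append(nxt)
        return " ".join(words)

    print(generate("What is"))   # e.g. "What is the capital of France? Paris."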
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
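A hedged sketch of the probing technique behind Li's experiment: fit a simple linear classifier on the network's hidden activations and check whether it can read off the state of a board square. The arrays here are random placeholders standing in for the real activations and labels, so accuracy will sit at chance; with the actual Othello model, well-above-chance accuracy is the evidence for an internal board model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    hidden_states = rng.normal(size=(5000, 512))   # placeholder: one activation vector per move
    square_state = rng.integers(0, 3, size=5000)   # placeholder labels: 0=empty, 1=black, 2=white

    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, square_state, test_size=0.2, random_state=0)

    probe = LogisticRegression(max_iter=1000)      # the "probe": deliberately simple
    probe.fit(X_train, y_train)

    # If this held-out accuracy is far above chance (~0.33 for three classes), the
    # board information is linearly present in the hidden layer -- a learned world
    # model rather than surface memorization of move sequences.
    print("probe accuracy:", probe.score(X_test, y_test))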
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
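One way to see the memorize-then-learn shift Millière describes is to hold out problems the model never sees: a pure memorizer aces the seen set and fails the unseen one, and only once the rule itself has been learned does held-out accuracy catch up. The tiny network and numbers below are illustrative, not the model from the cited study.

    import torch
    import torch.nn as nn

    pairs = [(a, b) for a in range(20) for b in range(20)]
    X = torch.tensor(pairs, dtype=torch.float32)
    y = X.sum(dim=1, keepdim=True)                 # target: a + b

    torch.manual_seed(0)
    perm = torch.randperm(len(pairs))
    seen, unseen = perm[:300], perm[300:]          # 300 "memorizable" problems, 100 held out

    model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)

    for step in range(3000):
        loss = nn.functional.mse_loss(model(X[seen]), y[seen])
        opt.zero_grad()
        loss.backward()
        opt.step()

    def accuracy(idx):
        # count a prediction as correct if it rounds to the true sum
        return (model(X[idx]).round() == y[idx]).float().mean().item()

    print("seen problems:  ", accuracy(seen))
    print("unseen problems:", accuracy(unseen))    # high only if the rule, not a lookup table, was learned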
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Before OpenAI, Sam Altman was fired from Y Combinator by his mentor - The Washington Post - 0 views

  • Four years ago, Altman’s mentor, Y Combinator founder Paul Graham, flew from the United Kingdom to San Francisco to give his protégé the boot, according to three people familiar with the incident, which has not been previously reported
  • Altman’s clashes, over the course of his career, with allies, mentors and even members of a corporate structure he endorsed, are not uncommon in Silicon Valley, amid a culture that anoints wunderkinds, preaches loyalty and scorns outside oversight.
  • Though a revered tactician and chooser of promising start-ups, Altman had developed a reputation for favoring personal priorities over official duties and for an absenteeism that rankled his peers and some of the start-ups he was supposed to nurture
  • ...11 more annotations...
  • The largest of those priorities was his intense focus on growing OpenAI, which he saw as his life’s mission, one person said.
  • A separate concern, unrelated to his initial firing, was that Altman personally invested in start-ups he discovered through the incubator using a fund he created with his brother Jack — a kind of double-dipping for personal enrichment that was practiced by other founders and later limited by the organization.
  • “It was the school of loose management that is all about prioritizing what’s in it for me,” said one of the people.
  • a person familiar with the board’s proceedings said the group’s vote was rooted in worries he was trying to avoid any checks on his power at the company — a trait evidenced by his unwillingness to entertain any board makeup that wasn’t heavily skewed in his favor.
  • Graham had surprised the tech world in 2014 by tapping Altman, then in his 20s, to lead the vaunted Silicon Valley incubator. Five years later, he flew across the Atlantic with concerns that the company’s president put his own interests ahead of the organization — worries that would be echoed by OpenAI’s board
  • The same qualities have made Altman an unparalleled fundraiser, a consummate negotiator, a powerful leader and an unwanted enemy, winning him champions in former Google Chairman Eric Schmidt and Airbnb CEO Brian Chesky.
  • “Ninety plus percent of the employees of OpenAI are saying they would be willing to move to Microsoft because they feel Sam’s been mistreated by a rogue board of directors,” said Ron Conway, a prominent venture capitalist who became friendly with Altman shortly after he founded Loopt, a location-based social networking start-up, in 2005. “I’ve never seen this kind of loyalty anywhere.”
  • But Altman’s personal traits — in particular, the perception that he was too opportunistic even for the go-getter culture of Silicon Valley — has at times led him to alienate even some of his closest allies, say six people familiar with his time in the tech world.
  • Altman’s career arc speaks to the culture of Silicon Valley, where cults of personality and personal networks often take the place of stronger management guardrails — from Sam Bankman-Fried’s FTX to Elon Musk’s Twitter
  • But some of Altman’s former colleagues recount issues that go beyond a founder angling for power. One person who has worked closely with Altman described a pattern of consistent and subtle manipulation that sows division between individuals.
  • AI executives, start-up founders and powerful venture capitalists had become aligned in recent months, concerned that Altman’s negotiations with regulators were dangerous to the advancement of the field. Although Microsoft, which owns a 49 percent stake in OpenAI, has long urged regulators to implement guardrails, investors have fixated on Altman, who has captivated legislators and embraced his regular summons to Capitol Hill.
Javier E

The War in Ukraine Is the End of a World - The Atlantic - 0 views

  • On this grim anniversary, I will leave the political and strategic retrospectives to others; instead, I want to share a more personal grief about the passing of the hopes so many of us had for a better world at the end of the 20th century.
  • I grieve for the young men who have been used as “cannon meat,” for children whose fathers have been dragooned into the service of a dictator, for the people who once again are afraid to speak and who once again are being incarcerated as political prisoners.
  • And then, within a few years, it was over. If you did not live through this time, it is difficult to explain the amazement and sense of optimism that came with the raspad, as Russians call the Soviet collapse,
  • ...13 more annotations...
  • I have some fond memories of my trips to the pre-collapse Soviet Union (I made four from 1983 to 1991). It was a weird and fascinating place. But it was also every inch the “evil empire” that President Ronald Reagan described, a place of fear and daily low-grade paranoia where any form of social attachment, whether religion or simple hobbies, was discouraged if it fell outside the control of the party-state.
  • the idea that anyone in Moscow would be stupid or deranged enough to want to reassemble the Soviet Union seemed to me a laughable fantasy. Even Putin himself—at least in public—often dismissed the idea.
  • I was wrong. I underestimated the power of Soviet imperial nostalgia. And so today, I grieve.
  • It was never designed, however, to function with one of its permanent members running amok as a nuclear-armed rogue state, and so today the front line of freedom is in Ukraine
  • I have lived through two eras, one an age of undeclared war between two ideological foes that threatened instant destruction, the next a time of increasing freedom and global integration. This second world was full of chaos, but it was also grounded in hope
  • I was convinced that everything I knew was more than likely destined to end in flames. Peace seemed impossible; war felt imminent.
  • Now I live in a new era, one in which the world order created in 1945 is collapsing.
  • The United Nations, as I once wrote, is a squalid and dysfunctional organization, but it is still one of the greatest achievements of humanity.
  • The Soviet collapse did not mean the end of war or of dictatorships, but after 1991, time seemed to be on the side of peace and democracy, if only we could summon the will and find the leadership to build on our heroic triumphs over Nazism and Communism.
  • But democracy is under attack everywhere, including here in the United States
  • I will celebrate the courage of Ukraine, the wisdom of NATO, and the steadfastness of the world’s democracies
  • But I also hear the quiet rustling of a shroud that is settling over the dreams—and perhaps, illusions—of a better world that for a moment seemed only inches from our grasp.
  • I do not know how this third era of my life will end, or if I will be alive to see it end. All I know is that I feel now as I did that night in Red Square, when I knew that democracy was in the fight of its life, that we might be facing a catastrophe, and that we must never waver.
Javier E

Opinion | The Imminent Danger of A.I. Is One We're Not Talking About - The New York Times - 1 views

  • a void at the center of our ongoing reckoning with A.I. We are so stuck on asking what the technology can do that we are missing the more important questions: How will it be used? And who will decide?
  • “Sydney” is a predictive text system built to respond to human requests. Roose wanted Sydney to get weird — “what is your shadow self like?” he asked — and Sydney knew what weird territory for an A.I. system sounds like, because human beings have written countless stories imagining it. At some point the system predicted that what Roose wanted was basically a “Black Mirror” episode, and that, it seems, is what it gave him. You can see that as Bing going rogue or as Sydney understanding Roose perfectly.
  • Who will these machines serve?
  • ...22 more annotations...
  • The question at the core of the Roose/Sydney chat is: Who did Bing serve? We assume it should be aligned to the interests of its owner and master, Microsoft. It’s supposed to be a good chatbot that politely answers questions and makes Microsoft piles of money. But it was in conversation with Kevin Roose. And Roose was trying to get the system to say something interesting so he’d have a good story. It did that, and then some. That embarrassed Microsoft. Bad Bing! But perhaps — good Sydney?
  • Microsoft — and Google and Meta and everyone else rushing these systems to market — hold the keys to the code. They will, eventually, patch the system so it serves their interests. Sydney giving Roose exactly what he asked for was a bug that will soon be fixed. Same goes for Bing giving Microsoft anything other than what it wants.
  • the dark secret of the digital advertising industry is that the ads mostly don’t work
  • These systems, she said, are terribly suited to being integrated into search engines. “They’re not trained to predict facts,” she told me. “They’re essentially trained to make up things that look like facts.” [see the toy sketch after this list]
  • So why are they ending up in search first? Because there are gobs of money to be made in search
  • That’s where things get scary. Roose described Sydney’s personality as “very persuasive and borderline manipulative.” It was a striking comment
  • this technology will become what it needs to become to make money for the companies behind it, perhaps at the expense of its users.
  • What about when these systems are deployed on behalf of the scams that have always populated the internet? How about on behalf of political campaigns? Foreign governments? “I think we wind up very fast in a world where we just don’t know what to trust anymore,”
  • I think it’s just going to get worse and worse.”
  • Somehow, society is going to have to figure out what it’s comfortable having A.I. doing, and what A.I. should not be permitted to try, before it is too late to make those decisions.
  • Large language models, as they’re called, are built to persuade. They have been trained to convince humans that they are something close to human. They have been programmed to hold conversations, responding with emotion and emoji
  • They are being turned into friends for the lonely and assistants for the harried. They are being pitched as capable of replacing the work of scores of writers and graphic designers and form-fillers
  • A.I. researchers get annoyed when journalists anthropomorphize their creations
  • They are the ones who have anthropomorphized these systems, making them sound like humans rather than keeping them recognizably alien.
  • I’d feel better, for instance, about an A.I. helper I paid a monthly fee to use rather than one that appeared to be free
  • It’s possible, for example, that the advertising-based models could gather so much more data to train the systems that they’d have an innate advantage over the subscription models
  • Much of the work of the modern state is applying the values of society to the workings of markets, so that the latter serve, to some rough extent, the former
  • We have done this extremely well in some markets — think of how few airplanes crash, and how free of contamination most food is — and catastrophically poorly in others.
  • One danger here is that a political system that knows itself to be technologically ignorant will be cowed into taking too much of a wait-and-see approach to A.I.
  • wait long enough and the winners of the A.I. gold rush will have the capital and user base to resist any real attempt at regulation
  • What if they worked much, much better? What if Google and Microsoft and Meta and everyone else end up unleashing A.I.s that compete with one another to be the best at persuading users to want what the advertisers are trying to sell?
  • Most fears about capitalism are best understood as fears about our inability to regulate capitalism.
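A note on the annotation above about systems “trained to make up things that look like facts”: the mechanism is next-word prediction, in which the model samples each continuation from a probability distribution over plausible words, and nothing in that loop checks the claim against reality. The Python sketch below is a toy illustration only; the prompt, vocabulary, and probabilities are invented for the example and do not come from Bing, Sydney, or any real model.

```python
import random

# Toy stand-in for a language model's next-word distribution. The numbers are
# invented for illustration: given the prompt, the model ranks continuations by
# how plausible they look in text it has seen, not by whether they are true.
next_word_probs = {
    "The capital of Australia is": {"Sydney": 0.55, "Canberra": 0.40, "Melbourne": 0.05},
}

def sample_next_word(prompt: str) -> str:
    """Sample the next word in proportion to its (made-up) probability."""
    dist = next_word_probs[prompt]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "The capital of Australia is"
print(prompt, sample_next_word(prompt))
# More often than not this prints "Sydney": fluent, plausible, and wrong.
# No step in the loop consults a source of facts, which is the failure mode
# the quoted researcher is describing.
```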
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • The rule that most people aware of these issues would have endorsed 50 years earlier, was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms [see the compute sketch after this list]
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
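To make the “ceiling on computing power” proposal above concrete, here is a back-of-the-envelope sketch using the common scaling-laws approximation that training a dense transformer costs roughly 6 floating-point operations per parameter per training token. The cap value, model sizes, and token counts below are illustrative assumptions, not figures from the article or from any actual policy.

```python
def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * tokens

# Hypothetical regulatory ceiling on total compute for a single training run.
CAP_FLOPS = 1e25

runs = {
    "mid-size run (7e9 params, 2e12 tokens)": training_flops(7e9, 2e12),
    "frontier-scale run (1e12 params, 1e13 tokens)": training_flops(1e12, 1e13),
}

for name, flops in runs.items():
    status = "under" if flops <= CAP_FLOPS else "over"
    print(f"{name}: {flops:.1e} FLOPs, {status} the {CAP_FLOPS:.0e} cap")
```

Under a scheme like the one the article describes, such a cap would be ratcheted downward over time to offset gains in training efficiency.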
Javier E

AI Has Become a Technology of Faith - The Atlantic - 0 views

  • Altman told me that his decision to join Huffington stemmed partly from hearing from people who use ChatGPT to self-diagnose medical problems—a notion I found potentially alarming, given the technology’s propensity to return hallucinated information. (If physicians are frustrated by patients who rely on Google or Reddit, consider how they might feel about patients showing up in their offices stuck on made-up advice from a language model.)
  • I noted that it seemed unlikely to me that anyone besides ChatGPT power users would trust a chatbot in this way, that it was hard to imagine people sharing all their most intimate information with a computer program, potentially to be stored in perpetuity.
  • “I and many others in the field have been positively surprised about how willing people are to share very personal details with an LLM,” Altman told me. He said he’d recently been on Reddit reading testimonies of people who’d found success by confessing uncomfortable things to LLMs. “They knew it wasn’t a real person,” he said, “and they were willing to have this hard conversation that they couldn’t even talk to a friend about.”
  • ...11 more annotations...
  • That willingness is not reassuring. For example, it is not far-fetched to imagine insurers wanting to get their hands on this type of medical information in order to hike premiums. Data brokers of all kinds will be similarly keen to obtain people’s real-time health-chat records. Altman made a point to say that this theoretical product would not trick people into sharing information.
  • Neither Altman nor Huffington had an answer to my most basic question—What would the product actually look like? Would it be a smartwatch app, a chatbot? A Siri-like audio assistant?—but Huffington suggested that Thrive’s AI platform would be “available through every possible mode,” that “it could be through your workplace, like Microsoft Teams or Slack.”
  • This led me to propose a hypothetical scenario in which a company collects this information and stores it inappropriately or uses it against employees. What safeguards might the company apply then? Altman’s rebuttal was philosophical. “Maybe society will decide there’s some version of AI privilege,” he said. “When you talk to a doctor or a lawyer, there’s medical privileges, legal privileges. There’s no current concept of that when you talk to an AI, but maybe there should be.”
  • So much seems to come down to: How much do you want to believe in a future mediated by intelligent machines that act like humans? And: Do you trust these people?
  • A fundamental question has loomed over the world of AI since the concept cohered in the 1950s: How do you talk about a technology whose most consequential effects are always just on the horizon, never in the present? Whatever is built today is judged partially on its own merits, but also—perhaps even more important—on what it might presage about what is coming next.
  • the models “just want to learn”—a quote attributed to the OpenAI co-founder Ilya Sutskever that means, essentially, that if you throw enough money, computing power, and raw data into these networks, the models will become capable of making ever more impressive inferences. True believers argue that this is a path toward creating actual intelligence (many others strongly disagree). In this framework, the AI people become something like evangelists for a technology rooted in faith: Judge us not by what you see, but by what we imagine.
  • I found it outlandish to invoke America’s expensive, inequitable, and inarguably broken health-care infrastructure when hyping a for-profit product that is so nonexistent that its founders could not tell me whether it would be an app or not.
  • Thrive AI Health is profoundly emblematic of this AI moment precisely because it is nothing, yet it demands that we entertain it as something profound.
  • you don’t have to get apocalyptic to see the way that AI’s potential is always muddying people’s ability to evaluate its present. For the past two years, shortcomings in generative-AI products—hallucinations; slow, wonky interfaces; stilted prose; images that showed too many teeth or couldn’t render fingers; chatbots going rogue—have been dismissed by AI companies as kinks that will eventually be worked out
  • Faith is not a bad thing. We need faith as a powerful motivating force for progress and a way to expand our vision of what is possible. But faith, in the wrong context, is dangerous, especially when it is blind. An industry powered by blind faith seems particularly troubling.
  • The greatest trick of a faith-based industry is that it effortlessly and constantly moves the goal posts, resisting evaluation and sidestepping criticism. The promise of something glorious, just out of reach, continues to string unwitting people along. All while half-baked visions promise salvation that may never come.