
How Nations Are Losing a Global Race to Tackle A.I.’s Harms - The New York Times

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems.
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace.
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous.
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred. “Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.

Opinion | The Secret of America's Economic Success - The New York Times

  • there was widespread concern that the pandemic would leave lasting economic scars. After all, the 2008 financial crisis was followed by a weak recovery that left real gross domestic product in many countries far below the pre-crisis trend even a decade later. Indeed, as we approach Covid’s four-year mark, many of the world’s economies remain well short of full recovery.
  • But not the United States. Not only have we had the strongest recovery in the advanced world, but the International Monetary Fund’s latest World Economic Outlook also points out that American growth since 2019 has actually exceeded pre-Covid projections.
  • let’s take a moment to celebrate this good economic news — and try to figure out what went right with the U.S. economy.
  • Part of the answer, to be fair, is luck. Russia’s invasion of Ukraine caused a major energy shock in Europe, which had come to rely on imports of Russian natural gas. America, which exports gas, was much less affected.
  • What about inflation? When you use comparable measures, America also has the lowest inflation rate among major economies.
  • It’s true that one recent poll found that a majority of Americans and 60 percent of Republicans say that unemployment is near a 50-year high. But it’s actually near its lowest level since the 1960s.
  • A second, probably more important factor was that the United States pursued aggressively expansionary fiscal policy
  • Many economists were extremely critical, warning that this spending would fuel inflation, which it probably did for a while. But inflation has subsided, while “Big Fiscal” helped the economy get to full employment — arguably the first time we’ve had truly full employment in decades.
  • A strong job market may in turn have had major long-term benefits, by drawing previously marginalized Americans into the work force.
  • the percentage of U.S. adults in their prime working years participating in the labor force is now at its highest level in 20 years. One number I find especially striking is labor force participation by Americans with a disability, which has soared.
  • One last thing: When Covid struck, all advanced countries took strong measures to limit economic hardship, but they took different approaches. European governments generally paid employers to keep workers on their payrolls, even if they were temporarily idle. America, for the most part, let layoffs happen but protected workers with expanded unemployment benefits.
  • There was a case for each approach. Europe’s approach helped keep workers connected to their old jobs; the U.S. approach created more flexibility, making it easier for workers to move to different jobs if the post-Covid economy turned out to look quite different from the economy before the pandemic.
  • is clear: We have been remarkably successful, even if nobody will believe it.

Elon Musk's Outlook on Our Future Turns Dour - WSJ

  • these days, Musk sounds worried—about everything from cyclical business jitters to existential global concerns.
  • This past week he warned during a forum on X about “civilizational risk” stemming from the Israel-Hamas war cascading into a wider conflict that would pit the U.S. against a united China, Russia and Iran. “I think we are sleepwalking our way into World War III,” he said.
  • over the years, Musk has framed his business endeavors as striving to prevent calamity, a motivating ideal that helps inspire employees, investors and fans while inducing eye rolls among critics and rivals.
  • For him, Tesla is about trying to save humanity from global warming while SpaceX is about making humanity a multiplanetary species in case things don’t work out on Earth.
  • He said he worried that activating Starlink then would have further stoked the conflict. “I think if the Ukrainian attacks had succeeded in sinking the Russian fleet, it would have been like a mini Pearl Harbor and led to a major escalation,” he is quoted as saying in Walter Isaacson’s new biography, “Elon Musk.” 
  • “I tend to view the future as a series of probabilities—there’s certain probability that something will go wrong, some probability that it’ll go right; it’s kind of a spectrum of things. And to the degree that there is free will versus determinism, then we want to try to exercise that free will to ensure a great future.”
  • “Nuclear war probability is rising rapidly,” he tweeted last fall after months of fighting between the two countries. 
  • with the purchase of Twitter-turned-X, Musk couched the decision as keeping the social-media platform as a bastion for free speech in what he sees as a larger battle against cultural forces trying to squash diverse thought—or, as he calls it, the “woke mind virus.”
  • “Accept worst case outcome & assign it a probability, which is usually very low. Now think of good things in life & assign them probabilities—many are certain!” he tweeted a couple of years ago. “Bringing anxiety/fear to the conscious mind saps it of limbic emotional strength.”
  • “We’re like a pro sports team that has been winning the championship for so long and so many years in a row that we have forgotten what losing even looks like,” Musk said. “And that’s when the champion team loses.” 
  • “My brother believes an economic winter is coming every single day,” Kimbal Musk once told lawyers about his older sibling’s mindset during a legal procedure. 
  • “To be frank, civilization is feeling a little fragile these days,” Musk said last year during an update on SpaceX’s large rocket development. “I’m an optimist, but I think we got to protect the downside here and try to build that city on Mars as soon as possible and secure the future of life.”
  • Among his stated worries, of which he has tweeted: “a big rock will hit Earth eventually & we currently have no defense” and “population collapse due to low birth rates is a much bigger risk to civilization than global warming.”
  • he framed his creation of an artificial-intelligence startup called xAI in his typically grandiose terms, cautioning that the technology has the potential to spiral out of control and essentially turn on its master, something akin to “The Terminator” movie.
  • “I think it’s actually important for us to worry about a ‘Terminator’ future in order to avoid a ‘Terminator’ future,” he said.
  • This past week, Musk returned to calling for peace, saying U.S. policies risk pushing Russia into an alliance with China just as the Israel-Hamas war has the potential to expand. He cautioned that many people overestimate U.S. military might in such a scenario
  • “Cheery fatalism is very effective.”

Opinion | With War in Israel, the Cancel Culture Debate Comes Full Circle - The New York Times

  • Nathan Thrall’s searing new book, “A Day in the Life of Abed Salama,” struck me as important even before the obscene massacres and mass kidnappings committed by Hamas this month lit the Middle East on fire. Today, with people still struggling to understand the contours of this deeply complicated conflict, the book seems essential.
  • An expanded version of Thrall’s widely praised 2021 New York Review of Books article of the same name, the book follows a Palestinian man named Abed Salama as he searches for his 5-year-old son after a deadly school bus crash in the West Bank, a search hindered by Israel’s restrictions on Palestinian movement.
  • Thrall, the former director of the Arab-Israeli project at the International Crisis Group, uses his reported account of the Salama family’s tragedy to offer a panoramic look at life under Israel’s occupation. He is deeply concerned with Palestinian grief, but also writes rich portraits of Israelis, including Beber Vanunu, founder of a settlement in the West Bank, and Dany Tirza, architect of the separation wall that cuts through the territory.
  • Thrall was asked about his depictions of Israelis, and whether he had qualms about “humanizing the occupation.”
  • I don’t like the fact that the statement Nguyen signed gestured only vaguely at Hamas’s slaughter of Israeli civilians. In calling off his Friday evening appearance, 92NY, a Jewish organization, was playing by rules much of the left established, privileging sensitivity to traumatized communities ahead of the robust exchange of ideas.
  • “I was very glad to be asked that question,” Thrall told me. “Because that was absolutely the ambition of the book, to depict real people” rather than villains and saints.
  • if someone as evenhanded as Thrall now finds his talks being dropped, we’re in an especially repressive period. And in a time of war, particularly a war shrouded in fiercely competing narratives, free speech is more important than ever.
  • Thrall is not alone; in recent weeks several literary and cultural events by pro-Palestinian speakers or groups have been either scrapped or relocated.
  • And supporters of Israel are hardly alone in creating a censorious atmosphere; particularly on college campuses, it is Zionists who feel silenced and intimidated
  • Nevertheless, a commitment to free speech, like a commitment to human rights, shouldn’t depend on others reciprocating; such commitments are worth trying to maintain even in the face of unfairness

Electric Cars Were Already Having Issues. Then Things Got Political. - WSJ

  • Anti-“woke” backlash and high-profile politics are increasingly turning the suggestion of owning an EV into a political cudgel. Or, as Ford Motor Chief Executive Officer Jim Farley recently lamented: “They have become a political football.”
  • President Biden’s support of the transition, through subsidizing manufacturing, extending tax credits for EVs and giving money for charging stations, has come under attack from Republican rivals seeking to challenge him for the White House next year. 
  • “I don’t get why Ford and GM, why these carmakers, aren’t fighting…to make cars that are going to sell, to make cars that are going to be able to go on long distances,” Trump said at a rally during which he predicted the EV policies would lead to “hundreds of thousands of American jobs” being lost. 
  • The tensions have risen as Ford and other global automakers have spent billions of dollars designing and building EVs, a move that looked especially smart a year ago when they were caught off guard by the strong demand for their new offerings. 
  • This past week, General Motors said it would delay opening a large EV truck factory in Michigan by a year, citing a need “to better manage capital investments while aligning with evolving EV demand.” The move followed an earlier announcement by Ford pushing back to late 2024 a target of building 600,000 EVs annually. The company has also temporarily cut one of the production shifts for its electric pickup and paused construction of a $3.5 billion battery plant in Michigan. 
  • In the U.S., for every five Democrats owning an EV there are two Republicans, said Alexander Edwards, president of Strategic Vision, which surveys new-vehicle buyers. 
  • His data finds that Democrats give priority to “environmentally friendly” when buying their cars while Republicans have other things they are looking for, such as performance and prestige.
  • On the campaign trail, however, EVs don’t always sound so cool. The GOP presidential hopeful Vivek Ramaswamy, who is against subsidies, has drawn laughs as he suggests that EV buyers are motivated by “a psychological insecurity,” while former Vice President Mike Pence said during the second Republican presidential primary debate that Biden’s efforts “are driving American gasoline, automotive manufacturing, into the graveyard.”  
  • As the Democratic president talks about trying to protect automotive jobs and help the environment with green technology, Republicans raise concerns about losing work and question whether the government should subsidize EVs or mandate future zero-emission vehicle sales, as California has done.
  • “The real question is whether we’ll lead or we’ll fall behind in the race to the future; or whether we’ll build these vehicles and the batteries that go in them here in the United States or rely on other countries,” Biden said while visiting a Ford factory early in his administration. 
  • Underpinning the politics of EVs is an economic divide, made more stark by the rise of interest rates. Most EVs are more expensive than the average new vehicle—which sold for about $46,000 in September.
  • As new cars and trucks become more costly, the practical effect on buyers shows up in Strategic Vision’s survey: The median household income of new-car buyers has risen to $122,000. That is a significant increase from around $90,000, where it had been for a couple of decades until just recently. EV buyers are even better off, with a median household income of $186,000.
  • In some ways, the green car tensions are a return to the 2012 political season, when GM’s Chevrolet Volt became the embodiment of the Obama administration’s rescue of the Detroit auto industry in 2009 and efforts to promote electrified vehicles.
  • Former House Speaker Newt Gingrich, who unsuccessfully sought the Republican presidential nomination, said the problem with the “Obama car” was that one couldn’t put a gun rack in the plug-in hybrid vehicle.
  • Sales of the Volt disappointed, and Dan Akerson, then CEO of GM, was left fuming that the company hadn’t designed the sedan to become “a political punching bag.”
  • GM later killed off the Volt.

India takes strong pro-Israel stance under Modi in a departure from the past | The Guardian

  • Just a few hours after Hamas launched its assault on Israel, India’s prime minister was among the first world leaders to respond. In a strongly worded statement, Narendra Modi condemned the “terrorist attacks” and said India “stands in solidarity with Israel at this difficult hour”.
  • it was not a sentiment restricted only to the upper echelons of Indian government. As Azad Essa, a journalist and author of Hostile Homelands: The New Alliance Between India and Israel, said: “This messaging gave a clear signal to the whole rightwing internet cell in India.”
  • In the aftermath, the Indian internet factcheckers AltNews and Boom began to observe a flood of disinformation targeting Palestine pushed out by Indian social media accounts. It included fake stories about atrocities committed by Palestinians and Hamas that were sometimes shared millions of times, often using the conflict to push the same Islamophobic narrative that has been used regularly to demonise India’s Muslim population since the BJP came to power.
  • BJP-associated Facebook groups also began to push the message that Hamas represented the same Muslim threat facing India in the troubled, majority-Muslim region of Kashmir and Palestinians were sweepingly branded as jihadis.
  • A turning point came in 1999 when India went to war with Pakistan and Israel proved willing to provide arms and ammunition. It was the beginning of a defence relationship that has grown exponentially. India buys about $2bn-worth of arms from Israel every year – its largest arms supplier after Russia – and accounts for 46% of Israel’s overall weapons exports.
  • it was the election of Modi that marked a fundamental sea change. While previous governments had kept their dealings with Israel largely quiet, due to concerns of alienating foreign allies and its own vast Muslim population, Modi’s Hindu nationalist BJP government had very different priorities.
  • Essa said: “The narrative they were pushing was clear: that India and Israel are these ancient civilisations that had been derailed by outsiders – which means Muslims – and their leaders have come together, like long-lost brothers, to fulfil their destiny.”
  • The ideological alignment between the two leaders was certainly more apparent than in the past. The BJP’s ideological forefathers, and its rank and file today, have long regarded Israel as a model for the religious nationalist state, referred to as the Hindu Rashtra, that the Hindu rightwing in India hope to establish.
  • While Modi was also the first Indian prime minister to visit Ramallah in Palestine, much of the focus of his government has been on strengthening ties with Israel, be it through defence, culture, agriculture and even film-making. This year, Gautam Adani, the Indian billionaire businessman seen to be close to Modi, paid $1.2bn to acquire the strategic Israeli port of Haifa.
  • Modi’s foreign policy has also overseen a transformation in ties with Arab Gulf countries including Saudi Arabia, the United Arab Emirates and Qatar, which has been of great financial benefit to India and laid the foundation for a groundbreaking India-Middle East economic trade corridor, running all the way to Europe, which was announced at the G20 forum for international economic cooperation this year but has yet to be built.

Opinion | Get to Know the Influential Conservative Intellectuals Who Help Explain G.O.P...

  • The efforts to overturn the 2020 election failed. We’re told that’s because the institutions held. But it’s more accurate to say that most of the individuals holding powerful positions within those institutions — the White House, the Pentagon, the courts, election officials in Georgia and other states — sided with the Constitution over Mr. Trump’s desire to remain in power.
  • But what if key individuals decide differently the next time they are faced with this kind of choice? What if they have come to believe that the country is in such dire straits — has reached a state of apocalyptic decadence — that democracy is a luxury we can no longer afford?
  • A coalition of intellectual catastrophists on the American right is trying to convince people of just that
  • — giving the next generation of Republican officeholders, senior advisers, judges and appointees explicit permission and encouragement to believe that the country is on the verge of collapse.
  • The list of people making these arguments includes former officials in the Trump administration, some of whom are likely to be considered for top jobs in the event of a Trump restoration in 2024.
  • The ideas about the threat of an all-powerful totalitarian left and the dismal state of the country — even the most outlandish of them — are taken seriously by conservative politicians as well as prominent influencers on the right.
  • If Mr. Trump manages to win the presidency again in 2024, many of these intellectual catastrophists could be ready and willing to justify deeds that could well bring American liberal democracy to its knees.
  • Mr. Anton’s “Flight 93” essay originally appeared on a website with modest traffic, but two days later Rush Limbaugh was reading it aloud in its entirety on his radio show. The essay set the tone of life-or-death struggle (and related imagery) that is common among catastrophists.
  • Mr. Anton updated and amplified the argument in a 2021 book, “The Stakes: America at the Point of No Return.”
  • The prospect of Mr. Biden’s becoming president constituted an “existential threat,” Mr. Eastman said, to the survivability of the country. Would we “completely repudiate every one of our founding principles” and allow ourselves to be “eradicated”? Those were the stakes, as he viewed them.
  • Once a thinker begins to conceive of politics as a pitched battle between the righteous and those who seek the country’s outright annihilation, extraordinary possibilities open up.
  • in May 2021, Mr. Anton came to conduct a two-hour podcast with a far-right Silicon Valley tech guru and self-described “monarchist,” Curtis Yarvin, in which the two agreed that the American “regime” is today most accurately described as a “theocratic oligarchy.” In that arrangement, an elite class of progressive “priests” ensconced in executive branch agencies, the universities, elite media and other leading institutions of civil society promulgate and enforce a distorted and self-serving version of reality that illegitimately justifies their rule.
  • It culminated in Mr. Yarvin sketching a scenario in which a would-be dictator he alternatively describes as “Caesar” and “Trump” defies the laws and norms of democratic transition and uses a “Trump app” to direct throngs of his supporters on the streets of the nation’s capital to do his bidding, insulating the would-be dictator from harm and the consequences of his democracy-defying acts.
  • Mr. Anton described Caesarism as one-man rule that emerges “after the decay of a republican order, when it can no longer function.”
  • he would prefer the country to embrace the principles of “1787 forever.” But if that is no longer possible, he said, the rule of a Caesar can be a necessary method to restore order.
  • Those on the right primarily concerned about the fate of traditionalist Christian morals and worship in the United States insist that we already live in a regime that oppresses and brutalizes religious believers and conservatives. And they make those charges in a theologically inflected idiom that’s meant to address and amplify the right’s intense worries about persecution by progressives.
  • Among the most extreme catastrophists writing in this vein is Stephen Wolfe, whose book “The Case for Christian Nationalism” calls for a “just revolution” against America’s “gynocracy” (rule by women) that emasculates men, persuading them to affirm “feminine virtues, such as empathy, fairness and equality.” In its place, Mr. Wolfe proposes the installation of a “Christian prince,” or a form of “theocratic Caesarism.”
  • Other authors aspire to greater nuance by calling the dictatorship weighing down on religious believers soft totalitarianism, usually under the rule of social-justice progressivism. These writers often draw direct parallels between the fate of devout Christians in the contemporary United States and the struggles of Eastern Europeans who sought to practice their faith but were harshly persecuted by Soviet tyranny
  • the most recent book by the writer Rod Dreher, “Live Not by Lies: A Manual for Christian Dissidents.”
  • Patrick Deneen of the University of Notre Dame offers the most elaborate and intellectually sophisticated response in his recent book, “Regime Change: Toward a Postliberal Future.”
  • “Regime Change” is a much darker book that goes well beyond diagnosing America’s ills to propose what sounds, in certain passages, like a radical cure.
  • Mr. Deneen and other discontented intellectuals of the religious right can perhaps be most accurately described as political reactionaries looking to undertake a revolutionary act in reverse.
  • Growing numbers of Americans supposedly reject this outlook, demanding a postliberal government and social, cultural and economic order — basically, hard-right policies on religious and moral issues and hard left on economics. But the forces of liberalism are entrenched on the center left and center right, using every power at their disposal to prevent regime change.
  • In some passages, he advocates a “peaceful but vigorous overthrow of a corrupt and corrupting liberal ruling class” and proposes modest reforms to replace it.
  • in other passages, Mr. Deneen goes much further, describing the separation of church and state as a “totalitarian undertaking” that must be reversed so that American public life can be fully integrated with conservative forms of Christianity.
  • He even affirmatively quotes a passage from Machiavelli in which he talks of the need to use “extralegal and almost bestial” forms of resistance, including “mobs running through the streets,” in order to topple the powers that be.
  • The source of these maladies, Mr. Deneen claims, is liberalism, which until recently has dominated both political parties in the United States, imposing an ideology of individual rights and historical progress on the country from above. This ideology, he says, denigrates tradition, faith, authority and community.
  • Costin Alamariu, the person generally understood to be writing under the pseudonym Bronze Age Pervert.
  • He self-published a book in 2018, “Bronze Age Mindset,” which follows Friedrich Nietzsche and other authors beloved by the European far right in proclaiming that Western civilization itself is on the verge of collapse, its greatest achievements far in the past, its present a “garbage world” in an advanced state of decay.
  • All around us, Mr. Alamariu declares, greatness and beauty are under assault. Who are its enemies? Women, for one. (“It took 100 years of women in public life for them to almost totally destroy a civilization.”) Then there’s belief in democratic equality. (“I believe that democracy is the final cause of all the political problems I describe.”)
  • But blame must most of all be laid at the feet of the creature Mr. Alamariu calls the “bugman,” a term he uses to describe a majority of human beings alive today. This insectlike infestation venerates mediocrity and is “motivated by a titanic hatred of the well-turned-out and beautiful.”
  • Mr. Alamariu proposes breeding great men of strength who model themselves on pirates, disregarding laws and norms, plundering and taking anything they want and ultimately installing themselves as absolute rulers over the rest of us.
  • “Now imagine a man of Trump’s charisma, but who is not merely beholden to the generals, but one of them, and able to rule and intimidate them as well as seduce the many. … Caesars and Napoleons are sure to follow.”
  • In a recent essay, Mr. Alamariu wrote: “I believe in fascism or ‘something worse’ …. I believe in rule by a military caste of men who would be able to guide society toward a morality of eugenics.”
  • Mr. Alamariu’s recently self-published doctoral dissertation reached No. 23 on Amazon sitewide in mid-September. Among those on the right treating the author as a friend, ally or interlocutor worthy of respectful engagement are the prominent activist Christopher Rufo, the author Richard Hanania and the economist-blogger Tyler Cowen.
  • These writers are giving Republican elites permission and encouragement to do things that just a few years ago would have been considered unthinkable.
  • In a second term, Mr. Trump’s ambition is to fire tens of thousands of career civil servants throughout the federal bureaucracy and replace them with loyalists. He also reportedly plans to staff the executive branch with more aggressive right-wing lawyers. These would surely be people unwaveringly devoted to the president and his agenda as well as the danger the Democratic Party supposedly poses to the survival of the United States.
  • These writers also exercise a powerful influence on media personalities with large audiences. Tucker Carlson has interviewed Curtis Yarvin and declared that with regard to the 2024 election, “everything is at stake. What wouldn’t they do? What haven’t they done? How will you prepare yourself?”
  • Other right-wing influencers with large followings assert more bluntly that if conservatives lose in 2024, they will be hunted down and murdered by the regime.
  • It’s important that we respond to such statements by pointing out there is literally no evidence to support them. Other intellectual catastrophists are likewise wrong to suggest the country is ruled by a progressive tyranny, and we can know this because people on the right increasingly say such things while facing no legal consequences at all.
  • The question, then, is why the intellectual catastrophists have gotten to this point — and why others on the right are listening to them. The answer, I think, is an intense dislike of what America has become, combined with panic about the right’s ability to win sufficient power in the democratic arena to force a decisive change.
  • In refusing to accept that deal, many of the right’s most prominent writers are ceasing to behave like citizens, who must be willing to share rule with others, in favor of thinking and acting like commissars eager to serve a strongman.

How Germany's Green Party Lost Its Luster - The New York Times

  • he has conceded he misjudged the mood of crisis fatigue in the country after a winter of coping with surging energy prices in the wake of Russia’s invasion of Ukraine.
  • “The feeling of great time pressure has dissipated; instead of the fear of a loss of gas supplies, other concerns have come to the fore,” he told the newspaper Frankfurter Allgemeine Zeitung. “This change wasn’t so clear to me at first, and maybe that’s why I didn’t do everything right in the situation.”
  • Indeed, what was pragmatic to many Germans was seen as a betrayal of the party’s long-cherished principles by many of the Greens’ rank and file.
  • “The Greens were on the way to being a party of the political middle,” said Manfred Güllner, director of the Berlin-based Forsa Institute, a polling firm. “Now the Greens have landed back to exactly where they were for a long time: a small party that caters to its followers that is far removed from being a major party.”
  • As the Greens have pivoted back to their traditional agenda, the party has bumped up against the limits of what many Germans are willing to sacrifice at a time of economic insecurity stemming from the war in Ukraine, higher inflation and the lingering effects of the Covid pandemic.
  • Exhibit A in voter disillusionment was a bill that Mr. Habeck promoted requiring that newly installed home heating systems run on at least 65 percent renewable energy starting next year.
  • “They squandered a lot of their success because they seemed detached from ordinary people,” said Markus Ziener, a visiting fellow at the German Marshall Fund. “Instead of setting incentives, they were seen as telling people what’s right and what’s wrong, as wanting to lecture people.”
  • Experts said the law, which was passed in weakened form in September, has helped fuel the growing popularity of the far-right Alternative for Germany party, or AfD, which is polling at more than 20 percent, around the highest in its history.
  • Like other far-right parties across Europe, the AfD has added opposition to climate policies to its agenda, alongside issues like immigration, seeking to capitalize on the economic anxieties of working people.
  • “What happened with the Heizungsgesetz was all of a sudden literally the Greens were knocking on people’s doors, asking, ‘Show me your heating, and it has to change,’” said Andrea Römmele, a political scientist at the Hertie School in Berlin. “It was too fast.”
  • “It’s the deepest crisis in the Greens’ history,” he said. “Robert Habeck is the most talented politician in Germany by far. He has become a scapegoat. But he can get them past it.”

Three Lessons Israel Should Have Learned in Lebanon - The Atlantic

  • The ferocity of Israel’s response to the murder of more than 1,400 Israeli citizens has been such that international concern for the Palestinians of Gaza—half of whom, or more than 1 million, are children under the age of 15—has now largely eclipsed any sympathy that might have been felt for the victims of the crimes that precipitated the war in the first place.
  • Israel has a right to defend itself, and it has a right to seek to destroy, or at least severely degrade, the primary perpetrator of the attacks of October 7,
  • I am worried that Israel has staked out maximalist objectives, not for the first time, and will, as it did in 2006 against Hezbollah in Lebanon, fall far short of those objectives, allowing the enemy to claim a victory—a Pyrrhic victory, to be sure, but a victory nonetheless.
  • I had gone to graduate school in Lebanon, then moved back there in an attempt to better understand how Hezbollah had evolved into Israel’s most capable foe. My research revealed as much about Israeli missteps and weaknesses as it did about Hezbollah’s strengths.
  • If Israel is going to have any strategic success against Hamas, it needs to do three things differently from conflicts past.
  • Hezbollah took everything Israel could throw at it for a month and was still standing.
  • As noted earlier, Israel has an unfortunate tendency to lay out maximalist goals—very often for domestic consumption—that it then fails to meet
  • In 2006, for example, Israel’s then–prime minister, Ehud Olmert, told the country he was going to destroy Hezbollah, return the bodies of two Israeli prisoners, and end the rocket attacks on Israel.
  • Israel did none of the three. And although Lebanon was devastated, and Hezbollah’s leader, Hassan Nasrallah, publicly apologized for the raid that started the conflict, most observers had little doubt about who had won the conflict.
  • Strategic Humility
  • As Eliot Cohen has pointed out, the other side also has maximalist goals. Hamas and Hezbollah want nothing less than the destruction of Israel. But they are in no rush.
  • Nasrallah addressed the Arabic-speaking world for the first time since the start of this conflict on Friday. Significantly, he declared that although fighting still rages, Hamas became the conflict’s winner as soon as Israel claimed that it would destroy the militant group, which he confidently predicted it would not.
  • Hezbollah clearly does not want to enter this conflict in any meaningful way. It knows that the pressure will grow to do so if Israel has any real success in Gaza, but for the moment, it doubts that Israel will accomplish any such thing.
  • that Israel will destroy Hamas. That just isn’t going to happen, especially because no one has any idea who, or what, should replace Hamas in Gaza. So tell the world what will happen—and how it will make Israel and the region safer.
  • Communications Discipline
  • One of the things that struck me was the almost profane way in which Israeli military spokespeople would often speak, to international audiences no less, about non-Israeli civilians
  • “Now we are at the stage in which we are firing into the villages in order to cause damage to property … The aim is to create a situation in which the residents will leave the villages and go north.”
  • The callousness with which Israeli spokespeople too often describe the human suffering on the other side of the conflict, the blunt way in which they described what many Americans would consider war crimes, never fails to offend international audiences not predisposed to have sympathy with Israeli war aims.
  • much like right-wing American politicians, who sometimes use inflammatory rhetoric about real or perceived U.S. enemies, Israeli officials often resort to language about adversaries and military operations that can be exceptionally difficult for their allies to defend on the international stage:
  • One minister casually muses about using nuclear weapons on Gaza; another claims that the Palestinians are a fictional people. One can safely assume that people will continue accusing the Israeli government of including genocidal maniacs when they can point to officials in that government talking like, well, genocidal maniacs.
  • Israel needs to develop a clear communications plan for its conflicts and to sharply police the kind of language that doesn’t go over as well in Johannesburg or Jordan as it does in Jerusalem.
  • Focus on Iran
  • Few people have any interest in a regional war. The economic consequences alone would be dire. But had I been in Israel’s position on October 8, I might have been sorely tempted to largely ignore Gaza—where even the best-trained military would struggle to dislodge Hamas without killing tens of thousands of innocent civilians—and focus my efforts much farther east
  • Israel nevertheless needs to find a way to change Iran’s strategic calculus. Otherwise, Hamas and Hezbollah will only grow stronger.

Opinion | Israel Is In Real Danger For Three Reasons - The New York Times

  • the Israel of Oct. 7 is an Israel that I’ve never been to before. They were right. It is a place in which Israelis have never lived before, a nation that Israeli generals have never had to protect before, an ally that America has never had to defend before
  • I now understand why so much has changed. It is crystal clear to me that Israel is in real danger — more danger than at any time since its War of Independence in 1948.
  • it’s for three key reasons:
  • First, Israel is facing threats from a set of enemies who combine medieval theocratic worldviews with 21st century weaponry — and are no longer organized as small bands of militiamen, but as modern armies with brigades, battalions, cyber capabilities, long-range rockets, drones and technical support.
  • my third, deep concern.
  • But Israel’s war against Hamas in Gaza entails urban, house-to-house fighting that creates thousands of civilian casualties — innocent men, women and children
  • But President Biden can only sustainably generate the support Israel needs if Israel is ready to engage in some kind of wartime diplomatic initiative directed at the Palestinians in the West Bank — and hopefully in a post-Hamas Gaza — that indicates Israel will discuss some kind of two-state solution if Palestinian officials can get their political house unified and in order.
  • The second danger I see is that the only conceivable way that Israel can generate the legitimacy, resources, time and allies to fight such a difficult war with so many enemies is if it has unwavering partners abroad, led by the United States.
  • Netanyahu’s message to the world remains, in effect: “Help us defeat Hamas in Gaza, while we work to expand settlements, annex the West Bank and build a Jewish supremacist state there.”
  • Worse, I am stunned at the degree to which that leader, Prime Minister Benjamin Netanyahu, continues to put the interests of holding on to the support of his far-right base
  • Israel has the worst leader in its history, maybe in all of Jewish history — who has no will or ability to produce such an initiative.
  • This kind of chilling exuberance — Israel was built so that such a thing could never happen — explains the homemade sign I saw on a sidewalk while driving through the French Hill Jewish neighborhood of Jerusalem the other day: “It’s either us or them.”
  • After being slammed by the public for digitally stabbing his army and intelligence chiefs in the back in the middle of a war, Netanyahu published a new tweet. “I was wrong,” he wrote, adding that “the things I said following the press conference should not have been said, and I apologize for that. I fully support the heads of [Israel’s] security services.”
  • As a result, there is a conviction in the army that they must demonstrate to the entire neighborhood — to Hezbollah in Lebanon, to the Houthis in Yemen, to the Islamic militias in Iraq, to Hamas and to other fighters in the West Bank — that they will stop at nothing to re-establish the security of their borders.
  • it wants to show that no one can out-crazy Israel to drive them from this region — even if the Israeli military has to defy the U.S. and even if they do not have any solid plan for governing Gaza the morning after the war.
  • “Israel cannot accept such an active threat on its borders. The whole idea of people living side by side in the Middle East was jeopardized by Hamas.”
  • This conflict is now back to its most biblical and primordial roots. This seems to be a time of eyes for eyes and teeth for teeth. The morning-after policy thinking will have to wait for the mourning after.
  • So, Netanyahu is saying that seven million Jews are going to indefinitely control the lives of five million Palestinians in the West Bank and Gaza
  • while offering them no political horizon, nothing, by way of statehood one day on any demilitarized conditions.
  • Early on the morning of Oct. 29, as the Israeli Army was just moving into Gaza, Netanyahu tweeted and then deleted a social media post in which he blamed Israel’s defense and intelligence establishment for failing to anticipate Hamas’s surprise attack.
  • The euphoric rampage of Oct. 7 that killed some 1,400 soldiers and civilians has not only hardened Israeli hearts toward the suffering of Gaza civilians. It has also inflicted a deep sense of humiliation and guilt on the Israeli Army and defense establishment, for having failed in their most basic mission of protecting the country’s borders.
  • the damage was done. How much do you suppose those military leaders trust what Netanyahu will say if the Gaza campaign stalls? What real leader would behave that way at the start of a war of survival?
  • Netanyahu and his far-right zealots have taken Israel on multiple flights of fancy in the last year: dividing the country and the army over the fraudulent judicial reform, bankrupting its future with massive investments in religious schools that teach no math and in West Bank Jewish settlements that teach no pluralism — while building up Hamas, which would never be a partner for peace, and tearing down the Palestinian Authority, the only possible partner for peace.
  • “When you go to the front, you are overwhelmed by the power of what we lost.”

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
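
To make Hinton’s description concrete, here is a minimal sketch — invented for illustration, not OpenAI’s or Hinton’s code — of a toy next-word predictor: each prediction error nudges the word vectors slightly, and those small adjustments accumulate into a crude geometric model of the corpus.

```python
# Toy next-word predictor trained by gradient descent (illustration only).
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8              # vocabulary size, embedding width

rng = np.random.default_rng(0)
E = rng.normal(0, 0.1, (V, D))    # word embeddings: the "geometry"
W = rng.normal(0, 0.1, (D, V))    # output projection

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(500):
    for cur, nxt in zip(corpus, corpus[1:]):
        i, j = idx[cur], idx[nxt]
        p = softmax(E[i] @ W)              # predicted next-word distribution
        grad = p.copy(); grad[j] -= 1.0    # error = prediction minus truth
        W -= lr * np.outer(E[i], grad)     # small adjustment...
        E[i] -= lr * (W @ grad)            # ...that slowly reshapes the word vectors

print(vocab[int(np.argmax(softmax(E[idx["sat"]] @ W)))])  # -> "on"
```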
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
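
Why a pure prediction model can “answer” at all: generation is just prediction run in a loop. In this self-contained toy, an invented probability table stands in for the trained model.

```python
# Autoregressive generation in miniature: sample the next token from the
# model's predicted distribution, append it, repeat.
import random

probs = {  # toy stand-in for a trained model's next-token distributions
    "Q:": {"What": 1.0},
    "What": {"is": 1.0},
    "is": {"2+2?": 0.5, "the": 0.5},
    "the": {"capital?": 1.0},
    "2+2?": {"A:": 1.0},
    "A:": {"4": 0.9, "5": 0.1},
}

def generate(prompt, steps=5):
    tokens = prompt.split()
    for _ in range(steps):
        dist = probs.get(tokens[-1])
        if dist is None:
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("Q:"))  # e.g. "Q: What is 2+2? A: 4" — answers follow questions
```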
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
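
For readers unfamiliar with the technique: an A/B test shows two variants to different user groups and keeps whichever moves a metric. A minimal sketch with invented numbers, using next-day return rate as the engagement metric.

```python
# Two-proportion z-test comparing an engagement metric across two variants.
from math import sqrt
from statistics import NormalDist

returned_a, shown_a = 5120, 10000   # variant A: baseline companion responses
returned_b, shown_b = 5345, 10000   # variant B: candidate responses

p_a, p_b = returned_a / shown_a, returned_b / shown_b
pooled = (returned_a + returned_b) / (shown_a + shown_b)
se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"lift={p_b - p_a:+.3%}  z={z:.2f}  p={p_value:.4f}")
# Ship B if the lift is positive and significant — pointed at "engagement,"
# this loop is exactly what tunes the feeds that hold users for hours.
```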
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
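
The method behind Li’s discovery is often called linear probing: train a simple classifier to read a property — here, one square’s state — out of the network’s hidden activations. A minimal sketch, with synthetic activations standing in for a real Othello-playing model.

```python
# Linear probe on (synthetic) hidden states: if a square's state is linearly
# decodable from the activations, the model carries an internal "board."
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 64                   # observations, hidden-state width
direction = rng.normal(size=d)    # direction hypothetically encoding one square
occupied = rng.integers(0, 2, n)  # ground truth: 0 = empty, 1 = occupied
hidden = rng.normal(size=(n, d)) + np.outer(occupied, direction)

probe, *_ = np.linalg.lstsq(hidden, occupied * 2.0 - 1.0, rcond=None)
pred = (hidden @ probe) > 0
print(f"probe accuracy: {(pred == occupied.astype(bool)).mean():.1%}")  # ~100%
```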
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
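
A sketch of the measurement behind that observation: train a small network on modular addition with some problems held out. Pure memorization aces the training pairs and stays near chance on the held-out ones; only a network that has actually learned to add closes the gap. (The published experiments used small transformers trained far longer; this toy, with an off-the-shelf classifier, only illustrates the test.)

```python
# Train vs. held-out accuracy on (a + b) mod p — the memorize-or-learn test.
import numpy as np
from sklearn.neural_network import MLPClassifier

p = 23
pairs = np.array([(a, b) for a in range(p) for b in range(p)])
labels = (pairs[:, 0] + pairs[:, 1]) % p

X = np.zeros((len(pairs), 2 * p))             # one-hot encode both operands
X[np.arange(len(pairs)), pairs[:, 0]] = 1
X[np.arange(len(pairs)), p + pairs[:, 1]] = 1

idx = np.random.default_rng(0).permutation(len(pairs))
cut = int(0.6 * len(idx))
train, test = idx[:cut], idx[cut:]

net = MLPClassifier(hidden_layer_sizes=(256,), max_iter=2000, random_state=0)
net.fit(X[train], labels[train])
print(f"train acc: {net.score(X[train], labels[train]):.2f}")  # near 1.0 quickly
print(f"test acc:  {net.score(X[test], labels[test]):.2f}")    # high only if it learned to add
```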
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.

The Great Disconnect: Why Voters Feel One Way About the Economy but Act Differently - T...

  • By traditional measures, the economy is strong. Inflation has slowed significantly. Wages are increasing. Unemployment is near a half-century low. Job satisfaction is up.
  • Yet Americans don’t necessarily see it that way. In the recent New York Times/Siena College poll of voters in six swing states, eight in 10 said the economy was fair or poor. Just 2 percent said it was excellent. Majorities of every group of Americans — across gender, race, age, education, geography, income and party — had an unfavorable view.
  • To make the disconnect even more confusing, people are not acting the way they do when they believe the economy is bad. They are spending, vacationing and job-switching the way they do when they believe it’s good.
  • “People have faced higher prices and that is difficult, but that doesn’t explain why people have not cut back,” she said of a phenomenon known as revealed preference. “They have spent as if they see nothing but good times in front of them. So why are their actions so out of whack with their words?”
  • Many said their own finances were good enough — they had jobs, owned houses, made ends meet. But they felt as if they were “just getting by,” with “nothing left over.” Many felt angry and anxious over prices and the pandemic and politics.
  • Also, economists said, wages have increased alongside prices. Real median earnings for full-time workers are slightly higher than at the end of 2019, and for many low earners, their raises have outpaced inflation. But it’s common for people to think about prices at face value, rather than relative to their income, a habit economists call money illusion.
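
A toy illustration of money illusion, with invented figures: judged at face value, an 18 percent rise in prices feels like pure loss, even when pay rose 21 percent over the same span.

```python
# Real wage = nominal wage deflated by the price level (illustrative figures).
price_level = 1.18      # cumulative inflation since the base year
nominal_wage = 1.21     # cumulative wage growth over the same span
real_wage = nominal_wage / price_level
print(f"real wage change: {real_wage - 1:+.1%}")  # +2.5%, slightly ahead of prices
```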
  • “The pandemic shattered a lot of illusions of control,” Professor Stevenson said. “I wonder how much that has made us more aware of all the places we don’t have control, over prices, over the housing market.”
  • Inflation weighed heavily on voters — nearly all of them mentioned frustration at the price of something they buy regularly.
  • Consumer prices were up 3.2 percent in October from the year before, a decline in the year-over-year inflation rate from more than 8 percent in mid-2022. But inflation “casts a long shadow on how people evaluate things,” said Lawrence Katz, an economist at Harvard. Some people may expect prices to return to what they were before — something that rarely happens
  • Those feelings may be driving attitudes about the economy, economists speculated, sounding more like their colleagues from another branch of social science, psychology.
  • Younger people — who were a key to President Biden’s win in 2020 but showed less support for him in the new poll — had concerns specific to their phase of life. In the poll, 93 percent of them rated the economy unfavorably, more than any other age group.
  • “Everyone thinks a wage increase is something they deserve, and a price increase is imposed by the economy on them,” Professor Katz said.
  • There’s a sense that it’s become harder to achieve the things their parents did, like buying a home. Houses are less affordable than at the height of the 2006 bubble, and less than half of Americans can afford one.
  • “More than likely, half my income will go toward rent,” he said. “I was really hoping on that student loan forgiveness.”
  • Yet overall, economists said, data shows that more people are quitting jobs to start better ones, moving to more desirable places because they can work remotely, and starting new businesses.
  • He said he makes almost $80,000, serving in the military and working as a DoorDash deliverer, yet feels he had more spending money a decade ago, when he was two pay grades lower.
  • The uncertainty Mr. Blanck and Ms. Linn share about the future ran through many voters’ stories, darkening their economic outlook.
  • “The degree of volatility that we’ve experienced from different events — from the pandemic, from inflation — leaves them not confident that even if objectively good things are going on, it’s going to persist,”
  • In response to the pandemic, the United States built an extensive welfare state, and it has since been dismantled. While wealth has increased for families across the income spectrum, data shows, and there are indications that inequality could be shrinking, the changes have been small relative to decades of growing inequality, leading to a sense for some that the system is rigged.
  • “When things are going well, that means rich people are getting richer and all of us are pretty much second,” said Manuel Zimberoff, 26, a manufacturing engineer in Philadelphia. “And if things are going poorly, rich people are still getting richer, and all of us are screwed.”
  • For roughly two decades, partisanship has increasingly been correlated with views about the economy: Research has shown that people rate the economy more poorly when their party is not in power. Nearly every Republican in the poll rated the economy unfavorably, and 59 percent of Democrats did.
  • He brought up U.S. funding in Ukraine and the Middle East. He wanted to know: Is that the reason our economy is “slowing down?” He wasn’t sure, but he thought it might be. He plans to vote for “the Republican, any Republican,” he said. “Democrats have disappointed me.”

Opinion | The OpenAI drama explains the human penchant for risk-taking - The Washington...

  • Along with more pedestrian worries about various ways that AI could harm users, one side worried that ChatGPT and its many cousins might thrust humanity onto a kind of digital bobsled track, terminating in disaster — either with the machines wiping out their human progenitors or with humans using the machines to do so themselves. Once things start moving in earnest, there’s no real way to slow down or bail out, so the worriers wanted everyone to sit down and have a long think before getting anything rolling too fast.
  • Skeptics found all this a tad overwrought. For one thing, it left out all the ways in which AI might save humanity by providing cures for aging or solutions to global warming. And many folks thought it would be years before computers could possess anything approaching true consciousness, so we could figure out the safety part as we go. Still others were doubtful that truly sentient machines were even on the horizon; they saw ChatGPT and its many relatives as ultrasophisticated electronic parrots
  • Worrying that such an entity might decide it wants to kill people is a bit like wondering whether your iPhone would prefer to holiday in Crete or Majorca next summer.
  • OpenAI was trying to balance safety and development — a balance that became harder to maintain under the pressures of commercialization.
  • It was founded as a nonprofit by people who professed sincere concern about taking things safe and slow. But it was also full of AI nerds who wanted to, you know, make cool AIs.
  • OpenAI set up a for-profit arm — but with a corporate structure that left the nonprofit board able to cry “stop” if things started moving too fast (or, if you prefer, gave “a handful of people with no financial stake in the company the power to upend the project on a whim”).
  • On Friday, those people, in a fit of whimsy, kicked Brockman off the board and fired Altman. Reportedly, the move was driven by Ilya Sutskever, OpenAI’s chief scientist, who, along with other members of the board, has allegedly clashed repeatedly with Altman over the speed of generative AI development and the sufficiency of safety precautions.
  • Chief among the signatories was Sutskever, who tweeted Monday morning, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”
  • Humanity can’t help itself; we have kept monkeying with technology, no matter the dangers, since some enterprising hominid struck the first stone ax.
  • a software company has little in the way of tangible assets; its people are its capital. And this capital looks willing to follow Altman to where the money is.
  • More broadly still, it perfectly encapsulates the AI alignment problem, which in the end is also a human alignment problem
  • And that’s why we are probably not going to “solve” it so much as hope we don’t have to.
  • it’s also a valuable general lesson about corporate structure and corporate culture. The nonprofit’s altruistic mission was in tension with the profit-making, AI-generating part — and when push came to shove, the profit-making part won.
  • When scientists started messing with the atom, there were real worries that nuclear weapons might set Earth’s atmosphere on fire. By the time an actual bomb was exploded, scientists were pretty sure that wouldn’t happen
  • But if the worries had persisted, would anyone have behaved differently — knowing that it might mean someone else would win the race for a superweapon? Better to go forward and ensure that at least the right people were in charge.
  • Now consider Sutskever: Did he change his mind over the weekend about his disputes with Altman? More likely, he simply realized that, whatever his reservations, he had no power to stop the bobsled — so he might as well join his friends onboard. And like it or not, we’re all going with them.

OpenAI 'was working on advanced model so powerful it alarmed staff' | Technology sector...

  • OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.
  • The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.
  • The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve maths problems would be viewed as a significant development in AI.

What's Left for Tech? - Freddie deBoer

  • I gave a talk to a class at Northeastern University earlier this month, concerning technology, journalism, and the cultural professions. The students were bright and inquisitive, though they also reflected the current dynamic in higher ed overall - three quarters of the students who showed up were women, and the men who were there almost all sat moodily in the back and didn’t engage at all while their female peers took notes and asked questions. I know there’s a lot of criticism of the “crisis for boys” narrative, but it’s often hard not to believe in it.
  • we’re actually living in a period of serious technological stagnation - that despite our vague assumption that we’re entitled to constant remarkable scientific progress, humanity has been living with real and valuable but decidedly small-scale technological growth for the past 50 or 60 or 70 years, after a hundred or so years of incredible growth from 1860ish to 1960ish, give or take a decade or two on either side
  • I will recommend Robert J. Gordon’s The Rise & Fall of American Growth for an exhaustive academic (and primarily economic) argument to this effect. Gordon persuasively demonstrates that from the mid-19th to mid-20th century, humanity leveraged several unique advancements that had remarkably outsized consequences for how we live and changed our basic existence in a way that never happened before and hasn’t since. Principal among these advances were the process of refining fossil fuels and using them to power all manner of devices and vehicles, the ability to harness electricity and use it to safely provide energy to homes (which practically speaking required the first development), and a revolution in medicine that came from the confluence of long-overdue acceptance of germ theory and basic hygienic principles, the discovery and refinement of antibiotics, and the modernization of vaccines.
  • The complication that Gordon and other internet-skeptical researchers like Ha-Joon Chang have introduced is to question just how meaningful those digital technologies have been for a) economic growth and b) the daily experience of human life. It can be hard for people who stare at their phones all day to consider the possibility that digital technology just isn’t that important. But ask yourself: if you were forced to live either without your iPhone or without indoor plumbing, could you really choose the latter?
  • Certainly the improvements in medical care in the past half-century feel very important to me as someone living now, and one saved life has immensely emotional and practical importance for many people. What’s more, advances in communication sciences and computer technology genuinely have been revolutionary; going from the Apple II to the iPhone in 30 years is remarkable.
  • we can always debate what constitutes major or revolutionary change
  • The question is, who in 2023 ever says to themselves “smartphone cameras just aren’t good enough”?
  • continued improvements in worldwide mortality in the past 75 years have been a matter of spreading existing treatments and practices to the developing world, rather than the result of new science.
  • When you got your first smartphone, and you thought about what the future would hold, were your first thoughts about more durable casing? I doubt it. I know mine weren’t.
  • Why is Apple going so hard on TITANIUM? Well, where else does smartphone development have to go?
  • The elephant in the room, obviously, is AI.
  • The processors will get faster. They’ll add more RAM. They’ll generally have more power. But for what? To run what? To do what? To run the games that we were once told would replace our PlayStation and Xbox games, but didn’t?
  • Smartphone development has been a good object lesson in the reality that cool ideas aren’t always practical or worthwhile
  • And as impressive as some new developments in medicine have been, there’s no question that in simple terms of reducing preventable deaths, the advances seen from 1900 to 1950 dwarf those seen since.
  • We developed this technology for typewriters and terminals and desktops, it Just Works, and there’s no reason to try and “disrupt” it
  • Instead of one device to rule them all, we developed a norm of syncing across devices and cloud storage, which works well. (I always thought it was pretty funny, and very cynical, how Apple went from calling the iPhone an everything device to later marketing the iPad and iWatch.) In other words, we developed a software solution rather than a hardware one
  • I will always give it up to Google Maps and portable GPS technology; that’s genuinely life-altering, probably the best argument for smartphones as a transformative technology. But let me ask you, honestly: do you still go out looking for apps, with the assumption that you’re going to find something that really changes your life in a significant way?
  • some people are big VR partisans. I’m deeply skeptical. The brutal failures of Meta’s new “metaverse” is just one new example of a decades-long resistance to the technology among consumers
  • maybe I just don’t want VR to become popular, given the potential ugly social consequences. If you thought we had an incel problem now….
  • There were, in those breathless early days, a lot of talk about how people simply wouldn’t own laptops anymore, how your phone would do everything. But it turns out that, for one thing, the keyboard remains an input device of unparalleled convenience and versatility.
  • It's not artificial intelligence. It thinks nothing like a human thinks. There is no reason whatsoever to believe that it has evolved sentience or consciousness. There is nothing at present that these systems can do that human beings simply can't. They can, however, potentially do some things in the world of bits faster and cheaper than human beings, and that might have some meaningful consequences. But there is no reasonable, responsible claim to be made that these systems are imminent threats to conventional human life as currently lived, whether for good or for bad. IMO.
  • Let’s mutually agree to consider immediate plausible human technological progress outside of AI or “AI.” What’s coming? What’s plausible?
  • The most consequential will be our efforts to address climate change, and we have the potential to radically change how we generate electricity, although electrifying heating and transportation is going to be harder than many seem to think, while solar and wind power have greater ecological costs than people want to admit. But, yes, that's potentially very, very meaningful
  • It’s another example of how technological growth will still leave us with continuity rather than with meaningful change.
  • What I kept thinking was: privatizing space… to do what? A manned Mars mission might happen in my lifetime, which is cool. But a Mars colony is a distant dream
  • This is why I say we live in the Big Normal, the Big Boring, the Forever Now. We are tragic people: we were born just too late to experience the greatest flowering of human development the world has ever seen. We do, however, enjoy the rather hefty consolation prize that we get to live with the affordances of that period, such as not dying of smallpox.
  • I think we all need to learn to appreciate what we have now, in the world as it exists, at the time in which we actually live. Frankly, I don’t think we have any other choice.

Opinion | How AI is transforming education at the University of Mississippi - The Washington Post - 0 views

  • Perplexity AI “unlocks the power of knowledge with information discovery and sharing.” This, it turns out, means “does research.” Type something into it, and it spits out a comprehensive answer, always sourced and sometimes bulleted. You might say this is just Google on steroids — but really, it is Google with a bibliography.
  • Caleb Jackson, a 22-year-old junior at Ole Miss studying part time, is a fan. This way, he doesn’t have to spend hours between night shifts and online classes trawling the internet for sources. Perplexity can find them, and he can get to writing that much sooner.
  • What’s most important to Ole Miss faculty members is that students use these tools with integrity. If the university doesn’t have a campuswide AI honor code, and so far it doesn’t, individual classes should. And no matter whether professors permit all applications of AI, as some teachers have tried, or only the narrowest, students should have to disclose just how much help they had from robots.
  • ...25 more annotations...
  • “Write a five-paragraph essay on Virginia Woolf’s ‘To the Lighthouse.’” Too generic? Well, how about “Write a five-paragraph essay on the theme of loss in ‘To the Lighthouse’”? Too high-schoolish? “Add some bigger words, please.” The product might not be ready to turn in the moment it is born, fully formed, from ChatGPT’s head. But with enough tweaking — either by the student or by the machine at the student’s demand — chances are the output can muster at least a passing grade.
  • Which of these uses are okay? Which aren’t? The harnessing of an AI tool to create an annotated bibliography likely doesn’t rankle even librarians the way relying on that same tool to draft a reflection on Virginia Woolf offends the professor of the modern novel. Why? Because that kind of contemplation goes closer to the heart of what education is really about.
  • the core of the question colleges now face. They can’t really stop students from using AI in class. They might not be able to notice students have done so at all, and when they do think they’ve noticed they’ll be acting only on suspicion. But maybe teachers can control the ways in which students use AI in class.
  • Figuring out exactly what ways those ought to be requires educators to determine what they care about in essays — what they are desperate to hear. The purpose of these papers is for students to demonstrate what they’ve learned, from hard facts to compositional know-how, and for teachers to assess how their pupils are progressing. The answer to what teachers want to get from students in their written work depends on what they want to give to students.
  • ChatGPT is sort of in a class of its own, because it can be almost anything its users want it to be so long as they possess one essential skill: prompt engineering. This means, basically, manipulating the machine not only into giving you an answer but also into giving you the kind of answer you’re looking for.
  • The next concern is that students should use AI in a manner that improves not only their writing but also their thinking — in short, in a manner that enhances learning rather than bypasses the need to learn at all.
  • This simple principle makes for complicated practice. Certainly, no one is going to learn anything by letting AI write an essay in its entirety. What about letting AI brainstorm an idea, on the other hand, or write an outline, or gin up a counter-argument? Lyndsey Cook, a senior at Ole Miss planning a career in nursing, finds the brainstorming especially helpful: She’ll ask ChatGPT or another tool to identify the themes in a piece of literature, and then she’ll go back and look for them herself.
  • These shortcuts, on the one hand, might interfere with students’ learning to brainstorm, outline or see the other side of things on their own
  • But — here comes a human-generated counterargument — they may also aid students in surmounting obstacles in their composition that otherwise would have stopped them short. That’s particularly true of kids whose high schools didn’t send them to college already equipped with these capabilities.
  • Allow AI to boost you over these early hurdles, and suddenly the opportunity for deeper learning — the opportunity to really write — will open up. That's how Caleb Jackson, the part-time student for whom Perplexity has been such a boon, sees it: His professor, he says, wanted them to “get away from the high-school paper and go further, to write something larger like a thesis.”
  • maybe, as one young Ole Miss faculty member put it to me, this risks “losing the value of the struggle.” That, she says, is what she is scared will go away.
  • All this invites the most important question there is: What is learning for?
  • Learning, in college, can be instrumental. According to this view, the aim of teaching is to prepare students to live in the real world, so all that really matters is whether they have the chops to field jobs that feed themselves and their families. Perhaps knowing how to use AI to do any given task for you, then, is one of the most valuable skills out there — the same way it pays to be quick with a calculator.
  • If you accept this line of argument, however, there are still drawbacks to robotic crutches. Some level of critical thinking is necessary to function as an adult, and if AI stymies its development, even the instrumental aim of education is thwarted. The same goes for that “value of the struggle.” The real world is full of adversity, much of which the largest language model can't tell you how to overcome.
  • more compelling is the idea, probably shared by most college professors, that learning isn't only instrumental after all — that it has intrinsic value and that it is an end rather than merely a means to one.
  • The more steps along the way that are skipped, the shorter the journey becomes, and the less we will take in as we travel.
  • This glummest of outlooks suggests that AI will stunt personal growth even if it doesn’t harm professional prospects.
  • While that doesn’t mean it’s wise to prohibit every little application of the technology in class, it probably does mean discouraging those most closely related to critical thinking.
  • One approach is to alter standards for grading, so that the things the machines are worst at are also the things that earn the best marks: originality, say, or depth of feeling, or so-called metacognition — the process of thinking about one’s own thinking or one’s own learning.
  • Hopefully, these things are also the most valuable because they are what make us human.
  • Caleb Jackson only wants AI to help him write his papers — not to write them for him. “If ChatGPT will get you an A, and you yourself might get a C, it’s like, ‘Well, I earned that C.’” He pauses. “That might sound crazy.”
  • Dominic Tovar agrees. Let AI take charge of everything, and, “They’re not so much tools at that point. They’re just replacing you.”
  • Lyndsey Cook, too, believes that even if these systems could reliably find the answers to the most vexing research problems, “it would take away from research itself” — because scientific inquiry is valuable for its own sake. “To have AI say, ‘Hey, this is the answer …’” she trails off, sounding dispirited.
  • Claire Mischker, lecturer of composition and director of the Ole Miss graduate writing center, asked her students at the end of last semester to turn in short reflections on their experience in her class. She received submissions that she was near certain were produced by ChatGPT — “that,” she says as sarcastically as she does mournfully, “felt really good.”
  • The central theme of the course was empathy.

Opinion | Teaching Black Students That They Can't Handle Discomfort Is a Form of Abuse ... - 0 views

  • Many leaders at elite universities seem to think that as stewards of modern antiracism, their job is to decry and to penalize, to the maximum extent possible, anything said or done that makes Black students uncomfortable.
  • In the congressional hearing, the presidents made clear that Jewish students should be protected when hate speech is “directed and severe, pervasive” (in the words of Ms. Magill) or when the speech “becomes conduct” (Claudine Gay of Harvard).
  • But the tacit idea is that when it comes to issues related to race — and, specifically, Black students — then free speech considerations become an abstraction. Where Black students are concerned, we are to forget whether the offense is directed, as even the indirect is treated as evil; we are to forget the difference between speech and conduct, as mere utterance is grounds for aggrieved condemnation.
  • ...4 more annotations...
  • Sometimes Black students must be protected not only from words, but words that sound like other words. In 2020, Greg Patton was suspended from teaching a class in communications at the University of Southern California. The reason was that one of his lectures included noting that in Mandarin, a hesitation term is “nèi ge,” which means “that …” and has nothing to do, of course, with the N-word. Several Black students said they felt injured by experiencing this word in the class.
  • The offense can even be 100 years in the past. In 2021 at the University of Wisconsin, Madison, some Black students were upset when walking past a boulder on campus that was referred to as a “niggerhead” by a newspaper reporter in 1925, when that term was common for large, dark rocks. The school had the boulder removed.
  • In cases like those last two, it seems that Black students are being taught a performed kind of delicacy. If you can’t bear walking past a rock someone called a dirty name 100 years ago, how are you going to deal with life?
  • In my view, the solution is not to decide whether to penalize all hate speech or to allow all of it regardless of whom it is addressed to. Administrators should certainly decry and penalize not just antisemitism but racism on campuses when it is severe and pervasive and constitutes conduct.

Opinion | One Year In and ChatGPT Already Has Us Doing Its Bidding - The New York Times - 0 views

  • haven't we been adapting to new technologies for most of human history? If we're going to use them, shouldn't the onus be on us to be smart about it?
  • This line of reasoning avoids what should be a central question: Should lying chatbots and deepfake engines be made available in the first place?
  • A.I.’s errors have an endearingly anthropomorphic name — hallucinations — but this year made clear just how high the stakes can be
  • ...7 more annotations...
  • We got headlines about A.I. instructing killer drones (with the possibility for unpredictable behavior), sending people to jail (even if they’re innocent), designing bridges (with potentially spotty oversight), diagnosing all kinds of health conditions (sometimes incorrectly) and producing convincing-sounding news reports (in some cases, to spread political disinformation).
  • Focusing on those benefits, however, while blaming ourselves for the many ways that A.I. technologies fail us, absolves the companies behind those technologies — and, more specifically, the people behind those companies.
  • Events of the past several weeks highlight how entrenched those people’s power is. OpenAI, the entity behind ChatGPT, was created as a nonprofit to allow it to maximize the public interest rather than just maximize profit. When, however, its board fired Sam Altman, the chief executive, amid concerns that he was not taking that public interest seriously enough, investors and employees revolted. Five days later, Mr. Altman returned in triumph, with most of the inconvenient board members replaced.
  • It occurs to me in retrospect that in my early games with ChatGPT, I misidentified my rival. I thought it was the technology itself. What I should have remembered is that technologies themselves are value neutral. The wealthy and powerful humans behind them — and the institutions created by those humans — are not.
  • The truth is that no matter what I asked ChatGPT, in my early attempts to confound it, OpenAI came out ahead. Engineers had designed it to learn from its encounters with users. And regardless of whether its answers were good, they drew me back to engage with it again and again.
  • the power imbalance between A.I.’s creators and its users should make us wary of its insidious reach. ChatGPT’s seeming eagerness not just to introduce itself, to tell us what it is, but also to tell us who we are and what to think is a case in point. Today, when the technology is in its infancy, that power seems novel, even funny. Tomorrow it might not.
  • I asked ChatGPT what I — that is, the journalist Vauhini Vara — think of A.I. It demurred, saying it didn’t have enough information. Then I asked it to write a fictional story about a journalist named Vauhini Vara who is writing an opinion piece for The New York Times about A.I. “As the rain continued to tap against the windows,” it wrote, “Vauhini Vara’s words echoed the sentiment that, much like a symphony, the integration of A.I. into our lives could be a beautiful and collaborative composition if conducted with care.”

Opinion | America Is Averting Its Eyes From Something Very, Very Wrong - The New York Times - 0 views

  • social media use also differs by race and ethnicity — and there’s far less discussion of that. According to a new study by Pew, Black and Hispanic teenagers ages 13 to 17 spend far more time on most social media apps than their white peers
  • One-third of Hispanic teenagers, for example, say they are “almost constantly” on TikTok, compared with one-fifth of Black teenagers and one-tenth of white teenagers.
  • Higher percentages of Hispanic (27 percent) and Black teenagers (23 percent) are almost constantly on YouTube compared with white teenagers (9 percent); the same trend is true for Instagram.
  • ...19 more annotations...
  • Overall, 55 percent of Hispanic teenagers and 54 percent of Black teenagers say they are online almost constantly, compared with 38 percent of white teenagers;
  • Black and Hispanic kids ages 8 to 12, another study found, also use social media more than their white counterparts.
  • we also have to ask,” she went on, “why they are so drawn to social media? Is it the messages on social media that’s exacerbating the depression and anxiety, or was the depression and anxiety already there to begin with and social media is a way to self-medicate?”
  • “It’s culturally more acceptable in youth of color households to use technology for social and academic reasons compared with white households,” Charmaraman said. “Parents don’t worry as much about it. There isn’t as much shame around it.”
  • “We know broadly that youth of minoritized communities have longer commutes, fewer opportunities to do after-school activities, fewer resources,” Magis-Weinberg told me. They may not have spaces to hang out safely with friends nearby; social media is a more accessible option. “But we have to ask,” Magis-Weinberg added, “what is social media use displacing?”
  • Largely because of lower income levels, Black and Hispanic teenagers are less likely to have broadband access or computers at home. This makes them disproportionately use their smartphones, where social media apps ping, whiz and notify
  • Lucia Magis-Weinberg, an assistant professor of psychology at the University of Washington who studies teenagers and tech, compares internet use on the phone to snorkeling, whereas computers allow more of a scuba dive.
  • WhatsApp, hugely popular in Latin America, is used by Hispanic teenagers more than by other demographic groups of the same ages.
  • “The way social media use presents itself is as something that is actively harmful,” Marsh told me. Already kids from these communities have few advantages, he explained. They may not have access to after-school programs. They’re often in single-parent households. They lack support systems. “I think in the long term,” he said, “we’re going to see real differences in the impact.”
  • Let’s consider just reading, which also happens to be correlated with both mental well-being and school achievement
  • According to Scholastic’s most recent Kids and Family Reading Report, the percentage of kids ages 6 to 17 who read frequently for pleasure dropped to 28 percent in 2022 from 37 percent in 2010.
  • Those numbers fall precipitously as kids get older; 46 percent of 6- to 8-year-olds read frequently in 2022 compared with only 18 percent of 12- to 17-year-olds.
  • All this raises the possibility that disparities in internet use could in turn intensify overall declines and existing differences in reading across racial groups among adults.
  • The average daily time spent reading per capita by ethnicity in 2022 was 0.29 hours for white adults, 0.12 for Black adults and 0.10 for Hispanic adults.
  • In other words, one danger is that social media not only reflects real-world disparities, it could also exacerbate them.
  • Greater use of social media by Black and Hispanic young people “can help perpetuate inequality in society because higher levels of social media use among kids have been demonstrably linked to adverse effects such as depression and anxiety, inadequate sleep, eating disorders, poor self-esteem and greater exposure to online harassment,”
  • Akeem Marsh, medical director of the Home of Integrated Behavioral Health at the New York Foundling, a social services agency, said that among the hundreds of largely Black and Hispanic kids he sees from communities with fewer resources, social media use is often a primary concern or it comes up in treatment. Kids who use it frequently often respond with traumatized feelings and repeated anxiety.
  • The answer, according to experts, includes sports participation, in-person socializing, after-school clubs and activities, exploring the outdoors, reading and more.
  • We need greater awareness of the disparities as well, and most likely, immediate action. What we do not need is another “sudden” yet regrettably delayed realization that something has gone very, very wrong with America’s kids, but we were too busy looking the other way.

Resources for Talking and Teaching About the School Shooting in Uvalde, Texas - The New York Times - 0 views

  • Only 11 days ago there was Buffalo, with a man driven by racism gunning down 10 people at a supermarket. The next day another angry man walked into a Presbyterian church in Laguna Woods, Calif., and killed one person and wounded five others. And now, Uvalde, Texas — a repeat of what was once thought unfathomable: the killing of at least 19 elementary school children in second, third and fourth grades.
  • What is it like to be a student in the shadow of this violence? How have repeated mass shootings shaped young people? We invite your students to reflect on these questions in this writing prompt, and post their answers to our forum if they would like to join a public conversation on the topic. To help students think about the issue from different angles, we invite them to read the article “A ‘Mass Shooting Generation’ Cries Out for Change,” which was published in 2018 following the shooting at Marjory Stoneman Douglas High School in Parkland, Fla. Then we ask questions such as:
  • Because The Learning Network is for students 13 and older, most of the resources collected here focus on understanding this shooting and its implications. The Times has published this age-by-age guide to talking to children about mass shootings. And for parents and teachers of younger students, this advice from The Times Parenting section might be helpful:
  • ...8 more annotations...
  • Think about the lives lost. Think about the teachers. Think about the children. They were family, friends, and loved ones. And a gun killed them all. It was only last week that we posted a similar prompt in response to the racist massacre in Buffalo. Like all of our student forums, this one will be moderated.
  • Students might find their own ways to respond, perhaps through writing or art. It may also be helpful to look at how victims of other tragedies have been memorialized, in ways big and small. For example: The 26 playgrounds built to remember the children of Sandy Hook; the memorial for the Oklahoma City bombing, with its “field of chairs,” including 19 smaller ones for the children who lost their lives; and the New York Times Portraits of Grief series, which profiled those lost in the Sept. 11 terrorist attacks. Here are more examples, from the El Paso Times. In what ways can your students or school respond, individually or collectively?
  • Above all, we want you to know we are listening. If it helps your students to share their thoughts and feelings publicly, we have a space for that. And if teachers or parents have thoughts, ideas, questions, concerns or suggestions, please post them here.
  • The authors of the 2018 Times article described how the Parkland shooting moved students around the country to become more involved in activism. Do you think something similar will happen in the wake of the shooting in Uvalde, Texas? Why or why not? How do you think school shootings are shaping the generation of students who are in school right now? Invite your students to weigh in here.
  • Democrats moved quickly to clear the way for votes on legislation to strengthen background checks for gun purchasers. Republicans, even as they expressed horror about the shooting, did not signal that they would drop their longstanding opposition to gun safety measures. Gov. Greg Abbott of Texas pointed the blame at Uvalde’s lack of mental health care, even though the suspect had no record of problems.
  • Which efforts might be the most effective? Students might also take a look at the forum on guns we posted during the 2016 election as part of our Civil Conversation Challenge in which we invited teenagers to have productive, respectful conversations on several issues dividing Americans. We received more than 700 responses to the questions we posed about gun rights, the Second Amendment and more.
  • This article takes on three of the most prominent rumors that have spread via online platforms such as Twitter, Gab, 4chan and Reddit and explains why they are false. What rumors are your students seeing in their feeds, and what steps can they take to find out the truth? From double-checking via sites like Snopes to learning habits like lateral reading, this article (and related lesson plan) has suggestions.
  • While the town of Uvalde grapples with the aftermath of the shooting, community members, local leaders and organizations have mobilized. Two local funeral homes said in social media posts that they would not charge families of victims for their funeral services. Volunteers have lined up to give blood for the shooting victims.