History Readings: Group items matching "Progressive" in title, tags, annotations or URL

Javier E

U.S. officials misled the public about the war in Afghanistan, confidential documents reveal - Washington Post

  • In the interviews, more than 400 insiders offered unrestrained criticism of what went wrong in Afghanistan and how the United States became mired in nearly two decades of warfare. With a bluntness rarely expressed in public, the interviews lay bare pent-up complaints, frustrations and confessions, along with second-guessing and backbiting.
  • Since 2001, more than 775,000 U.S. troops have deployed to Afghanistan, many repeatedly. Of those, 2,300 died there and 20,589 were wounded in action, according to Defense Department figures.
  • They underscore how three presidents — George W. Bush, Barack Obama and Donald Trump — and their military commanders have been unable to deliver on their promises to prevail in Afghanistan.
  • With most speaking on the assumption that their remarks would not become public, U.S. officials acknowledged that their warfighting strategies were fatally flawed and that Washington wasted enormous sums of money trying to remake Afghanistan into a modern nation.
  • The interviews also highlight the U.S. government’s botched attempts to curtail runaway corruption, build a competent Afghan army and police force, and put a dent in Afghanistan’s thriving opium trade.
  • Since 2001, the Defense Department, State Department and U.S. Agency for International Development have spent or appropriated between $934 billion and $978 billion
  • Those figures do not include money spent by other agencies such as the CIA and the Department of Veterans Affairs, which is responsible for medical care for wounded veterans.
  • Several of those interviewed described explicit and sustained efforts by the U.S. government to deliberately mislead the public. They said it was common at military headquarters in Kabul — and at the White House — to distort statistics to make it appear the United States was winning the war when that was not the case.
  • SIGAR (the Special Inspector General for Afghanistan Reconstruction) departed from its usual mission of performing audits and launched a side venture. Titled “Lessons Learned,” the $11 million project was meant to diagnose policy failures in Afghanistan so the United States would not repeat the mistakes the next time it invaded a country or tried to rebuild a shattered one.
  • the reports, written in dense bureaucratic prose and focused on an alphabet soup of government initiatives, left out the harshest and most frank criticisms from the interviews.
  • “We found the stabilization strategy and the programs used to achieve it were not properly tailored to the Afghan context, and successes in stabilizing Afghan districts rarely lasted longer than the physical presence of coalition troops and civilians,” read the introduction to one report released in May 2018.
  • To augment the Lessons Learned interviews, The Post obtained hundreds of pages of previously classified memos about the Afghan war that were dictated by Defense Secretary Donald H. Rumsfeld between 2001 and 2006.
  • Together, the SIGAR interviews and the Rumsfeld memos pertaining to Afghanistan constitute a secret history of the war and an unsparing appraisal of 18 years of conflict.
  • With their forthright descriptions of how the United States became stuck in a faraway war, as well as the government's determination to conceal them from the public, the Lessons Learned interviews broadly resemble the Pentagon Papers, the Defense Department's top-secret history of the Vietnam War.
  • running throughout are torrents of criticism that refute the official narrative of the war, from its earliest days through the start of the Trump administration.
  • At the outset, for instance, the U.S. invasion of Afghanistan had a clear, stated objective — to retaliate against al-Qaeda and prevent a repeat of the Sept. 11, 2001, attacks.
  • Yet the interviews show that as the war dragged on, the goals and mission kept changing and a lack of faith in the U.S. strategy took root inside the Pentagon, the White House and the State Department.
  • Fundamental disagreements went unresolved. Some U.S. officials wanted to use the war to turn Afghanistan into a democracy. Others wanted to transform Afghan culture and elevate women’s rights. Still others wanted to reshape the regional balance of power among Pakistan, India, Iran and Russia.
  • The Lessons Learned interviews also reveal how U.S. military commanders struggled to articulate who they were fighting, let alone why.
  • Was al-Qaeda the enemy, or the Taliban? Was Pakistan a friend or an adversary? What about the Islamic State and the bewildering array of foreign jihadists, let alone the warlords on the CIA’s payroll? According to the documents, the U.S. government never settled on an answer.
  • As a result, in the field, U.S. troops often couldn’t tell friend from foe.
  • The United States has allocated more than $133 billion to build up Afghanistan — more than it spent, adjusted for inflation, to revive the whole of Western Europe with the Marshall Plan after World War II.
  • As commanders in chief, Bush, Obama and Trump all promised the public the same thing. They would avoid falling into the trap of "nation-building" in Afghanistan.
  • U.S. officials tried to create — from scratch — a democratic government in Kabul modeled after their own in Washington. It was a foreign concept to the Afghans, who were accustomed to tribalism, monarchism, communism and Islamic law.
  • During the peak of the fighting, from 2009 to 2012, U.S. lawmakers and military commanders believed the more they spent on schools, bridges, canals and other civil-works projects, the faster security would improve. Aid workers told government interviewers it was a colossal misjudgment, akin to pumping kerosene on a dying campfire just to keep the flame alive.
  • One unnamed executive with the U.S. Agency for International Development (USAID) guessed that 90 percent of what they spent was overkill: “We lost objectivity. We were given money, told to spend it and we did, without reason.” (Lessons Learned interview, 10/7/2016)
  • The gusher of aid that Washington spent on Afghanistan also gave rise to historic levels of corruption.
  • In public, U.S. officials insisted they had no tolerance for graft. But in the Lessons Learned interviews, they admitted the U.S. government looked the other way while Afghan power brokers — allies of Washington — plundered with impunity.
  • Christopher Kolenda, an Army colonel who deployed to Afghanistan several times and advised three U.S. generals in charge of the war, said that the Afghan government led by President Hamid Karzai had “self-organized into a kleptocracy” by 2006 — and that U.S. officials failed to recognize the lethal threat it posed to their strategy. (Lessons Learned interview, 4/5/2016)
  • By allowing corruption to fester, U.S. officials told interviewers, they helped destroy the popular legitimacy of the wobbly Afghan government they were fighting to prop up. With judges and police chiefs and bureaucrats extorting bribes, many Afghans soured on democracy and turned to the Taliban to enforce order.
  • None expressed confidence that the Afghan army and police could ever fend off, much less defeat, the Taliban on their own. More than 60,000 members of Afghan security forces have been killed, a casualty rate that U.S. commanders have called unsustainable.
  • In the Lessons Learned interviews, however, U.S. military trainers described the Afghan security forces as incompetent, unmotivated and rife with deserters. They also accused Afghan commanders of pocketing salaries — paid by U.S. taxpayers — for tens of thousands of “ghost soldiers.”
  • Year after year, U.S. generals have said in public they are making steady progress on the central plank of their strategy: to train a robust Afghan army and national police force that can defend the country without foreign help.
  • From the beginning, Washington never really figured out how to incorporate a war on drugs into its war against al-Qaeda. By 2006, U.S. officials feared that narco-traffickers had become stronger than the Afghan government and that money from the drug trade was powering the insurgency
  • throughout the Afghan war, documents show that U.S. military officials have resorted to an old tactic from Vietnam — manipulating public opinion. In news conferences and other public appearances, those in charge of the war have followed the same talking points for 18 years. No matter how the war is going — and especially when it is going badly — they emphasize how they are making progress.
  • Two months later, Marin Strmecki, a civilian adviser to Rumsfeld, gave the Pentagon chief a classified, 40-page report loaded with more bad news. It said “enormous popular discontent is building” against the Afghan government because of its corruption and incompetence. It also said that the Taliban was growing stronger, thanks to support from Pakistan, a U.S. ally.
  • Since then, U.S. generals have almost always preached that the war is progressing well, no matter the reality on the battlefield.
  • The Lessons Learned interviews contain numerous admissions that the government routinely touted statistics that officials knew were distorted, spurious or downright false.
  • A person identified only as a senior National Security Council official said there was constant pressure from the Obama White House and Pentagon to produce figures to show the troop surge of 2009 to 2011 was working, despite hard evidence to the contrary.
  • Even when casualty counts and other figures looked bad, the senior NSC official said, the White House and Pentagon would spin them to the point of absurdity. Suicide bombings in Kabul were portrayed as a sign of the Taliban’s desperation, that the insurgents were too weak to engage in direct combat. Meanwhile, a rise in U.S. troop deaths was cited as proof that American forces were taking the fight to the enemy.
  • “And this went on and on for two reasons,” the senior NSC official said, “to make everyone involved look good, and to make it look like the troops and resources were having the kind of effect where removing them would cause the country to deteriorate.”
Javier E

Opinion | How to Argue Against Identity Politics Without Turning Into a Reactionary - The New York Times

  • I prefer a more neutral phrase, which emphasizes that this ideology focuses on the role that groups play in society and draws on a variety of intellectual influences such as postmodernism, postcolonialism and critical race theory: the “identity synthesis.”
  • There is a way to warn about these views on identity that is thoughtful yet firm, principled yet unapologetic.
  • The first step is to recognize that they constitute a novel ideology — one that, though it has wide appeal for serious reasons, is profoundly misguided.
  • it is also a recipe for zero-sum conflict between different groups. For example, when teachers at a private school in Manhattan tell white middle schoolers to “own” their “European ancestry,” they are more likely to create racists than anti-racists.
  • According to Mr. Bell, the Constitution — and even key Supreme Court rulings like Brown v. Board of Education — cloaked the reality of racial discrimination. The only remedy, he claimed, is to create a society in which the way that the state treats citizens would, whether it comes to the benefits they can access or the school they might attend, explicitly turn on the identity groups to which they belong.
  • To take critical race theory — and the wider ideological tradition it helped to inspire — seriously is to recognize that it explicitly stands in conflict with the views of some of the country’s most storied historical figures. Political leaders from Frederick Douglass to Abraham Lincoln and Martin Luther King Jr. recognized that the Constitution was not enough to protect Black Americans from horrific injustices. But instead of rejecting those documents as irredeemable, they fought to turn their promises into reality.
  • Critical race theory is far more than a determination to think critically about race
  • similarly, the identity synthesis as a whole goes well beyond the recognition that many people will, for good reason, take pride in their identity
  • It claims that categories like race, gender and sexual orientation are the primary prism through which to understand everything about our society, from major historical events to trivial personal interactions. And it encourages us to see one another — and ourselves — as being defined, above anything else, by the identities into which we are born.
  • These kinds of practices encourage complex people to see themselves as defined by external characteristics whose combinations and permutations, however numerous, will never amount to a satisfactory depiction of their innermost selves
  • though few people acknowledge defeat in the middle of an argument, most do shift their worldview over time. Our job is to persuade, not to vilify, those who genuinely believe in the identity synthesis.
  • There is even growing evidence that the rapid adoption of these progressive norms is strengthening the very extremists who pose the most serious threat to democratic institutions
  • Derrick Bell, widely seen as the father of the tradition, cut his teeth as a civil rights lawyer who helped to desegregate hundreds of schools. But when many integrated schools failed to provide Black students with a better education, he came to think of his previous efforts as a dead end. Arguing that American racism would never subside, he rejected the “defunct racial equality ideology” of the civil rights movement,
  • Many people who were initially sympathetic to its goals have since recognized that the identity synthesis presents a real danger. They want to speak out against these ideas, but they are nervous about doing so
  • They fear that opposing the identity synthesis will, inevitably, force them to make common cause with people who don’t recognize the dangers of racism and bigotry, push them onto the “wrong side of history,” or even lead them down the same path as Mr. Weinstein.
  • the first part of that is to recognize that you can be a proud liberal — and an effective opponent of racism — while pushing back against the identity synthesis.
  • critics of the identity synthesis should claim the moral high ground and recognize that their opposition to the identity synthesis is of a piece with a noble tradition that was passed down through the generations from Douglass to Lincoln to King
  • one that has helped America make enormous, if inevitably incomplete, progress toward becoming a more just society. This makes it a little easier to speak from a position of calm confidence.
  • Instead of trying to “own” the most intransigent loudmouths, critics of the identity synthesis should seek to sway the members of this reasonable majority.
  • Mr. Trump has attracted a new group of supporters who are disproportionately nonwhite and comparatively progressive on cultural issues such as immigration reform and trans acceptance, but also perturbed by the influence that the identity synthesis has in mainstream institutions, like the corporate sector.
  • To avoid following the path charted by Mr. Weinstein, opponents of the identity synthesis need to be guided by a clear moral compass of their own. In my case, this compass consists of liberal values like political equality, individual freedom and collective self-determination.
  • For others, it could consist of socialist conviction or Christian faith, of conservative principles or the precepts of Buddhism.
  • what all of us must share is a determination to build a better world.
  • It is time to fight, without shame or hesitation, for a future in which what we have in common truly comes to be more important than what divides us.
Javier E

Opinion | How a 'Golden Era for Large Cities' Might Be Turning Into an 'Urban Doom Loop' - The New York Times - 0 views

  • Scholars are increasingly voicing concern that the shift to working from home, spurred by the coronavirus pandemic, will bring the three-decade renaissance of major cities to a halt, setting off an era of urban decline.
  • They cite an exodus of the affluent, a surge in vacant offices and storefronts and the prospect of declining property taxes and public transit revenues.
  • Insofar as fear of urban crime grows, as the number of homeless people increases, and as the fiscal ability of government to address these problems shrinks, the amenities of city life are very likely to diminish.
  • With respect to crime, poverty and homelessness, Brown argued: “One thing that may occur is that disinvestment in city downtowns will alter the spatial distribution of these elements in cities — i.e. in which neighborhoods or areas of a city is crime more likely, and homelessness more visible. Urban downtowns are often policed such that these visible elements of poverty are pushed to other parts of the city where they will not interfere with commercial activities. But absent these activities, there may be less political pressure to maintain these areas. This is not to say that the overall crime rate or homelessness levels will necessarily increase, but their spatial redistribution may further alter the trajectory of commercial downtowns — and the perception of city crime in the broader public.”
  • “The more dramatic effects on urban geography,” Brown continued, “may be how this changes cities in terms of economic and racial segregation. One urban trend from the last couple of decades is young white middle- and upper-class people living in cities at higher rates than previous generations. But if these groups become less likely to live in cities, leaving a poorer, more disproportionately minority population, this will make metropolitan regions more polarized by race/class.”
  • The damage that even the perception of rising crime can inflict on Democrats was on display in a Nov. 27 article, “Meet the Voters Who Fueled New York’s Seismic Tilt Toward the G.O.P.”: “From Long Island to the Lower Hudson Valley, Republicans running predominantly on crime swept five of six suburban congressional seats, including three that President Biden won handily that encompass some of the nation’s most affluent, well-educated commuter towns.”
  • In big cities like New York and San Francisco we estimate large drops in retail spending because office workers are now coming into city centers typically 2.5 rather than 5 days a week. This is reducing business activity by billions of dollars — less lunches, drinks, dinners and shopping by office workers. This will reduce city hall tax revenues.
  • Public transit systems are facing massive permanent shortfalls as the surge in working from home cuts their revenues but has little impact on costs (as subway systems are mostly a fixed cost). This is leading to a permanent 30 percent drop in transit revenues on the New York Subway, San Francisco BART, etc.
  • These difficulties for cities will not go away anytime soon. Bloom provided data showing strong economic incentives for both corporations and their employees to continue the work-from-home revolution if their jobs allow it:
  • First, “Saved commute time working from home averages about 70 minutes a day, of which about 40 percent (30 minutes) goes into extra work.” Second, “Research finds hybrid working from home increases average productivity around 5 percent and this is growing.” And third, “Employees also really value hybrid working from home, at about the same as an 8 percent pay increase on average.”
  • three other experts in real estate economics: Arpit Gupta of N.Y.U.’s Stern School of Business, and Vrinda Mittal and Van Nieuwerburgh, both of the Columbia Business School. They anticipate disaster in their September 2022 paper, “Work From Home and the Office Real Estate Apocalypse.”
  • “Our research,” Gupta wrote by email, “emphasizes the possibility of an ‘urban doom loop’ by which decline of work in the center business district results in less foot traffic and consumption, which adversely affects the urban core in a variety of ways (less eyes on the street, so more crime; less consumption; less commuting), thereby lowering municipal revenues, and also making it more challenging to provide public goods and services absent tax increases. These challenges will predominantly hit blue cities in the coming years.”
  • the three authors “revalue the stock of New York City commercial office buildings taking into account pandemic-induced cash flow and discount rate effects. We find a 45 percent decline in office values in 2020 and 39 percent in the longer run, the latter representing a $453 billion value destruction.”
  • Extrapolating to all properties in the United States, Gupta, Mittal and Van Nieuwerburgh write, the “total decline in commercial office valuation might be around $518.71 billion in the short-run and $453.64 billion in the long-run.”
  • the share of real estate taxes in N.Y.C.’s budget was 53 percent in 2020, 24 percent of which comes from office and retail property taxes. Given budget balance requirements, the fiscal hole left by declining central business district office and retail tax revenues would need to be plugged by raising tax rates or cutting government spending.
  • Since March 2020, Manhattan has lost 200,000 households, the most of any county in the U.S. Brooklyn (-88,000) and Queens (-51,000) also appear in the bottom 10. The cities of Chicago (-75,000), San Francisco (-67,000), Los Angeles (-64,000 for the city and -136,000 for the county), Washington DC (-33,000), Seattle (-31,500), Houston (-31,000), and Boston (-25,000) make up the rest of the bottom 10.
  • Prior to the pandemic, these ecosystems were designed to function based on huge surges in their daytime population from commuters and tourists. The shock of the sudden loss of a big chunk of this population caused a big disruption in the ecosystem.
  • Just as the pandemic has caused a surge in telework, Loh wrote, “it also caused a huge surge in unsheltered homelessness because of existing flaws in America’s housing system, the end of federally-funded relief measures, a mental health care crisis, and the failure of policies of isolation and confinement to solve the pre-existing homelessness crisis.”
  • The upshot, Loh continued, “is that both the visibility and ratio of people in crisis relative to those engaged in commerce (whether working or shopping) has changed in a lot of U.S. downtowns, which has a big impact on how being downtown ‘feels’ and thus perceptions of downtown.”
  • The nation, Glaeser continued, is “at an unusual confluence of trends which poses dangers for cities similar to those experienced in the 1970s. Event #1 is the rise of Zoom, which makes relocation easier even if it doesn’t mean that face-to-face is going away. Event #2 is a hunger to deal with past injustices, including police brutality, mass incarceration, high housing costs and limited upward mobility for the children of the poor.”
  • Progressive mayors, according to Glaeser, “have a natural hunger to deal with these problems at the local level, but if they try to right injustices by imposing costs on businesses and the rich, then those taxpayers will just leave. I certainly remember New York and Detroit in the 1960s and 1970s, where the dreams of Progressive mayors like John Lindsay and Jerome Patrick Cavanagh ran into fiscal realities.”
  • Richard Florida, a professor of economic analysis and policy at the University of Toronto, stands out as one of the most resolutely optimistic urban scholars. In his August 2022 Bloomberg column, “Why Downtown Won’t Die,” he gave his answer:
  • Great downtowns are not reducible to offices. Even if the office were to go the way of the horse-drawn carriage, the neighborhoods we refer to today as downtowns would endure. Downtowns and the cities they anchor are the most adaptive and resilient of human creations; they have survived far worse. Continual works in progress, they have been rebuilt and remade in the aftermaths of all manner of crises and catastrophes — epidemics and plagues; great fires, floods and natural disasters; wars and terrorist attacks. They’ve also adapted to great economic transformations like deindustrialization a half century ago.
  • Florida wrote that many urban central business districts are “relics of the past, the last gasp of the industrial age organization of knowledge work, the veritable packing and stacking of knowledge workers in giant office towers, made obsolete and unnecessary by new technologies.”
  • “Downtowns are evolving away from centers for work to actual neighborhoods. Jane Jacobs titled her seminal 1957 essay, which led in fact to ‘The Death and Life of Great American Cities,’ ‘Downtown Is for People’ — sounds about right to me.”
  • Despite his optimism, Florida acknowledged in his email that “American cities are uniquely vulnerable to social disorder — a consequence of our policies toward guns and lack of a social safety net. Compounding this is our longstanding educational dilemma, where urban schools generally lack the quality of suburban schools. American cities are simply much less family-friendly than cities in most other parts of the advanced world. So when people have kids they are more or less forced to move out of America’s cities.”
  • What worries me in all of this, in addition to the impact on cities, is the impact on the American economy — on innovation and competitiveness. Our great cities are home to the great clusters of talent and innovation that power our economy. Remote work has many advantages and even leads to improvements in some kinds of knowledge work productivity. But America’s huge lead in innovation, finances, entertainment and culture industries comes largely from its great cities. Innovation and advance in these industries come from the clustering of talent, ideas and knowledge. If that gives out, I worry about our longer-run economic future and living standards.
  • The risk that comes with fiscal distress is clear: If city governments face budget shortfalls and begin to cut back on funding for public transit, policing, and street outreach, for the maintenance of parks, playgrounds, community centers, and schools, and for services for homelessness, addiction, and mental illness, then conditions in central cities will begin to deteriorate.
  • There is reason for both apprehension and hope. Cities across time have proven remarkably resilient and have survived infectious diseases from bubonic plague to cholera to smallpox to polio. The world population, which stands today at eight billion people, is 57 percent urban, and because of the productivity, innovation and inventiveness that stems from the creativity of human beings in groups, the urbanization process is quite likely to continue into the foreseeable future. There appears to be no alternative, so we will have to make it work.
Javier E

Francis Fukuyama: Still the End of History - The Atlantic

  • Over the past year, though, it has become evident that there are key weaknesses at the core of these strong states.
  • The weaknesses are of two sorts. First, the concentration of power in the hands of a single leader at the top all but guarantees low-quality decision making, and over time will produce truly catastrophic consequences
  • Second, the absence of public discussion and debate in “strong” states, and of any mechanism of accountability, means that the leader’s support is shallow, and can erode at a moment’s notice.
  • Over the years, we have seen huge setbacks to the progress of liberal and democratic institutions, with the rise of fascism and communism in the 1930s, or the military coups and oil crises of the 1960s and ’70s. And yet, liberal democracy has endured and come back repeatedly, because the alternatives are so bad. People across varied cultures do not like living under dictatorship, and they value their individual freedom. No authoritarian government presents a society that is, in the long term, more attractive than liberal democracy, and could therefore be considered the goal or endpoint of historical progress.
  • The philosopher Hegel coined the phrase the end of history to refer to the liberal state’s rise out of the French Revolution as the goal or direction toward which historical progress was trending. For many decades after that, Marxists would borrow from Hegel and assert that the true end of history would be a communist utopia. When I wrote an article in 1989 and a book in 1992 with this phrase in the title, I noted that the Marxist version was clearly wrong and that there didn’t seem to be a higher alternative to liberal democracy.
  • setbacks do not mean that the underlying narrative is wrong. None of the proffered alternatives look like they’re doing any better.
  • Liberal democracy will not make a comeback unless people are willing to struggle on its behalf. The problem is that many who grow up living in peaceful, prosperous liberal democracies begin to take their form of government for granted. Because they have never experienced an actual tyranny, they imagine that the democratically elected governments under which they live are themselves evil dictatorships conniving to take away their rights
Javier E

Keir Starmer does have a vision - and it's not New Labour 2.0 | Martin Kettle | The Guardian

  • Starmer’s course has been consistently set towards winning an outright Commons majority in 2024. When he was elected leader, he never settled for the two-term recovery many assumed he would need after the hammering Labour received at the 2019 election
  • Equally important is that the progressive prospectus he intends to offer is a national one, based on reunifying the class base of the Labour electorate rather than accepting its irresistible divergence into a delta of different political parties and traditions.
  • This is indeed Starmer’s aim, and it is remarkably bold. It flies in the face of a considerable amount of conventional wisdom about 21st-century British electoral behaviour.
  • To seek to do it on the basis that Labour can again be a national party in geographical and class terms, winning working- and middle-class support alike, deserves that description even more.
  • It challenges the view that the dominant progressive parties of the industrial era must accommodate themselves to operating within a more pluralist party system and amid the looser class loyalties of the new millennium
  • it says that such segmentation is neither inevitable nor even desirable, providing that the party remains a broad church and – crucially – avoids foolish accommodations with the activist left.
  • it is certainly not New Labour 2.0 either, and calling it so does not make it so. Indeed the Starmer strategy of focusing on working-class support is at odds with one of New Labour’s most central tenets.
  • Tony Blair and Gordon Brown believed that Labour would prosper in the modern era only by reducing its dependence on working-class voters and the unions and by becoming a middle-class, progressive party, like the US Democrats.
  • This is actually quite a traditional, and almost old-fashioned, view of Labour’s role. Starmer’s aspiration to make Labour a national and essentially social democratic party again is one that Clement Attlee or Harold Wilson would have understood.
Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
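To make that mechanism concrete, here is a minimal sketch of prediction-driven learning at toy scale, in PyTorch. The model is shown each word and penalized for mispredicting the next one, and gradient descent supplies the "little adjustments" described above. Everything in it (the corpus, the model shape, the sizes) is invented for illustration; it is a bigram-level toy, not a description of how GPT-class models are actually built.

```python
# Toy illustration of prediction-driven learning: the model predicts the
# next word, and each gradient step is one of the "little adjustments"
# that slowly shape its internal model of the text. Corpus, sizes, and
# names are all invented for illustration.
import torch
import torch.nn as nn

corpus = "the cat sat on the mat . the dog sat on the rug .".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
tokens = torch.tensor([idx[w] for w in corpus])

class TinyLM(nn.Module):
    def __init__(self, vocab_size, dim=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)  # geometric word representations
        self.out = nn.Linear(dim, vocab_size)       # scores for the next word
    def forward(self, x):
        return self.out(self.embed(x))

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=0.05)

for step in range(200):
    logits = model(tokens[:-1])                             # predict word t+1 from word t
    loss = nn.functional.cross_entropy(logits, tokens[1:])  # penalty for misprediction
    opt.zero_grad()
    loss.backward()   # compute the adjustments
    opt.step()        # apply them to the weights
```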
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs — the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
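Li's finding came from probing: fitting a simple classifier that tries to read the state of a board square straight out of the network's hidden activations. The sketch below shows the general technique only; the arrays, shapes, and names are stand-ins, not Li's actual model or data.

```python
# Sketch of a probe: can a board square's state be read out of a model's
# hidden activations by a simple classifier? The data below are random
# stand-ins; in the real experiment the activations come from the
# Othello-playing model and the labels from the true board state.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_positions, hidden_dim = 5000, 512

hidden_states = rng.normal(size=(n_positions, hidden_dim))  # hypothetical activations
square_state = rng.integers(0, 3, size=n_positions)         # 0=empty, 1=black, 2=white

X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, square_state, test_size=0.2)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# With random stand-ins this hovers near chance (~0.33). Li's result was
# that probes on the real activations score far above chance, i.e., the
# model had internally formed a representation of the board.
print("probe accuracy:", probe.score(X_te, y_te))
```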
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
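That memorize-then-generalize pattern (often called "grokking") can be reproduced with a toy network on modular addition. The sketch below loosely follows the published setups; the modulus, architecture, and hyperparameters are assumptions, and the exact transition point varies from run to run.

```python
# Toy grokking sketch: train on half of all (a + b) mod P problems and
# watch train accuracy saturate long before held-out accuracy catches up.
import torch
import torch.nn as nn

P = 97  # modulus; small enough to enumerate every problem
pairs = torch.cartesian_prod(torch.arange(P), torch.arange(P))  # all (a, b)
labels = (pairs[:, 0] + pairs[:, 1]) % P
perm = torch.randperm(len(pairs))
train_idx, test_idx = perm[: len(perm) // 2], perm[len(perm) // 2 :]

model = nn.Sequential(
    nn.Embedding(P, 64),  # embed the two operands
    nn.Flatten(),         # (batch, 2, 64) -> (batch, 128)
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, P),    # predict the sum mod P
)
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
loss_fn = nn.CrossEntropyLoss()

def accuracy(idx: torch.Tensor) -> float:
    with torch.no_grad():
        return (model(pairs[idx]).argmax(-1) == labels[idx]).float().mean().item()

for step in range(5001):
    opt.zero_grad()
    loss_fn(model(pairs[train_idx]), labels[train_idx]).backward()
    opt.step()
    if step % 1000 == 0:
        print(step, round(accuracy(train_idx), 3), round(accuracy(test_idx), 3))
```

The heavy weight decay matters: in the grokking literature it is thought to be what eventually makes pure memorization more expensive than representing the rule.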
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—beginning with a global ID created by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

What's Left for Tech? - Freddie deBoer - 0 views

  • I gave a talk to a class at Northeastern University earlier this month, concerning technology, journalism, and the cultural professions. The students were bright and inquisitive, though they also reflected the current dynamic in higher ed overall - three quarters of the students who showed up were women, and the men who were there almost all sat moodily in the back and didn’t engage at all while their female peers took notes and asked questions. I know there’s a lot of criticism of the “crisis for boys” narrative, but it’s often hard not to believe in it.
  • we’re actually living in a period of serious technological stagnation - that despite our vague assumption that we’re entitled to constant remarkable scientific progress, humanity has been living with real and valuable but decidedly small-scale technological growth for the past 50 or 60 or 70 years, after a hundred or so years of incredible growth from 1860ish to 1960ish, give or take a decade or two on either side
  • I will recommend Robert J. Gordon’s The Rise & Fall of American Growth for an exhaustive academic (and primarily economic) argument to this effect. Gordon persuasively demonstrates that from the mid-19th to mid-20th century, humanity leveraged several unique advancements that had remarkably outsized consequences for how we live and changed our basic existence in a way that never happened before and hasn’t since. Principal among these advances were the process of refining fossil fuels and using them to power all manner of devices and vehicles, the ability to harness electricity and use it to safely provide energy to homes (which practically speaking required the first development), and a revolution in medicine that came from the confluence of long-overdue acceptance of germ theory and basic hygienic principles, the discovery and refinement of antibiotics, and the modernization of vaccines.
  • ...24 more annotations...
  • The complication that Gordon and other internet-skeptical researchers like Ha-Joon Chang have introduced is to question just how meaningful those digital technologies have been for a) economic growth and b) the daily experience of human life. It can be hard for people who stare at their phones all day to consider the possibility that digital technology just isn’t that important. But ask yourself: if you were forced to live either without your iPhone or without indoor plumbing, could you really choose the latter?
  • Certainly the improvements in medical care in the past half-century feel very important to me as someone living now, and one saved life has immensely emotional and practical importance for many people. What’s more, advances in communication sciences and computer technology genuinely have been revolutionary; going from the Apple II to the iPhone in 30 years is remarkable.
  • we can always debate what constitutes major or revolutionary change
  • The question is, who in 2023 ever says to themselves “smartphone cameras just aren’t good enough”?
  • continued improvements in worldwide mortality in the past 75 years have been a matter of spreading existing treatments and practices to the developing world, rather than the result of new science.
  • When you got your first smartphone, and you thought about what the future would hold, were your first thoughts about more durable casing? I doubt it. I know mine weren’t.
  • Why is Apple going so hard on TITANIUM? Well, where else does smartphone development have to go?
  • The elephant in the room, obviously, is AI.
  • The processors will get faster. They’ll add more RAM. They’ll generally have more power. But for what? To run what? To do what? To run the games that we were once told would replace our PlayStation and Xbox games, but didn’t?
  • Smartphone development has been a good object lesson in the reality that cool ideas aren’t always practical or worthwhile
  • And as impressive as some new developments in medicine have been, there’s no question that in simple terms of reducing preventable deaths, the advances seen from 1900 to 1950 dwarf those seen since.
  • We developed this technology for typewriters and terminals and desktops, it Just Works, and there’s no reason to try and “disrupt” it
  • Instead of one device to rule them all, we developed a norm of syncing across devices and cloud storage, which works well. (I always thought it was pretty funny, and very cynical, how Apple went from calling the iPhone an everything device to later marketing the iPad and iWatch.) In other words, we developed a software solution rather than a hardware one
  • I will always give it up to Google Maps and portable GPS technology; that’s genuinely life-altering, probably the best argument for smartphones as a transformative technology. But let me ask you, honestly: do you still go out looking for apps, with the assumption that you’re going to find something that really changes your life in a significant way?
  • some people are big VR partisans. I’m deeply skeptical. The brutal failure of Meta’s new “metaverse” is just the latest example of a decades-long resistance to the technology among consumers
  • maybe I just don’t want VR to become popular, given the potential ugly social consequences. If you thought we had an incel problem now….
  • There was, in those breathless early days, a lot of talk about how people simply wouldn’t own laptops anymore, how your phone would do everything. But it turns out that, for one thing, the keyboard remains an input device of unparalleled convenience and versatility.
  • It’s not artificial intelligence. It thinks nothing like a human thinks. There is no reason whatsoever to believe that it has evolved sentience or consciousness. There is nothing at present that these systems can do that human beings simply can’t. But they can potentially do some things in the world of bits faster and cheaper than human beings, and that might have some meaningful consequences. But there is no reasonable, responsible claim to be made that these systems are imminent threats to conventional human life as currently lived, whether for good or for bad. IMO.
  • Let’s mutually agree to consider immediate plausible human technological progress outside of AI or “AI.” What’s coming? What’s plausible?
  • The most consequential will be our efforts to address climate change, and we have the potential to radically change how we generate electricity, although electrifying heating and transportation are going to be harder than many seem to think, while solar and wind power have greater ecological costs than people want to admit. But, yes, that’s potentially very very meaningful
  • It’s another example of how technological growth will still leave us with continuity rather than with meaningful change.
  • What I kept thinking was: privatizing space… to do what? A manned Mars mission might happen in my lifetime, which is cool. But a Mars colony is a distant dream
  • This is why I say we live in the Big Normal, the Big Boring, the Forever Now. We are tragic people: we were born just too late to experience the greatest flowering of human development the world has ever seen. We do, however, enjoy the rather hefty consolation prize that we get to live with the affordances of that period, such as not dying of smallpox.
  • I think we all need to learn to appreciate what we have now, in the world as it exists, at the time in which we actually live. Frankly, I don’t think we have any other choice.
Javier E

Ukraine Crisis: Putin Destroyed 3 Myths of America's Global Order - Bloomberg - 0 views

  • Every era has a figure who strips away its pleasant illusions about where the world is headed. This is what makes Vladimir Putin the most important person of the still-young 21st century.
  • Putin has done more than any other person to remind us that the world order we have taken for granted is remarkably fragile. In doing so, one hopes, he may have persuaded the chief beneficiaries of that order to get serious about saving it.
  • In the early 19th century, a decade of Napoleonic aggression upended a widespread belief that commerce and Enlightenment ideas were ushering in a new age of peace.
  • ...16 more annotations...
  • In the 20th century, a collection of fascist and communist leaders showed how rapidly the world could descend into the darkness of repression and aggression.
  • In 2007, as Western intellectuals were celebrating the triumph of the liberal international order, Putin warned that he was about to start rolling that order back. In a scorching speech at the Munich Security Conference, Putin denounced the spread of liberal values and American influence. He declared that Russia would not forever live with a system that constrained its influence and threatened its increasingly illiberal regime.
  • Putin’s policies have assailed three core tenets of post-Cold War optimism about the trajectory of global affairs.
  • The first was a sunny assumption about the inevitability of democracy’s advance.
  • To see Putin publicly humiliate his own intelligence chief on television last week was to realize that the world’s vastest country, with one of its two largest nuclear arsenals, is now the fiefdom of a single man.  
  • He has contributed, through cyberattacks, political influence operations and other subversion, to a global “democratic recession” that has now lasted more than 15 years.
  • Putin has also shattered a second tenet of the post-Cold War mindset: the idea that great-power rivalry was over and that violent, major conflict had thus become passe.
  • Violence, Putin has reminded us, is a terrible but sadly normal feature of world affairs. Its absence reflects effective deterrence, not irreversible moral progress.
  • This relates to a third shibboleth Putin has challenged — the idea that history runs in a single direction.
  • During the 1990s, the triumph of democracy, great-power peace and Western influence seemed irreversible. The Clinton administration called countries that bucked these trends “backlash states,” the idea being that they could only offer atavistic, doomed resistance to the progression of history.
  • But history, as Putin has shown us, doesn’t bend on its own.
  • Aggression can succeed. Democracies can be destroyed by determined enemies.
  • “International norms” are really just rules made and enforced by states that combine great power with great determination.
  • Which means that history is a constant struggle to prevent the world from being thrust back into patterns of predation that it can never permanently escape.
  • Most important, Putin’s gambit is producing an intellectual paradigm shift — a recognition that this war could be a prelude to more devastating conflicts unless the democratic community severely punishes aggression in this case and more effectively deters it in others.
  • he may be on the verge of a rude realization of his own: Robbing one’s enemies of their complacency is a big mistake.
Javier E

The Only Way to Deal With the Threat From AI? Shut It Down | Time - 0 views

  • An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
  • This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin
  • The rule that most people aware of these issues would have endorsed 50 years earlier was that if an AI system can speak fluently and says it’s self-aware and demands human rights, that ought to be a hard stop on people just casually owning that AI and using it past that point. We already blew past that old line in the sand. And that was probably correct; I agree that current AIs are probably just imitating talk of self-awareness from their training data. But I mark that, with how little insight we have into these systems’ internals, we do not actually know.
  • ...25 more annotations...
  • The key issue is not “human-competitive” intelligence (as the open letter puts it); it’s what happens after AI gets to smarter-than-human intelligence. Key thresholds there may not be obvious, we definitely can’t calculate in advance what happens when, and it currently seems imaginable that a research lab would cross critical lines without noticing.
  • Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
  • It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
  • Absent that caring, we get “the AI does not love you, nor does it hate you, and you are made of atoms it can use for something else.”
  • Without that precision and preparation, the most likely outcome is AI that does not do what we want, and does not care for us nor for sentient life in general. That kind of caring is something that could in principle be imbued into an AI but we are not ready and do not currently know how.
  • The likely result of humanity facing down an opposed superhuman intelligence is a total loss
  • To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
  • There’s no proposed plan for how we could do any such thing and survive. OpenAI’s openly declared intention is to make some future AI do our AI alignment homework. Just hearing that this is the plan ought to be enough to get any sensible person to panic. The other leading AI lab, DeepMind, has no plan at all.
  • An aside: None of this danger depends on whether or not AIs are or can be conscious; it’s intrinsic to the notion of powerful cognitive systems that optimize hard and calculate outputs that meet sufficiently complicated outcome criteria.
  • I didn’t also mention that we have no idea how to determine whether AI systems are aware of themselves—since we have no idea how to decode anything that goes on in the giant inscrutable arrays—and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
  • I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it.
  • the thing about trying this with superhuman intelligence is that if you get that wrong on the first try, you do not get to learn from your mistakes, because you are dead. Humanity does not learn from the mistake and dust itself off and try again, as in other challenges we’ve overcome in our history, because we are all gone.
  • If we held anything in the nascent field of Artificial General Intelligence to the lesser standards of engineering rigor that apply to a bridge meant to carry a couple of thousand cars, the entire field would be shut down tomorrow.
  • We are not on course to be prepared in any reasonable time window. There is no plan. Progress in AI capabilities is running vastly, vastly ahead of progress in AI alignment or even progress in understanding what the hell is going on inside those systems
  • Many researchers working on these systems think that we’re plunging toward a catastrophe, with more of them daring to say it in private than in public; but they think that they can’t unilaterally stop the forward plunge, that others will go on even if they personally quit their jobs.
  • This is a stupid state of affairs, and an undignified way for Earth to die, and the rest of humanity ought to step in at this point and help the industry solve its collective action problem.
  • When the insider conversation is about the grief of seeing your daughter lose her first tooth, and thinking she’s not going to get a chance to grow up, I believe we are past the point of playing political chess about a six-month moratorium.
  • The moratorium on new large training runs needs to be indefinite and worldwide. There can be no exceptions, including for governments or militaries. If the policy starts with the U.S., then China needs to see that the U.S. is not seeking an advantage but rather trying to prevent a horrifically dangerous technology which can have no true owner and which will kill everyone in the U.S. and in China and on Earth
  • Here’s what would actually need to be done:
  • Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs
  • Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms
  • Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
  • Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool
  • Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
  • when your policy ask is that large, the only way it goes through is if policymakers realize that if they conduct business as usual, and do what’s politically easy, that means their own kids are going to die too.
Javier E

The Contradictions of Sam Altman, the AI Crusader Behind ChatGPT - WSJ - 0 views

  • Mr. Altman said he fears what could happen if AI is rolled out into society recklessly. He co-founded OpenAI eight years ago as a research nonprofit, arguing that it’s uniquely dangerous to have profits be the main driver of developing powerful AI models.
  • He is so wary of profit as an incentive in AI development that he has taken no direct financial stake in the business he built, he said—an anomaly in Silicon Valley, where founders of successful startups typically get rich off their equity. 
  • His goal, he said, is to forge a new world order in which machines free people to pursue more creative work. In his vision, universal basic income—the concept of a cash stipend for everyone, no strings attached—helps compensate for jobs replaced by AI. Mr. Altman even thinks that humanity will love AI so much that an advanced chatbot could represent “an extension of your will.”
  • ...44 more annotations...
  • The Tesla Inc. CEO tweeted in February that OpenAI had been founded as an open-source nonprofit “to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
  • Backers say his brand of social-minded capitalism makes him the ideal person to lead OpenAI. Others, including some who’ve worked for him, say he’s too commercially minded and immersed in Silicon Valley thinking to lead a technological revolution that is already reshaping business and social life. 
  • In the long run, he said, he wants to set up a global governance structure that would oversee decisions about the future of AI and gradually reduce the power OpenAI’s executive team has over its technology. 
  • OpenAI researchers soon concluded that the most promising path to achieve artificial general intelligence rested in large language models, or computer programs that mimic the way humans read and write. Such models were trained on large volumes of text and required a massive amount of computing power that OpenAI wasn’t equipped to fund as a nonprofit, according to Mr. Altman. 
  • In its founding charter, OpenAI pledged to abandon its research efforts if another project came close to building AGI before it did. The goal, the company said, was to avoid a race toward building dangerous AI systems fueled by competition and instead prioritize the safety of humanity.
  • While running Y Combinator, Mr. Altman began to nurse a growing fear that large research labs like DeepMind, purchased by Google in 2014, were creating potentially dangerous AI technologies outside the public eye. Mr. Musk has voiced similar concerns of a dystopian world controlled by powerful AI machines. 
  • Messrs. Altman and Musk decided it was time to start their own lab. Both were part of a group that pledged $1 billion to the nonprofit, OpenAI Inc. 
  • Mr. Altman said he doesn’t necessarily need to be first to develop artificial general intelligence, a world long imagined by researchers and science-fiction writers where software isn’t just good at one specific task like generating text or images but can understand and learn as well or better than a human can. He instead said OpenAI’s ultimate mission is to build AGI, as it’s called, safely.
  • “We didn’t have a visceral sense of just how expensive this project was going to be,” he said. “We still don’t.”
  • Tensions also grew with Mr. Musk, who became frustrated with the slow progress and pushed for more control over the organization, people familiar with the matter said. 
  • OpenAI executives ended up reviving an unusual idea that had been floated earlier in the company’s history: creating a for-profit arm, OpenAI LP, that would report to the nonprofit parent. 
  • Reid Hoffman, a LinkedIn co-founder who advised OpenAI at the time and later served on the board, said the idea was to attract investors eager to make money from the commercial release of some OpenAI technology, accelerating OpenAI’s progress
  • “You want to be there first and you want to be setting the norms,” he said. “That’s part of the reason why speed is a moral and ethical thing here.”
  • The decision further alienated Mr. Musk, the people familiar with the matter said. He parted ways with OpenAI in February 2018. 
  • Mr. Musk announced his departure in a company all-hands, former employees who attended the meeting said. Mr. Musk explained that he thought he had a better chance at creating artificial general intelligence through Tesla, where he had access to greater resources, they said.
  • OpenAI said that it received about $130 million in contributions from the initial $1 billion pledge, but that further donations were no longer needed after the for-profit’s creation. Mr. Musk has tweeted that he donated around $100 million to OpenAI. 
  • Mr. Musk’s departure marked a turning point. Later that year, OpenAI leaders told employees that Mr. Altman was set to lead the company. He formally became CEO and helped complete the creation of the for-profit subsidiary in early 2019.
  • A young researcher questioned whether Mr. Musk had thought through the safety implications, the former employees said. Mr. Musk grew visibly frustrated and called the intern a “jackass,” leaving employees stunned, they said. It was the last time many of them would see Mr. Musk in person.  
  • In the meantime, Mr. Altman began hunting for investors. His break came at Allen & Co.’s annual conference in Sun Valley, Idaho in the summer of 2018, where he bumped into Satya Nadella, the Microsoft CEO, on a stairwell and pitched him on OpenAI. Mr. Nadella said he was intrigued. The conversations picked up that winter.
  • “I remember coming back to the team after and I was like, this is the only partner,” Mr. Altman said. “They get the safety stuff, they get artificial general intelligence. They have the capital, they have the ability to run the compute.”   
  • Mr. Altman disagreed. “The unusual thing about Microsoft as a partner is that it let us keep all the tenets that we think are important to our mission,” he said, including profit caps and the commitment to assist another project if it got to AGI first. 
  • Some employees still saw the deal as a Faustian bargain. 
  • OpenAI’s lead safety researcher, Dario Amodei, and his lieutenants feared the deal would allow Microsoft to sell products using powerful OpenAI technology before it was put through enough safety testing,
  • They felt that OpenAI’s technology was far from ready for a large release—let alone with one of the world’s largest software companies—worrying it could malfunction or be misused for harm in ways they couldn’t predict.  
  • Mr. Amodei also worried the deal would tether OpenAI’s ship to just one company—Microsoft—making it more difficult for OpenAI to stay true to its founding charter’s commitment to assist another project if it got to AGI first, the former employees said.
  • Microsoft initially invested $1 billion in OpenAI. While the deal gave OpenAI its needed money, it came with a hitch: exclusivity. OpenAI agreed to only use Microsoft’s giant computer servers, via its Azure cloud service, to train its AI models, and to give the tech giant the sole right to license OpenAI’s technology for future products.
  • In a recent investment deck, Anthropic said it was “committed to large-scale commercialization” to achieve the creation of safe AGI, and that it “fully committed” to a commercial approach in September. The company was founded as an AI safety and research company and said at the time that it might look to create commercial value from its products. 
  • Mr. Altman “has presided over a 180-degree pivot that seems to me to be only giving lip service to concern for humanity,” he said. 
  • “The deal completely undermines those tenets to which they secured nonprofit status,” said Gary Marcus, an emeritus professor of psychology and neural science at New York University who co-founded a machine-learning company
  • The cash turbocharged OpenAI’s progress, giving researchers access to the computing power needed to improve large language models, which were trained on billions of pages of publicly available text. OpenAI soon developed a more powerful language model called GPT-3 and then sold developers access to the technology in June 2020 through packaged lines of code known as application program interfaces, or APIs. 
  • Mr. Altman and Mr. Amodei clashed again over the release of the API, former employees said. Mr. Amodei wanted a more limited and staged release of the product to help reduce publicity and allow the safety team to conduct more testing on a smaller group of users, former employees said. 
  • Mr. Amodei left the company a few months later along with several others to found a rival AI lab called Anthropic. “They had a different opinion about how to best get to safe AGI than we did,” Mr. Altman said.
  • Anthropic has since received more than $300 million from Google this year and released its own AI chatbot called Claude in March, which is also available to developers through an API. 
  • Mr. Altman shared the contract with employees as it was being negotiated, hosting all-hands and office hours to allay concerns that the partnership contradicted OpenAI’s initial pledge to develop artificial intelligence outside the corporate world, the former employees said. 
  • In the three years after the initial deal, Microsoft invested a total of $3 billion in OpenAI, according to investor documents. 
  • More than one million users signed up for ChatGPT within five days of its November release, a speed that surprised even Mr. Altman. It followed the company’s introduction of DALL-E 2, which can generate sophisticated images from text prompts.
  • By February, it had reached 100 million users, according to analysts at UBS, the fastest pace by a consumer app in history to reach that mark.
  • Mr. Altman’s close associates praise his ability to balance OpenAI’s priorities. No one better navigates between the “Scylla of misplaced idealism” and the “Charybdis of myopic ambition,” Mr. Thiel said.
  • Mr. Altman said he delayed the release of the latest version of its model, GPT-4, from last year to March to run additional safety tests. Users had reported some disturbing experiences with the model, integrated into Bing, where the software hallucinated—meaning it made up answers to questions it didn’t know. It issued ominous warnings and made threats. 
  • “The way to get it right is to have people engage with it, explore these systems, study them, to learn how to make them safe,” Mr. Altman said.
  • After Microsoft’s initial investment is paid back, it would capture 49% of OpenAI’s profits until the profit cap, up from 21% under prior arrangements, the documents show. OpenAI Inc., the nonprofit parent, would get the rest.
  • He has put almost all his liquid wealth in recent years in two companies. He has put $375 million into Helion Energy, which is seeking to create carbon-free energy from nuclear fusion and is close to creating “legitimate net-gain energy in a real demo,” Mr. Altman said.
  • He has also put $180 million into Retro, which aims to add 10 years to the human lifespan through “cellular reprogramming, plasma-inspired therapeutics and autophagy,” or the reuse of old and damaged cell parts, according to the company. 
  • He noted how much easier these problems are, morally, than AI. “If you’re making nuclear fusion, it’s all upside. It’s just good,” he said. “If you’re making AI, it is potentially very good, potentially very terrible.” 
Javier E

Opinion | Amid Suffering in 2023, Humans Still Made Progress - The New York Times - 0 views

  • In some ways, 2023 may still have been the best year in the history of humanity.
  • Just about the worst calamity that can befall a human is to lose a child, and historically, almost half of children worldwide died before they reached the age of 15. That share has declined steadily since the 19th century, and the United Nations Population Division projects that in 2023 a record low was reached in global child mortality, with just 3.6 percent of newborns dying by the age of 5.
  • It still means that about 4.9 million children died this year — but that’s a million fewer than died as recently as 2016.
  • ...8 more annotations...
  • consider extreme poverty. It too has reached a record low, affecting a bit more than 8 percent of humans worldwide,
  • All these figures are rough, but it seems that about 100,000 people are now emerging from extreme poverty each day — so they are better able to access clean water, to feed and educate their children, to buy medicines.
  • If we want to tackle problems — from the war in Gaza to climate change — then it helps to know that progress is possible.
  • Two horrifying diseases are close to eradication: polio and Guinea worm disease. Only 12 cases of wild poliovirus have been reported worldwide in 2023 (there were also small numbers of vaccine-derived polio, a secondary problem), and 2024 may be the last year in which wild polio is transmitted
  • Meanwhile, only 11 cases of Guinea worm disease were reported in humans in the first nine months of 2023.
  • the United States government recently approved new CRISPR gene-editing techniques to treat sickle cell disease — and the hope is that similar approaches can transform the treatment of cancer and other ailments
  • Another landmark: New vaccines have been approved for R.S.V. and malaria
  • Blinding trachoma is also on its way out in several countries. A woman suffering from trachoma in Mali once told me that the worst part of the disease wasn’t the blindness but rather the excruciating pain, which she said was as bad as childbirth but lasted for years. So I’m thrilled that Mali and 16 other countries have eliminated trachoma.
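  • A quick arithmetic check of the mortality figures above (an editor’s back-of-envelope sketch, not from the column; the roughly 134 million births worldwide in 2023 is an approximate UN estimate):
    \[ 0.036 \times 134\ \text{million births} \approx 4.8\ \text{million deaths before age 5}, \]
    which is consistent, within rounding, with the “about 4.9 million” cited above.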
Javier E

AI scientist Ray Kurzweil: 'We are going to expand intelligence a millionfold by 2045' | Artificial intelligence (AI) | The Guardian - 0 views

  • American computer scientist and techno-optimist Ray Kurzweil is a long-serving authority on artificial intelligence (AI). His bestselling 2005 book, The Singularity Is Near, sparked imaginations with sci-fi-like predictions that computers would reach human-level intelligence by 2029 and that we would merge with computers and become superhuman around 2045, which he called “the Singularity”. Now, nearly 20 years on, Kurzweil, 76, has a sequel, The Singularity Is Nearer, and his predictions no longer seem so wacky.
  • Your 2029 and 2045 projections haven’t changed…I have stayed consistent. So 2029, both for human-level intelligence and for artificial general intelligence (AGI) – which is a little bit different. Human-level intelligence generally means AI that has reached the ability of the most skilled humans in a particular domain and by 2029 that will be achieved in most respects. (There may be a few years of transition beyond 2029 where AI has not surpassed the top humans in a few key skills like writing Oscar-winning screenplays or generating deep new philosophical insights, though it will.) AGI means AI that can do everything that any human can do, but to a superior level. AGI sounds more difficult, but it’s coming at the same time.
  • Why write this book? The Singularity Is Near talked about the future, but that was 20 years ago, when people didn’t know what AI was. It was clear to me what would happen, but it wasn’t clear to everybody. Now AI is dominating the conversation. It is time to take a look again both at the progress we’ve made – large language models (LLMs) are quite delightful to use – and the coming breakthroughs.
  • It is hard to imagine what this would be like, but it doesn’t sound very appealing… Think of it like having your phone, but in your brain. If you ask a question your brain will be able to go out to the cloud for an answer similar to the way you do on your phone now – only it will be instant, there won’t be any input or output issues, and you won’t realise it has been done (the answer will just appear). People do say “I don’t want that”: they thought they didn’t want phones either!
  • The most important driver is the exponential growth in the amount of computing power for the price in constant dollars. We are doubling price-performance every 15 months. LLMs just began to work two years ago because of the increase in computation.
  • What’s missing currently to bring AI to where you are predicting it will be in 2029? One thing is more computing power – and that’s coming. That will enable improvements in contextual memory, common sense reasoning and social interaction, which are all areas where deficiencies remain.
  • LLM hallucinations [where they create nonsensical or inaccurate outputs] will become much less of a problem, certainly by 2029 – they already happen much less than they did two years ago. The issue occurs because they don’t have the answer, and they don’t know that. They look for the best thing, which might be wrong or not appropriate. As AI gets smarter, it will be able to understand its own knowledge more precisely and accurately report to humans when it doesn’t know.
  • What exactly is the Singularity? Today, we have one brain size which we can’t go beyond to get smarter. But the cloud is getting smarter and it is growing really without bounds. The Singularity, which is a metaphor borrowed from physics, will occur when we merge our brain with the cloud. We’re going to be a combination of our natural intelligence and our cybernetic intelligence and it’s all going to be rolled into one. Making it possible will be brain-computer interfaces which ultimately will be nanobots – robots the size of molecules – that will go noninvasively into our brains through the capillaries. We are going to expand intelligence a millionfold by 2045 and it is going to deepen our awareness and consciousness. (See the back-of-envelope check at the end of this list.)
  • Why should we believe your dates? I’m really the only person that predicted the tremendous AI interest that we’re seeing today. In 1999 people thought that would take a century or more. I said 30 years and look what we have.
  • I have a chapter on perils. I’ve been involved with trying to find the best way to move forward and I helped to develop the Asilomar AI Principles [a 2017 non-legally binding set of guidelines for responsible AI development]
  • All the major companies are putting more effort into making sure their systems are safe and align with human values than they are into creating new advances, which is positive.
  • Not everyone is likely to be able to afford the technology of the future you envisage. Does technological inequality worry you? Being wealthy allows you to afford these technologies at an early point, but also one where they don’t work very well. When [mobile] phones were new they were very expensive and also did a terrible job. They had access to very little information and didn’t talk to the cloud. Now they are very affordable and extremely useful. About three quarters of people in the world have one. So it’s going to be the same thing here: this issue goes away over time.
  • The book looks in detail at AI’s job-killing potential. Should we be worried? Yes, and no. Certain types of jobs will be automated and people will be affected. But new capabilities also create new jobs. A job like “social media influencer” didn’t make sense, even 10 years ago. Today we have more jobs than we’ve ever had and US average personal income per hour worked is 10 times what it was 100 years ago adjusted to today’s dollars. Universal basic income will start in the 2030s, which will help cushion the harms of job disruptions. It won’t be adequate at that point but over time it will become so.
  • Everything is progressing exponentially: not only computing power but our understanding of biology and our ability to engineer at far smaller scales. In the early 2030s we can expect to reach longevity escape velocity where every year of life we lose through ageing we get back from scientific progress. And as we move past that we’ll actually get back more years.
  • What is your own plan for immortality? My first plan is to stay alive, therefore reaching longevity escape velocity. I take about 80 pills a day to help keep me healthy. Cryogenic freezing is the fallback. I’m also intending to create a replicant of myself [an afterlife AI avatar], which is an option I think we’ll all have in the late 2020s
  • I did something like that with my father, collecting everything that he had written in his life, and it was a little bit like talking to him. [My replicant] will be able to draw on more material and so represent my personality more faithfully.
  • What should we be doing now to best prepare for the future? It is not going to be us versus AI: AI is going inside ourselves. It will allow us to create new things that weren’t feasible before. It’ll be a pretty fantastic future.
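  • A back-of-envelope check of the millionfold claim (an editor’s sketch: it takes the 15-month price-performance doubling quoted above at face value and assumes the count runs from roughly 2020 to 2045):
    \[ \frac{25\ \text{years} \times 12\ \text{months/year}}{15\ \text{months/doubling}} = 20\ \text{doublings}, \qquad 2^{20} = 1{,}048{,}576 \approx 10^{6}. \]
    Twenty sustained doublings deliver almost exactly a millionfold increase, which appears to be where the 2045 figure comes from.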
Javier E

The Jewish Progressive Super PAC Behind 'Wake The F*** Up' | TPM2012 - 1 views

  • one small super PAC has been generating impressive amounts of attention by focusing almost exclusively on online videos.
  • Their most recent video features the aforementioned Jackson urging a family of 2008 Obama supporters via storybook rhyme to “wake the f*** up!” and volunteer again. It’s garnered 1.5 million views on YouTube and likely much more via an embedded Yahoo version where it first debuted.
  • Written by the bestselling author of Go the F*** to Sleep, Adam Mansbach, and directed by Boaz Yakin (Remember The Titans), the short film contains all the hallmarks of JCER’s viral formula. Well-known actor + obscenity + progressive message = Internet hit.
  • “I think that video was successful because we didn’t just go for the profanity or shock value but we actually told a story for our target audience: Obama voters who are less enthusiastic this year,” Moore said. “There are not only a lot of web videos — literally millions — that don’t get traction, but a lot with celebrities.”
  • Moore estimates their total spending this cycle will top out between $300,000 and $400,000.
  • They also produce fewer videos, banking on just a handful of high-production-value clips to carry the day. But they are consistent. Their last big video before “Wake The F*** Up,” an awareness campaign about voter ID laws, scored well over 2 million hits on YouTube as well.
Javier E

Together We Stand, Divided We Fall - Clive Crook - The Atlantic - 1 views

  • I criticize Obama's failure to seize the center ground of U.S. politics. This was partly a choice, in my view -- reflecting the fact that (unlike Bill Clinton) he's a progressive and not a centrist by instinct. But it was partly also a reaction to the determination of the GOP in Congress to defeat his every initiative. Ezra Klein says the Republicans' give-no-quarter strategy worked; similarly, E.J. Dionne says Democrats were more willing to compromise than the GOP. I agree with both points: When I criticize Obama, it's not because I think the GOP is blameless, but rather for the reverse: Obama failed to exploit the opportunity that the Republicans' intransigence afforded him. Yes, his opponents were reckless and unreasonable. Yes, they were moving abruptly to the right. Tactically speaking, that was Obama's chance. But to make the most of it, he had to plant his flag in the center the GOP was vacating. Instead, after Scott Brown, even after the midterms, he let Democrats in Congress get on with it and tacked left -- repeatedly casting his disagreement with the Republicans as a contest between his own (not especially popular) progressive vision and their militantly conservative vision, rather than between the commonsense pragmatism the country longs for and the other side's unreasoning extremism. That was the contrast he could and should have underscored. When I say he blew it, that's what I mean.
Javier E

Obama's Big Deal - NYTimes.com - 0 views

  • Vice President Biden famously pronounced the reform a “big something deal” — except that he didn’t use the word “something.” And he was right.
  • I’d suggest using this phrase to describe the Obama administration as a whole. F.D.R. had his New Deal; well, Mr. Obama has his Big Deal. He hasn’t delivered everything his supporters wanted, and at times the survival of his achievements seemed very much in doubt. But if progressives look at where we are as the second term begins, they’ll find grounds for a lot of (qualified) satisfaction.
  • Progressives have been trying to get some form of universal health insurance since the days of Harry Truman; they’ve finally succeeded.
  • experience with Romneycare in Massachusetts — hey, this is a great age for irony — shows that such a system is indeed workable, and it can provide Americans with a huge improvement in medical and financial security.
  • where the New Deal had a revolutionary impact, empowering workers and creating a middle-class society that lasted for 40 years, the Big Deal has been limited to equalizing policies at the margin.
  • That said, health reform will provide substantial aid to the bottom half of the income distribution, paid for largely through new taxes targeted on the top 1 percent, and the “fiscal cliff” deal further raises taxes on the affluent. Over all, 1-percenters will see their after-tax income fall around 6 percent; for the top tenth of a percent, the hit rises to around 9 percent
Javier E

An Election Is Not a Suicide Mission - The New York Times - 0 views

  • the church does not allow nations to take up arms and go to war merely when they have a high moral cause on their side. Justice is necessary, but it is not sufficient:
  • Peaceful means of ending the evil in question need to have been exhausted, there must be serious prospects of military success, and (crucially) “the use of arms must not produce evils and disorders graver than the evil to be eliminated.”
  • What this teaching suggests is that we should have a strong bias in favor of peaceful deliberation so long as deliberation remains possible.
  • So long as your polity offers mechanisms for eventually changing unjust laws, it’s better to accept the system’s basic legitimacy and work within it for change than to take steps, violent or otherwise, that risk blowing the entire apparatus up.
  • A vote for Trump is not a vote for insurrection or terrorism or secession. But it is a vote for a man who stands well outside the norms of American presidential politics, who has displayed a naked contempt for republican institutions and constitutional constraints, who deliberately injects noxious conspiracy theories into political conversation, who has tiptoed closer to the incitement of political violence than any major politician in my lifetime, whose admiration for authoritarian rulers is longstanding, who has endorsed war crimes and indulged racists and so on
  • It is a vote, in other words, for a far more chaotic and unstable form of political leadership (on the global stage as well as on the domestic) than we have heretofore experienced
  • what is striking is how many conservatives seem to have internalized that reality and justified their support for Trump anyway, on grounds that are similar to ones that the mainstream pro-life movement has rejected for four decades: Namely, that Hillary Clinton would usher in some particular evil so severe and irreversible that it’s better to risk burning things down, crashing the plane of state.
  • It is constitutional conservatives arguing that permitting another progressive president would make the Constitution completely irrecoverable, so better to roll the dice with a Peronist like Trump.
  • It is immigration restrictionists arguing that Clinton’s favored amnesty for illegal immigrants would complete America’s transformation into Venezuela, so better to roll the dice with a right-wing Chávez.
  • It is a long list of conservatives treating an inevitable feature of democratic politics — the election of a politician of the other party to the presidency — as an evil so grave that it’s worth risking all the disorders that Trump obviously promises.
  • the Trump alternative is like a feckless war of choice in the service of some just-seeming end, with a commanding general who likes war crimes. It’s a ticket on a widening gyre, promising political catastrophe and moral corruption both, no matter what ideals seem to justify it.
  • today’s conservatism has far more to gain from the defeat of Donald Trump, and the chance to oppose Clintonian progressivism unencumbered by his authoritarianism, bigotry, misogyny and incompetence, than it does from answering the progressive drift toward Caesarism with a populist Elagabalus.
  • the deepest conservative insight is that justice depends on order as much as order depends on justice
  • when Loki or the Joker or some still-darker Person promises the righting of some grave wrong, the defeat of your hated enemies, if you will only take a chance on chaos and misrule, the wise and courageous response is to tell them to go to hell.
rachelramirez

When Finland's Teachers Work in America's Schools - The Atlantic - 0 views

  • When Finnish Teachers Work in America’s Public Schools
  • Kristiina Chartouni, a veteran Finnish educator who began teaching American high-school students this autumn, said in an email: “I am supposedly doing what I love, but I don't recognize this profession as the one that I fell in love with in Finland.”
  • In Tennessee, Chartouni has encountered a different teaching environment from the one she was used to in her Nordic homeland—one in which she feels like she’s “under a microscope.”
  • Chartouni misses that feeling of being trusted as a professional in Finland. There, after receiving her teaching timetable at the start of each school year, she would be given the freedom to prepare curriculum-aligned lessons, which matched her preferences and teaching style.
  • In general, U.S. public-school teachers report that they have the least amount of control over two particular areas of teaching: “selecting textbooks and other classroom materials” and “selecting content, topics, and skills to be taught.”
  • Marc Tucker, the president and CEO of the National Center on Education and the Economy, suggested to me that the No Child Left Behind Act (NCLB), which he called “the inauguration of [America’s] accountability movement,” significantly affected how U.S. public-school teachers perceived their level of autonomy.
  • Under NCLB, America’s public schools needed to make adequate yearly progress, decided in large part by student performance on state standardized tests, or face serious consequences, such as school closures.
  • As a public-school educator in Tennessee, Chartouni is seeing how some accountability measures—ones that are unobserved in Finnish schools—have reduced her level of professional freedom.
  • So, occasionally, Chartouni decides to assign easy bell work as she greets her exhausted students: “sit down, relax, and breathe.” (In Finland, students and teachers typically have a 15-minute break built into every classroom hour.)
  • She described it as a rote job where she follows a curriculum she didn’t develop herself, keeps a principal-dictated schedule, and sits in meetings where details aren’t debated.
  • “I feel rushed, nothing gets done properly; there is very little joy, and no time for reflection or creative thinking (in order to create meaningful activities for students).”
  • “And the countries that give [teachers] more autonomy successfully are countries that have made an enormous investment in changing the pool from which they are selecting their teachers, then they make a much bigger investment than we do in the education of their future teachers, then they make a much bigger investment in the support of those teachers once they become teachers.”