Javier E

What Was the 'Soviet Century'? - by André Forget - Bulwark+

  • Schlögel makes the argument that the Soviet Union is best understood not primarily as the manifestation of rigid Communist ideology, but as an attempt to transform an agrarian peasant society into a fully modern state
  • “A ‘Marxist theory,’” he writes, “yields very little for an understanding of the processes of change in postrevolutionary Russia. We get somewhat nearer the mark if we explore the scene of a modernization without modernity and of a grandiose civilizing process powered by forces that were anything but civil.” In other words, the interminable debates about whether Lenin was the St. Paul of communism or its Judas Iscariot are beside the point: As a Marxist might put it, the history of the Soviet Union is best explained by material conditions.
  • the story one pieces together from his chapters goes something like this. In the years between 1917 and 1945, the Russian Empire ceased to be a semi-feudal aristocracy governed by an absolutist monarch whose rule rested on divine right, and became an industrialized state. It dammed rivers, electrified the countryside, built massive factories and refineries, collectivized agriculture, raised literacy rates, set up palaces of culture, created a modern military, and made the Soviet Union one of the most powerful countries in the world. In the course of doing so, it sent some of its best minds into exile, crippled its system of food production, set up a massive network of prison camps, watched millions of its citizens die of hunger, killed hundreds of thousands more through slave labor and forced relocation, and executed a generation of revolutionary leaders. It did all this while surviving one of the most brutal civil wars of the twentieth century and the largest land invasion in history.
  • Over the next forty-five years, it tried to establish a solid basis for growth and prosperity. It launched an ambitious housing program to create living spaces for its massive and rapidly urbanizing population, and to nurture the growth of a Soviet middle class that had access to amenities and luxury goods. At the same time, it systematically blocked this new middle class from exercising its creative faculties outside a narrow range of approved topics and ideological formulas, and it could not reliably ensure that if someone wanted to buy a winter coat in December, they could find it in the shop. It created a state with the resources and technology to provide for the needs of its citizens, but that was unable to actually deliver the goods.
  • The USSR moved forward under the weight of these contradictions, first sprinting, then staggering, until it was dismantled by another revolution, one that was orchestrated by the very class of party elites the first one had produced. But the states that emerged from the Soviet Union in 1991, and the people who lived in them, had undergone a profound change in the process.
  • Schlögel argues that over its sixty-eight years of existence, the Soviet Union did succeed in its goal of creating a “new Soviet person” (novy sovetsky chelovek). But, as he puts it: “The new human being was the product not of any faith in a utopia, but of a tumult in which existing lifeworlds were destroyed and new ones born. The ‘Homo Sovieticus’ was no fiction to be casually mocked but a reality with whom we usually only start to engage in earnest when we realize that analyzing the decisions of the Central Committee is less crucial than commonly assumed.”
  • Placing the emphasis on modernization rather than ideology allows Schlögel to delineate oft-ignored parallels and connections between the USSR and the United States. In the 1930s, especially, there was a great deal of cultural and technical collaboration between U.S. citizens and their Soviet counterparts, which led to what Hans Rogger called “Soviet Americanism” (sovetsky amerikanizm). “In many respects,” Schlögel writes, Soviet citizens “felt closer to America; America had left behind the class barriers and snobbery of Old Europe. America was less hierarchical; you could rise socially, something otherwise possible only in postrevolutionary Russia, where class barriers had broken down and equality had been universally imposed by brute force.”
  • As each rose to a position of global economic, political, and military predominance, the British Empire and the United States divided the world into “white” people, who had certain inalienable rights, and “colored” people who did not. The USSR, rising later and faster, made no such distinctions. An Old Bolshevik who had served the revolution for decades was just as likely to end their life freezing on the taiga as a Russian aristocrat or a Kazakh peasant.
  • Pragmatism and passion were certainly present in the development of the USSR, but they were not the only inputs. Perhaps the crucial factor was the almost limitless cheap labor supplied by impoverished peasants driven off their land, petty criminals, and political undesirables who could be press-ganged into service as part of their “reeducation.”
  • Between 1932 and 1937, the output of the Dalstroy mine went from 511 kilograms of gold to 51.5 tons. The price of this astonishing growth was paid by the bodies of the prisoners, of whom there were 163,000 by the end of the decade. The writer Varlam Shalamov, Schlögel’s guide through this frozen Malebolge, explains it this way: “To turn a healthy young man, who had begun his career in the clean winter air of the gold mines, into a goner, all that was needed, at a conservative estimate, was a term of twenty to thirty days of sixteen hours of work per day, with no rest days, with systematic starvation, torn clothes, and nights spent in temperatures of minus sixty degrees in a canvas tent with holes in it, and being beaten by the foremen, the criminal gang masters, and the guards.”
  • There is no moral calculus that can justify this suffering. And yet Schlögel lays out the brutal, unassimilable fact about the violence of Soviet modernization in the 1930s: “Without the gold of Kolyma . . . there would have been no build-up of the arms industries before and during the Soviet-German war.” The lives of the workers in Kolyma were the cost of winning the Second World War as surely as those of the soldiers at the front.
  • Of the 250,000 people, most of them prisoners, involved in building the 227-kilometer White Sea Canal, around 12,800 are confirmed to have died in the process. Even if the actual number is higher, as it probably is, it is hardly extraordinary when set against the 28,000 people who died in the construction of the 80-kilometer Panama Canal (or the 20,000 who had died in an earlier, failed French attempt to build it), or the tens of thousands killed digging the Suez Canal
  • it is worth noting that slave labor in mines and building projects, forced starvation of millions through food requisitions, and the destruction of traditional lifeworlds were all central features of the colonial projects that underwrote the building of modernity in the U.S. and Western Europe. To see the mass death caused by Soviet policies in the first decades of Communist rule in a global light—alongside the trans-Atlantic slave trade, the genocide of Indigenous peoples in Africa and the Americas, and the great famines in South Asia—is to see it not as the inevitable consequence of socialist utopianism, but of rapid modernization undertaken without concern for human life.
  • But Soviet Americanism was about more than cultural affinities. The transformation of the Soviet Union would have been impossible without American expertise.
  • Curiously enough, Schlögel seems to credit burnout from the era of hypermobilization for the fall of the USSR: “Whole societies do not collapse because of differences of opinion or true or false guidelines or even the decisions of party bosses. They perish when they are utterly exhausted and human beings can go on living only if they cast off or destroy the conditions that are killing them.”
  • it seems far more accurate to say that the USSR collapsed the way it did because of a generational shift. By the 1980s, the heroic generation was passing away, and the new Soviet people born in the post-war era were comparing life in the USSR not to what it had been like in the bad old Tsarist days, but to what it could be like
  • Schlögel may be right that “Pittsburgh is not Magnitogorsk,” and that the U.S. was able to transition out of the heroic period of modernization far more effectively than the USSR. But the problems America is currently facing are eerily similar to those of the Soviet Union in its final years—a sclerotic political system dominated by an aging leadership class, environmental degradation, falling life expectancy, a failed war in Afghanistan, rising tensions between a traditionally dominant ethnic group and freedom-seeking minorities, a population that has been promised a higher standard of living than can be delivered by its economic system.
  • given where things stand in the post-Soviet world of 2023, the gaps tell an important story. The most significant one is around ethnic policy, or what the Soviet Union referred to as “nation-building” (natsional'noe stroitel'stvo).
  • In the more remote parts of the USSR, where national consciousness was still in the process of developing, it raised the more profound question of which groups counted as nations. When did a dialect become a language? If a nation was tied to a clearly demarcated national territory, how should the state deal with nomadic peoples?
  • The Bolsheviks dealt with this last problem by ignoring it. Lenin believed that “nationality” was basically a matter of language, and language was simply a medium for communication.
  • Things should be “national in form, socialist in content,” as Stalin famously put it. Tatar schools would teach Tatar children about Marx and Engels in Tatar, and a Kyrgyz novelist like Chinghiz Aitmatov could write socialist realist novels in Kyrgyz.
  • Unity would be preserved by having each nationality pursue a common goal in their own tongue. This was the reason Lenin did not believe that establishing ethno-territorial republics would lead to fragmentation of the Soviet state
  • Despite these high and earnest ideals, the USSR’s nationalities policy was as filled with tragedy as the rest of Soviet history. Large numbers of intellectuals from minority nations were executed during the Great Purge for “bourgeois nationalism,” and entire populations were subject to forced relocation on a massive scale.
  • In practice, Soviet treatment of national minorities was driven not by a commitment to self-determination, but by the interests (often cynical, sometimes paranoid) of whoever happened to be in the Kremlin.
  • The ethnic diversity of the USSR was a fundamental aspect of the lifeworlds of millions of Soviet citizens, and yet Schlögel barely mentions it.
  • As is often the case with books about the Soviet Union, it takes life in Moscow and Leningrad to be representative of the whole. But as my friends in Mari El used to say, “Moscow is another country.”
  • None of this would matter much if it weren’t for the fact that the thirty years since the dismantling of the USSR have been defined in large part by conflicts between and within the successor states over the very questions of nationality and territory raised during the founding of the Soviet Union.
  • in the former lands of the USSR, barely a year has gone by since 1991 without a civil war, insurgency, or invasion fought over control of territory, or of the government of that territory, in Central Asia, the Caucasus, and Eastern Europe.
  • Russia’s full-scale invasion of Ukraine in February 2022 euthanized any remaining hopes that globalization and integration of trade would establish a lasting peace in Eastern Europe. The sense of possibility that animates Schlögel’s meditations on post-Soviet life—the feeling that the lifeworld of kommunalkas and queues had given way to a more vivacious, more dynamic, more forward-looking society that was bound to sort itself out eventually—now belongs definitively to the past. Something has been broken that cannot be fixed.
  • It is worth noting (Schlögel does not) that of the institutions that survived the dismantling of the Soviet state, the military and intelligence services and the criminal syndicates were the most powerful, in large part because they were so interconnected. In a kind of Hegelian shit-synthesis, the man who established a brutal kind of order after the mayhem of the nineteen-nineties, Vladimir Putin, has deep ties to both. The parts of Soviet communism that ensured a basic standard of living were, for the most part, destroyed in the hideously bungled transition to a market economy. Militarism, chauvinism, and gangster capitalism thrived, as they still do today.
  • Perhaps it is now possible to see the Soviet century as an anomaly in world history, an interregnum during which two power blocs, each a distorted reflection of the other, marshaled the energies of a modernizing planet in a great conflict over the future. The United States and the USSR both preached a universal doctrine, and both claimed they were marching toward the promised land.
  • The unipolar moment lasted barely a decade, and we have now fallen through the rotten floor of American hegemony to find ourselves once again in the fraught nineteenth century. The wars of today are not between “smelly little orthodoxies,” but between empires and nations, the powerful states that can create their own morality and the small countries that have to find powerful friends
  • the key difference between 2023 and 1900 is that the process of modernization is, in large parts of the world, complete. What this means for great-power politics in the twenty-first century, we are only beginning to understand.

Network of ancient Maya cities reveals well-organized civilization - The Washington Post

  • Mapping the area since 2015 using lidar technology — a laser-based remote-sensing method that reveals features hidden by dense vegetation and the tree canopy — researchers have found what they say is evidence of a well-organized economic, political and social system operating some two millennia ago.
  • The discovery is sparking a rethinking of the accepted idea that the people of the mid- to late-Preclassic Maya civilization (1,000 B.C. to A.D. 250) would have been only hunter-gatherers, “roving bands of nomads, planting corn,”
  • “We now know that the Preclassic period was one of extraordinary complexity and architectural sophistication, with some of the largest buildings in world history being constructed during this time
  • The discoveries are reshaping thinking about the history of the Americas, Hansen said. The lidar findings have unveiled “a whole volume of human history that we’ve never known” because of the scarcity of artifacts from that period, which were probably buried by later construction by the Maya and then covered by jungle.
  • When scientists digitally removed ceiba and sapodilla trees that cloak the area, the lidar images revealed ancient dams, reservoirs, pyramids and ball courts. El Mirador has long been considered the “cradle of the Maya civilization,” but the proof of a complex society already being in place circa 1,000 B.C. suggests “a whole volume of human history that we’ve never known before,”
  • Excavations around Balamnal in 2009 “failed to recognize the incredible sophistication and size of the city, all of which was immediately evident with lidar technology,” Hansen says. Lidar showed the site to be among the largest in El Mirador, with causeways “radiating to other smaller sites suggest[ing] its administrative, economic and political importance in the Preclassic periods.”
  • He says that once the area is fully revealed, it could potentially be as significant a marker in human history as the pyramids in Egypt, the oldest of which dates to circa 2,700 B.C.
  • the research “sheds light on how the ancient Maya significantly modified their local environment, and it enhances our understanding of how social complexity arose.”
  • Among the multistory temples, buildings and roads, images of Balamnal, one of the Preclassic civilization’s crucial hubs, were revealed for the first time. It dates back to 1,000 or possibly 2,000 years before the most famous, and well-excavated, Maya site of Chichen Itza in Mexico’s Yucatán Peninsula, which was constructed in the early A.D. 400s.
  • Before the lidar study, archaeologists, biologists and historians had identified about 50 sites of importance in a decade. “Now there are more than 900 [settlements]. … We [couldn’t] see that before. It was impossible.”
  • The lidar images raise questions about how “one society living in a tropical jungle in Central America became one of the greatest ancient civilizations in the world [while] another society living in Borneo is still hunting and gathering in the exact same environment,”
  • About 40 miles south of Petén is Tikal, ruins of the largest city of the Maya civilization’s later “Classic” period (A.D. 200 to 900). Now a national park, Tikal was declared a UNESCO World Heritage site in 1979. It could serve as a possible blueprint for El Mirador.
  • “It could be something great,” Hernández says of El Mirador’s potential transformation into a significant tourist site. “But only if the government, archaeological organizations and locals work together. Then a decision can be taken as to whether it should become a national monument, an area returned to modern-day Mayans and other Indigenous Guatemalans (who make up about 40 percent of the population in the country), or a tourist hub.”
  • “I don’t want my kids to say, ‘oh, I remember the Mirador, it was a nice place, jaguars were living there’ — like a legend,” Hernández says. “We can save it now. This is the right moment to do it.”
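The “digital removal” of trees described above is, at bottom, a filtering step: lidar returns are classified, and only ground-classified points are kept and gridded into a bare-earth terrain model. The sketch below is a minimal illustration of that idea, assuming the standard ASPRS LAS convention that classification code 2 means ground; the function name and the point data are invented for demonstration, not taken from the researchers' actual pipeline.

```python
# Minimal sketch of bare-earth extraction from classified lidar returns.
# Assumes ASPRS LAS classification codes (2 = ground); data are invented.

def bare_earth_grid(points, cell_size=1.0):
    """points: iterable of (x, y, z, classification) tuples.
    Returns {(col, row): lowest ground z} — a toy bare-earth DEM."""
    grid = {}
    for x, y, z, cls in points:
        if cls != 2:          # drop canopy, buildings, noise — only ground stays
            continue
        key = (int(x // cell_size), int(y // cell_size))
        # keep the lowest ground return per grid cell
        grid[key] = min(grid.get(key, z), z)
    return grid

points = [
    (0.2, 0.3, 152.0, 5),   # high-vegetation return (canopy) — removed
    (0.4, 0.6, 120.1, 2),   # ground return — kept
    (0.5, 0.1, 119.8, 2),   # lower ground return in the same cell — kept
    (1.3, 0.2, 121.5, 2),   # ground return in the next cell
]
dem = bare_earth_grid(points)
# dem → {(0, 0): 119.8, (1, 0): 121.5}
```

Once the canopy returns are filtered out this way, the remaining surface is what exposes dams, causeways and pyramid platforms that are invisible from the air in photographs.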

What the Scopes Trial Teaches Us About Climate-Change Denial - The Atlantic

  • "A desperate flight backward to old certainties replaced the prewar belief in gradual adaptation to new conditions," he wrote. "In a convulsion of filiopiety, men tried to deny the present by asserting a fugitive and monastic virtue. Not progress, but stability and certainty."
  • This dynamic helped explain, Ginger wrote, the new rise of fundamentalism as a political force. It accounted for great skepticism of the new truths of science. And it generated the rise in nativism and xenophobia that gripped the nation during that time as well as the restrictive immigration policies that resulted from it.
  • We have a new generation of fear and prejudice wrought by a new wave of immigration. We have a new wave of weariness of war after two more bloody conflicts. And we have a new wave of skepticism about science that has manifested itself in two distinct ways. Nearly 90 years after the Scopes trial, there are still anti-evolution forces pushing to include creationism in our public schools. And nearly 90 years after the Monkey trial, corporate forces are still pushing back against science, still promoting the "inculcation of received truths."

The tragedy of the Israel-Palestine conflict is this: underneath all the horror is a clash of two just causes | Jonathan Freedland | The Guardian

  • Many millions around the world watch the Israel-Palestine conflict in the same way: as a binary contest in which you can root for only one team, and where any losses suffered by your opponent – your enemy – feel like a win.
  • You see it in those who tear down posters on London bus shelters depicting the faces of the more than 200 Israelis currently held hostage by Hamas in Gaza – including toddlers and babies. You see it too in those who close their eyes to the consequences of Israel’s siege of Gaza, to the impact of denied or restricted supplies of water, food, medicine and fuel on ordinary Gazans – including toddlers and babies. For these hardcore supporters of each side, to allow even a twinge of human sympathy for the other is to let the team down.
  • Thinking like this – my team good, your team bad – can lead you into some strange, dark places. It ends in a group of terrified Jewish students huddling in the library of New York’s Cooper Union college, fleeing a group of masked protesters chanting “Free Palestine” – their pursuers doubtless convinced they are warriors for justice and liberation, rather than the latest in a centuries-long line of mobs hounding Jews.
  • even after the 7 October massacre had stirred memories of the bleakest chapters of the Jewish past – and prompted a surge in antisemitism across the world – Jews were being told exactly how they can and cannot speak about their pain. We’re not to mention the Holocaust, one scholar advised, because that would be “weaponising” it. Historical context about the Nakba, the 1948 dispossession of the Palestinians, is – rightly – deemed essential. But mention the Nazi murder of 6 million Jews – the event that finally secured near-universal agreement among the Jewish people, and the United Nations in 1947, that Jews needed a state of their own – and you’ve broken the rules. Because it’s impossible that both sides might have suffered historic pain.
  • Instead, a shift is under way that has been starkly revealed during these past three weeks. It squeezes the Israel-Palestine conflict into a “decolonisation” frame it doesn’t quite fit, with all Israelis – not just those in the occupied West Bank – defined as the footsoldiers of “settler colonialism”, no different from, say, the French in Algeria
  • They have been framed as the modern world’s ultimate evildoer: the coloniser.
  • That matters because, in this conception, justice can only be done once the colonisers are gone
  • What’s more, such a framing brands all Israelis – not just West Bank settlers – as guilty of the sin of colonialism. Perhaps that explains why those letter writers could not full-throatedly condemn the 7 October killing of innocent Israeli civilians. Because they do not see any Israeli, even a child, as wholly innocent.
  • the late Israeli novelist and peace activist Amos Oz was never wiser than when he described the Israel/Palestine conflict as something infinitely more tragic: a clash of right v right. Two peoples with deep wounds, howling with grief, fated to share the same small piece of land.

The Middle East is getting older - by Noah Smith

  • I noticed something interesting about the Israel-Gaza war that seems to have generally been overlooked: The war hasn’t shown much sign of spreading throughout the Middle East.
  • The “Arab street” that everyone feared back in the early 2000s has certainly had protests in support of the Palestinians, but they’ve been very peaceful. Saudi Arabia has said that it still wants to normalize relations with Israel, conditional on a ceasefire.
  • it’s far from the dire expectations that everyone was throwing around in the first few days of the war. In 2011, the Arab Spring spread like wildfire, igniting huge, lengthy, bloody wars in Syria and Yemen, as well as various smaller wars throughout the Middle East; the Israel-Gaza war shows no sign of repeating this history.
  • it’s also possible that population aging has something to do with it. There’s a pretty well-established literature linking youthful population bulges to elevated risk of conflict.
  • For example, Cincotta and Weber (2021) find that countries with a median age of 25 or less are much more likely to have revolutions:
  • Evidence from the 1990s reveals that countries where people aged fifteen to twenty-nine made up more than 40 percent of the adult population were twice as likely to suffer civil conflict.
  • Data collected from 1950 to 2000 found that countries where people aged fifteen to twenty-four made up 35 percent or more of the adult population were 150 percent more likely to experience an outbreak of civil conflict.
  • The young people crowd each other out, and this makes them mad. This effect is obviously exacerbated when the economy is stagnating. Also, simply having a lot of young men around without much to lose seems like a risk factor in and of itself.
  • Whether rich or poor, the countries in the Greater Middle East — I’ll throw Afghanistan and Pakistan into the mix, since they’ve also been a big locus of conflict — just don’t tend to experience much economic growth at all.
  • But the good news here, at least from a conflict-avoidance perspective, is that these countries are getting steadily older. There are a number of countries in the region where median age has already passed the 25-year mark:
  • how much things have changed
  • When Iran threw hundreds of thousands of soldiers against Iraq in “human wave” attacks in the 1980s, the median Iranian was just 17 years old; now, the median Iranian is in their early 30s.
  • Saudi Arabia got involved in the Yemen war, but was reluctant to send ground troops against the Houthis — possibly because the Houthis are formidable, but possibly because the Saudis have relatively few young people to send.
  • Hezbollah resides in a considerably older country than in 2006, when it attacked Israel.
  • On the other hand, there are a number of other countries in the region that are still pretty young:
  • troublingly, Afghanistan, Yemen, Palestine, Iraq, and Pakistan are projected to still be below a median age of 25 a decade from now.
  • all of these countries are still aging at a steady clip — as are the countries that are already over 25. The fundamental reason is the big collapse in fertility rates in the Greater Middle East (and across the broader Muslim world) over the past few decades.
  • When Iran exploded in revolution and fought a titanic war against Iraq in the late 70s and 80s, its fertility was over 6; now it’s down to about 1.5
  • When the U.S. invaded Afghanistan in 2001, its fertility was over 7; now, it’s below 4
  • I don’t want to claim that “demography is destiny” here, and it’s all too easy to look at individual countries and tell just-so stories about how aging and fertility might have affected their conflict
  • The old Middle East, with massive crowds of angry young people thronging the streets, ready to explode into nationalist or sectarian or revolutionary violence, is steadily disappearing, being replaced by a more sedate, aging society. Given the horrific outcomes of the last few decades, it’s hard not to see that as a good thing.
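The two risk indicators the cited studies lean on — median age and the share of 15-to-29-year-olds among adults — are simple to compute from cohort data. The sketch below shows both, with the thresholds quoted above (median age 25; youth share 40 percent); the function names and the age-pyramid numbers are invented for illustration, not drawn from Cincotta and Weber's data.

```python
# Hedged sketch of the two demographic risk indicators discussed above.
# Cohort populations are invented; ages are 5-year cohort midpoints.

def median_age(cohorts):
    """cohorts: list of (age, population). Age of the median person."""
    total = sum(pop for _, pop in cohorts)
    running = 0
    for age, pop in sorted(cohorts):
        running += pop
        if running >= total / 2:
            return age

def youth_share_of_adults(cohorts):
    """Share of adults (15+) who are aged 15-29 — the 'youth bulge' measure."""
    adults = sum(pop for age, pop in cohorts if age >= 15)
    youth = sum(pop for age, pop in cohorts if 15 <= age < 30)
    return youth / adults

# A young, high-fertility age pyramid (population in thousands)
young = [(2, 900), (7, 800), (12, 700), (17, 600), (22, 500),
         (27, 400), (32, 300), (37, 250), (42, 200), (52, 300)]

m = median_age(young)                 # well below the 25-year risk threshold
share = youth_share_of_adults(young)  # above the 40 percent threshold
```

As fertility falls, each new cohort entering at the bottom of the pyramid shrinks, so both numbers drift out of the risk zone over time — which is the mechanism behind the aging trend the article describes.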

Does Sam Altman Know What He's Creating? - The Atlantic

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
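The transformer's core step is scaled dot-product attention: every token attends to every other token in one batched computation, which is what lets training proceed over whole sequences in parallel rather than word by word. A minimal sketch (toy vectors, a single head, and no learned projection matrices, so this is the bare mechanism rather than a full transformer layer):

```python
# Scaled dot-product attention: each output row is a softmax-weighted mix
# of all value vectors, computed for every token at once.
import math

def attention(Q, K, V):
    d = len(Q[0])
    # scores[i][j]: how strongly token i attends to token j
    scores = [[sum(q * k for q, k in zip(Q[i], K[j])) / math.sqrt(d)
               for j in range(len(K))] for i in range(len(Q))]
    out = []
    for row in scores:
        m = max(row)
        e = [math.exp(s - m) for s in row]
        z = sum(e)
        weights = [x / z for x in e]  # softmax over the row
        # each output vector is a weighted mix of all value vectors
        out.append([sum(w * V[j][k] for j, w in enumerate(weights))
                    for k in range(len(V[0]))])
    return out

# Three toy token vectors attending to themselves (self-attention).
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = attention(x, x, x)
print(len(y), len(y[0]))  # -> 3 2: one contextualized vector per token
```

Because the score matrix is computed all at once, the work parallelizes across tokens on modern hardware, which is the property that made internet-scale training practical.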
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100 people.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
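A/B testing of the kind described is conceptually simple: assign each session to one of two variants, log an engagement signal, and keep whichever variant scores higher. A toy simulation (the variant names and engagement rates here are invented purely for illustration):

```python
# A/B-test sketch (hypothetical rates): randomly assign sessions to a
# variant, record whether the user engaged, and compare observed rates.
import random

random.seed(0)
RATES = {"A": 0.30, "B": 0.36}  # invented "true" engagement probabilities

def simulate_session(variant):
    return random.random() < RATES[variant]

counts = {"A": [0, 0], "B": [0, 0]}  # [engaged, total] per variant
for _ in range(10000):
    v = random.choice(["A", "B"])
    counts[v][0] += simulate_session(v)
    counts[v][1] += 1

rate = {v: engaged / total for v, (engaged, total) in counts.items()}
winner = max(rate, key=rate.get)
print(winner)  # the variant with the higher observed engagement rate
```

Run repeatedly, with each losing variant replaced by a new candidate, this loop becomes an optimizer for whatever metric it measures, which is precisely why engagement-maximizing feeds are so effective.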
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
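Li's way of "looking under the hood" is an instance of a linear probe: a small classifier trained only on a network's internal activations, to test whether they encode some property (here, the state of a board square). A self-contained sketch in which synthetic activations stand in for a real model's hidden states (all data below is invented; this illustrates the probing technique, not Li's Othello model):

```python
# Probing sketch: if a hidden representation linearly encodes a property,
# a small classifier trained only on the activations can recover it --
# evidence that the network represents the property internally.
import math
import random

random.seed(1)
DIM = 8

# Synthetic "hidden states": the property is encoded along one direction,
# buried in noise -- a stand-in for a real model's activations.
direction = [random.gauss(0, 1) for _ in range(DIM)]

def hidden_state(occupied):
    sign = 1.0 if occupied else -1.0
    return [sign * d + random.gauss(0, 0.3) for d in direction]

labels = [random.random() < 0.5 for _ in range(400)]
data = [(hidden_state(lbl), lbl) for lbl in labels]

# Train a logistic-regression probe on the activations alone.
w, b = [0.0] * DIM, 0.0
for _ in range(50):
    for x, y in data:
        z = max(-30.0, min(30.0, sum(wi * xi for wi, xi in zip(w, x)) + b))
        err = 1 / (1 + math.exp(-z)) - (1.0 if y else 0.0)
        w = [wi - 0.1 * err * xi for wi, xi in zip(w, x)]
        b -= 0.1 * err

correct = sum(
    ((sum(wi * xi for wi, xi in zip(w, x)) + b) > 0) == y for x, y in data
)
print(correct / len(data))  # near 1.0: the probe decodes the property
```

High probe accuracy is the birdseed-on-the-windowsill moment: the network was never told about the board, yet its internal state carries enough structure for a simple classifier to read the board off.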
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
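The memorize-then-generalize dynamic can be illustrated by pitting the two strategies against each other on toy addition problems (an illustration of the idea, not a reproduction of the cited experiment): a lookup table aces the training pairs but fails on held-out sums, while a model that learns the additive rule generalizes.

```python
# Two strategies on toy addition, trained on sums not divisible by 3 and
# tested on the held-out rest.
train = [(a, b, a + b) for a in range(10) for b in range(10) if (a + b) % 3 != 0]
held_out = [(a, b, a + b) for a in range(10) for b in range(10) if (a + b) % 3 == 0]

# Strategy 1: pure memorization -- a lookup table of seen problems.
table = {(a, b): s for a, b, s in train}
memo_acc = sum(table.get((a, b)) == s for a, b, s in held_out) / len(held_out)

# Strategy 2: learn the rule -- fit s = w1*a + w2*b by gradient descent,
# which recovers w1 = w2 = 1, i.e. addition itself.
w1 = w2 = 0.0
for _ in range(500):
    for a, b, s in train:
        err = w1 * a + w2 * b - s
        w1 -= 0.001 * err * a
        w2 -= 0.001 * err * b
rule_acc = sum(round(w1 * a + w2 * b) == s for a, b, s in held_out) / len(held_out)

print(memo_acc, rule_acc)  # the memorizer fails on unseen sums; the rule generalizes
```

The "lazy" pivot Millière describes happens when a network's capacity for memorization runs out: the only way to keep lowering prediction error is to switch from strategy 1 to strategy 2.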
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle.”
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it might not be able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Nations don't get rich by plundering other nations - 0 views

  • One idea that I often encounter in the world of economic discussion, and which annoys me greatly, is that nations get rich by looting other nations.
  • This idea is a pillar of “third world” socialism and “decolonial” thinking, but it also exists on the political Right. This is, in a sense, a very natural thing to believe — imperialism is a very real feature of world history, and natural resources sometimes do get looted. So this isn’t a straw man; it’s a common misconception that needs debunking.
  • it’s important to debunk it, because only when we understand how nations actually do get rich can we Americans make sure we take the necessary steps to make sure our nation stays rich.
  • The first thing to notice is that in the past, no country was rich.
  • even allowing for quite a bit of uncertainty, it’s definitely true that the average citizen of a developed country, or a middle-income country, is far more materially wealthy than their ancestors were 200 years ago:
  • If you account for increasing population and look at total GDP, the increase is even more dramatic.
  • What this means is that whatever today’s rich countries did to get rich, they weren’t doing it in 1820.
  • Imperialism is very old — the Romans, the Persians, the Mongols, and many other empires all pillaged and plundered plenty of wealth. But despite all of that plunder, no country in the world was getting particularly rich, by modern standards, until the latter half of the 20th century.
  • Think about all the imperial plunder that was happening in 1820. The U.S. had 1.7 million slaves and was in the process of taking land from Native Americans. Latin American countries had slavery, as well as other slavery-like labor systems for their indigenous peoples. European empires were already exploiting overseas colonies.
  • But despite all this plunder and extraction of resources and labor, Americans and Europeans were extremely poor by modern standards.
  • With no antibiotics, vaccines, or water treatment, even rich people suffered constantly from all sorts of horrible diseases. They didn’t have cars or trains or airplanes to take them around. Their food was meager and far less varied than ours today. Their living space was much smaller, with little privacy or personal space. Their clothes were shabby and fell apart quickly.
  • At night their houses were dark, and without air conditioning they had trouble escaping the summer heat. They had to carry water from place to place by hand, and even rich people pooped in outhouses or chamberpots. Everyone had bedbugs.
  • They were plundering as hard as they could, but it wasn’t making them rich.
  • although Africa, Latin America, and Asia were closer to Europe in terms of living standards back then, they were all very, very poor by modern standards.
Javier E

Excuse me, but the industries AI is disrupting are not lucrative - 0 views

  • Google’s Gemini. The demo video earlier this week was nothing short of amazing, as Gemini appeared to fluidly interact with a questioner going through various tasks and drawings, always giving succinct and correct answers.
  • another huge new AI model revealed.
  • that’s. . . not what’s going on. Rather, they pre-recorded it and sent individual frames of the video to Gemini to respond to, as well as more informative prompts than shown, in addition to editing the replies from Gemini to be shorter and thus, presumably, more relevant. Factor all that in, Gemini doesn’t look that different from GPT-4,
  • Continued hype is necessary for the industry, because so much money flowing in essentially allows the big players, like OpenAI, to operate free of economic worry and considerations
  • The money involved is staggering—Anthropic announced they would compete with OpenAI and raised 2 billion dollars to train their next-gen model, a European counterpart just raised 500 million, etc. Venture capitalists are eager to throw as much money as humanly possible into AI, as it looks so revolutionary, so manifesto-worthy, so lucrative.
  • While I have no idea what the downloads are going to be for the GPT Store next year, my suspicion is it does not live up to the hyped Apple-esque expectation.
  • given their test scores, I’m willing to say GPT-4 or Gemini is smarter along many dimensions than a lot of actual humans, at least in the breadth of their abstract knowledge—all while noting even leading models still have around a 3% hallucination rate, which stacks up in a complex task.
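The way a ~3 percent hallucination rate “stacks up” is simple compounding. A minimal sketch of the arithmetic (the 3% figure is the excerpt’s; treating each step’s error as independent is an assumption for illustration):

```python
# Hypothetical illustration: if each response hallucinates ~3% of the time
# and errors are independent across steps, the chance of a flawless
# multi-step task falls quickly as steps accumulate.
rate = 0.03
for n in (1, 5, 10, 20):
    p_any_error = 1 - (1 - rate) ** n
    print(f"{n:2d} steps: {p_any_error:.1%} chance of at least one hallucination")
```

At ten chained steps the chance of at least one hallucination is already above 25 percent, which is what makes a small per-response error rate matter in complex tasks.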
  • A more interesting “bear case” for AI is that, if you look at the list of industries that leading AIs like GPT-4 are capable of disrupting—and therefore making money off of—the list is lackluster from a return-on-investment perspective, because the industries themselves are not very lucrative.
  • What are AIs of the GPT-4 generation best at? It’s things like: writing essays or short fictions, digital art, chatting, and programming assistance.
  • As of this writing, the compute cost to create an image using a large image model is roughly $.001 and it takes around 1 second. Doing a similar task with a designer or a photographer would cost hundreds of dollars (minimum) and many hours or days (accounting for work time, as well as schedules). Even if, for simplicity’s sake, we underestimate the cost to be $100 and the time to be 1 hour, generative AI is 100,000 times cheaper and 3,600 times faster than the human alternative.
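The ratios in that comparison follow directly from the excerpt’s own numbers; a quick sketch reproducing the back-of-envelope arithmetic (all figures are the author’s estimates, not measurements):

```python
# Back-of-envelope from the excerpt: generative image model vs. a
# deliberately underestimated human designer/photographer.
ai_cost_usd, ai_time_s = 0.001, 1         # ~$0.001 and ~1 second per image
human_cost_usd, human_time_s = 100, 3600  # $100 and 1 hour, per the text
print(f"cost ratio:  {human_cost_usd / ai_cost_usd:,.0f}x cheaper")
print(f"speed ratio: {human_time_s / ai_time_s:,.0f}x faster")
```

Running this reproduces the 100,000× cost and 3,600× speed figures quoted above.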
  • The issue is that taking the job of a human illustrator just. . . doesn’t make you much money. Because human illustrators don’t make much money
  • While you can easily use Dall-E to make art for a blog, or a comic book, or a fantasy portrait to play an RPG, the market for those things is vanishingly small, almost nonexistent
  • While I personally wouldn’t go so far as to describe current LLMs as “a solution in search of a problem” like cryptocurrency has famously been described as, I do think the description rings true in an overall economic/business sense so far.
  • Was there really a great crying need for new ways to cheat on academic essays? Probably not. Will chatting with the History Buff AI app (it was in the background of Sam Altman’s presentation) be significantly different than chatting with posters on /r/history on Reddit? Probably not.
  • Search is the most obvious large market for AI companies, but Bing has had effectively GPT-4-level AI on offer now for almost a year, and there’s been no huge steal from Google’s market share.
  • What about programming? It’s actually a great expression of the issue, because AI isn’t replacing programming—it’s replacing Stack Overflow, a programming advice website (after all, you can’t just hire GPT-4 to code something for you, you have to hire a programmer who uses GPT-4).
  • Even if OpenAI drove Stack Overflow out of business entirely and cornered the market on “helping with programming” they would gain, what? Stack Overflow is worth about 1.8 billion, according to its last sale in 2022. OpenAI already dwarfs it in valuation by an order of magnitude.
  • The more one thinks about this, one notices a tension in the very pitch itself: don’t worry, AI isn’t going to take all our jobs, just make us better at them, but at the same time, the upside of AI as an industry is the total combined worth of the industries its replacing, er, disrupting, and this justifies the massive investments and endless economic optimism.
  • It makes me worried about the worst of all possible worlds: generative AI manages to pollute the internet with cheap synthetic data, manages to make being a human artist / creator harder, manages to provide the basis of agential AIs that still pose some sort of existential risk if they get intelligent enough—all without ushering in some massive GDP boost that takes us into utopia
  • If the AI industry ever goes through an economic bust sometime in the next decade I think it’ll be because there are fewer ways than first thought to squeeze substantial profits out of tasks that are relatively commonplace already
  • We can just look around for equivalencies. The payment for humans working as “mechanical turks” on Amazon are shockingly low. If a human pretending to be an AI (which is essentially what a mechanical turk worker is doing) only makes a buck an hour, how much will an AI make doing the same thing?
  • , is it just a quirk of the current state of technology, or something more general?
  • What’s written on the internet is a huge “high quality” training set (at least in that it is all legible and collectable and easy to parse) so AIs are very good at writing the kind of things you read on the internet
  • But data with a high supply usually means its production is easy or commonplace, which, ceteris paribus, means it’s cheap to sell in turn. The result is a highly-intelligent AI merely adding to an already-massive supply of the stuff it’s trained on.
  • Like, wow, an AI that can write a Reddit comment! Well, there are millions of Reddit comments, which is precisely why we now have AIs good at writing them. Wow, an AI that can generate music! Well, there are millions of songs, which is precisely why we now have AIs good at creating them.
  • Call it the supply paradox of AI: the easier it is to train an AI to do something, the less economically valuable that thing is. After all, the huge supply of the thing is how the AI got so good in the first place.
  • AI might end up incredibly smart, but mostly at things that aren’t economically valuable.
Javier E

The One Parenting Decision That Really Matters - The Atlantic - 0 views

  • Hillary Clinton, then the first lady of the United States, published It Takes a Village: And Other Lessons Children Teach Us. Clinton’s book—and the proverb the title referenced—argue that children’s lives are shaped by many people in their neighborhood: firefighters and police officers, garbage collectors, teachers and coaches.
  • Dole said, “I am here to tell you: It does not take a village to raise a child. It takes a family to raise a child.” The crowd roared.
  • So who was right, Bob Dole or Hillary Clinton?
  • some neighborhoods produce more successful kids: One in every 864 Baby Boomers born in Washtenaw, Michigan, the county that includes the University of Michigan, did something notable enough to warrant an entry in Wikipedia, while just one in 31,167 kids born in Harlan County, Kentucky, achieved that distinction
  • The results showed that some large metropolitan areas give kids an edge. They get a better education. They earn more money: The best cities can increase a child’s future income by about 12 percent. They found that the five best metropolitan areas are: Seattle; Minneapolis; Salt Lake City; Reading, Pennsylvania; and Madison, Wisconsin.
  • a website, The Opportunity Atlas, that allows anyone to find out how beneficial any neighborhood is expected to be for kids of different income levels, genders, and races.
  • We find that one factor about a home—its location—accounts for a significant fraction of the total effect of that home.
  • I have estimated that some 25 percent—and possibly more—of the overall effects of a parent are driven by where that parent raises their child. In other words, this one parenting decision has much more impact than many thousands of others.
  • Three of the biggest predictors that a neighborhood will increase a child’s success are the percent of households in which there are two parents, the percent of residents who are college graduates, and the percent of residents who return their census forms.
  • These are neighborhoods, in other words, with many role models: adults who are smart, accomplished, engaged in their community, and committed to stable family lives.
  • Data can be liberating. It can’t make decisions for us, but it can tell us which decisions really matter. When it comes to parenting, the data tells us, moms and dads should put more thought into the neighbors they surround their children with—and lighten up about everything else.
Javier E

Heeding the Warning from the Future - The Bulwark - 0 views

  • The way out of the conspiracy crisis, Weill argues, runs along the entrance path but in the opposite direction. What’s necessary is the re-establishment of normal social connections and interpersonal relationships with those whose fringe beliefs have isolated them.
  • The deprogrammers she cites say that at the individual level nothing is gained and much can be lost via ridicule or shunning of conspiracy-minded friends and family. You can’t argue anyone out of a conspiracy belief, but with some luck and patience, you might be able to love them out.
  • Ultimately, there’s a need to get on the prevention side of conspiracism. That probably means keeping the pressure on social media companies to sacrifice some profit by reducing the addictiveness of their online products
  • It’s not that we haven’t been here before. It’s that we arrived and never left. We are caught in a recurring cycle of acute identity crisis (Are we a divine creation or a cosmic accident?) with our sense of our own dignity locked in a war against scientific and technological progress.
  • The erosion of traditional authority, namely religion, and the transition away from small, intimate communities in favor of large, impersonal urban settings has been rattling us emotionally and psychologically since before Charles Darwin posited evolution over special creation.
  • it’s also true that the mental habits of conspiracy are probably as old as the human species, and may be rooted in certain evolutionary advantages (e.g., pattern-seeking, symbolic language, cooperative skills) that have betrayed us.
  • We laugh at flat earthism or the stipulation of lizard people just as many nineteenth-century Germans mocked spiritualism, theosophy, and World Ice Theory. But it bears remembering that these esoteric views formed a good part of the intellectual scaffolding on which an overarching antisemitic “Volk theory” grew and which helped lead the world into catastrophe.
  • The long-term lesson of conspiracy is that the convergence of social forces under extraordinary economic and social pressures can split the atom of esoteric theories and lead to critical chain reactions
  • It doesn’t take a lot of imagination to envision an unscrupulous politician in this country welding a majority out of conspiracists and a beleaguered suburban middle class by focusing public anger on an imaginary “other.”
  • Teachers, university professors, drag queens, and “pedophiles” come to mind as such a figure’s potential targets. It has happened before, and it can happen again.
Javier E

Who Watches the Watchdog? The CJR's Russia Problem - Byline Times - 0 views

  • In December 2018, Pope commissioned me to report for the CJR on the troubled history of The Nation magazine and its apparent support for the policies of Vladimir Putin. 
  • My $6,000 commission to write for the prestigious “watchdog” was flattering and exciting – but would also be a hard call. Watchdogs, appointed or self-proclaimed, can only claim entitlement when they hold themselves to the highest possible standards of reporting and conduct. It was not to be.
  • For me, the project was vital but also a cause for personal sadness.  During the 1980s, I had been an editor of The Nation’s British sister magazine New Statesman and had served as chair of its publishing company. I knew, worked with and wrote for The Nation’s then-editor, the late Victor Navasky. He subsequently chaired the CJR. 
  • Investigating and calling out a magazine and editor for which I felt empathy, and had historic connections to, hearing from its critics and dissidents, and finding whistleblowers and confidential inside sources was a challenge. But hearing responses from all sides was a duty.
  • I worked on it for six months, settling a first draft of my story to the CJR‘s line editor in the summer 2019. From then on my experience of the CJR was devastating and damaging.
  • After delivering the story and working through a year-long series of edits and re-edits required by Pope, the story was slow-walked to dismissal. In 2022, after Russian tanks had rolled towards Kyiv, I urged Pope to restore and publish the report, given the new and compelling public interest. He refused.
  • The trigger for my CJR investigation was a hoax concerning Democratic Party emails hacked and dumped in 2016 by teams from Russia’s GRU intelligence agency. The GRU officers responsible were identified and their methods described in detail in the 2019 Mueller Report.
  • The Russians used the dumped emails decisively – first to leverage an attack on that year’s Democratic National Convention; and then to divert attention from Donald Trump’s gross indiscretions at critical times before his election
  • In 2017, with Trump in the White House, Russian and Republican denial operations began, challenging the Russian role and further widening divisions in America. A pinnacle of these operations was the publication in The Nation on 9 August 2017 of an article – still online under a new editor – claiming that the stolen emails were leaked from inside the DNC.  
  • Immediately after the article appeared, Trump-supporting media and his MAGA base were enthralled. They celebrated that a left-liberal magazine had refuted the alleged Russian operations in supporting Trump, and challenged the accuracy of mainstream press reporting on ‘Russiagate’
  • Nation staff and advisors were aghast to find their magazine praised lavishly by normally rabid outlets – Fox News, Breitbart, the Washington Times. Even the President’s son.
  • When I was shown the Nation article later that year by one of the experts it cited, I concluded that it was technical nonsense, based on nothing.  The White House felt differently and directed the CIA to follow up with the expert, former senior National Security Agency official and whistleblower, William Binney (although nothing happened)
  • Running the ‘leak’ article positioned the left-wing magazine strongly into serving streams of manufactured distractions pointing away from Russian support for Trump.
  • I traced the source of the leak claim to a group of mainly American young right-wing activists delivering heavy pro-Russian and pro-Syrian messaging, working with a British collaborator. Their leader, William Craddick, had boasted of creating the ‘Pizzagate’ conspiracy story – a fantasy that Hillary Clinton and her election staff ran a child sex and torture ring in the non-existent basement of a pleasant Washington neighbourhood pizzeria. Their enterprise had clear information channels from Moscow. 
  • We spoke for 31 minutes at 1.29 ET on 12 April 2019. During the conversation, concerning conflicts of interest, Pope asked only about my own issues – such as that former editor Victor Navasky, who would figure in the piece, had moved from running and owning The Nation to being Chair of the CJR board; and that the independent wealth foundation of The Nation editor Katrina vanden Heuvel – the Kat Foundation – periodically donated to Columbia University.
  • In the series, writer Jeff Gerth condemns multiple Pulitzer Prize-winning reports on Russian interference operations by US mainstream newspapers. Echoing words used in 2020 by vanden Heuvel, he cited as more important “RealClearInvestigations, a non-profit online news site that has featured articles critical of the Russia coverage by writers of varying political orientation, including Aaron Maté”.
  • On the day we spoke, I now know, Pope was working with vanden Heuvel and The Nation to launch – 18 days later – a major new international joint journalism project ‘Covering Climate Now!‘
  • Soon after we spoke, the CJR tweeted that “CJR and @thenation are gathering some of the world’s top journalists, scientists, and climate experts” for the event. I did not see the tweet. Pope and the CJR staff said nothing of this to me. 
  • Any editor must know without doubt in such a situation, that every journalist has a duty of candour and a clear duty to recuse themselves from editorial authority if any hint of conflict of interest arises. Pope did not take these steps. From then until August 2020, through his deputy, he sent me a stream of directions that had the effect of removing adverse material about vanden Heuvel and its replacement with lists of her ‘achievements’. Then he killed the story
  • Working on my own story for the CJR, I did not look behind or around – or think I needed to. I was working for the self-proclaimed ‘watchdog of journalism’. I forgot the ancient saw: who watches the watchdog?
  • This week, Kyle Pope failed to reply to questions from Byline Times about conflicts of interest in linking up with the subjects of the report he had commissioned.
  • During the period I was preparing the report about The Nation and its editor, he wrote for The Nation on nine occasions. He has admitted being remunerated by the publication. While I was working for the CJR, he said nothing. He did not recuse himself, and actively intervened to change content for a further 18 months.
  • On April 16 2019, I was informed that Katrina vanden Heuvel had written to Pope to ask about my report. “We’re going to say thanks for her thoughts and that we’ll make sure the piece is properly vetted and fact-checked,” I was told
  • A month later, I interviewed her for the CJR. Over the course of our 100 minutes discussion, it must have slipped her mind to mention that she and Kyle Pope had just jointly celebrated being given more than $1 million from the Rockefeller Family and other foundations to support their climate project.
  • Pope then asked me to identify my confidential sources from inside The Nation, describing this as a matter of “policy”
  • Pope asked several times that the article be amended to state that there were general tie-ups between the US left and Putin. I responded that I could find no evidence to suggest that was true, save that the Daily Beast had uncovered RT attempting cultivation of the US left. 
  • Pope then wanted the 6,000-word and fully edited report cut by 1,000 words, mainly to remove material about the errors in The Nation article. Among sections cut down were passages showing how, from 2014 onwards, vanden Heuvel had hired a series of pro-Russian correspondents after they had praised her husband. Among the new intake was a Russian and Syrian Government supporting broadcaster, Aaron Maté, taken on in 2017 after he had platformed Cohen on his show The Real News. 
  • On 30 January 2023, the CJR published an immense four-part 23,000-word series on Trump, Russia and the US media. The CJR’s writers found their magazine praised lavishly by normally rabid outlets. Fox News rejoiced that The New York Times had been “skewered by the liberal media watchdog the Columbia Journalism Review” over “Russiagate”. WorldNetDaily called it a “win for Trump”.
  • Pope agreed. Trump had “hailed our report as proof of the media assault on Trump that they’ve been hyping all along,” he wrote. “Trump cheered that view on Truth Social, his own struggling social-media platform.”
  • She and her late husband, Professor Stephen Cohen, were at the heart of my reporting on the support The Nation gave to Putin’s Russia. Sixteen months later, as Pope killed my report, he revealed that he had throughout been involved in an ambitious and lucratively funded partnership between the CJR and The Nation, and between himself and vanden Heuvel. 
  • As with The Nation in 2017, the CJR is seeing a storm of derisive and critical evaluations of the series by senior American journalists. More assessments are said to be in the pipeline. “We’re taking the critiques seriously,” Pope said this week. The Columbia Journalism Review may now have a Russia Problem.  
Javier E

What Does Peter Thiel Want? - Persuasion - 0 views

  • Of the many wealthy donors working to shape the future of the Republican Party, none has inspired greater fascination, confusion, and anxiety than billionaire venture capitalist Peter Thiel. 
  • Thiel’s current outlook may well make him a danger to American democracy. But assessing the precise nature of that threat requires coming to terms with his ultimate aims—which have little to do with politics at all. 
  • Thiel and others point out that when we lift our gaze from our phones and related consumer products to the wider vistas of human endeavor—breakthroughs in medicine, the development of new energy sources, advances in the speed and ease of transportation, and the exploration of space—progress has indeed slowed to a crawl.
  • ...21 more annotations...
  • It certainly informed his libertarianism, which inclined in the direction of an Ayn Rand-inspired valorization of entrepreneurial superman-geniuses whose great acts of capitalistic creativity benefit all of mankind. Thiel also tended to follow Rand in viewing the masses as moochers who empower Big Government to crush these superman-geniuses.
  • Thiel became something of an opportunistic populist inclined to view liberal elites and institutions as posing the greatest obstacle to building an economy and culture of dynamistic creativity—and eager to mobilize the anger and resentment of “the people” as a wrecking ball to knock them down. 
  • the failure of the Trump administration to break more decisively from the political status quo left Thiel uninterested in playing a big role in the 2020 election cycle.
  • Does Thiel personally believe that the 2020 election was stolen from Trump? I doubt it. It’s far more likely he supports the disruptive potential of encouraging election-denying candidates to run and helping them to win.
  • Thiel is moved to indignation by the fact that since 1958 no commercial aircraft (besides the long-decommissioned Concorde) has been developed that can fly faster than 977 kilometers per hour.
  • Thiel is, first and foremost, a dynamist—someone who cares above all about fostering innovation, exploration, growth, and discovery.
  • the present looks and feels pretty much the same as 1969, only “with faster computers and uglier cars.” 
  • Thiel’s approach to the problem is distinctive in that he sees the shortfall as evidence of a deeper and more profound moral, aesthetic, and even theological failure. Human beings are capable of great creativity and invention, and we once aspired to achieve it in every realm. But now that aspiration has been smothered by layer upon layer of regulation and risk-aversion. “Legal sclerosis,” Thiel claimed in that same book review, “is likely a bigger obstacle to the adoption of flying cars than any engineering problem.”
  • Progress in science and technology isn’t innate to human beings, Thiel believes. It’s an expression of a specific cultural or civilizational impulse that has its roots in Christianity and reached a high point during the Victorian era of Western imperialism
  • Thiel aims to undermine the progressive liberalism that dominates the mainstream media, the federal bureaucracy, the Justice Department, and the commanding heights of culture (in universities, think tanks, and other nonprofits).
  • In Thiel’s view, recapturing civilizational greatness through scientific and technological achievement requires fostering a revival of a kind of Christian Prometheanism (a monotheistic variation on the rebellious creativity and innovation pursued by the demigod Prometheus in ancient Greek mythology)
  • Against those who portray modern scientific and technological progress as a rebellion against medieval Christianity, Thiel insists it is Christianity that encourages a metaphysical optimism about transforming and perfecting the world, with the ultimate goal of turning it into “a place where no accidents can happen” and the achievement of “personal immortality” becomes possible
  • All that’s required to reach this transhuman end is that we “remain open to an eschatological frame in which God works through us in building the kingdom of heaven today, here on Earth—in which the kingdom of heaven is both a future reality and something partially achievable in the present.” 
  • As Thiel put it last summer in a wide-ranging interview with the British website UnHerd, the Christian world “felt very expansive, both in terms of the literal empire and also in terms of the progress of knowledge, of science, of technology, and somehow that was naturally consonant with a certain Christian eschatology—a Christian vision of history.”
  • JD Vance is quoted on the subject of what this political disruption might look like during a Trump presidential restoration in 2025. Vance suggests that Trump should “fire every single midlevel bureaucrat, every civil servant in the administrative state, replace them with our people. And when the courts stop [him], stand before the country, and say, ‘the chief justice has made his ruling. Now let him enforce it.’”
  • Another Thiel friend and confidant discussed at length in Vanity Fair, neo-reactionary Curtis Yarvin, takes the idea of disrupting the liberal order even further, suggesting various ways a future right-wing president (Trump or someone else) could shake things up, shredding the smothering blanket of liberal moralism, conformity, rules, and regulations, thereby encouraging the creation of something approaching a scientific-technological wild west, where innovation and experimentation rule the day. Yarvin’s preferred path to tearing down what he calls the liberal “Cathedral,” laid out in detail on a two-hour Claremont Institute podcast from May 2021, involves a Trump-like figure seizing dictatorial power in part by using a specially designed phone app to direct throngs of staunch supporters (Jan. 6-style) to overpower law enforcement at key locations around the nation’s capital.
  • this isn’t just an example of guilt-by-association. These are members of Thiel’s inner circle, speaking publicly about ways of achieving shared goals. Thiel funded Vance’s Senate campaign to the tune of at least $15 million. Is it likely the candidate veered into right-wing radicalism with a Vanity Fair reporter in defiance of his campaign’s most crucial donor?
  • As for Yarvin, Thiel continued to back his tech startup (Urbit) after it became widely known he was the pseudonymous author behind the far-right blog “Unqualified Reservations,” and as others have shown, the political thinking of the two men has long overlapped in numerous other ways. 
  • He’s deploying his considerable resources to empower as many people and groups as he can, first, to win elections by leveraging popular disgust at corrupt institutions—and second, to use the power they acquire to dismantle or even topple those institutions, hopefully allowing a revived culture of Christian scientific-technological dynamism to arise from out of the ruins.  
  • Far more than most big political donors, Thiel appears to care only about the extra-political goal of his spending. How we get to a world of greater dynamism—whether it will merely require selective acts of troublemaking disruption, or whether, instead, it will ultimately involve smashing the political order of the United States to bits—doesn’t really concern him. Democratic politics itself—the effort of people with competing interests and clashing outlooks to share rule for the sake of stability and common flourishing—almost seems like an irritant and an afterthought to Peter Thiel.
  • What we do have is the opportunity to enlighten ourselves about what these would-be Masters of the Universe hope to accomplish—and to organize politically to prevent them from making a complete mess of things in the process.
Javier E

The Only Crypto Story You Need, by Matt Levine - 0 views

  • the technological accomplishment of Bitcoin is that it invented a decentralized way to create scarcity on computers. Bitcoin demonstrated a way for me to send you a computer message so that you’d have it and I wouldn’t, to move items of computer information between us in a way that limited their supply and transferred possession.
  • The wild thing about Bitcoin is not that Satoshi invented a particular way for people to send numbers to one another and call them payments. It’s that people accepted the numbers as payments.
  • That social fact, that Bitcoin was accepted by many millions of people as having a lot of value, might be the most impressive thing about Bitcoin, much more than the stuff about hashing.
  • ...11 more annotations...
  • Socially, cryptocurrency is a coordination game; people want to have the coin that other people want to have, and some sort of abstract technical equivalence doesn’t make one cryptocurrency a good substitute for another. Social acceptance—legitimacy—is what makes a cryptocurrency valuable, and you can’t just copy the code for that.
  • A thing that worked exactly like Bitcoin but didn’t have Bitcoin’s lineage—didn’t descend from Satoshi’s genesis block and was just made up by some copycat—would have the same technology but none of the value.
  • Here’s another generalization of Bitcoin: Satoshi made up an arbitrary token that trades electronically for some price. The price turns out to be high and volatile. The price of an arbitrary token is … arbitrary?
  • it’s very interesting as a matter of finance theory. Modern portfolio theory demonstrates that adding an uncorrelated asset to a portfolio can improve returns and reduce risk.
  • To the extent that the price of Bitcoin 1) mostly goes up, though with lots of ups and downs along the way, and 2) goes up and down for reasons that are arbitrary and mysterious and not tied to, like, corporate earnings or the global economy, then Bitcoin is interesting to institutional investors.
  • In practice, it turns out that the price of Bitcoin is pretty correlated with the stock market, especially tech stocks
  • Bitcoin hasn’t been a particularly effective inflation hedge: Its price rose during years when US inflation was low, and it’s fallen this year as inflation has increased.
  • The right model of crypto prices might be that they go up during broad speculative bubbles when stock prices go up, and then they go down when those bubbles pop. That’s not a particularly appealing story for investors looking to diversify.
  • one important possibility is that the first generalization of Bitcoin, that an arbitrary tradeable electronic token can become valuable just because people want it to, permanently broke everyone’s brains about all of finance.
  • Before the rise of Bitcoin, the conventional thing to say about a share of stock was that its price represented the market’s expectation of the present value of the future cash flows of the business.
  • But Bitcoin has no cash flows; its price represents what people are willing to pay for it. Still, it has a high and fluctuating market price; people have gotten rich buying Bitcoin. So people copied that model, and the creation of and speculation on pure, abstract, scarce electronic tokens became a big business.
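The portfolio-theory claim in the annotations above — that an uncorrelated asset can reduce risk, while a tech-stock-correlated one cannot — can be checked with the standard two-asset variance formula. This is a rough illustration with made-up volatilities (15% for stocks, 60% for a crypto-like asset), not real market data:

```python
import math

def portfolio_vol(w1, s1, w2, s2, rho):
    """Annualized volatility of a two-asset portfolio.

    w1, w2: portfolio weights; s1, s2: asset volatilities;
    rho: correlation between the two assets.
    """
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# 100% stocks vs. a 5% allocation to the volatile asset,
# first uncorrelated (rho = 0), then tech-stock-like (rho = 0.9).
stocks_only  = portfolio_vol(1.00, 0.15, 0.00, 0.60, 0.0)  # → 0.15
uncorrelated = portfolio_vol(0.95, 0.15, 0.05, 0.60, 0.0)  # ≈ 0.1456: risk falls
correlated   = portfolio_vol(0.95, 0.15, 0.05, 0.60, 0.9)  # ≈ 0.1700: risk rises
```

With zero correlation the small allocation actually lowers overall volatility, which is the diversification pitch to institutional investors; with a 0.9 correlation it raises it, which is Levine's point about Bitcoin trading like a tech stock.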
Javier E

In 2022, TV Woke Up From the American Dream - The New York Times - 0 views

  • In politics, “the American dream” has long been used aspirationally, to evoke family and home. But as my colleague Jazmine Ulloa detailed earlier this year, the phrase has also lately been used ominously, especially by conservative politicians, to describe a certain way of life in danger of being stolen by outsiders.
  • The typical counterargument, both in politics and pop culture, has been that immigrants pursuing their ambitions help to strengthen all of America
  • recent stories have complicated this idea by questioning whether the dream itself — or, at least, defining that dream in mostly material terms — can be toxic.
  • ...1 more annotation...
  • This is the danger of the American dream when you scale it down from the national to the individual level. You risk devoting your life to wanting something because it’s what you’ve been told you should want. Everybody loves a Cinderella story, but sometimes your dream, in reality, is just a wish somebody else’s heart made.
Javier E

For the Love of Justice - by Damon Linker - 0 views

  • Thanks to social media, gaining widespread public attention for oneself and one’s favored causes has never been easier.
  • This has incentivized a lot of performative outrage that sometimes manifests itself in acts of protest, from environmental activists throwing soup on paintings in European museums to pro-Palestinian demonstrators halting traffic in major cities by sitting down en masse in the middle of roadways.
  • I don’t think they do much to advance the aims of the activists. In fact, I think they often backfire, generating ill-will among ordinary citizens inconvenienced by the protest. (As for the activists hoping to fight climate change by destroying works of art, I don’t even grasp what they think they’re doing with their lives.)
  • ...10 more annotations...
  • there’s a deeper reason for my harsh judgment, which is that I’m fully committed to the liberal project of domesticating and taming the most intense political passions, ultimately channeling them into representative political institutions, where they are forced to reach accommodation and compromise with contrary views held by other members of the polity.
  • The love of justice can be noble, but it can also be incredibly destructive.
  • (This is hard to see if you conveniently associate such love exclusively with positions staked out by your ideological or partisan allies. In reality, the political ambitions of one’s opponents are often fueled by their own contrary convictions about justice and its demands.
  • My liberal commitments therefore make me maximally suspicious of most examples of “street politics,” especially forms of it in which the activists risk very little and primarily appear to be engaging in a spiritually fulfilling form of socializing with likeminded peers.
  • But Bushnell’s act of self-immolation belongs in a different category altogether—one distinct from just about every other form of protest,
  • Bushnell could have written an op-ed. He could have joined, organized, or led a march and delivered a speech. He could have built up a loud social-media presence and used it to accuse the United States of complicity in genocide and publicize the accusation. He could have leveraged his position in the Air Force to draw added attention to his dissent from Biden administration policy in the Middle East. He could even have embraced terrorism and sought to gain entry to the Israeli embassy with a weapon or explosive
  • But Bushnell didn’t do any of these things. Instead, a few hours before his act of protest, he posted the following message on Facebook:
  • Many of us like to ask ourselves, “What would I do if I was alive during slavery? Or the Jim Crow South? Or apartheid? What would I do if my country was committing genocide?”The answer is, you’re doing it. Right now.
  • I will no longer be complicit in genocide…. I am about to engage in an extreme act of protest. But compared to what people have been experiencing in Palestine at the hands of their colonizers, it’s not extreme at all. This is what our ruling class has decided will be normal.
  • And then, like a small number of other intensely committed individuals down through the decades, he doused himself in a flammable liquid and set himself ablaze, opting to sacrifice his own life in a public act of excruciating self-torture, without doing anything at all to harm anyone but himself, in order to draw attention to what he considered an ongoing, intolerable injustice.
Javier E

The Overton Window: How Politics Change | Definition and Examples - Conceptually - 0 views

  • The Overton window of political possibility is the range of ideas the public is willing to consider and accept.
  • Public officials cannot enact any policy they please like they’re ordering dessert from a menu. They have to choose from among policies that are politically acceptable at the time. And we believe the Overton window defines that range of ideas.
  • The most common misconception is that lawmakers themselves are in the business of shifting the Overton window. That is absolutely false. Lawmakers are actually in the business of detecting where the window is, and then moving to be in accordance with it.
  • ...1 more annotation...
  • think tank, contended that pushing for extreme positions is more effective at changing public opinion. However, there are multiple approaches to shifting the status quo. For example, similar to the foot-in-the-door technique used in sales, one could advocate for small but gradually larger shifts to a policy.
Javier E

JD Vance and the Galaxy-Brained Style in American Politics - 0 views

  • “Cultural pessimism has a strong appeal in America today,” the historian Fritz Stern wrote. “As political conditions appear stable at home or irremediable abroad, American intellectuals have become concerned with the cultural problems of our society, and have substituted sociological or cultural analyses for political criticism.”
  • I bring up Stern’s book because it nails the character of “revolutionary” conservatism—just the sort of politics Vance represents. The junior senator from Ohio believes “culture war is class warfare,”
  • has made it possible for him to claim to be a tribune of the working class in spite of a 0 percent score from the AFL-CIO on “voting with working people.”
  • ...5 more annotations...
  • In general, it’s the point of view of someone who takes Thomas Cole’s The Course of Empire painting cycle to contain a subtle and profound truth about society, one best expressed in a familiar maxim: Strong men make good times; good times make weak men; weak men make . . . (I need to yawn and will let you fill in the rest).1
  • for Vanity Fair, James Pogue did a good job summarizing the tech billionaire Peter Thiel influence nexus and the Thiel-funded coterie that Vance ran with online in a long feature two years ago. Pogue notes: 
  • Vance and this New Right cohort, who are mostly so, so highly educated and well-read that their big problem often seems to be that they’re just too nerdy to be an effective force in mass politics, are not anti-intellectual. Vance is an intellectual himself, even if he’s not currently playing one on TV.
  • the man doesn’t just have cracked beliefs but cracked instincts. Almost endearingly, he and his pals seem to think that workaday politics is an opportune context for doing a bit of grand theory,
  • Stern, again: “They condemned or prophesied, rather than exposited or argued, and all their writings showed that they despised the discourse of intellectuals, depreciated reason, and exalted intuition.” As Stern makes clear, this is the style of thinking that did so much to pave the way for the “revolutionary conservatism” that emerged in the Weimar era.
Javier E

The Rise and Fall of BNN Breaking, an AI-Generated News Outlet - The New York Times - 0 views

  • His is just one of many complaints against BNN, a site based in Hong Kong that published numerous falsehoods during its short time online as a result of what appeared to be generative A.I. errors.
  • During the two years that BNN was active, it had the veneer of a legitimate news service, claiming a worldwide roster of “seasoned” journalists and 10 million monthly visitors, surpassing The Chicago Tribune’s self-reported audience. Prominent news organizations like The Washington Post, Politico and The Guardian linked to BNN’s stories
  • Google News often surfaced them, too
  • ...16 more annotations...
  • A closer look, however, would have revealed that individual journalists at BNN published lengthy stories as often as multiple times a minute, writing in generic prose familiar to anyone who has tinkered with the A.I. chatbot ChatGPT.
  • How easily the site and its mistakes entered the ecosystem for legitimate news highlights a growing concern: A.I.-generated content is upending, and often poisoning, the online information supply.
  • The websites, which seem to operate with little to no human supervision, often have generic names — such as iBusiness Day and Ireland Top News — that are modeled after actual news outlets. They crank out material in more than a dozen languages, much of which is not clearly disclosed as being artificially generated, but could easily be mistaken as being created by human writers.
  • Now, experts say, A.I. could turbocharge the threat, easily ripping off the work of journalists and enabling error-ridden counterfeits to circulate even more widely — as has already happened with travel guidebooks, celebrity biographies and obituaries.
  • The result is a machine-powered ouroboros that could squeeze out sustainable, trustworthy journalism. Even though A.I.-generated stories are often poorly constructed, they can still outrank their source material on search engines and social platforms, which often use A.I. to help position content. The artificially elevated stories can then divert advertising spending, which is increasingly assigned by automated auctions without human oversight.
  • NewsGuard, a company that monitors online misinformation, identified more than 800 websites that use A.I. to produce unreliable news content.
  • Low-paid freelancers and algorithms have churned out much of the faux-news content, prizing speed and volume over accuracy.
  • Former employees said they thought they were joining a legitimate news operation; one had mistaken it for BNN Bloomberg, a Canadian business news channel. BNN’s website insisted that “accuracy is nonnegotiable” and that “every piece of information underwent rigorous checks, ensuring our news remains an undeniable source of truth.”
  • this was not a traditional journalism outlet. While the journalists could occasionally report and write original articles, they were asked to primarily use a generative A.I. tool to compose stories, said Ms. Chakraborty and Hemin Bakir, a journalist based in Iraq who worked for BNN for almost a year. They said they had uploaded articles from other news outlets to the generative A.I. tool to create paraphrased versions for BNN to publish.
  • Mr. Chahal’s evangelism carried weight with his employees because of his wealth and seemingly impressive track record, they said. Born in India and raised in Northern California, Mr. Chahal made millions in the online advertising business in the early 2000s and wrote a how-to book about his rags-to-riches story that landed him an interview with Oprah Winfrey.
  • Mr. Chahal told Mr. Bakir to focus on checking stories that had a significant number of readers, such as those republished by MSN.com. Employees did not want their bylines on stories generated purely by A.I., but Mr. Chahal insisted on this. Soon, the tool randomly assigned their names to stories.
  • This crossed a line for some BNN employees, according to screenshots of WhatsApp conversations reviewed by The Times, in which they told Mr. Chahal that they were receiving complaints about stories they didn’t realize had been published under their names.
  • According to three journalists who worked at BNN and screenshots of WhatsApp conversations reviewed by The Times, Mr. Chahal regularly directed profanities at employees and called them idiots and morons. When employees said purely A.I.-generated news, such as the Fanning story, should be published under the generic “BNN Newsroom” byline, Mr. Chahal was dismissive. “When I do this, I won’t have a need for any of you,” he wrote on WhatsApp. Mr. Bakir replied to Mr. Chahal that assigning journalists’ bylines to A.I.-generated stories was putting their integrity and careers in “jeopardy.”
  • This was a strategy that Mr. Chahal favored, according to former BNN employees. He used his news service to exercise grudges, publishing slanted stories about a politician from San Francisco he disliked, about Wikipedia after it published a negative entry about BNN Breaking, and about Elon Musk after accounts belonging to Mr. Chahal, his wife and his companies were suspended on Musk's platform.
  • The increasing popularity of programmatic advertising — which uses algorithms to automatically place ads across the internet — allows A.I.-powered news sites to generate revenue by mass-producing low-quality clickbait content
  • Experts are nervous about how A.I.-fueled news could overwhelm accurate reporting with a deluge of junk content distorted by machine-powered repetition. A particular worry is that A.I. aggregators could chip away even further at the viability of local journalism, siphoning away its revenue and damaging its credibility by contaminating the information ecosystem.