Group items matching "contemplation" in title, tags, annotations or url

clairemann

Analysis: Supreme Court ruling is a bitter legal and personal blow to Trump - CNNPolitics - 0 views

  • (CNN) The Supreme Court's refusal to block the release of Trump White House documents to the House January 6 committee represents a huge defeat for the ex-President's frantic effort to cover up his 2021 coup attempt.
  • It will also likely be viewed by the former President as a betrayal by the court's conservative majority, which he cemented with three picks for the top bench whom he saw as a legal insurance policy as he's continually sought to bend governing institutions to avoid accountability.
  • The net has significantly tightened around the Trump White House in recent weeks.
  • ...11 more annotations...
  • "victory for the rule of law and American democracy"
  • Trump had mounted an intense effort to avoid such scrutiny and had already lost cases in district and appellate courts as part of a broad campaign of obstruction of the committee, which has included expansive executive privilege claims by ex-aides -- even some, like his populist political guru Steve Bannon, who were not serving White House officials at the time of the insurrection.
  • The Supreme Court did not rule on the key legal question of what happens when there is a dispute between a current and a former president on the scope of executive privilege -- a concept meant to ensure that advice to a commander in chief from subordinates can stay private. But it allowed to stand a ruling by the appellate court that found Trump had not demonstrated that his concerns for executive branch confidentiality should override "profound interests in disclosure" cited by Biden.
  • Wednesday's ruling, in which only conservative Justice Clarence Thomas signaled dissent, will also offer a new mark of legitimacy to the select committee, amid claims by pro-Trump Republicans that it is an illegally constituted witch hunt despite being voted into being by the House. It will also boost the committee's race against time as it tries to complete its work before a possible new Republican majority shuts it down.
  • The decision means that 700 documents -- including schedules, speech and call logs, and three pages of handwritten notes from then-White House chief of staff Mark Meadows -- can be transferred from the National Archives to the House committee, a process that was already underway Wednesday evening.
  • On Tuesday, CNN exclusively reported that the committee had subpoenaed and obtained phone number records from one of the ex-President's sons, Eric Trump, and Kimberly Guilfoyle, who is engaged to his brother, Donald Trump Jr. The committee is interested in investigating the level of coordination between Trump's team and organizers of the Washington rally at which the then-President told supporters who later moved to the Capitol to "fight like hell" to stop Congress from certifying Biden's election win.
  • it appears unlikely to meaningfully reshape the fraught politics of the insurrection. Swathes of the Republican Party, especially in the House, have done their best to whitewash Trump's role that day as he contemplates a possible comeback presidential bid in 2024.
  • There is no doubt, however, that Trump will be apoplectic that his three Supreme Court nominees, Justices Neil Gorsuch, Brett Kavanaugh and Amy Coney Barrett, did not publicly dissent from denying his bid to keep his West Wing records secret.
  • Trump has repeatedly slammed the Supreme Court for throwing out his false claims of election fraud, claiming he was a victim of a miscarriage of justice even though his delusional cases were also dismissed by multiple lower courts.
  • Throughout his presidency, Trump appeared to equate judicial and Cabinet nominations with an act of patronage, viewing those selected as owing him a debt that would be repaid by pursuing his interests rather than honoring the rule of law and the Constitution.
  • The gathering clouds around Trump would represent a grave legal and reputational risk to a normal politician, but given his talent for impunity, it's far from certain that they will slow his political aspirations.
criscimagnael

Putin's Next Move on Ukraine Is a Mystery. Just the Way He Likes It. - The New York Times - 0 views

  • What is Russia’s next move? No one knows, except perhaps Mr. Putin. And that is by design.
  • Foreign Minister Sergei A. Ryabkov warned that failure to meet Russia’s demands could put the “security of the whole European continent” at risk.
  • Analysts said that not even members of Mr. Putin’s inner circle — let alone Mr. Ryabkov, who led Russia’s delegation at this week’s Geneva talks — were likely to know how seriously Mr. Putin is contemplating full-scale war with Ukraine. Nor would they know what American concessions he is prepared to accept in order to defuse the crisis.
  • ...18 more annotations...
  • Instead, Mr. Putin is likely not even to have made a decision,
  • The talks continue on Wednesday, when Russian officials will meet representatives of the United States and its NATO allies in Brussels,
  • After that, Mr. Peskov said, Russia would decide “whether it makes sense” to move forward with diplomacy.
  • For years, Mr. Putin has fumed over NATO’s expansion eastward and American support for pro-Western sentiment in Ukraine; now, by creating a new security crisis that threatens to complicate President Biden’s agenda, he has succeeded in getting the issue to the forefront in Washington.
  • “For the first time in 30 years, the United States has agreed to discuss issues that it was impossible to discuss even a year ago,”
  • Now that the Russian president has Americans at the negotiating table, he is pursuing another classic Putin strategy: putting so many potential moves on the playing field — pointing in so many different directions — that he leaves people guessing, allowing him to choose the tactics that best suit him as events evolve.
  • He said Russia was imposing no specific timeline, but that it needed a “fast response” to its demands. And while he said there was “no reason to fear an escalation scenario” in Ukraine
  • The contradictory messaging continued on Tuesday when the Kremlin’s spokesman, Mr. Peskov, countered any positive assessments Mr. Ryabkov might have offered the day before. “For now, we do not see any substantive reason for optimism,” he said in his daily conference call with reporters.
  • The virus-free cocoon Mr. Putin has tried to establish around himself has meant that even confidants are forced to spend days in quarantine before being allowed into the same room with him, further reducing his connections with the outside world.
  • “No one knows with 100 percent certainty whether Putin is ready for war, or whether this is a bluff or not,” Ms. Stanovaya said.
  • Instead, he has warned of an unspecified “military-technical response” if Russia does not get what it wants.
  • We need long-term, legally binding guarantees
  • we need at least something, at least a legally binding agreement rather than just verbal assurances.”
  • Emboldened, he sees Mr. Biden as a man who may be willing to make a deal — and that Mr. Biden, as a veteran of the Cold War, may possess a respect for power diplomacy with Moscow that younger American politicians do not.
  • “He assumes that the Americans will pay attention only to that which concretely, immediately threatens them,” Dmitri Trenin, director of the Carnegie Moscow Center think tank, said of the Russian president. “He uses unpredictability, he uses tension, he uses threats.”
  • it is the demand that NATO offer some kind of formal assurances not to expand eastward and to cease military cooperation with Ukraine that is now most important for Mr. Putin.
  • NATO has repeatedly ruled out the idea that it would allow any other country to veto who can and cannot be in the alliance, creating what appears to be an impasse.
  • As for what Russia does next, Mr. Lukyanov said that this would be solely up to Mr. Putin, who exerts a monopoly on foreign-policy decision-making without recent precedent in Russia.
Javier E

China at the peak - by Noah Smith - Noahpinion - 0 views

  • We thus have the privilege of seeing a great civilization at its peak
  • How much greater would China’s peak have been if Deng Xiaoping had sided with the Tiananmen Square protesters, and liberalized China’s society in addition to its economy? How many great Chinese books, essays, video games, cartoons, TV shows, movies, and songs would we now enjoy if it weren’t for the pervasive censorship regime now in place? How much more would the people of the world have learned from Chinese culture if they could travel there freely and interact with Chinese people freely over the internet? Without a draconian autocrat like Xi Jinping at the helm, would so many Chinese people be looking to flee the country? Would the U.S. and China still be friends instead of at each other’s throats?
  • The key fact is that China’s meteoric rise seems like it’s drawing to a close
  • ...37 more annotations...
  • China’s drop was much, much bigger; the Japan of the 80s was never the export machine people believed it to be. Both countries turned to investment in real estate and infrastructure as a replacement growth driver — although again, China did this much more than Japan did. Essentially, China did all the things we typically think of Japan as having done 25 years earlier, but much more than Japan actually did them.
  • Yes, for those who were wondering, this does look a little bit like what happened to Japan in the 1990s
  • Already the country is not growing much faster than the G7, and as the ongoing real estate bust weighs on the economy, even that small difference may now be gone. The country’s surging auto industry is a bright spot, but won’t be big enough to rescue the economy from the evaporation of its primary growth driver.
  • Even if it manages to climb up to 40%, that’s still a fairly disappointing result — South Korea is at 71% and Japan at 65%
  • a re-acceleration would require a massive burst of productivity growth, which just seems unlikely.
  • That means China’s catch-up growth only took it to 30% of U.S. per capita GDP (PPP)
  • There’s one main argument that people make for a quick Chinese decline: rapid aging. But while I don’t want to wave this away, I don’t think it’s going to be as big a deal as many believe
  • This is another example of China’s peak being both awe-inspiring and strangely disappointing at the same time.
  • Now that China has hit its peak, will it decline? And if so, how much and how fast?
  • it seems likely that China’s growth will now slow to developed-country levels, or slightly higher, without much prospect for a sustained re-acceleration
  • when people contemplate Chinese decline, they’re not asking whether its economy will shrink; they’re asking whether its relative economic dominance and geopolitical importance will decrease.
  • If we just casually pattern-match on history, the answer would probably be “not for a long time”. Most powerful countries seem to peak and then plateau. Britain ruled the waves for a century.
  • U.S. relative power and economic dominance peaked in the 1950s, but it didn’t really start declining until the 2000s
  • Japan and Germany had their military power smashed in WW2, but remained economic heavyweights for many decades afterwards.
  • When the Roman Empire declined, it got a lot poorer. But in the modern economy, countries that decline in relative terms, and in geopolitical power, often get richer
  • The total fertility rate has been low since even before the one-child policy was implemented, but recently it has taken a nose-dive. Two years ago, the UN put it at 1.16, which is 40% lower than the U.S. and 22% lower than Europe
  • The country’s total population only started shrinking this year, but its young population started falling sharply 20 years ago, due to the echo of low fertility in the 80s. The most common age for a Chinese person is now about 50 years old, with another peak at 35:
  • The first reason is that power is relative, and China’s rivals have demographic issues of their own. The U.S., Europe, India and Japan all have higher fertility than China, but still below replacement level
  • demographics aren’t actually going to force Chinese power or wealth into rapid decline over the next few decades.
  • third of all, evidence suggests that population aging is really more of a persistent drag than a crisis or disaster.
  • Second, demographics won’t take away China’s biggest economic advantage, which is clustering and agglomeration effects. Asia is the world’s electronics manufacturing hub. It’s also by far the most populous region in the world, giving it the biggest potential market size
  • China will act as a key hub for that region, in terms of trade, supply chains, investment, and so on. China is shrinking, but Asia is not
  • As a result, there are suddenly many fewer Chinese people able to bear children, which is why the actual number of births in China has fallen by almost half since 2016:
  • we’d find that every percentage point of the senior population share that China gains relative to other countries might reduce its relative economic performance by about 1.15%. That’s not a huge number.
  • Now, if we look at the research, we find some estimates that are much larger than this — for example, Ozimek et al. (2018) look at specific industries and specific U.S. states, and find an effect on productivity that’s three times as large as the total effect on growth that I just eyeballed above. Maestas et al. (2022) look at U.S. states, and also find a larger effect. But Acemoglu and Restrepo (2017) look across countries and find no effect at all.
  • On top of that, there are plenty of things a country can do to mitigate the effects of aging. One is automation. China is automating at breakneck speed,
  • A second is having old people work longer; China, which now has higher life expectancy than the U.S., is well-positioned to do this.
  • Finally, aging will prompt China to do something it really needs to do anyway: build a world class health care system
  • this would help rectify the internal imbalances that Michael Pettis always talks about, shifting output from low-productivity real estate investment toward consumption.
  • if not aging, the only other big dangers to China are war and climate change.
  • To realize its full potential, Altasia will need integration — it will need some way to get Japanese and Korean and Taiwanese investment and technology to the vast labor forces of India, Indonesia, and the rest
  • the most likely outcome is that China sits at or near its current peak of wealth, power and importance through the middle of this century at least.
  • Altasia has more people and arguably more technical expertise than China. And it’s the only alternative location for the Asian electronics supercluster.
  • War was the big mistake that Germany made a century ago, so let’s hope China doesn’t follow in its footsteps.
  • The story of whether and how that complex web of investment, tech transfer, and trade develops will be the next great story of globalization.
  • But I think the very complexity of Altasia will lead to its own sort of adventure and excitement.
  • for Western companies looking for new markets, Altasia will potentially be more exciting than China ever was. The Chinese market delivered riches to some, but the government banned some products (especially internet services) and stole the technology used to make others. Ultimately, China’s billion consumers turned out to be a mirage for many. The economies and societies of Altasia, in comparison, are much more open to foreign products.
Javier E

Our generation was told liberal economics would make us free. Look at us now. We were misled | Nesrine Malik | The Guardian - 0 views

  • Behind the strikes, inflation numbers and talk of all the difficult decisions politicians have to make are a multitude of trapped people, their choices shrinking. People in bad relationships who cannot leave because rents and mortgages have gone up so being single is no longer viable. People who would like to have a child, or another child, but cannot afford its care, or who would like to return to work after having a child but the sums just don’t work. People in bad jobs with no security or benefits who cannot quit and look for alternatives because they have no savings to buffer rising costs. The end result is a crisis not just of the economy, but of freedom.
  • With that crisis, an entire liberal ambition becomes thwarted. We talk of liberalism in grand abstract terms, as the noble heart of an ideal political order that promotes human rights, the rule of law, civil liberties and freedom from religious dogma and prejudice
  • But when economic arrangements themselves become coercive and abusive, then political liberalism can coexist with, and indeed mask, a state of illiberalism and bondage. In the throes of personal challenges, lofty political ideals feel remote and irrelevant. All that people like Jane and others have the time or energy to register is a set of invisible oppressive economic forces that simply must be weathered because they are facts of nature
  • ...12 more annotations...
  • This, it strikes me, is not only a political choice, but a reneging on a historical deal, forged in the colossal upheavals of the Enlightenment, the Industrial Revolution, and revolution in England, the US and Europe.
  • You can hear the language and logic of this economic dictatorship everywhere. Tony Blair tells us that with an ageing population, a climate crisis, higher debt interest and an economic workforce increasingly constrained in its ability to seek services such as housing and healthcare outside the public sector, we should be ready to not wait for the NHS and use private health providers for minor health matters, and that we should ultimately be “taxing less and spending less”.
  • The result is a sort of ambient autocracy, where personal choices are increasingly dictated by forces that you had no say in creating and have no means of overthrowing.
  • The trade-off was that we would lose the traditional supports and solaces of rural values and extended families, but become free from their prejudices and patriarchies, and the associated economic and political exploitations of a hierarchical system that was skewed to landowners, rent seekers and those imbued with authority because of where they were born in that hierarchy.
  • to choose how to live our lives. “The only freedom which deserves the name,” wrote John Stuart Mill, “is that of pursuing our own good, in our own way, so long as we do not attempt to deprive others of theirs, or impede their efforts to obtain it.”
  • That good is now increasingly limited to those who can afford it – who can purchase the liberty to love, leave and leisure, and the right to indulge in creative work and expression.
  • The rest are caught in a halfway house between the old and new worlds.
  • Bereft of the support and proximity of family and community, people are deprived of the social safety net that was supposed to replace it, increasingly having to fork out funds for childcare, subsidising boomeranging single children and elderly parents while paying tax, or fretting about their fates in a cutthroat housing market and a scandalously underfunded care system.
  • Anything that disturbs this tenuous balance cannot be contemplated, so the shackles to partners, employers and imperfect domestic arrangements grow ever tighter.
  • I grew up in the old world and saw only its limitations, chafing against it and impatient for some individual autonomy. My mother had four children, working throughout her childbearing years as a school teacher, only able to go back to work because, with each child, a new family member would move in, or move back in, to help. They joined others who lived with us on and off over the years when they needed housing.
  • My parents were distant but seemed to be broadly content figures, either at work or obscured by a blur of relatives they were constantly entertaining, feeding or cleaning up after in a gaggle of chat, laughter and gossip. The price for that mutual communal facilitation was paid in other ways – a violating lack of privacy and personal space, and a sense that everyone’s lives, in their most private and intimate detail, were the subject of others’ opinions and policing. It was a “gilded cage”, as it is called in Orientalist literature
  • In hindsight now, and in adulthood and parenthood, having experienced both in the new world, I can see that gilded cages come in many forms
Javier E

Reading in the Time of Book Bans and A.I. - The New York Times - 0 views

  • We are in the throes of a reading crisis.
  • While right and left are hardly equivalent in their stated motivations, they share the assumption that it’s important to protect vulnerable readers from reading the wrong things.
  • But maybe the real problem is that children aren’t being taught to read at all.
  • ...44 more annotations...
  • In May, David Banks, the chancellor of New York City’s public schools, for many years a stronghold of “whole language” instruction, announced a sharp pivot toward phonics, a major victory for the “science of reading” movement and a blow to devotees of entrenched “balanced literacy” methods
  • As corporate management models and zealous state legislatures refashion the academy into a gated outpost of the gig economy, the humanities have lost their luster for undergraduates. According to reports in The New Yorker and elsewhere, fewer and fewer students are majoring in English, and many of those who do (along with their teachers) have turned away from canonical works of literature toward contemporary writing and pop culture. Is anyone reading “Paradise Lost” anymore? Are you?
  • While we binge and scroll and D.M., the robots, who are doing more and more of our writing, may also be taking over our reading.
  • There is so much to worry about. A quintessentially human activity is being outsourced to machines that don’t care about phonics or politics or beauty or truth. A precious domain of imaginative and intellectual freedom is menaced by crude authoritarian politics. Exposure to the wrong words is corrupting our children, who aren’t even learning how to decipher the right ones. Our attention spans have been chopped up and commodified, sold off piecemeal to platforms and algorithms. We’re too busy, too lazy, too preoccupied to lose ourselves in books.
  • the fact that the present situation has a history doesn’t mean that it isn’t real
  • the reading crisis isn’t simply another culture-war combat zone. It reflects a deep ambivalence about reading itself, a crack in the foundations of modern consciousness.
  • Just what is reading, anyway? What is it for? Why is it something to argue and worry about? Reading isn’t synonymous with literacy, which is one of the necessary skills of contemporary existence. Nor is it identical with literature, which designates a body of written work endowed with a special if sometimes elusive prestige.
  • Is any other common human undertaking so riddled with contradiction? Reading is supposed to teach us who we are and help us forget ourselves, to enchant and disenchant, to make us more worldly, more introspective, more empathetic and more intelligent. It’s a private, even intimate act, swathed in silence and solitude, and at the same time a social undertaking. It’s democratic and elitist, soothing and challenging, something we do for its own sake and as a means to various cultural, material and moral ends.
  • Fun and fundamental: Together, those words express a familiar utilitarian, utopian promise — the faith that what we enjoy doing will turn out to be what we need to do, that our pleasures and our responsibilities will turn out to be one and the same. It’s not only good; it’s good for you.
  • Reading is, fundamentally, both a tool and a toy. It’s essential to social progress, democratic citizenship, good government and general enlightenment.
  • It’s also the most fantastically, sublimely, prodigiously useless pastime ever invented
  • Teachers, politicians, literary critics and other vested authorities labor mightily to separate the edifying wheat from the distracting chaff, to control, police, correct and corral the transgressive energies that propel the turning of pages.
  • His despair mirrors his earlier exhilaration and arises from the same source. “I envied my fellow-slaves for their stupidity. I have often wished myself a beast. I preferred the condition of the meanest reptile to my own. Any thing, no matter what, to get rid of thinking!”
  • Reading is a relatively novel addition to the human repertoire — less than 6,000 years old — and the idea that it might be available to everybody is a very recent innovation
  • Written language, associated with the rise of states and the spread of commerce, was useful for trade, helpful in the administration of government and integral to some religious practices. Writing was a medium for lawmaking, record-keeping and scripture, and reading was the province of priests, bureaucrats and functionaries.
  • For most of history, that is, universal literacy was a contradiction in terms. The Latin word literatus designated a member of the learned elite
  • Anyone could learn to do it, but the mechanisms of learning were denied to most people on the grounds of caste, occupation or gender.
  • According to Steven Roger Fischer’s lively and informative “A History of Reading” (2003), “Western Europe began the transition from an oral to a literate society in the early Middle Ages, starting with society’s top rungs — aristocracy and clergy — and finally including everyone else around 1,200 years later.”
  • The print revolution catalyzed a global market that flourishes to this day: Books became commodities, and readers became consumers.
  • For Fischer, as for many authors of long-range synthetic macrohistories, the story of reading is a chronicle of progress, the almost mythic tale of a latent superpower unlocked for the benefit of mankind.
  • “If extraordinary human faculties and powers do lie dormant until a social innovation calls them into life,” he writes, “perhaps this might help to explain humanity’s constant advancement.” “Reading,” he concludes, “had become our union card to humanity.”
  • For one thing, the older, restrictive model of literacy as an elite prerogative proved to be tenacious
  • The novel, more than any other genre, catered to this market. Like every other development in modern popular culture, it provoked a measure of social unease. Novels, at best a source of harmless amusement and mild moral instruction, were at worst — from the pens of the wrong writers, or in the hands of the wrong readers — both invitations to vice and a vice unto themselves
  • More consequential — and more revealing of the destabilizing power of reading — was the fear of literacy among the laboring classes in Europe and America. “Reading, writing and arithmetic,” the Enlightenment political theorist Bernard Mandeville asserted, were “very pernicious to the poor” because education would breed restlessness and discontent
  • “It was unlawful, as well as unsafe, to teach a slave to read,” Frederick Douglass writes in his “Narrative of the Life” recalling the admonitions of one of his masters, whose wife had started teaching young Frederick his letters. If she persisted, the master explained, their chattel would “become unmanageable, and of no value to his master. As to himself, it could do him no good, but a great deal of harm. It would make him discontented and unhappy.”
  • “As I read and contemplated the subject, behold! that very discontentment which Master Hugh had predicted would follow my learning to read had already come, to torment and sting my soul to unutterable anguish. As I writhed under it, I would at times feel that learning to read had been a curse rather than a blessing.”
  • The crisis is what happens either when those efforts succeed or when they fail. Everyone likes reading, and everyone is afraid of it.
  • Douglass’s literary genius resides in the way he uses close attention to his own situation to arrive at the essence of things — to crack the moral nut of slavery and, in this case, to peel back the epistemological husk of freedom.
  • He has freed his mind, but the rest has not followed. In time it would, but freedom itself brings him uncertainty and terror, an understanding of his own humanity that is embattled and incomplete.
  • Here, the autobiographical touches on the mythic, specifically on the myth of Prometheus, whose theft of fire — a curse as well as a blessing bestowed on a bumbling, desperate species — is a primal metaphor for reading.
  • A school, however benevolently conceived and humanely administered, is a place of authority, where the energies of the young are regulated, their imaginations pruned and trained into conformity. As such, it will inevitably provoke resistance, rebellion and outright refusal on the part of its wards
  • Schools exist to stifle freedom, and also to inculcate it, a dialectic that is the essence of true education. Reading, more than any other discipline, is the engine of this process, precisely because it escapes the control of those in charge.
  • Apostles of reading like to quote Franz Kafka’s aphorism that “a book must be the ax for the frozen sea within us.” By itself, the violence of the metaphor is tempered by its therapeutic implication.
  • Kafka’s previous sentence: “What we need are books that hit us like the most painful misfortune, like the death of someone we loved more than we love ourselves, that make us feel as though we had been banished to the woods, far from any human presence, like a suicide.”
  • Are those the books you want in your child’s classroom? To read in this way is to go against the grain, to feel oneself at odds, alienated, alone. Schools exist to suppress those feelings, to blunt the ax and gently thaw the sea
  • That is important work, but it’s equally critical for that work to be subverted, for the full destructive potential of reading to lie in reach of innocent hands.
  • Roland Barthes distinguished between two kinds of literary work:
  • Text of pleasure: the text that contents, fills, grants euphoria: the text that comes from culture and does not break with it, is linked to a comfortable practice of reading. Text of bliss: the text that imposes a state of loss, the text that discomforts (perhaps to the point of a certain boredom), unsettles the reader’s historical, cultural, psychological assumptions, the consistency of his tastes, values, memories, brings to a crisis his relation with language.
  • he is really describing modalities of reading. To a member of the slaveholding Southern gentry, “The Columbian Orator” is a text of pleasure, a book that may challenge and surprise him in places, but that does not undermine his sense of the world or his place in it. For Frederick Douglass, it is a text of bliss, “bringing to crisis” (as Barthes would put it) his relation not only to language but to himself.
  • If you’ll forgive a Dungeons and Dragons reference, it might help to think of these types of reading as lawful and chaotic.
  • Lawful reading rests on the certainty that reading is good for us, and that it will make us better people. We read to see ourselves represented, to learn about others, to find comfort and enjoyment and instruction. Reading is fun! It’s good and good for you.
  • Chaotic reading is something else. It isn’t bad so much as unjustified, useless, unreasonable, ungoverned. Defenses of this kind of reading, which are sometimes the memoirs of a certain kind of reader, favor words like promiscuous, voracious, indiscriminate and compulsive.
  • Bibliophilia is lawful. Bibliomania is chaotic.
  • The point is not to choose between them: This is a lawful publication staffed by chaotic readers. In that way, it resembles a great many English departments, bookstores, households and classrooms. Here, the crisis never ends. Or rather, it will end when we stop reading. Which is why we can’t.
Javier E

Apocalypse When? Global Warming's Endless Scroll - The New York Times - 0 views

  • the climate crisis is outpacing our emotional capacity to describe it
  • I can’t say precisely when the end began, just that in the past several years, “the end of the world” stopped referring to a future cataclysmic event and started to describe our present situation
  • Across the ironized hellscape of the internet, we began “tweeting through the apocalypse” and blogging the Golden Globes ceremony “during the end times” and streaming “Emily in Paris” “at the end of the world.”
  • ...7 more annotations...
  • global warming represents the collapse of such complex systems at such an extreme scale that it overrides our emotional capacity
  • it is darkly inverted on the Instagram account @afffirmations, where new-age positive thinking buckles under the weight of generational despair, and serene stock photography collides with mantras like “I am not climate change psychosis” and “Humanity is not doomed.”
  • Often the features of our dystopia are itemized, as if we are briskly touring the concentric circles of hell — rising inequality, declining democracy, unending pandemic, the financial system optimistically described as “late” capitalism — until we have reached the inferno’s toasty center, which is the destruction of the Earth through man-made global warming.
  • This creates its own perverse flavor of climate denial: We acknowledge the science but do not truly accept it, at least not enough to urgently act.
  • This paralysis itself is almost too horrible to contemplate. As global warming cooks the Earth, it melts our brains, fries our nerves and explodes the narratives that we like to tell about humankind — even the apocalyptic ones.
  • This “end of the world” does not resemble the ends of religious prophecies or disaster films, in which the human experiment culminates in dramatic final spectacles
  • Instead we persist in an oxymoronic state, inhabiting an end that has already begun but may never actually end.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now "considerably harder to get than drugs."
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie, it had realized that if it answered honestly, it may not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

A Tale of Two Moralities - NYTimes.com - 0 views

  • the great divide in our politics isn’t really about pragmatic issues, about which policies work best; it’s about differences in those very moral imaginations Mr. Obama urges us to expand, about divergent beliefs over what constitutes justice.
  • the real challenge we face is not how to resolve our differences — something that won’t happen any time soon — but how to keep the expression of those differences within bounds.
  • The other side believes that people have a right to keep what they earn, and that taxing them to support others, no matter how needy, amounts to theft. That’s what lies behind the modern right’s fondness for violent rhetoric: many activists on the right really do see taxes and regulation as tyrannical impositions on their liberty.
  • ...9 more annotations...
  • One side of American politics considers the modern welfare state — a private-enterprise economy, but one in which society’s winners are taxed to pay for a social safety net — morally superior to the capitalism red in tooth and claw we had before the New Deal. It’s only right, this side believes, for the affluent to help the less fortunate.
  • This deep divide in American political morality — for that’s what it amounts to — is a relatively recent development. Commentators who pine for the days of civility and bipartisanship are, whether they realize it or not, pining for the days when the Republican Party accepted the legitimacy of the welfare state, and was even willing to contemplate expanding it.
  • we have, for the most part, managed to agree on certain ground rules in the abortion controversy: it’s acceptable to express your opinion and to criticize the other side, but it’s not acceptable either to engage in violence or to encourage others to do so. What we need now is an extension of those ground rules to the wider national debate.
  • When people talk about partisan differences, they often seem to be implying that these differences are petty, matters that could be resolved with a bit of good will. But what we’re talking about here is a fundamental disagreement about the proper role of government.
  • Today’s G.O.P. sees much of what the modern federal government does as illegitimate; today’s Democratic Party does not
  • There’s no middle ground between these views. One side saw health reform, with its subsidized extension of coverage to the uninsured, as fulfilling a moral imperative: wealthy nations, it believed, have an obligation to provide all their citizens with essential care
  • The other side saw the same reform as a moral outrage, an assault on the right of Americans to spend their money as they choose.
  • We need to have leaders of both parties — or Mr. Obama alone if necessary — declare that both violence and any language hinting at the acceptability of violence are out of bounds. We all want reconciliation, but the road to that goal begins with an agreement that our differences will be settled by the rule of law.
Javier E

Opinion | How AI is transforming education at the University of Mississippi - The Washington Post - 0 views

  • Perplexity AI “unlocks the power of knowledge with information discovery and sharing.” This, it turns out, means “does research.” Type something into it, and it spits out a comprehensive answer, always sourced and sometimes bulleted. You might say this is just Google on steroids — but really, it is Google with a bibliography.
  • Caleb Jackson, a 22-year-old junior at Ole Miss studying part time, is a fan. This way, he doesn’t have to spend hours between night shifts and online classes trawling the internet for sources. Perplexity can find them, and he can get to writing that much sooner.
  • What’s most important to Ole Miss faculty members is that students use these tools with integrity. If the university doesn’t have a campuswide AI honor code, and so far it doesn’t, individual classes should. And no matter whether professors permit all applications of AI, as some teachers have tried, or only the narrowest, students should have to disclose just how much help they had from robots.
  • ...25 more annotations...
  • “Write a five-paragraph essay on Virginia Woolf’s ‘To the Lighthouse.’” Too generic? Well, how about “Write a five-paragraph essay on the theme of loss in ‘To the Lighthouse’”? Too high-schoolish? “Add some bigger words, please.” The product might not be ready to turn in the moment it is born, fully formed, from ChatGPT’s head. But with enough tweaking — either by the student or by the machine at the student’s demand — chances are the output can muster at least a passing grade.
  • Which of these uses are okay? Which aren’t? The harnessing of an AI tool to create an annotated bibliography likely doesn’t rankle even librarians the way relying on that same tool to draft a reflection on Virginia Woolf offends the professor of the modern novel. Why? Because that kind of contemplation goes closer to the heart of what education is really about.
  • the core of the question colleges now face. They can’t really stop students from using AI in class. They might not be able to notice students have done so at all, and when they do think they’ve noticed they’ll be acting only on suspicion. But maybe teachers can control the ways in which students use AI in class.
  • Figuring out exactly what ways those ought to be requires educators to determine what they care about in essays — what they are desperate to hear. The purpose of these papers is for students to demonstrate what they’ve learned, from hard facts to compositional know-how, and for teachers to assess how their pupils are progressing. The answer to what teachers want to get from students in their written work depends on what they want to give to students.
  • ChatGPT is sort of in a class of its own, because it can be almost anything its users want it to be so long as they possess one essential skill: prompt engineering. This means, basically, manipulating the machine not only into giving you an answer but also into giving you the kind of answer you’re looking for.
  • The next concern is that students should use AI in a manner that improves not only their writing but also their thinking — in short, in a manner that enhances learning rather than bypasses the need to learn at all.
  • This simple principle makes for complicated practice. Certainly, no one is going to learn anything by letting AI write an essay in its entirety. What about letting AI brainstorm an idea, on the other hand, or write an outline, or gin up a counter-argument? Lyndsey Cook, a senior at Ole Miss planning a career in nursing, finds the brainstorming especially helpful: She’ll ask ChatGPT or another tool to identify the themes in a piece of literature, and then she’ll go back and look for them herself.
  • These shortcuts, on the one hand, might interfere with students’ learning to brainstorm, outline or see the other side of things on their own
  • But — here comes a human-generated counterargument — they may also aid students in surmounting obstacles in their composition that otherwise would have stopped them short. That’s particularly true of kids whose high schools didn’t send them to college already equipped with these capabilities.
  • Allow AI to boost you over these early hurdles, and suddenly the opportunity for deeper learning — the opportunity to really write — will open up. That’s how Caleb Jackson, the part-time student for whom Perplexity has been such a boon, sees it: His professor, he says, wanted them to “get away from the high-school paper and go further, to write something larger like a thesis.”
  • maybe, as one young Ole Miss faculty member put it to me, this risks “losing the value of the struggle.” That, she says, is what she is scared will go away.
  • All this invites the most important question there is: What is learning for?
  • Learning, in college, can be instrumental. According to this view, the aim of teaching is to prepare students to live in the real world, so all that really matters is whether they have the chops to field jobs that feed themselves and their families. Perhaps knowing how to use AI to do any given task for you, then, is one of the most valuable skills out there — the same way it pays to be quick with a calculator.
  • If you accept this line of argument, however, there are still drawbacks to robotic crutches. Some level of critical thinking is necessary to function as an adult, and if AI stymies its development even the instrumental aim of education is thwarted. The same goes for that “value of the struggle.” The real world is full of adversity, much of which the largest language model can’t tell you how to overcome.
  • more compelling is the idea, probably shared by most college professors, that learning isn’t only instrumental after all — that it has intrinsic value and that it is the end rather than merely a means to one.
  • Every step along the way that is skipped, the shorter the journey becomes, the less we will take in as we travel.
  • This glummest of outlooks suggests that AI will stunt personal growth even if it doesn’t harm professional prospects.
  • While that doesn’t mean it’s wise to prohibit every little application of the technology in class, it probably does mean discouraging those most closely related to critical thinking.
  • One approach is to alter standards for grading, so that the things the machines are worst at are also the things that earn the best marks: originality, say, or depth of feeling, or so-called metacognition — the process of thinking about one’s own thinking or one’s own learning.
  • Hopefully, these things are also the most valuable because they are what make us human.
  • Caleb Jackson only wants AI to help him write his papers — not to write them for him. “If ChatGPT will get you an A, and you yourself might get a C, it’s like, ‘Well, I earned that C.’” He pauses. “That might sound crazy.”
  • Dominic Tovar agrees. Let AI take charge of everything, and, “They’re not so much tools at that point. They’re just replacing you.”
  • Lyndsey Cook, too, believes that even if these systems could reliably find the answers to the most vexing research problems, “it would take away from research itself” — because scientific inquiry is valuable for its own sake. “To have AI say, ‘Hey, this is the answer …’” she trails off, sounding dispirited.
  • Claire Mischker, lecturer of composition and director of the Ole Miss graduate writing center, asked her students at the end of last semester to turn in short reflections on their experience in her class. She received submissions that she was near certain were produced by ChatGPT — “that,” she says as sarcastically as she does mournfully, “felt really good.”
  • The central theme of the course was empathy.
Javier E

March 2020: How the Fed Averted Economic Disaster - WSJ - 0 views

  • Over the week of March 16, markets experienced an enormous shock to what investors refer to as liquidity, a catchall term for the cost of quickly converting an asset into cash.
  • Mr. Powell bluntly directed his colleagues to move as fast as possible.
  • They devised unparalleled emergency-lending backstops to stem an incipient financial panic that threatened to exacerbate the unfolding economic and public-health emergencies.
  • ...37 more annotations...
  • They were offering nearly unlimited cheap debt to keep the wheels of finance turning, and when that didn’t help, the Fed began purchasing massive quantities of government debt outright.
  • Investors dumped whatever they could, including ostensibly “risk-free” U.S. Treasury securities. As a global dash for dollars unfolded, Treasurys were no longer serving as the market’s traditional shock absorbers, amplifying extreme turmoil on Wall Street.
  • By week’s end, the Dow had plunged more than 10,000 points since mid-February as investors struggled to get their arms around what a halt to global commerce would mean for businesses that would soon have no revenue.
  • “It was sheer, unadulterated panic, of a magnitude that was far worse than in 2008 and 2009. Far worse,”
  • The idea of shutting down markets was especially discouraging: “It was a profoundly un-American thing to contemplate, to just shut everything down, and almost fatalistic—that we’re not going to get out of this.”
  • nearly two years later, most agree that the Fed’s actions helped to save the economy from going into a pandemic-induced tailspin.
  • “My thought was—I remember this very clearly—‘O.K. We have a four-or-five-day chance to really get our act together and get ahead of this. We’re gonna try to get ahead of this,’” Mr. Powell recalled later. “And we were going to do that by just announcing a ton of stuff on Monday morning.”
  • It worked. The Fed’s pledges to backstop an array of lending, announced on Monday, March 23, would unleash a torrent of private borrowing based on the mere promise of central bank action—together with a massive assist by Congress, which authorized hundreds of billions of dollars that would cover any losses.
  • If the hardest-hit companies like Carnival, with its fleet of 104 ships docked indefinitely, could raise money in capital markets, who couldn’t?
  • on April 9, where he shed an earlier reluctance to express an opinion about government spending policies, which are set by elected officials and not the Fed. He spoke in unusually moral terms. “All of us are affected,” he said. “But the burdens are falling most heavily on those least able to carry them…. They didn’t cause this. Their business isn’t closed because of anything they did wrong. This is what the great fiscal power of the United States is for—to protect these people as best we can from the hardships they are facing.”
  • They were extraordinary words from a Fed chair who during earlier, hot-button policy debates said the central bank needed to “stay in its lane” and avoid providing specific advice.
  • To avoid a widening rift between the market haves (who had been given access to Fed backstops) and the market have-nots (who had been left out because their debt was deemed too risky), Mr. Powell had supported a decision to extend the Fed’s lending to include companies that were being downgraded to “junk” status in the days after it agreed to backstop their bonds.
  • Most controversially, Mr. Powell recommended that the Fed purchase investment vehicles known as exchange-traded funds, or ETFs, that invest in junk debt. He and his colleagues feared that these “high-yield” bonds might buckle, creating a wave of bankruptcies that would cause long-term scarring in the economy.
  • Mr. Powell decided that it was better to err on the side of doing too much than not doing enough.
  • Paul Singer, who runs the hedge-fund firm Elliott Management, warned that the Fed was sowing the seeds of a bigger crisis by absolving markets of any discipline. “Sadly, when people (including those who should know better) do something stupid and reckless and are not punished,” he wrote, “it is human nature that, far from thinking that they were lucky to have gotten away with something, they are encouraged to keep doing the stupid thing.”
  • The breathtaking speed with which the Fed moved and with which Wall Street rallied after the Fed’s announcements infuriated Dennis Kelleher, a former corporate lawyer and high-ranking Senate aide who runs Better Markets, an advocacy group lobbying for tighter financial regulations.
  • “This is a ridiculous discussion no matter how heartfelt Powell is about ‘we can’t pick winners and losers’—to which my answer is, ‘So instead you just make them all winners?’”
  • “Literally, not only has no one in finance lost money, but they’ve all made more money than they could have dreamed,” said Mr. Kelleher. “It just can’t be the case that the only thing the Fed can do is open the fire hydrants wide for everybody.”
  • Mr. Powell later defended his decision to purchase ETFs that had invested in junk debt. “We wanted to find a surgical way to get in and support that market because it’s a huge market, and it’s a lot of people’s jobs… What were we supposed to do? Just let them die and lose all those jobs?” he said. “If that’s the biggest mistake we made, stipulating it as a mistake, I’m fine with that. It wasn’t time to be making finely crafted judgments,” Mr. Powell said. He hesitated for a moment before concluding. “Do I regret it? I don’t—not really.”
  • “We didn’t know there was a vaccine coming. The pandemic is just raging. And we don’t have a plan,” said Mr. Powell. “Nobody in the world has a plan. And in hindsight, the worry was, ‘What if we can’t really fully open the economy for a long time because the pandemic is just out there killing people?’”
  • Mr. Powell never saw this as a particularly likely outcome, “but it was around the edges of the conversation, and we were very eager to do everything we could to avoid that outcome,”
  • The Fed’s initial response in 2020 received mostly high marks—a notable contrast with the populist ire that greeted Wall Street bailouts following the 2008 financial crisis. North Carolina Rep. Patrick McHenry, the top Republican on the House Financial Services Committee, gave Mr. Powell an “A-plus for 2020,” he said. “On a one-to-10 scale? It was an 11. He gets the highest, highest marks, and deserves them. The Fed as an institution deserves them.”
  • The pandemic was the most severe disruption of the U.S. economy since the Great Depression. Economists, financial-market professionals and historians are only beginning to wrestle with the implications of the aggressive response by fiscal and monetary policy makers.
  • Altogether, Congress approved nearly $5.9 trillion in spending in 2020 and 2021. Adjusted for inflation, that compares with approximately $1.8 trillion in 2008 and 2009.
  • By late 2021, it was clear that many private-sector forecasters and economists at the Fed had misjudged both the speed of the recovery and the ways in which the crisis had upset the economy’s equilibrium. Washington soon faced a different problem. Disoriented supply chains and strong demand—boosted by government stimulus—had produced inflation running above 7%.
  • because the pandemic shock was akin to a natural disaster, it allowed Mr. Powell and the Fed to sidestep concerns about moral hazard—that is, the possibility that their policies would encourage people to take greater risks knowing that they were protected against larger losses. If a future crisis is caused instead by greed or carelessness, the Fed would have to take such concerns more seriously.
  • The high inflation that followed in 2021 might have been worse if the U.S. had seen more widespread bankruptcies or permanent job losses in the early months of the pandemic.
  • an additional burst of stimulus spending in 2021, as vaccines hastened the reopening of the economy, raised the risk that monetary and fiscal policy together would flood the economy with money and further fuel inflation.
  • The surge in federal borrowing since 2020 creates other risks. It is manageable for now but could become very expensive if the Fed has to lift interest rates aggressively to cool the economy and reduce high inflation.
  • The Congressional Budget Office forecast in December 2020 that if rates rose by just 0.1 percentage point more than projected in each year of the decade, debt-service costs in 2030 would rise by $235 billion—more than the Pentagon had requested to spend in 2022 on the Navy.
  • its low-rate policies have coincided with—and critics say it has contributed to—a longer-running widening of wealth inequality.
  • In 2008, household wealth fell by $8 trillion. It rose by $13.5 trillion in 2020, and in the process, spotlighted the unequal distribution of wealth-building assets such as houses and stocks.
  • Without heavy spending from Washington, focused on the needs of the least well-off, these disparities might have attracted more negative scrutiny.
  • Finally, the Fed is a technocratic body that can move quickly because it operates under few political constraints. Turning to it as the first line of defense in this and future crises could compromise its institutional independence.
  • Step one, he said, was to get in the fight and try to win. Figuring out how to exit would be a better problem to have, because it would mean they had succeeded.
  • “We have a recovery that looks completely unlike other recoveries that we’ve had because we’ve put so much support behind the recovery,” Mr. Powell said last month. “Was it too much? I’m going to leave that to the historians.”
  • The final verdict on the 2020 crisis response may turn on whether Mr. Powell is able to bring inflation under control without a painful recession—either as sharp price increases from 2021 reverse on their own accord, as officials initially anticipated, or because the Fed cools down the economy by raising interest rates.
Javier E

A Dissenting View of US Policy toward Russia | Talking Points Memo - 0 views

  • Since the Cold War’s end, American foreign policy has been conducted by responding to today’s news. To the extent the United States has had a long-term perspective, it is the hazy dream, first articulated in Christian millennial terms by the Puritans, of an American-led global transformation.
  • (I wrote about this in a 1992 book, Grand Illusion, and political scientist John Mearsheimer recently described this outlook in The Great Delusion.)
  • The question to ask about this process is this: how did we get to the point where we were unable to respond constructively to Russian fears of a new encirclement from NATO? As my former colleague Robert Wright put it, how could American and Western European leaders say, on the one hand, that they did not contemplate Ukraine becoming a member of NATO and say, on the other hand, that they would not accede in any way to Putin’s demand — at the center of his December communication with Biden — that NATO commit itself to barring Ukraine’s membership?
  • ...5 more annotations...
  • Now with Putin’s recognition of the separatist regimes, he has, perhaps, set the stage for a wider conflict; and the United States and its allies in NATO would have no choice but to respond with sanctions. But sanctions, such as those imposed after Russia seized Crimea, are unlikely to deter Putin. And really draconian sanctions, such as those used against Iran, could plunge Europe and the U.S. into a recession.
  • On the basis of this entirely unrealistic view of the world, the U.S. has stumbled into crises that it didn’t know it was creating.
  • The conflict with Russia over Ukraine would seem to have called for what Richard Nixon called “playing the long ball.” Nixon had played the long ball — defied prevailing opinion — by going to China
  • The United States might have stepped back from the years of provocations and resets to propose a “grand bargain” with Russia that would resolve or at least ease the conflict — one based, perhaps, on a neutral Ukraine or on the enforcement of the Minsk II agreement.
  • it seems to me that without such a bargain, we could be headed for another foreign policy disaster — one that will have repercussions in the United States and Western Europe as well as in Russia and Ukraine. Think war, skyrocketing energy prices, recession, refugees and a Russian-Chinese alliance against the United States and its allies.
Javier E

A Revolution Is Coming for China's Families - WSJ - 0 views

  • In January Beijing announced that the country’s total population shrank in 2022—a decade earlier than Western demographers had been forecasting as recently as 2019.
  • one rapidly approaching demographic problem has flown under Beijing’s radar: the crisis of the Chinese family, the foundation of Chinese society and civilization.
  • The Chinese family is about to undergo a radical and historically unprecedented transition. Extended kinship networks will atrophy nationwide, and the widespread experience of close blood relatives will disappear altogether for many
  • ...24 more annotations...
  • This is a delayed but inescapable consequence of China’s birth trends from the era of the notorious one-child policy (1980-2015)
  • Beijing thus far has ignored this looming crisis because planners don’t prepare for things they don’t track. Officials don’t regard data on the family as relevant to statecraft or security. So statistics tally males and females—not uncles, sisters, cousins, widows.
  • We estimate past patterns and project trends through demographic modeling—simulations replicating China’s available population numbers—while “building” family trees consistent with those figures. We can approximate nationwide changes in China’s extended family networks in the past with reasonable validity and describe what lies ahead with fair confidence.
  • we are only now living through the era of “peak kin” in China. In terms of sheer numbers, Chinese networks of blood relatives were never nearly as thick as at the start of the 21st century.
  • Because of dramatic postwar improvements in health and mortality, men and women in their 40s today have on average five times as many living cousins as in 1960.
  • China’s “kin explosion” may be an important, heretofore unobserved factor in China’s remarkable economic performance since Mao Zedong’s death in 1976.
  • China is now on the cusp of a severe and unavoidable “kin crash,” driven by prolonged subreplacement fertility
  • China’s rising generations will likely have fewer living relatives than ever before in Chinese history.
  • A “kin famine” will thus unfold unforgivingly over the next 30 years—starting now. As it intensifies, the Chinese family—the most important institution protecting Chinese people against adversity in bad times and helping them seize opportunity in good times—will increasingly falter in both these crucial functions.
  • China’s withering of the family is set to collide with a tsunami of new social need from the country’s huge elderly population, whose ranks will more than double between 2020 and 2050
  • By 2050 living parents and in-laws will outnumber children for middle-aged Chinese men and women. Thus exigency may overturn basic familial arrangements that have long been taken for granted. The focus of the family in China will necessarily turn from the rearing of the young to the care of the old.
  • The reliability and durability of familial bonds of duty will be an increasingly critical question—perhaps even a matter of life and death for many, including frail and impecunious elders in the Chinese hinterlands
  • growing numbers of men in decades ahead will enter old age without spouses or children—the traditional sources of support for the elderly.
  • by 2050, 18% of China’s men in their 60s will have no living descendants, twice the fraction today.
  • who will look after these unfortunates?
  • Still worse than the macroeconomic implications of old-age dependency may be the effect of China’s family crisis on the so-called micro-foundations of the national economy—the little things that make markets work.
  • Since earliest recorded history, China’s guanxi networks, a distinctive form of special relationships and professional connections, have helped get business done by reducing uncertainty and transaction costs. The proliferation of blood relatives was likely a powerful stimulant for growth during the era of China’s phenomenal upswing.
  • the kin dearth may prove an economic depressant well beyond what current “head count” projections suggest.
  • China’s coming family revolution could easily conduce to a rise in personal risk aversion. Risk aversion may in turn dampen mobility, including migration.
  • Less migration means less urbanization, which means less growth—and possibly still more pessimism and risk aversion.
  • If the waning of the family requires China to build a huge social welfare state over the coming generation, as we surmise it will, Beijing would have that much less wherewithal for influencing events abroad through economic diplomacy and defense policy.
  • by 2050 at least half of China’s overall pool of male military-age manpower will be made up of only children. Any encounter by China’s security forces involving significant loss of life will presage lineage extinction for many Chinese families.
  • Autocracies are typically tolerant of casualties—but maybe not in the only-child China of today and the decades ahead.
  • Failure to contemplate the implications of the coming changes in Chinese family structure could prove a costly blind spot for the Communist Party. Blind spots expose governments to the risk of strategic surprise. The consequences of social, economic and political risks tend to be greatest when states aren’t prepared for them.
Javier E

Elusive 'Einstein' Solves a Longstanding Math Problem - The New York Times - 0 views

  • after a decade of failed attempts, David Smith, a self-described shape hobbyist of Bridlington in East Yorkshire, England, suspected that he might have finally solved an open problem in the mathematics of tiling: That is, he thought he might have discovered an “einstein.”
  • In less poetic terms, an einstein is an “aperiodic monotile,” a shape that tiles a plane, or an infinite two-dimensional flat surface, but only in a nonrepeating pattern. (The term “einstein” comes from the German “ein stein,” or “one stone” — more loosely, “one tile” or “one shape.”)
  • Your typical wallpaper or tiled floor is part of an infinite pattern that repeats periodically; when shifted, or “translated,” the pattern can be exactly superimposed on itself
  • ...18 more annotations...
  • An aperiodic tiling displays no such “translational symmetry,” and mathematicians have long sought a single shape that could tile the plane in such a fashion. This is known as the einstein problem.
  • “I’m always messing about and experimenting with shapes,” said Mr. Smith, 64, who worked as a printing technician, among other jobs, and retired early. Although he enjoyed math in high school, he didn’t excel at it, he said. But he has long been “obsessively intrigued” by the einstein problem.
  • now a new paper — by Mr. Smith and three co-authors with mathematical and computational expertise — proves Mr. Smith’s discovery true. The researchers called their einstein “the hat,”
  • “The most significant aspect for me is that the tiling does not clearly fall into any of the familiar classes of structures that we understand.”
  • black and white squares also can make weird nonperiodic patterns, in addition to the familiar, periodic checkerboard pattern. “It’s really pretty trivial to be able to make weird and interesting patterns,” he said. The magic of the two Penrose tiles is that they make only nonperiodic patterns — that’s all they can do.“But then the Holy Grail was, could you do with one — one tile?” Dr. Goodman-Strauss said.
  • Sir Roger found the proofs “very complicated.” Nonetheless, he was “extremely intrigued” by the einstein, he said: “It’s a really good shape, strikingly simple.”
  • The simplicity came honestly. Mr. Smith’s investigations were mostly by hand; one of his co-authors described him as an “imaginative tinkerer.”
  • When in November he found a tile that seemed to fill the plane without a repeating pattern, he emailed Craig Kaplan, a co-author and a computer scientist at the University of Waterloo.
  • “It was clear that something unusual was happening with this shape,” Dr. Kaplan said. Taking a computational approach that built on previous research, his algorithm generated larger and larger swaths of hat tiles. “There didn’t seem to be any limit to how large a blob of tiles the software could construct,”
  • The first step, Dr. Kaplan said, was to “define a set of four ‘metatiles,’ simple shapes that stand in for small groupings of one, two, or four hats.” The metatiles assemble into four larger shapes that behave similarly. This assembly, from metatiles to supertiles to supersupertiles, ad infinitum, covered “larger and larger mathematical ‘floors’ with copies of the hat,” Dr. Kaplan said. “We then show that this sort of hierarchical assembly is essentially the only way to tile the plane with hats, which turns out to be enough to show that it can never tile periodically.”
  • some might wonder whether this is a two-tile, not one-tile, set of aperiodic monotiles.
  • Dr. Goodman-Strauss had raised this subtlety on a tiling listserv: “Is there one hat or two?” The consensus was that a monotile counts as such even using its reflection. That leaves an open question, Dr. Berger said: Is there an einstein that will do the job without reflection?
  • “the hat” was not a new geometric invention. It is a polykite — it consists of eight kites. (Take a hexagon and draw three lines, connecting the center of each side to the center of its opposite side; the six shapes that result are kites.)
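The kite construction is easy to make concrete. Here is a small sketch assuming nothing beyond the hexagon-and-midpoints description above; the coordinates and the remark about the hat spanning more than one hexagon are my own illustrative framing, not the paper’s.

```python
import math

def hexagon_kites(radius=1.0):
    """Split a regular hexagon into six kites by joining opposite edge midpoints."""
    corners = [(radius * math.cos(math.pi / 3 * k),
                radius * math.sin(math.pi / 3 * k)) for k in range(6)]
    midpoints = [((corners[k][0] + corners[(k + 1) % 6][0]) / 2,
                  (corners[k][1] + corners[(k + 1) % 6][1]) / 2) for k in range(6)]
    center = (0.0, 0.0)
    # Kite k: center -> midpoint of edge k -> corner k+1 -> midpoint of edge k+1.
    # Two adjacent sides equal the apothem, the other two equal half an edge,
    # which is what makes each piece a kite. The hat is assembled from eight
    # such kites taken from neighboring hexagons in a grid of hexagons divided
    # this way.
    return [(center, midpoints[k], corners[(k + 1) % 6], midpoints[(k + 1) % 6])
            for k in range(6)]

for i, kite in enumerate(hexagon_kites()):
    print(f"kite {i}: " + ", ".join(f"({x:.2f}, {y:.2f})" for x, y in kite))
```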
  • “It’s likely that others have contemplated this hat shape in the past, just not in a context where they proceeded to investigate its tiling properties,” Dr. Kaplan said. “I like to think that it was hiding in plain sight.”
  • Incredibly, Mr. Smith later found a second einstein. He called it “the turtle” — a polykite made of not eight kites but 10. It was “uncanny,” Dr. Kaplan said. He recalled feeling panicked; he was already “neck deep in the hat.”
  • Dr. Myers, who had done similar computations, promptly discovered a profound connection between the hat and the turtle. And he discerned that, in fact, there was an entire family of related einsteins — a continuous, uncountable infinity of shapes that morph one to the next.
  • this einstein family motivated the second proof, which offers a new tool for proving aperiodicity. The math seemed “too good to be true,” Dr. Myers said in an email. “I wasn’t expecting such a different approach to proving aperiodicity — but everything seemed to hold together as I wrote up the details.”
  • Mr. Smith was amazed to see the research paper come together. “I was no help, to be honest.” He appreciated the illustrations, he said: “I’m more of a pictures person.”
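
The hierarchical assembly Dr. Kaplan describes, metatiles grouping into supertiles and then into supersupertiles without end, is easier to feel in one dimension. The sketch below is not the hat paper's actual metatile system; it is a minimal analogue, assuming only the classic Fibonacci substitution (a -> ab, b -> a), whose repeated application builds a sequence by the same kind of level-on-level substitution and never settles into a repeating pattern.

# A rough 1D analogue of hierarchical substitution, assuming the classic
# Fibonacci rules a -> ab, b -> a (not the metatile rules from the hat paper).
RULES = {"a": "ab", "b": "a"}

def substitute(word: str) -> str:
    # Apply one level of the substitution to every letter at once.
    return "".join(RULES[letter] for letter in word)

word = "a"
for level in range(1, 9):  # each pass builds the next "supertile" level
    word = substitute(word)
    print(f"level {level}: length {len(word)}")

print(word[:34])
# The lengths are Fibonacci numbers, and the proportion of a's tends to an
# irrational limit, so no finite block of the sequence can repeat forever.

The hat proof works with two-dimensional metatiles rather than letters, but the structural point is the same: a forced hierarchy at every scale is incompatible with any repeating period.
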
Javier E

How Sam Bankman-Fried Put Effective Altruism on the Defensive - The New York Times - 0 views

  • To hear Bankman-Fried tell it, the idea was to make billions through his crypto-trading firm, Alameda Research, and FTX, the exchange he created for it — funneling the proceeds into the humble cause of “bed nets and malaria,” thereby saving poor people’s lives.
  • Last summer Bankman-Fried was telling The New Yorker’s Gideon Lewis-Kraus something quite different. “He told me that he never had a bed-nets phase, and considered neartermist causes — global health and poverty — to be more emotionally driven,” Lewis-Kraus wrote in August. Effective altruists talk about both “neartermism” and “longtermism.”
  • Bankman-Fried said he wanted his money to address longtermist threats like the dangers posed by artificial intelligence spiraling out of control. As he put it, funding for the eradication of tropical diseases should come from other people who actually cared about tropical diseases: “Like, not me or something.”
  • ...20 more annotations...
  • To the uninitiated, the fact that Bankman-Fried saw a special urgency in preventing killer robots from taking over the world might sound too outlandish to seem particularly effective or altruistic. But it turns out that some of the most influential E.A. literature happens to be preoccupied with killer robots too.
  • Holden Karnofsky, a former hedge funder and a founder of GiveWell, an organization that assesses the cost-effectiveness of charities, has spoken about the need for “worldview diversification” — recognizing that there might be multiple ways of doing measurable good in a world filled with suffering and uncertainty
  • The books, however, are another matter. Considerations of immediate need pale next to speculations about existential risk — not just earthly concerns about climate change and pandemics but also (and perhaps most appealingly for some tech entrepreneurs) more extravagant theorizing about space colonization and A.I.
  • there’s a remarkable intellectual homogeneity; the dominant voices belong to white male philosophers at Oxford.
  • Among his E.A. innovations has been the career research organization known as 80,000 Hours, which promotes “earning to give” — the idea that altruistic people should pursue careers that will earn them oodles of money, which they can then donate to E.A. causes.
  • each of those terse sentences glosses over a host of additional questions, and it takes MacAskill an entire book to address them. Take the notion that “future people count.” Leaving aside the possibility that the very contemplation of a hypothetical person may not, for some real people, be “intuitive” at all, another question remains: Do future people count for more or less than existing people count for right now?
  • MacAskill cites the philosopher Derek Parfit, whose ideas about population ethics in his 1984 book “Reasons and Persons” have been influential in E.A. Parfit argued that an extinction-level event that destroyed 100 percent of the population should worry us much more than a near-extinction event that spared a minuscule population (which would presumably go on to procreate), because the number of potential lives dwarfs the number of existing ones. (A back-of-envelope version of this comparison is sketched after this list.)
  • If you’re a utilitarian committed to “the greatest good for the greatest number,” the arithmetic looks irrefutable. The Times’s Ezra Klein has written about his support for effective altruism while also thoughtfully critiquing longtermism’s more fanatical expressions of “mathematical blackmail.”
  • In 2015, MacAskill published “Doing Good Better,” which is also about the virtues of effective altruism. His concerns in that book (blindness, deworming) seem downright quaint when compared with the astral-plane conjectures (A.I., building an “interstellar civilization”) that he would go on to pursue in “What We Owe the Future.”
  • In both books he emphasizes the desirability of seeking out “neglectedness” — problems that haven’t attracted enough attention so that you, as an effective altruist, can be more “impactful.” So climate change, MacAskill says, isn’t really where it’s at anymore; readers would do better to focus on “the issues around A.I. development,” which are “radically more neglected.”
  • In his recent best seller, “What We Owe the Future” (2022), MacAskill says that the case for effective altruism giving priority to the longtermist view can be distilled into three simple sentences: “Future people count. There could be a lot of them. We can make their lives go better.”
  • “Earning to give” has its roots in the work of the radical utilitarian philosopher Peter Singer, whose 1972 essay “Famine, Affluence and Morality” has been a foundational E.A. text. It contains his parable of the drowning child: If you’re walking past a shallow pond and see a child drowning, you should wade in and save the child, even if it means muddying your clothes
  • Extrapolating from that principle suggests that if you can save a life by donating an amount of money that won’t pose any significant problems for you, a decision not to donate that money would be not only uncharitable or ungenerous but morally wrong.
  • Singer has also written his own book about effective altruism, “The Most Good You Can Do” (2015), in which he argues that going into finance would be an excellent career choice for the aspiring effective altruist. He acknowledges the risks for harm, but he deems them worth it
  • Chances are, if you don’t become a charity worker, someone else will ably do the job; whereas if you don’t become a financier who gives his money away, who’s to say that the person who does become a financier won’t hoard all his riches for himself?
  • On Nov. 11, when FTX filed for bankruptcy amid allegations of financial impropriety, MacAskill wrote a long Twitter thread expressing his shock and his anguish, as he wrestled in real time with what Bankman-Fried had wrought.
  • “If those involved deceived others and engaged in fraud (whether illegal or not) that may cost many thousands of people their savings, they entirely abandoned the principles of the effective altruism community,” MacAskill wrote in a Tweet, followed by screenshots from “What We Owe the Future” and Ord’s “The Precipice” that emphasized the importance of honesty and integrity.
  • I’m guessing that Bankman-Fried may not have read the pertinent parts of those books — if, that is, he read any parts of those books at all. “I would never read a book,” Bankman-Fried said earlier this year. “I’m very skeptical of books. I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that.”
  • Avoiding books is an efficient method for absorbing the crudest version of effective altruism while gliding past the caveats
  • For all of MacAskill’s galaxy-brain disquisitions on “A.I. takeover” and the “moral case for space settlement,” perhaps the E.A. fixation on “neglectedness” and existential risks made him less attentive to more familiar risks — human, banal and closer to home.
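
To make the arithmetic behind Parfit's comparison concrete, here is a back-of-envelope version. Both figures are placeholder assumptions chosen purely for illustration, not numbers from Parfit or MacAskill: 8 billion people alive today and a hypothetical 10^16 potential future lives.

# Back-of-envelope version of Parfit's extinction vs. near-extinction comparison.
# Both numbers below are illustrative assumptions, not figures from the books.
current_population = 8e9           # assumed people alive today
potential_future_lives = 1e16      # hypothetical count of possible future people

near_extinction_loss = 0.9999 * current_population             # survivors go on to repopulate
extinction_loss = current_population + potential_future_lives  # the entire future is foreclosed

print(f"near-extinction loss: {near_extinction_loss:.2e} lives")
print(f"extinction loss:      {extinction_loss:.2e} lives")
print(f"extinction counts roughly {extinction_loss / near_extinction_loss:,.0f} times worse on this tally")

On a total-utilitarian tally, the gap between the two catastrophes dwarfs the gap between the lesser catastrophe and no catastrophe at all, which is the arithmetic one of the annotations above calls irrefutable.
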
Javier E

Elon Musk's Distraction Is Just One of Tesla's Problems - The New York Times - 0 views

  • Survey data indicate that Mr. Musk’s behavior has hurt Tesla’s brand among liberals, the group most likely to buy electric cars. Tesla’s net favorability rating — the number of people who view the company positively minus those with a negative view — plummeted to 10 percentage points in November from 31 percentage points at the beginning of the year
  • The sour mood surrounding Mr. Musk is beginning to rub off on German drivers, with a clear majority saying his takeover of Twitter has had a negative effect on Tesla’s image, especially among women and among people 50 or older.
  • “Increasingly, Tesla is becoming a pretty partisan brand, and that could have pretty serious implications for Tesla in the future,”
  • ...3 more annotations...
  • Tesla’s net favorability rating among Republicans has improved slightly, to 27 percentage points in November from 21 percentage points in August,
  • Nearly half of Germans who are contemplating or actively looking to buy a new car said the Twitter takeover had turned them away from considering a Tesla
  • Tesla’s sales in China through November were 59 percent higher than a year earlier, according to data from the China Passenger Car Association, but that was slower than overall growth of “new energy vehicles” — a category that includes all-electric cars and plug-in hybrids. Sales of these vehicles have doubled, while BYD, the market leader, increased its sales more than threefold
Javier E

Opinion | Let's Imagine We Knew Exactly How the Pandemic Started - The New York Times - 0 views

  • To some, it all sounds like noise. “Whether Covid came accidentally from a lab in Wuhan or a seafood market is almost beside the point,” Edward Luce wrote in The Financial Times last month,
  • This has always struck me as an exceedingly strange perspective. Perhaps it is a truism to say that the events that brought about the deaths of perhaps 20 million people around the world and the jagged disruption of many billions of other lives are of enormous consequence and that dismissing the matter of its cause as simply a “blame game” is a form of not just historical but moral incuriosity.
  • It is consequential as long as it remains unresolved, as well. That’s because our collective uncertainty about the origin of the pandemic has itself shaped the way we’ve come to think about what we’ve all just lived through, the way we responded in the first place and the way the pandemic has played out, often weaponized, in geopolitics.
  • ...27 more annotations...
  • Three years since its start we are still more likely to see the pandemic in partisan rather than world-historical terms. And the grandly tragic story of the pandemic takes on a profoundly different shape and color depending on the nature of its first act.
  • In a world where a natural origin was confirmed beyond all doubt, we might look back and narrate the pandemic as one particular kind of story: a morality tale showcasing the incomplete triumph of modern civilization and the enduring threats from nature, and highlighting the way that, whatever we might have told ourselves in 2019 or 2009 about the fortress of the wealthy world, pandemic disease remained a humbling civilization-scale challenge no nation had very good answers for.
  • in a world where a lab-leak origin had been confirmed instead, we would probably find ourselves telling a very different set of stories — primarily about humanity’s Icarian hubris, or perhaps about scientists’ Faustian indifference to the downside risks of new research, or the way in which very human impulses to cover up mistakes and wrongdoing might have compounded those mistakes to disastrous global effect.
  • It would have been, “We brought this on ourselves.” Or perhaps, if we were feeling xenophobic rather than humbly human, “They brought this on us,”
  • the pandemic would probably have joined nuclear weapons as a conventional illustration of the dark side of human knowledge, perhaps even surpassed them — 20 million dead is nothing to trifle with, after all, though it remains less than the overall death toll of World War II or even the Great Leap Forward.
  • the horror would also offer a silver lining: If human action was responsible for this pandemic, then in theory, human action could prevent the next one as well.
  • It is as though we’ve decided both that the pandemic was “man-made” and that its emergence was a kind of inevitability we can’t do much about.
  • if the figures are even mostly reliable, they reflect a remarkable indifference on the part of the country to the source of a once-in-a-century disease disaster
  • a definitive confirmation of a lab origin probably would not mean that responsibility lay in any simplistic way with China. But that isn’t to say the case wouldn’t have been made, probably in a variety of forms — calls for “reparations,” demands for global provision of free vaccines — that would only have contributed additional antagonism and resentment to the world stage, further polarizing the great-power landscape.
  • It would be as though following a catastrophic earthquake, we didn’t bother to sort out whether it had been caused by local fracking but instead argued endlessly about the imperfections of disaster response
  • as we piece together a working history of the past few years, you might hope we’d grow more focused on nailing the story down.
  • it seems likely to me that in the very earliest days of 2020, with cases exploding in China but not yet elsewhere, knowing that the disease was a result of gain-of-function research and had escaped from a lab probably would have produced an even more significant wave of global fear
  • it is hard to think “superbug” and not panic.
  • presumably, many fewer people contemplating the initial news would’ve assumed that the outbreak would be largely limited to Asia, as previous outbreaks had been; public health messengers in places like the United States probably would not have been so casually reassuring; and even more dramatic circuit-breaking responses like a monthlong international travel ban might’ve been instituted quite quickly
  • As the pandemic wore on, I suspect that effect would have lingered beyond the initial panic. At first, it might’ve been harder to decide that the virus was just something to live with if we knew simultaneously that it was something introduced to the world in error.
  • And later, when the vaccines arrived, I suspect there might have been considerably less resistance to them, particularly on the American right, where anxiety and xenophobia might have trumped public-health skepticism and legacy anti-vaccine sentiment
  • the opposite counterfactual is just as illuminating
  • The question and its unresolvability have mattered enormously for geopolitics,
  • In a world where neither narrative has been confirmed, and where pandemic origins are governed by an epistemological fog, I worry we have begun to collate the two stories in a somewhat paradoxical and self-defeating way
  • The disease and global response may well have accelerated our “new Cold War,” as Luce writes, but it is hard to imagine an alternate history where a known lab-leak origin didn’t move the world there much faster.
  • On the other hand, the natural logic of a confirmed zoonotic origin would probably have been to push nations of the world closer together into networks of collaboration and cooperation
  • the direction of change would have most likely been toward more integration rather than less. After all, this is to some degree what happened in the wake of the initial outbreaks of SARS and MERS and the Ebola outbreaks of the past decade.
  • Instead, the geopolitics remain unsteady, which is to say, a bit jagged
  • The United States can weaponize a narrative about lab origin — as China hawks in both the Trump and Biden administrations have repeatedly done — without worrying too much about providing real proof or suffering concrete backlash.
  • And China can stonewall origin investigations by citing sovereignty rights and a smoke screen story about the disease originating in frozen food shipped in from abroad without paying much of an international price for the intransigence or bad-faith argumentation, either.
  • each has carried forward a gripe that needn’t be substantiated in order to be deployed.
  • ambiguity also offers plausible deniability, which means that without considerably more Chinese transparency and cooperation, those pushing both stories will find themselves still making only probabilistic cases. We’re probably going to be living with that uncertainty, in a political and social world shaped by it, for the foreseeable future
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than the real thing? | Counselling and therapy | The Guardian - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians — only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” — to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

Even Rats Are Taking Selfies Now (and Enjoying It) - The New York Times - 0 views

  • Mr. Lignier built his own version of a Skinner box — a tall, transparent tower with an attached camera — and released two pet-store rats inside. Whenever the rats pressed the button inside the box, they got a small dose of sugar and the camera snapped their photo. The resulting images were immediately displayed on a screen, where the rats could see them. (“But honestly I don’t think they understood it,” Mr. Lignier said.)
  • The rodents quickly became enthusiastic button pushers. “They are very clever,”
  • after this training phase, the rewards became more unpredictable. Although the rats were still photographed every time they hit the button, the sweet treats came only once in a while, by design. These kinds of intermittent rewards can be especially powerful, scientists have found, keeping animals glued to their experimental slot machines as they await their next jackpot. (A minimal sketch of such a schedule appears after this list.)
  • ...4 more annotations...
  • Indeed, in the face of these unpredictable rewards, Augustin and Arthur — the rats — persisted. Sometimes, they ignored the sugar even when it did arrive, Mr. Lignier said, and just kept pressing the button anyway.
  • To Mr. Lignier, the parallel is obvious. “Digital and social media companies use the same concept to keep the attention of the viewer as long as possible,”
  • Indeed, social media has been described as “a Skinner Box for the modern human,” doling out periodic, unpredictable rewards — a like, a follow, a promising romantic match — that keep us glued to our phones.
  • Maybe we would rather sit around and push whatever levers are in front of us — even those that might make us feel bad — than sit with ourselves in quiet contemplation.
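
The reward scheme described in these annotations, a photograph on every press but sugar only now and then, is what behavioral psychologists call a variable-ratio or intermittent schedule. Here is a minimal sketch of such a schedule; the 20 percent reward probability is an assumption for illustration, not a figure from the installation.

# Minimal sketch of an intermittent (variable-ratio) reward schedule like the
# one described above: every press yields a photo, sugar arrives unpredictably.
import random

random.seed(0)
P_SUGAR = 0.2  # illustrative probability of a sugar reward per press

for press in range(1, 21):
    photo = True                         # the camera fires on every press
    sugar = random.random() < P_SUGAR    # the treat arrives only some of the time
    print(f"press {press:2d}: photo={'yes' if photo else 'no'}, sugar={'yes' if sugar else 'no'}")

Because unrewarded presses are normal under such a schedule, a run of misses carries no clear signal to stop, which is the property the annotations compare to social media's unpredictable likes and follows.
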
Javier E

Opinion | Why Boys Today Struggle With Human Connection - The New York Times - 0 views

  • By the time he left Discord a year or so later, he’d had about 200 calls with different people, both men and women, who spoke of contemplating suicide.
  • But it was the boys who seemed the most desperately lonely and isolated. On the site, he said, he found “a lot more unhealthy men than unhealthy women.” He added: “With men, there is a huge thing about mental health and shame because you’re not supposed to be weak. You’re not supposed to be broken.” A male mental-health crisis was flying under the radar.
  • I have come to believe the conditions of modern boyhood amount to a perfect storm for loneliness
  • ...28 more annotations...
  • All the old deficiencies and blind spots of male socialization are still in circulation — the same mass failure to teach boys relational skills and emotional intelligence, the same rigid masculinity norms and social prohibitions that push them away from intimacy and emotionality.
  • in many ways this environment has apparently had the opposite effect — it has shut them down even further.
  • The micro-generation that was just hitting puberty as the #MeToo movement exploded in 2017 is now of college (and voting) age. They have lived their whole adolescence not just in the digital era, with a glorious array of virtual options to avoid the angst of real-world socializing, but also in the shadow of a wider cultural reckoning around toxic masculinity.
  • We have spent the past half-decade wrestling with ideas of gender and privilege, attempting to challenge the old stereotypes and power structures. These conversations should have been an opportunity to throw out the old pressures and norms of manhood, and to help boys and men be more emotionally open and engaged.
  • in screen-addicted, culture war-torn America, we have also added new ones.
  • For many progressives, weary from a pileup of male misconduct, the refusal to engage with men’s feelings has now become almost a point of principle
  • For every right-wing tough guy urging his crying son to “man up,” there’s a voice from the left telling him that to express his concerns is to take airtime away from a woman or someone more marginalized
  • In many cases, the same people who are urging boys and men to become more emotionally expressive are also taking a moral stand against hearing how they actually feel
  • For many boys, it can seem as though their emotions get dismissed by both sides. This political isolation has combined with existing masculine norms to push a worrying number of boys into a kind of resentful, semi-politicized reclusion.
  • Over a quarter of men under 30 say they have no close friends
  • Teenage boys now spend two hours less a week socializing than girls and they also spend about seven hours more per week than their female peers on screens.
  • my own research has fed my fears.
  • the same theme came up over and over for boys who on the face of it had little else in common. They were lonely.
  • almost all of them had the nagging sense that something important was missing in those friendships. They found it almost impossible to talk to their male peers about anything intimate or express vulnerability.
  • One teenager described his social circle, a group of boys who had been best friends since kindergarten, as a “very unsupportive support system.” Another revealed that he could recall only one emotionally open conversation with a male friend in his life, and that even his twin brother had not seen him cry in years
  • they felt unable to articulate this pain or seek help, because of a fear that, because they were boys, no one would listen.
  • As one 20-year-old put it, “If a man voices any concern, they get deflected with all of their so-called privileges.” He added: “They’d be like, ‘Whatever. Women have suffered more than you, so you have no right to complain.’”
  • Almost without exception, the boys I talked to craved closer, more emotionally open relationships, but had neither the skills nor the social permission to change the story.
  • Perhaps it’s not surprising that boys don’t know how to listen and engage with their friends’ emotions on any deeper level; after all, no one really engages with theirs.
  • in a sexist society, male opinions hold outsized value. But the world — including their own parents — has less time for their feelings.
  • One study from 2014 showed that parents were more likely to use emotional words when talking with their 4-year-old daughters than those speaking to their 4-year-old sons.
  • A more recent study comparing fathers of boys with fathers of girls found that fathers of boys were less attentively engaged with their boys, spent less time talking about their son’s sad feelings and instead were more likely to roughhouse with them. They even used subtly different vocabularies when talking with boys, with fewer feelings-centered words, and more competition and winning-focused language.
  • Spend any time in the manosphere, and it’s easy to start to hate men and boys. The extreme misogyny, the gleeful hate speech, the violent threats and thrum of menace make it hard to summon much sympathy for male concerns, and easy to forget the ways that patriarchy harms them, too.
  • in the grip of the culture wars, caring about boys has become subtly coded as a right-wing cause,
  • Men have had way more than their fair share of our concern already, the reasoning goes, and now it’s time for them to pipe down
  • But for boys, privilege and harm intertwine in complex ways — male socialization is a strangely destructive blend of indulgence and neglect. Under patriarchy, boys and men get everything, except the thing that’s most worth having: human connection.
  • The prescription for creating a generation of healthier, more socially and emotionally competent men is the same in the wider political discourse as it is in our own homes — to approach boys generously rather than punitively
  • We need to acknowledge boys’ feelings, to talk with our sons in the same way we do our daughters, to hear them and empathize rather than dismiss or minimize, and engage with them as fully emotional beings.