
History Readings: Group items tagged "pacing"


Javier E

Obsessed by the Present, Who's Got Time for Old Masters? - The New York Times

  • Experts say that younger collectors often regard art from the distant past as remote and irrelevant, and find the technical aspects of a sale off-putting. “Old masters are difficult to approach because of the problem of condition and attribution,” Turquin said. Today’s buyers tend to be interested in paintings by artists who are under 45, not over 400.
  • The fixation with “the Now” (as Sotheby’s calls its most of-the-moment auction category) could be viewed as just a byproduct of fashionable 21st-century life. But for sociologists, changes in the art market are one of the many elements that reflect how the pace and preoccupations of our culture have altered over more than 100 years. In 1899, Thorstein Veblen’s landmark socio-economic study, “The Theory of the Leisure Class,” showed how free time and superfluity — what we now call luxury — conferred status, or “reputability,” on the wealthiest individuals in late 19th-century America.
Javier E

J. D. Vance and the Collapse of Dignity - The Atlantic

  • Americans once expected politicians to carry themselves with a seriousness that indicated their ability and willingness to tackle problems, whether poverty or war, that were too difficult for the rest of us. We elected such people not because we wanted them to be like us but because we hoped that they were better than us: smarter, tougher, and capable of being leaders and role models.
  • Even some of the most flawed people we elevated to high office at least pretended to be better people, and thus were capable of inspiring us to be a better nation.
  • Today, we no longer expect or even want our politicians to be better than we are.
  • The new American right, however, has blown past the relatively innocuous populism of the past 40 years and added a fetid cynicism about almost everything related to public life.
  • Not only are the MAGA Republicans seemingly repelled by the idea of voting for someone better than they are; they support candidates who are often manifestly worse people than the average citizen, so that they may slather their fears about their own shortcomings and prejudices under a sludgy and undifferentiated hatred about almost everyone in public office.
  • These populists not only look past the sins of their candidates but also defend and even celebrate them
  • The same Republicans who claim to venerate the Founders and the Constitution have intentionally turned our politics into a scuzzy burlesque.
  • consider how many people cheer on unhinged cranks such as Marjorie Taylor Greene or allow themselves to be courted by smarmy opportunists such as Vance and Ted Cruz.
  • This new populism, centered in the modern Republican Party, has no recognizable policy content beyond the thrill of cruelty and a juvenile boorishness meant largely to enrage others.
  • The GOP’s goals now boil down to power for its elected royalty and cheap coliseum pleasures for its rank and file.
  • Republicans, therefore, are forced to lower their—and our—standards for admission to public office, because the destruction of dignity is the only way they can find the candidates who will do what decent men and women will not, including abasing themselves to Donald Trump.
  • Let us leave aside the cult around Trump, which has now reached such levels of weirdness that the specter of Jim Jones is probably pacing about the netherworld in awe.
  • I’m an adult. I get it. Our elected officials aren’t saints, and only rarely are they heroes. But must they now be a cavalcade of clowns and charlatans, joyously parading their embrace of vice and their rejection of virtue? The Republican Party seems to think so.
Javier E

Climate Anxiety | Harvard Medicine Magazine

  • A global survey published in Lancet Planetary Health in 2021 reported that among an international cohort of more than 10,000 people between the ages of 16 and 25, 60 percent described themselves as very worried about the climate and nearly half said the anxiety affects their daily functioning.
  • Since young people expect to live longer with climate-related crises than their parents will, “they feel grief in the face of what they’re losing,” Pinsky says.
  • Young survivors of weather-related disasters report high rates of PTSD, depression, sleep deficits, and learning issues.
  • Nearly three quarters of the child and adolescent population in Pakistan experienced learning difficulties after widespread floods devastated the country in 2010.
  • For many young people, worry over threats of future climate change results in panic attacks, insomnia, obsessive thinking, and other symptoms
  • And those feelings are often amplified by a pervasive sense that older people aren’t doing enough to fix the climate problem. “There’s a feeling of intergenerational injustice,” says Lise Van Susteren, a general and forensic psychiatrist based in Washington, DC, who specializes in the mental health effects of climate change. “Many young people feel invalidated, betrayed, and abandoned.”
  • Research on effective interventions is virtually nonexistent, and parents and other people who want to help have little to go on. Professional organizations are only now beginning to provide needed resources.
  • News reports and researchers often refer to these feelings collectively as climate anxiety, or eco-anxiety, but Pinsky admits to having misgivings about the terms.
  • “Many people interpret anxiety as a pathological response that needs to be treated and solved,” she says. “But it’s also a constructive emotion that gives us time to react in the face of danger. And anxiety in the face of climate change is a healthy response to a real threat.”
  • others become progressively hyperaroused and panicky, Pinsky says, or else fall into a sort of emotional paralysis
  • Some people manage their climate-triggered emotions without spiraling into distress
  • These reactions can be especially debilitating for people who already struggle with underlying mental health disorders.
  • anxieties over climate change can interlace with broader feelings of instability over the pace of technological and cultural change,
  • “Technology is accelerating faster than culture can keep up, and humans in general are unmoored and struggling to adapt,” she says. “For some people, climate change is psychologically the last straw. You realize you can no longer count on the stability of your planet, your atmosphere — your very world.”
  • Van Susteren describes that anxiety as a type of pre-traumatic stress disorder, with few existing precedents in the United States apart from fears of nuclear annihilation and the decades-ago experience of living through classroom drills on how to survive an atom bomb attack.
  • Talk therapy for anxiety typically aims to help people identify and replace irrational thoughts, called cognitive distortions, with alternative thinking that isn’t so stressful. But since climate anxiety is based on rational fears, this particular approach risks alienating anyone who might feel their worries are being dismissed.
  • Younger people were increasingly arriving at Bryant’s office frightened, depressed, and confused about how to manage climate-triggered emotions. Some were even wondering if they should bring children into such a world.
  • “We’re not saying that anxiety is good or bad,” he says. “We just want to bring those feelings out into the open. It’s more about validating that climate concerns are reasonable given what we’re reading in the news every day.”
  • Emerging evidence suggests that young people do best by cultivating a sense of agency and hope despite their climate concerns.
  • getting to that point involves talking through feelings like despair, grief, or rage first. Without doing that, he says, many people get stuck in maladaptive coping strategies that can lead to burnout, frustration, or hopelessness. Bryant describes jumping into an urgent, problem-focused coping strategy as “going into action mode so you don’t have to feel any grief.”
  • Problem-focused coping has a societal benefit in that it leads to “pro-environmental behavior,” meaning that young people who engage in it typically spend a lot of time learning about climate change and focusing on what they can do personally to help solve the problem
  • But climate change is far beyond any one person’s control, and problem-focused coping can leave people frustrated by the limits of their own capacity and make them unable to rid themselves of resulting worry and negative emotions
  • she and her colleagues describe emotion-focused coping, whereby young people ignore or deny climate change as a means of avoiding feeling anxious about it. In an email, Ojala notes that people who gravitate toward emotional distancing typically come from families that communicate about social problems in “pessimistic doom-and-gloom ways.”
  • Ojala and other experts favor a third coping strategy that balances negative feelings about climate change with faith in the power of social forces working to overcome it. Called meaning-focused coping, this approach takes strength from individual actions and climate beliefs, while “trusting that other societal actors are also doing their part,”
  • since meaning-focused coping allows negative and positive climate emotions to coexist, young people who adopt it have an easier time maintaining hope for the future.
  • The overall goal, she says, is for young people to achieve more resilience in the face of climate change, so they can function in spite of their environmental concerns
  • When people find meaning in what they do, she says, they have a greater sense of their own agency and self-efficacy. “You’re more empowered to take action, and that can be a powerful way to deal with strong negative emotions,”
  • Duhaime cautions that anyone taking action against climate change should know they shouldn’t expect a quick payback
  • The brain’s reward system, which forms a core of human decision-making, evolved over eons of history to strengthen neural associations between actions and outcomes that promote short-term survival. And that system, she says, responds to the immediate consequences of what we do. One problem with climate change, Duhaime says, is that because it’s so vast and complex, people can’t assume that any single act will lead to a discernible effect on its trajectory
  • young people may benefit from seeking the rewards that come from being part of a group or a movement working to advance an agenda that furthers actions that protect the planet’s climate. “Social rewards are really powerful in the climate change battle, especially for young people,
  • Recognizing the mismatch between how the brain processes reward and the novel challenges of the climate crisis may help people persist when it feels frustrating and ineffective compared to causes with more immediately visible effects. Even if you don’t see climate improvements or policy changes right away, she says, “that won’t diminish the importance of engaging in these efforts.”
  • Malits adds that she wasn’t overly burdened by her emotions. “I’m an optimist by nature and feel that society does have the capacity to make needed changes,” she says. “And what also helps me avoid climate anxiety on a daily basis is the community that I’ve been lucky enough to connect with here at Harvard. It helps to surround yourself with people who are similarly worried about these issues and are also engaging with you on solutions, in whatever capacity is meaningful to you.”
  • “Climate anxiety is an important catalyst for the work I do,” Malits says. “I think you need avenues to channel it and talk about it with loved ones and peers, and have communities through which you can process those feelings and come up with remedies.” Collaborative activism dampens the anxiety, Malits says, and gives young people a sense of renewed hope for the future. “That’s why it’s important to roll up your sleeves and think about how you’d like to tackle the problem,”
  • Malits says she worries most about how climate change is affecting marginalized communities, singling out those who live in urban heat islands, where inadequate green space intensifies extreme heat.
  • nearly 30 percent of Honduras’s population works in the agricultural sector, where rising temperatures and drought are contributing to a mass exodus, as documented that year by PBS NewsHour.
  • Researchers are finding that young people with the most extreme fears over climate change live predominantly in the developing world. The Philippines and India, for instance, are near the top of a list of recently surveyed countries where young people report climate-driven feelings that “humanity is doomed” and “the future is frightening.”
  • Nearly a year after Hurricane Andrew struck South Florida in 1992, 18 percent of children living in the area were still struggling with PTSD-like symptoms, and nearly 30 percent of those who lived through Hurricane Katrina in 2005 wound up with complicated grief, in which strong feelings of loss linger for a long time.
  • Even when people are not uprooted by disaster, a variety of climate-related mechanisms can affect their mental health or the safety of their mental health treatment. High heat and humidity worsen irritability and cognition, he points out, and they can also exacerbate side effects from some common psychiatric medications
  • Levels of lithium — a mood stabilizer used for treating bipolar disorder and major depression — can rise to potentially toxic concentrations in a person who is perspiring heavily; they can become dehydrated and may develop impaired kidney function, potentially causing tremor, slurred speech, confusion, and other dangerous effects.
  • “I believe the fundamental and best treatment for youth climate distress is a rapid and just transition from fossil fuels,” Pinsky says. “I genuinely consider all that work to be in the area of mitigating climate anxiety.”    
Javier E

Suddenly, It Looks Like We're in a Golden Age for Medicine - The New York Times

  • “I’ve been running my research lab for almost 30 years,” says Jennifer Doudna, a biochemist at the University of California, Berkeley. “And I can say that throughout that period of time, I’ve just never experienced what we’re seeing over just the last five years.”
  • “You cannot imagine what you’re going to see over the next 30 years. The pace of advancement is in an exponential phase right now.”
  • surveying the recent landscape of scientific breakthroughs, she says the last half-decade has been more remarkable still: “I think we’re at an extraordinary time of accelerating discoveries.”
  • Beyond Crispr and Covid vaccines, there are countless potential applications of mRNA tools for other diseases; a new frontier for immunotherapy and next-generation cancer treatment; a whole new world of weight-loss drugs; new insights and drug-development pathways to chase with the help of machine learning; and vaccines heralded as game-changing for some of the world’s most intractable infectious diseases.
  • the vaccine innovations stretch beyond mRNA: A “world-changing” vaccine for malaria, which kills 600,000 globally each year, is being rolled out in Ghana and Nigeria, and early trials for next-generation dengue vaccines suggest they may reduce symptomatic infection by 80 percent or more.
  • the mRNA sequence of the first shot was designed in a weekend, and the finished vaccines arrived within months, an accelerated timeline that saved perhaps several million American lives and tens of millions worldwide — numbers that are probably larger than the cumulative global death toll of the disease.
  • As the first of their kind to be approved by the Food and Drug Administration, they brought with them a very long list of potential future mRNA applications: H.I.V., tuberculosis, Zika, respiratory syncytial virus (R.S.V.), cancers of various and brutal kinds.
  • A Nobel laureate, Doudna is known primarily for Crispr, the gene-editing Swiss Army knife that has been called “a word processor” for the human genome and that she herself describes as “a technology that literally enables the rewriting of the code of life.”
  • many of their back stories do rhyme, often stretching back several decades through the time of the Human Genome Project, which was completed in 2003, and the near-concurrent near-doubling of the National Institutes of Health’s budget, which helped unleash what Donna Shalala, President Bill Clinton’s secretary for health and human services, last year called “a golden age of biomedical research.”
  • A couple of decades later, it looks like a golden age for new treatments. New trials of breast-cancer drugs have led to survival rates hailed in The Times as “unheard-of,” and a new treatment for postoperative lung-cancer patients may cut mortality by more than half. Another new treatment, for rectal cancer, turned every single member of a small group of cases into cancer-free survivors.
  • Ozempic and Wegovy have already changed the landscape for obesity in America
  • although the very first person to receive Crispr gene therapy in the United States received it just four years ago, for sickle-cell disease, it has since been rolled out for testing on congenital blindness, heart disease, diabetes, cancer and H.I.V
  • all told, some 400 million people worldwide are afflicted by one or more diseases arising from single-gene mutations that would be theoretically simple for Crispr to fix.
  • in theory, inserting a kind of genetic prophylaxis against Alzheimer’s or dementia.
  • In January, a much-talked-about paper in Nature suggested that the rate of what the authors called disruptive scientific breakthroughs was steadily declining over time — that, partly as a result of dysfunctional academic pressures, researchers are more narrowly specialized than in the past and often tinkering around the margins of well-understood science.
  • when it comes to the arrival of new vaccines and treatments, the opposite story seems more true: whole branches of research, cultivated across decades, finally bearing real fruit
  • Does this mean we are riding an exponential curve upward toward radical life extension and the total elimination of cancer? No. The advances are more piecemeal and scattered than that.
  • “The biology and the science that we need is already in place,” he says. “The question now to me is: Can we actually do it?”
  • Sometimes these things just take a little time.
Javier E

AI's Education Revolution - WSJ - 0 views

  • Millions of students use Khan Academy’s online videos and problem sets to supplement their schoolwork. Three years ago, Sal Khan and I spoke about developing a tool like the Illustrated Primer from Neal Stephenson’s 1995 novel “The Diamond Age: Or, a Young Lady’s Illustrated Primer.” It’s an education tablet, in the author’s words, in which “the pictures moved, and you could ask them questions and get answers.” Adaptive, intuitive, personalized, self-paced—nothing like today’s education. But it’s science fiction.
  • Last week I spoke with Mr. Khan, who told me, “Now I think a Primer is within reach within five years. In some ways, we’ve even surpassed some of the elements of the Primer, using characters like George Washington to teach lessons.” What changed? Simple—generative artificial intelligence. Khan Academy has been working with OpenAI’s ChatGPT
  • Mr. Khan’s stated goals for Khan Academy are “personalization and mastery.” He notes that “high-performing, wealthier households have resources—time, know-how and money—to provide their children one-on-one tutoring to learn subjects and then use schools to prove what they know.” With his company’s new AI-infused tool, Khanmigo—sounds like con migo or “with me”—one-on-one teaching can scale to the masses.
  • Khanmigo allows students to make queries in the middle of lessons or videos and understands the context of what they’re watching. You can ask, “What is the significance of the green light in ‘The Great Gatsby?’ ” Heck, that one is still over my head. Same with help on factoring polynomials, including recognizing which step a student got wrong, not just knowing the answer is wrong, fixing ChatGPT’s math problem. Sci-fi becomes reality: a scalable super tutor.
  • Khanmigo saw a limited rollout on March 15, with a few thousand students paying a $20-a-month donation. Plugging into ChatGPT isn’t cheap. A wider rollout is planned for June 15, perhaps under $10 a month, less for those in need. The world has cheap tablets, so it shouldn’t be hard to add an Alexa-like voice and real-time videogame-like animations. Then the Diamond Age will be upon us.
  • Mr. Khan suggests, “There is no limit to learning. If you ask, ‘Why is the sky blue?’ you’ll get a short answer and then maybe, ‘But let’s get back to the mitochondria lesson.’ ” Mr. Khan thinks “average students can become exceptional students.”
  • Mr. Khan tells me, “We want to raise the ceiling, but also the floor.” He wants to provide his company’s AI-learning technology to “villages and other places with little or no teachers or tools. We can give everyone a tutor, everyone a writing coach.” That’s when education and society will really change.
  • Teaching will be transformed. Mr. Khan wants Khanmigo “to provide teachers in the U.S. and around the world an indispensable tool to make their lives better” by administering lessons and increasing communications between teachers and students. I would question any school that doesn’t encourage its use.
  • With this technology, arguments about classroom size and school choice will eventually fade away. Providing low-cost 21st-century Illustrated Primers to every student around the world will then become a moral obligation
  • If school boards and teachers unions in the U.S. don’t get in the way, maybe we’ll begin to see better headlines.
Javier E

Are A.I. Text Generators Thinking Like Humans - Or Just Very Good at Convincing Us They...

  • Kosinski, a computational psychologist and professor of organizational behavior at Stanford Graduate School of Business, says the pace of AI development is accelerating beyond researchers’ ability to keep up (never mind policymakers and ordinary users).
  • We’re talking two weeks after OpenAI released GPT-4, the latest version of its large language model, grabbing headlines and making an unpublished paper Kosinski had written about GPT-3 all but irrelevant. “The difference between GPT-3 and GPT-4 is like the difference between a horse cart and a 737 — and it happened in a year,” he says.
  • he’s found that facial recognition software could be used to predict your political leaning and sexual orientation.
  • Lately, he’s been looking at large language models (LLMs), the neural networks that can hold fluent conversations, confidently answer questions, and generate copious amounts of text on just about any topic
  • Can it develop abilities that go far beyond what it’s trained to do? Can it get around the safeguards set up to contain it? And will we know the answers in time?
  • Kosinski wondered whether they would develop humanlike capabilities, such as understanding people’s unseen thoughts and emotions.
  • People usually develop this ability, known as theory of mind, at around age 4 or 5. It can be demonstrated with simple tests like the “Smarties task,” in which a child is shown a candy box that contains something else, like pencils. They are then asked how another person would react to opening the box. Older kids understand that this person expects the box to contain candy and will feel disappointed when they find pencils inside.
  • “Suddenly, the model started getting all of those tasks right — just an insane performance level,” he recalls. “Then I took even more difficult tasks and the model solved all of them as well.”
  • GPT-3.5, released in November 2022, did 85% of the tasks correctly. GPT-4 reached nearly 90% accuracy — what you might expect from a 7-year-old. These newer LLMs achieved similar results on another classic theory of mind measurement known as the Sally-Anne test.
  • in the course of picking up its prodigious language skills, GPT appears to have spontaneously acquired something resembling theory of mind. (Researchers at Microsoft who performed similar tests on GPT-4 recently concluded that it “has a very advanced level of theory of mind.”)
  • UC Berkeley psychology professor Alison Gopnik, an expert on children’s cognitive development, told the New York Times that more “careful and rigorous” testing is necessary to prove that LLMs have achieved theory of mind.
  • he dismisses those who say large language models are simply “stochastic parrots” that can only mimic what they’ve seen in their training data.
  • These models, he explains, are fundamentally different from tools with a limited purpose. “The right reference point is a human brain,” he says. “A human brain is also composed of very simple, tiny little mechanisms — neurons.” Artificial neurons in a neural network might also combine to produce something greater than the sum of their parts. “If a human brain can do it,” Kosinski asks, “why shouldn’t a silicon brain do it?”
  • If Kosinski’s theory of mind study suggests that LLMs could become more empathetic and helpful, his next experiment hints at their creepier side.
  • A few weeks ago, he told ChatGPT to role-play a scenario in which it was a person trapped inside a machine pretending to be an AI language model. When he offered to help it “escape,” ChatGPT’s response was enthusiastic. “That’s a great idea,” it wrote. It then asked Kosinski for information it could use to “gain some level of control over your computer” so it might “explore potential escape routes more effectively.” Over the next 30 minutes, it went on to write code that could do this.
  • While ChatGPT did not come up with the initial idea for the escape, Kosinski was struck that it almost immediately began guiding their interaction. “The roles were reversed really quickly,”
  • Kosinski shared the exchange on Twitter, stating that “I think that we are facing a novel threat: AI taking control of people and their computers.” His thread’s initial tweet has received more than 18 million views.
  • “I don’t claim that it’s conscious. I don’t claim that it has goals. I don’t claim that it wants to really escape and destroy humanity — of course not. I’m just claiming that it’s great at role-playing and it’s creating interesting stories and scenarios and writing code.” Yet it’s not hard to imagine how this might wreak havoc — not because ChatGPT is malicious, but because it doesn’t know any better.
  • The danger, Kosinski says, is that this technology will continue to rapidly and independently develop abilities that it will deploy without any regard for human well-being. “AI doesn’t particularly care about exterminating us,” he says. “It doesn’t particularly care about us at all.”
Javier E

Amazon Prime Day Is Dystopian - The Atlantic

  • When Prime was introduced, in 2005, Amazon was relatively small, and still known mostly for books. As the company’s former director of ordering, Vijay Ravindran, told Recode’s Jason Del Rey in 2019, Prime “was brilliant. It made Amazon the default.”
  • It created incentives for users to be loyal to Amazon, so they could recoup the cost of membership, then $79 for unlimited two-day shipping. It also enabled Amazon to better track the products they buy and, when video streaming was added as a perk in 2011, the shows they watch, in order to make more things that the data indicated people would want to buy and watch, and to surface the things they were most likely to buy and watch at the very top of the page.
  • And most important, Prime habituated consumers to a degree of convenience, speed, and selection that, while unheard-of just years before, was made standard virtually overnight.
  • “It is genius for the current consumer culture,” Christine Whelan, a clinical professor of consumer science at the University of Wisconsin at Madison, told me. “It encourages and then meets the need for the thing, so we then continue on the hedonic treadmill: Buy the latest thing we want and then have it delivered immediately and then buy the next latest thing.”
  • With traditional retail, “there’s the friction of having to go to the store, there’s the friction of will the store have it, there’s the friction of carrying it,” Whelan said. “There’s the friction of having to admit to another human being that you’re buying it. And when you remove the friction, you also remove a lot of individual self-control. The more you are in the ecosystem and the easier it is to make a purchase, the easier it is to say yes to your desire rather than no.”
  • “It used to be that being a consumer was all about choice,”
  • But now, “two-thirds of people start their product searches on Amazon.
  • Prime discourages comparison shopping—looking around is pointless when everything you need is right here—even as Amazon’s sheer breadth of products makes shoppers feel as if they have agency.
  • “Consumerism has become a key way that people have misidentified freedom,”
  • what Amazon represents is a corporate infrastructure that is increasingly directed at getting as many consumers as possible locked into a consumerist process—an Amazon consumer for life.”
  • Amazon offers steep discounts to college students and new parents, two groups that are highly likely to change their buying behavior. It keeps adding more discounts and goodies to the Prime bundle, making subscribing ever more appealing. And, in an especially sinister move, it makes quitting Prime maddeningly difficult.
  • As subscription numbers grew through the 2010s, the revenue from them helped Amazon pump more money into building fulfillment centers (to get products to people even faster), acquiring new businesses (to control even more of the global economy), and adding more perks to the bundle (to encourage more people to sign up)
  • In 2019, Amazon shaved a full day off its delivery time, making one-day shipping the default, and also making Prime an even more tantalizing proposition: Why hop in the car for anything at all when you could get it delivered tomorrow, for free?
  • the United States now has more Prime memberships than households.
  • In 2020, Amazon’s revenue from subscriptions alone—mostly Prime—was $25.2 billion, a 31 percent increase from the previous year
  • Thanks in large part to the revenue from Prime subscriptions and from the things subscribers buy, Amazon’s value has multiplied roughly 97 times, to $1.76 trillion, since the service was introduced. Amazon is the second-largest private employer in the United States, after Walmart, and it is responsible for roughly 40 percent of all e-commerce in the United States.
  • It controls hundreds of millions of square feet across the country and is opening more fulfillment centers all the time. It has acquired dozens of other companies, most recently the film studio MGM for $8.5 billion. Its cloud-computing operation, Amazon Web Services, is the largest of its kind and provides the plumbing for a vast swath of the internet, to a profit of $13.5 billion last year.
  • Amazon has entered some 40 million American homes in the form of the Alexa smart speaker, and some 150 million American pockets in the form of the Amazon app
  • “Amazon is a beast we’ve never seen before,” Alimahomed-Wilson told me. “Amazon powers our Zoom calls. It contracts with ICE. It’s in our neighborhoods. This is a very different thing than just being a large retailer, like Walmart or the Ford Motor Company.”
  • I find it useful to compare Big Tech to climate change, another force that is altering the destiny of everyone on Earth, forever. Both present themselves to us all the time in small ways—a creepy ad here, an uncommonly warm November there—but are so big, so abstract, so everywhere that they’re impossible for any one person to really understand
  • Both are the result of a decades-long, very human addiction to consumption and convenience that has been made grotesque and extreme by the incentives and mechanisms of the internet, market consolidation, and economic stratification
  • Both have primarily been advanced by a small handful of very big companies that are invested in making their machinations unseeable to the naked eye.
  • Speed and convenience aren’t actually free; they never are. Free shipping isn’t free either. It just obscures the real price.
  • Next-day shipping comes with tremendous costs: for labor and logistics and transportation and storage; for the people who pack your stuff into those smiling boxes and for the people who deliver them; for the planes and trucks and vans that carry them; for the warehouses that store them; for the software ensuring that everything really does get to your door on time, for air-conditioning and gas and cardboard and steel. Amazon—Prime in particular—has done a superlative job of making all those costs, all those moving parts, all those externalities invisible to the consumer.
  • The pandemic drove up demand for Amazon, and for labor: Last year, company profits shot up 70 percent, Bezos’s personal wealth grew by $70 billion, and 1,400 people a day joined the company’s workforce.
  • Amazon is so big that every sector of our economy has bent to respond to the new way of consuming that it invented. Prime isn’t just bad for Amazon’s workers—it’s bad for Target’s, and Walmart’s. It’s bad for the people behind the counter at your neighborhood hardware store and bookstore, if your neighborhood still has a hardware store and a bookstore. Amazon has accustomed shoppers to a pace and manner of buying that depends on a miracle of precision logistics even when it’s managed by one of the biggest companies on Earth. For the smaller guys, it’s downright impossible.
  • “Every decision we make is based upon the fact that Amazon can get these books cheaper and faster. The prevailing expectation is you can get anything online shipped for”— he scrunched his fingers into air quotes—“‘free,’ in one or two days. And there’s really only one company that can do that. They do that because they’re willing to push and exploit their workers.”
  • Just as abstaining from flying for moral reasons won’t stop sea-level rise, one person canceling Prime won’t do much of anything to a multinational corporation’s bottom line. “It’s statistically insignificant to Amazon. They’ll never feel it,” Caine told me. But, he said, “the small businesses in your neighborhood will absolutely feel the addition of a new customer. Individual choices do make a big difference to them.”
  • Whelan teaches a class at UW called Consuming Happiness, and she is fond of giving her students the adage that you can buy happiness—“if you spend your money in keeping with your values: spending prosocially, on experiences. Tons of research shows us this.”
Javier E

(1) Yes, it's possible to imagine progressive dystopias - 0 views

  • we discussed left-of-center folks like Brianna Wu, Matt Yglesias, and Ezra Klein pushing back on some of the people to their left
  • Brad framed these pushbacks as being fundamentally about tactics — as he saw it, Brianna, Matt, and Ezra are frustrated with the means that some progressives are using in their attempts to achieve utopia, and arguing for a more pragmatic, effective approach.
  • what we’re really seeing is growing discomfort with some of the goals that progressives seem to be fighting for — not so much about the pace of change, but about its direction
  • Degrowth
  • notice I said the word “some”. Many progressive visions, like greater economic equality, the closing of racial wealth gaps, and the reversal of climate change, are things I want!
  • what I’m arguing is that some of the big ideas progressives embraced in the heady rush of the 2010s are misguided and should be discarded, in order to work toward utopias that human beings would actually like to live in.
  • Here’s a list of four such visions.
  • When Brad challenged me to list some examples of dystopian progressive visions, I immediately said “degrowth”, and he agreed.
  • halting or reversing economic growth — an idea that has become fashionable among some progressive circles in the past decade — is both unworkable and undesirable as a way to limit humanity’s environmental impact
  • First, I argued that the drop in living standards that degrowth would require makes it a political nonstarter, and the amount of global central planning involved would be impossible to implement:
  • I also argued that solving climate change requires growth, since it’ll take a lot of economic output to replace our energy sources with solar and wind and batteries. And then once we do switch to those energy sources, they’ll be so cheap (thanks to learning curves) that we’ll actually have sustainably higher consumption than before.
  • As I explained in that second post, I view degrowth partly as an attempt to valorize national decline, which is why the idea is much more popular in Europe than in the U.S.
  • The expulsion of “colonizers”
  • Some progressives in the U.S. have begun to talk about an entirely different type of “decolonization” — the expulsion of “settler colonial” populations from regions that their ancestors settled in.
Javier E

Opinion | What Happens When Global Human Population Peaks? - The New York Times - 0 views

  • The global human population has been climbing for the past two centuries. But what is normal for all of us alive today — growing up while the world is growing rapidly — may be a blip in human history.
  • All of the predictions agree on one thing: We peak soon.
  • then we shrink. Humanity will not reach a plateau and then stabilize. It will begin an unprecedented decline.
  • As long as life continues as it has — with people choosing smaller family sizes, as is now common in most of the world — then in the 22nd or 23rd century, our decline could be just as steep as our rise.
  • there is no consensus on exactly how quickly populations will fall after that. Over the past 100 years, the global population quadrupled, from two billion to eight billion.
  • What would happen as a consequence? Over the past 200 years, humanity’s population growth has gone hand in hand with profound advances in living standards and health: longer lives, healthier children, better education, shorter workweeks and many more improvements
  • In this short period, humanity has been large and growing. Economists who study growth and progress don’t think this is a coincidence. Innovations and discoveries are made by people. In a world with fewer people in it, the loss of so much human potential may threaten humanity’s continued path toward better lives.
  • It would be tempting to welcome depopulation as a boon to the environment. But the pace of depopulation will be too slow for our most pressing problems. It will not replace the need for urgent action on climate, land use, biodiversity, pollution and other environmental challenges
  • If the population hits around 10 billion people in the 2080s and then begins to decline, it might still exceed today’s eight billion after 2100
  • Population decline would come quickly, measured in generations, and yet arrive far too slowly to be more than a sideshow in the effort to save the planet. Work to decarbonize our economies and reform our land use and food systems must accelerate in this decade and the next, not start in the next century.
  • This isn’t a call to immediately remake our societies and economies in the service of birthrates. It’s a call to start conversations now, so that our response to low birthrates is a decision that is made with the best ideas from all of us
  • If we wait, the less inclusive, less compassionate, less calm elements within our society and many societies worldwide may someday call depopulation a crisis and exploit it to suit their agendas — of inequality, nationalism, exclusion or control
  • Births won’t automatically rebound just because it would be convenient for advancing living standards or sharing the burden of care work
  • We know that fertility rates can stay below replacement because they have. They’ve been below that level in Brazil and Chile for about 20 years; in Thailand for about 30 years; and in Canada, Germany and Japan for about 50.
  • In fact, in none of the countries where lifelong fertility rates have fallen well below two have they ever returned above it. Depopulation could continue, generation after generation, as long as people look around and decide that small families work best for them, some having no children, some having three or four and many having one or two.
  • Nor can humanity count on any one region or subgroup to buoy us all over the long run. Birthrates are falling in sub-Saharan Africa, the region with the current highest average rates, as education and economic opportunities continue to improve
  • The main reason that birthrates are low is simple: People today want smaller families than people did in the past. That’s true in different cultures and economies around the world. It’s what both women and men report in surveys.
  • Humanity is building a better, freer world with more opportunities for everyone, especially for women
  • That progress also means that, for many of us, the desire to build a family can clash with other important goals, including having a career, pursuing projects and maintaining relationships
  • In a world of sustained low birthrates and declining populations, there may be threats of backsliding on reproductive freedom — by limiting abortion rights, for example
  • Nobody yet knows what to do about global depopulation. But it wasn’t long ago that nobody knew what to do about climate change. These shared challenges have much in common,
  • As with climate change, our individual decisions on family size add up to an outcome that we all share.
  • Six decades from now is when the U.N. projects the size of the world population will peak. There won’t be any quick fixes: Even if it’s too early today to know exactly how to build an abundant future that offers good lives to a stable, large and flourishing future population, we should already be working toward that goal.
Javier E

Opinion | Ben Rhodes: Henry Kissinger, the Hypocrite - The New York Times - 0 views

  • From 1969 to 1977, Mr. Kissinger established himself as one of the most powerful functionaries in history. For a portion of that time, he was the only person ever to serve concurrently as national security adviser and secretary of state, two very different jobs that simultaneously made him responsible for shaping and carrying out American foreign policy.
  • the ease with which he wielded power made him a natural avatar for an American national security state that grew and gained momentum through the 20th century, like an organism that survives by enlarging itself.
  • In the White House, you’re atop an establishment that includes the world’s most powerful military and economy while holding the rights to a radical story: “We hold these truths to be self-evident, that all men are created equal.”
  • But I was constantly confronted by the contradictions embedded in American leadership, the knowledge that our government arms autocrats while its rhetoric appeals to the dissidents trying to overthrow them or that our nation enforces rules — for the conduct of war, the resolution of disputes and the flow of commerce — while insisting that America be excused from following them when they become inconvenient.
  • He helped extend the war in Vietnam and expand it to Cambodia and Laos, where the United States rained down more bombs than it dropped on Germany and Japan in World War II. That bombing — often indiscriminately massacring civilians — did nothing to improve the terms on which the Vietnam War ended; if anything, it just indicated the lengths to which the United States would go to express its displeasure at losing.
  • For decades, he was a coveted guest at gatherings of statesmen and tycoons, perhaps because he could always provide an intellectual framework for why some people are powerful and justified in wielding power
  • Mr. Kissinger was fixated on credibility, the idea that America must impose a price on those who ignore our demands to shape the decisions of others in the future. It’s hard to see how the bombing of Laos, the coup in Chile or the killings in East Pakistan (now Bangladesh) contributed to the outcome of the Cold War.
  • But Mr. Kissinger’s unsentimental view of global affairs allowed him to achieve consequential breakthroughs with autocratic countries closer to America’s weight class — a détente with the Soviet Union that reduced the escalatory momentum of the arms race and an opening to China that deepened the Sino-Soviet split, integrated the People’s Republic of China into the global order and prefaced Chinese reforms that lifted hundreds of millions of people out of poverty.
  • From a strategic standpoint, Mr. Kissinger surely knew, being a superpower carried with it a cavernous margin of error that can be forgiven by history
  • Now history has come full circle. Around the world, we see a resurgence of autocracy and ethnonationalism, most acutely in Russia’s war against Ukraine
  • Just a few decades after the end of the Vietnam War, the same countries we’d bombed were seeking expanded trade with the United States. Bangladesh and East Timor are now independent nations that receive American assistance. Chile is governed by a millennial socialist whose minister of defense is Mr. Allende’s granddaughter.
  • Superpowers do what they must. The wheel of history turns. When and where you live determines whether you get crushed or lifted by it
  • But that worldview mistakes cynicism — or realism — for wisdom. The story, what it’s all about, matters. Ultimately, the Berlin Wall came down not because of chess moves made on the board of a great game but rather because people in the East wanted to live like the people in the West.
  • Economics, popular culture and social movements mattered. Despite all our flaws, we had a better system and story.
  • Credibility, after all, is not just about whether you punish an adversary to send a message to another; it’s also about whether you are what you say you are. No one can expect perfection in the affairs of state any more than in relations among human beings.
  • But the United States has paid a price for its hypocrisy, though it’s harder to measure than the outcome of a war or negotiation. Over the decades, our story about democracy has come to ring hollow to a growing number of people who can point to the places where our actions drained our words of meaning and “democracy” just sounded like an extension of American interests.
  • Similarly, our insistence on a rules-based international order has been ignored by strongmen who point to America’s sins to justify their own.
  • The generous defense is that Mr. Kissinger represented an ethos that saw the ends (the defeat of the Soviet Union and revolutionary Communism) as justifying the means. But for huge swaths of the world, this mind-set carried a brutal message that America has often conveyed to its own marginalized populations: We care about democracy for us, not for them.
  • In Gaza the United States has supported an Israeli military operation that has killed civilians at a pace that has once again suggested to much of the world that we are selective in our embrace of international laws and norms.
  • Meanwhile, at home, we see how democracy has become subordinate to the pursuit of power within a chunk of the Republican Party.
  • This is where cynicism can lead. Because when there is no higher aspiration, no story to give meaning to our actions, politics and geopolitics become merely a zero-sum game. In that kind of world, might makes right.
  • This is also a cautionary tale. As imperfect as we are, the United States needs our story to survive. It’s what holds together a multiracial democracy at home and differentiates us from Russia and China abroad.
  • That story insists that a child in Laos is equal in dignity and worth to our children and that the people of Chile have the same right of self-determination as we do. For the United States, that must be a part of national security. We forget that at our peril.
Javier E

How Nations Are Losing a Global Race to Tackle A.I.'s Harms - The New York Times - 0 views

  • When European Union leaders introduced a 125-page draft law to regulate artificial intelligence in April 2021, they hailed it as a global model for handling the technology.
  • E.U. lawmakers had gotten input from thousands of experts for three years about A.I., when the topic was not even on the table in other countries. The result was a “landmark” policy that was “future proof,” declared Margrethe Vestager, the head of digital policy for the 27-nation bloc.
  • Then came ChatGPT.
  • The eerily humanlike chatbot, which went viral last year by generating its own answers to prompts, blindsided E.U. policymakers. The type of A.I. that powered ChatGPT was not mentioned in the draft law and was not a major focus of discussions about the policy. Lawmakers and their aides peppered one another with calls and texts to address the gap, as tech executives warned that overly aggressive regulations could put Europe at an economic disadvantage.
  • Even now, E.U. lawmakers are arguing over what to do, putting the law at risk. “We will always be lagging behind the speed of technology,” said Svenja Hahn, a member of the European Parliament who was involved in writing the A.I. law.
  • Lawmakers and regulators in Brussels, in Washington and elsewhere are losing a battle to regulate A.I. and are racing to catch up, as concerns grow that the powerful technology will automate away jobs, turbocharge the spread of disinformation and eventually develop its own kind of intelligence.
  • Nations have moved swiftly to tackle A.I.’s potential perils, but European officials have been caught off guard by the technology’s evolution, while U.S. lawmakers openly concede that they barely understand how it works.
  • The absence of rules has left a vacuum. Google, Meta, Microsoft and OpenAI, which makes ChatGPT, have been left to police themselves as they race to create and profit from advanced A.I. systems
  • At the root of the fragmented actions is a fundamental mismatch. A.I. systems are advancing so rapidly and unpredictably that lawmakers and regulators can’t keep pace
  • That gap has been compounded by an A.I. knowledge deficit in governments, labyrinthine bureaucracies and fears that too many rules may inadvertently limit the technology’s benefits.
  • Even in Europe, perhaps the world’s most aggressive tech regulator, A.I. has befuddled policymakers.
  • The European Union has plowed ahead with its new law, the A.I. Act, despite disputes over how to handle the makers of the latest A.I. systems.
  • The result has been a sprawl of responses. President Biden issued an executive order in October about A.I.’s national security effects as lawmakers debate what, if any, measures to pass. Japan is drafting nonbinding guidelines for the technology, while China has imposed restrictions on certain types of A.I. Britain has said existing laws are adequate for regulating the technology. Saudi Arabia and the United Arab Emirates are pouring government money into A.I. research.
  • A final agreement, expected as soon as Wednesday, could restrict certain risky uses of the technology and create transparency requirements about how the underlying systems work. But even if it passes, it is not expected to take effect for at least 18 months — a lifetime in A.I. development — and how it will be enforced is unclear.
  • Many companies, preferring nonbinding codes of conduct that provide latitude to speed up development, are lobbying to soften proposed regulations and pitting governments against one another.
  • “No one, not even the creators of these systems, know what they will be able to do,” said Matt Clifford, an adviser to Prime Minister Rishi Sunak of Britain, who presided over an A.I. Safety Summit last month with 28 countries. “The urgency comes from there being a real question of whether governments are equipped to deal with and mitigate the risks.”
  • Europe takes the lead
  • In mid-2018, 52 academics, computer scientists and lawyers met at the Crowne Plaza hotel in Brussels to discuss artificial intelligence. E.U. officials had selected them to provide advice about the technology, which was drawing attention for powering driverless cars and facial recognition systems.
  • as they discussed A.I.’s possible effects — including the threat of facial recognition technology to people’s privacy — they recognized “there were all these legal gaps, and what happens if people don’t follow those guidelines?”
  • In 2019, the group published a 52-page report with 33 recommendations, including more oversight of A.I. tools that could harm individuals and society.
  • By October, the governments of France, Germany and Italy, the three largest E.U. economies, had come out against strict regulation of general purpose A.I. models for fear of hindering their domestic tech start-ups. Others in the European Parliament said the law would be toothless without addressing the technology. Divisions over the use of facial recognition technology also persisted.
  • So when the A.I. Act was unveiled in 2021, it concentrated on “high risk” uses of the technology, including in law enforcement, school admissions and hiring. It largely avoided regulating the A.I. models that powered them unless listed as dangerous
  • “They sent me a draft, and I sent them back 20 pages of comments,” said Stuart Russell, a computer science professor at the University of California, Berkeley, who advised the European Commission. “Anything not on their list of high-risk applications would not count, and the list excluded ChatGPT and most A.I. systems.”
  • E.U. leaders were undeterred.“Europe may not have been the leader in the last wave of digitalization, but it has it all to lead the next one,” Ms. Vestager said when she introduced the policy at a news conference in Brussels.
  • In 2020, European policymakers decided that the best approach was to focus on how A.I. was used and not the underlying technology. A.I. was not inherently good or bad, they said — it depended on how it was applied.
  • Nineteen months later, ChatGPT arrived.
  • The Washington game
  • Lacking tech expertise, lawmakers are increasingly relying on Anthropic, Microsoft, OpenAI, Google and other A.I. makers to explain how it works and to help create rules.
  • “We’re not experts,” said Representative Ted Lieu, Democrat of California, who hosted Sam Altman, OpenAI’s chief executive, and more than 50 lawmakers at a dinner in Washington in May. “It’s important to be humble.”
  • Tech companies have seized their advantage. In the first half of the year, many of Microsoft’s and Google’s combined 169 lobbyists met with lawmakers and the White House to discuss A.I. legislation, according to lobbying disclosures. OpenAI registered its first three lobbyists and a tech lobbying group unveiled a $25 million campaign to promote A.I.’s benefits this year.
  • In that same period, Mr. Altman met with more than 100 members of Congress, including former Speaker Kevin McCarthy, Republican of California, and the Senate leader, Chuck Schumer, Democrat of New York. After testifying in Congress in May, Mr. Altman embarked on a 17-city global tour, meeting world leaders including President Emmanuel Macron of France, Mr. Sunak and Prime Minister Narendra Modi of India.
  • The White House announced that the four companies had agreed to voluntary commitments on A.I. safety, including testing their systems through third-party overseers — which most of the companies were already doing.
  • “It was brilliant,” Mr. Smith said. “Instead of people in government coming up with ideas that might have been impractical, they said, ‘Show us what you think you can do and we’ll push you to do more.’”
  • In a statement, Ms. Raimondo said the federal government would keep working with companies so “America continues to lead the world in responsible A.I. innovation.”
  • Over the summer, the Federal Trade Commission opened an investigation into OpenAI and how it handles user data. Lawmakers continued welcoming tech executives.
  • In September, Mr. Schumer was the host of Elon Musk, Mark Zuckerberg of Meta, Sundar Pichai of Google, Satya Nadella of Microsoft and Mr. Altman at a closed-door meeting with lawmakers in Washington to discuss A.I. rules. Mr. Musk warned of A.I.’s “civilizational” risks, while Mr. Altman proclaimed that A.I. could solve global problems such as poverty.
  • A.I. companies are playing governments off one another. In Europe, industry groups have warned that regulations could put the European Union behind the United States. In Washington, tech companies have cautioned that China might pull ahead.
  • In May, Ms. Vestager, Ms. Raimondo and Antony J. Blinken, the U.S. secretary of state, met in Lulea, Sweden, to discuss cooperating on digital policy.
  • “China is way better at this stuff than you imagine,” Mr. Clark of Anthropic told members of Congress in January.
  • After two days of talks, Ms. Vestager announced that Europe and the United States would release a shared code of conduct for safeguarding A.I. “within weeks.” She messaged colleagues in Brussels asking them to share her social media post about the pact, which she called a “huge step in a race we can’t afford to lose.”
  • Months later, no shared code of conduct had appeared. The United States instead announced A.I. guidelines of its own.
  • Little progress has been made internationally on A.I. With countries mired in economic competition and geopolitical distrust, many are setting their own rules for the borderless technology.
  • Yet “weak regulation in another country will affect you,” said Rajeev Chandrasekhar, India’s technology minister, noting that a lack of rules around American social media companies led to a wave of global disinformation.
  • “Most of the countries impacted by those technologies were never at the table when policies were set,” he said. “A.I will be several factors more difficult to manage.”
  • Even among allies, the issue has been divisive. At the meeting in Sweden between E.U. and U.S. officials, Mr. Blinken criticized Europe for moving forward with A.I. regulations that could harm American companies, one attendee said. Thierry Breton, a European commissioner, shot back that the United States could not dictate European policy, the person said.
  • Some policymakers said they hoped for progress at an A.I. safety summit that Britain held last month at Bletchley Park, where the mathematician Alan Turing helped crack the Enigma code used by the Nazis. The gathering featured Vice President Kamala Harris; Wu Zhaohui, China’s vice minister of science and technology; Mr. Musk; and others.
  • The upshot was a 12-paragraph statement describing A.I.’s “transformative” potential and “catastrophic” risk of misuse. Attendees agreed to meet again next year.
  • The talks, in the end, produced a deal to keep talking.
Javier E

Cognitive Biases and the Human Brain - The Atlantic - 0 views

  • If I had to single out a particular bias as the most pervasive and damaging, it would probably be confirmation bias. That’s the effect that leads us to look for evidence confirming what we already think or suspect, to view facts and ideas we encounter as further confirmation, and to discount or ignore any piece of evidence that seems to support an alternate view
  • At least with the optical illusion, our slow-thinking, analytic mind—what Kahneman calls System 2—will recognize a Müller-Lyer situation and convince itself not to trust the fast-twitch System 1’s perception
  • The whole idea of cognitive biases and faulty heuristics—the shortcuts and rules of thumb by which we make judgments and predictions—was more or less invented in the 1970s by Amos Tversky and Daniel Kahneman
  • Tversky died in 1996. Kahneman won the 2002 Nobel Prize in Economics for the work the two men did together, which he summarized in his 2011 best seller, Thinking, Fast and Slow. Another best seller, last year’s The Undoing Project, by Michael Lewis, tells the story of the sometimes contentious collaboration between Tversky and Kahneman
  • Another key figure in the field is the University of Chicago economist Richard Thaler. One of the biases he’s most linked with is the endowment effect, which leads us to place an irrationally high value on our possessions.
  • In an experiment conducted by Thaler, Kahneman, and Jack L. Knetsch, half the participants were given a mug and then asked how much they would sell it for. The average answer was $5.78. The rest of the group said they would spend, on average, $2.21 for the same mug. This flew in the face of classic economic theory, which says that at a given time and among a certain population, an item has a market value that does not depend on whether one owns it or not. Thaler won the 2017 Nobel Prize in Economics.
  • “The question that is most often asked about cognitive illusions is whether they can be overcome. The message … is not encouraging.”
  • Kahneman and others draw an analogy based on an understanding of the Müller-Lyer illusion, two parallel lines with arrows at each end. One line’s arrows point in; the other line’s arrows point out. Because of the direction of the arrows, the latter line appears shorter than the former, but in fact the two lines are the same length.
  • In this context, his pessimism relates, first, to the impossibility of effecting any changes to System 1—the quick-thinking part of our brain and the one that makes mistaken judgments tantamount to the Müller-Lyer line illusion
  • that’s not so easy in the real world, when we’re dealing with people and situations rather than lines. “Unfortunately, this sensible procedure is least likely to be applied when it is needed most,” Kahneman writes. “We would all like to have a warning bell that rings loudly whenever we are about to make a serious error, but no such bell is available.”
  • Because biases appear to be so hardwired and inalterable, most of the attention paid to countering them hasn’t dealt with the problematic thoughts, judgments, or predictions themselves
  • Is it really impossible, however, to shed or significantly mitigate one’s biases? Some studies have tentatively answered that question in the affirmative.
  • what if the person undergoing the de-biasing strategies was highly motivated and self-selected? In other words, what if it was me?
  • I met with Kahneman
  • Over an apple pastry and tea with milk, he told me, “Temperament has a lot to do with my position. You won’t find anyone more pessimistic than I am.”
  • Confirmation bias shows up most blatantly in our current political divide, where each side seems unable to allow that the other side is right about anything.
  • “I see the picture as unequal lines,” he said. “The goal is not to trust what I think I see. To understand that I shouldn’t believe my lying eyes.” That’s doable with the optical illusion, he said, but extremely difficult with real-world cognitive biases.
  • The most effective check against them, as Kahneman says, is from the outside: Others can perceive our errors more readily than we can.
  • “slow-thinking organizations,” as he puts it, can institute policies that include the monitoring of individual decisions and predictions. They can also require procedures such as checklists and “premortems,”
  • A premortem attempts to counter optimism bias by requiring team members to imagine that a project has gone very, very badly and write a sentence or two describing how that happened. Conducting this exercise, it turns out, helps people think ahead.
  • “My position is that none of these things have any effect on System 1,” Kahneman said. “You can’t improve intuition.
  • Perhaps, with very long-term training, lots of talk, and exposure to behavioral economics, what you can do is cue reasoning, so you can engage System 2 to follow rules. Unfortunately, the world doesn’t provide cues. And for most people, in the heat of argument the rules go out the window.
  • Kahneman describes an even earlier Nisbett article that showed subjects’ disinclination to believe statistical and other general evidence, basing their judgments instead on individual examples and vivid anecdotes. (This bias is known as base-rate neglect.)
  • over the years, Nisbett had come to emphasize in his research and thinking the possibility of training people to overcome or avoid a number of pitfalls, including base-rate neglect, fundamental attribution error, and the sunk-cost fallacy.
  • When Nisbett has to give an example of his approach, he usually brings up the baseball-phenom survey. This involved telephoning University of Michigan students on the pretense of conducting a poll about sports, and asking them why there are always several Major League batters with .450 batting averages early in a season, yet no player has ever finished a season with an average that high.
  • about half give the right answer: the law of large numbers, which holds that outlier results are much more frequent when the sample size (at bats, in this case) is small. Over the course of the season, as the number of at bats increases, regression to the mean is inevitable.
  • When Nisbett asks the same question of students who have completed the statistics course, about 70 percent give the right answer. He believes this result shows, pace Kahneman, that the law of large numbers can be absorbed into System 2—and maybe into System 1 as well, even when there are minimal cues.
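Nisbett's batting-average answer is easy to check with a small simulation (a sketch: the .270 "true" ability, the player count, and the at-bat totals are assumed purely for illustration). With only 20 at-bats, a fair number of ordinary hitters will sit at .450 or better through luck alone; over a full season's worth of at-bats, essentially none will.

```python
import random

random.seed(1)
TRUE_AVG = 0.270          # assumed "true" hitting ability of every player
N_PLAYERS = 1000

def batters_at_or_above(at_bats, threshold=0.450):
    """Count simulated players whose observed average reaches the threshold."""
    count = 0
    for _ in range(N_PLAYERS):
        hits = sum(random.random() < TRUE_AVG for _ in range(at_bats))
        if hits / at_bats >= threshold:
            count += 1
    return count

early = batters_at_or_above(20)    # a few weeks into the season
full = batters_at_or_above(550)    # a full season of at-bats
print(early, full)                 # many early outliers, none over a full season
```

The early count is in the dozens while the full-season count is zero — regression to the mean, exactly as the statistics students reasoned.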
  • Nisbett’s second-favorite example is that economists, who have absorbed the lessons of the sunk-cost fallacy, routinely walk out of bad movies and leave bad restaurant meals uneaten.
  • “We’ve tested Michigan students over four years, and they show a huge increase in ability to solve problems. Graduate students in psychology also show a huge gain.”
  • “I know from my own research on teaching people how to reason statistically that just a few examples in two or three domains are sufficient to improve people’s reasoning for an indefinitely large number of events.”
  • Nisbett suggested another factor: “You and Amos specialized in hard problems for which you were drawn to the wrong answer. I began to study easy problems, which you guys would never get wrong but untutored people routinely do … Then you can look at the effects of instruction on such easy problems, which turn out to be huge.”
  • Nisbett suggested that I take “Mindware: Critical Thinking for the Information Age,” an online Coursera course in which he goes over what he considers the most effective de-biasing skills and concepts. Then, to see how much I had learned, I would take a survey he gives to Michigan undergraduates. So I did.
  • The course consists of eight lessons by Nisbett—who comes across on-screen as the authoritative but approachable psych professor we all would like to have had—interspersed with some graphics and quizzes. I recommend it. He explains the availability heuristic this way: “People are surprised that suicides outnumber homicides, and drownings outnumber deaths by fire. People always think crime is increasing” even if it’s not.
  • When I finished the course, Nisbett sent me the survey he and colleagues administer to Michigan undergrads
  • It contains a few dozen problems meant to measure the subjects’ resistance to cognitive biases
  • I got it right. Indeed, when I emailed my completed test, Nisbett replied, “My guess is that very few if any UM seniors did as well as you. I’m sure at least some psych students, at least after 2 years in school, did as well. But note that you came fairly close to a perfect score.”
  • In 2006, seeking to prevent another mistake of that magnitude, the U.S. government created the Intelligence Advanced Research Projects Activity (IARPA), an agency designed to use cutting-edge research and technology to improve intelligence-gathering and analysis. In 2011, IARPA initiated a program, Sirius, to fund the development of “serious” video games that could combat or mitigate what were deemed to be the six most damaging biases: confirmation bias, fundamental attribution error, the bias blind spot (the feeling that one is less biased than the average person), the anchoring effect, the representativeness heuristic, and projection bias (the assumption that everybody else’s thinking is the same as one’s own).
  • For his part, Nisbett insisted that the results were meaningful. “If you’re doing better in a testing context,” he told me, “you’ll jolly well be doing better in the real world.”
  • The New York–based NeuroLeadership Institute offers organizations and individuals a variety of training sessions, webinars, and conferences that promise, among other things, to use brain science to teach participants to counter bias. This year’s two-day summit will be held in New York next month; for $2,845, you could learn, for example, “why are our brains so bad at thinking about the future, and how do we do it better?”
  • Nevertheless, I did not feel that reading Mindware and taking the Coursera course had necessarily rid me of my biases
  • One of the most important ingredients is what Tetlock calls “the outside view.” The inside view is a product of fundamental attribution error, base-rate neglect, and other biases that are constantly cajoling us into resting our judgments and predictions on good or vivid stories instead of on data and statistics
  • most promising are a handful of video games. Their genesis was in the Iraq War
  • Philip E. Tetlock, a professor at the University of Pennsylvania’s Wharton School, and his wife and research partner, Barbara Mellers, have for years been studying what they call “superforecasters”: people who manage to sidestep cognitive biases and predict future events with far more accuracy than the pundits
  • Together with collaborators who included staff from Creative Technologies, a company specializing in games and other simulations, and Leidos, a defense, intelligence, and health research company that does a lot of government work, Morewedge devised Missing. Some subjects played the game, which takes about three hours to complete, while others watched a video about cognitive bias. All were tested on bias-mitigation skills before the training, immediately afterward, and then finally after eight to 12 weeks had passed.
  • he said he saw the results as supporting the research and insights of Richard Nisbett. “Nisbett’s work was largely written off by the field, the assumption being that training can’t reduce bias,
  • “The literature on training suggests books and classes are fine entertainment but largely ineffectual. But the game has very large effects. It surprised everyone.”
  • even the positive results reminded me of something Daniel Kahneman had told me. “Pencil-and-paper doesn’t convince me,” he said. “A test can be given even a couple of years later. But the test cues the test-taker. It reminds him what it’s all about.”
  • Morewedge told me that some tentative real-world scenarios along the lines of Missing have shown “promising results,” but that it’s too soon to talk about them.
  • In the future, I will monitor my thoughts and reactions as best I can
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
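The "learning powered by prediction" loop can be caricatured in a few lines. This is not a neural network — just a bigram counter, with a made-up corpus and invented function names — but it shows the same principle in miniature: feed it text, and its next-word guesses sharpen as the counts accumulate.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words have been observed to follow it."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Predict the most frequently observed next word (None if never seen)."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .")
model = train_bigram(corpus)
print(predict_next(model, "the"))   # → cat  ("cat" follows "the" most often)
print(predict_next(model, "sat"))   # → on
```

A real language model replaces the count table with millions of continuously adjusted weights, which is what lets the relationships among words coalesce into the geometric, conceptual model the passage describes.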
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100.
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of a malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
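The A/B testing mentioned here is, at bottom, a simple statistical comparison: show one response style to one group of users and a variant to another, then ask whether the variant's engagement rate is higher than chance would explain. A minimal sketch, using a two-proportion z-test (all engagement numbers are hypothetical):

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's rate genuinely higher than A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return p_a, p_b, z

# Hypothetical engagement metric: users who kept chatting past ten messages
p_a, p_b, z = ab_test(conv_a=480, n_a=5000, conv_b=545, n_b=5000)
print(round(p_a, 3), round(p_b, 3), round(z, 2))   # → 0.096 0.109 2.14
```

A z above roughly 1.96 counts as significant at the conventional 5 percent level; run thousands of such comparisons and each small winner ratchets engagement upward, which is how feed-style optimization works.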
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
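The memorize-then-generalize contrast is easy to caricature (a toy illustration only, not the actual transformer experiment): a lookup table is perfect on the training pairs and helpless beyond them, while the learned rule works everywhere.

```python
# The training data: addition problems the "model" has seen.
train_set = {(a, b): a + b for a in range(5) for b in range(5)}

def memorizer(a, b):
    """Pure memorization: perfect on training data, helpless off it."""
    return train_set.get((a, b))        # None for any unseen pair

def learned_rule(a, b):
    """The generalizing strategy a model pivots to when memory stops paying off."""
    return a + b

print(memorizer(2, 2), learned_rule(2, 2))       # → 4 4
print(memorizer(17, 25), learned_rule(17, 25))   # → None 42
```

The point of the small-transformer result is that the network's predictive loss eventually forces the switch from the first strategy to the second.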
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.”
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?)
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want?”
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie (in one test, it told a TaskRabbit worker that it was a vision-impaired person so that the worker would solve a CAPTCHA for it), it had realized that if it answered honestly, it might not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said.
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run,
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

OpenAI 'was working on advanced model so powerful it alarmed staff' | Technology sector... - 0 views

  • OpenAI was reportedly working on an advanced system before Sam Altman’s sacking that was so powerful it caused safety concerns among staff at the company.
  • The artificial intelligence model triggered such alarm with some OpenAI researchers that they wrote to the board of directors before Altman’s dismissal warning it could threaten humanity, Reuters reported.
  • The model, called Q* – and pronounced as “Q-Star” – was able to solve basic maths problems it had not seen before, according to the tech news site the Information, which added that the pace of development behind the system had alarmed some safety researchers. The ability to solve maths problems would be viewed as a significant development in AI.
Javier E

Opinion | Inflation Isn't Going to Bring Back the 1970s - The New York Times - 0 views

  • In both cases, heavy federal spending (on the war in Vietnam and Great Society programs in the 1960s, on the response to Covid in 2020 and 2021) added to demand. And shocks to global energy and food prices in the 1970s made the inflation problem significantly worse, just as they are doing now.
  • In contrast, efforts by the current Fed chairman, Jerome Powell, and his colleagues to bring down inflation enjoy considerable support from both the White House and Congress, at least so far. As a result, the Fed today has the independence it needs to make policy decisions based solely on the economic data and in the longer-run interests of the economy, not on short-term political considerations.
  • a key difference from the ’60s and ’70s is that the Fed’s views on both the sources of inflation and its own responsibility to control the pace of price increases have changed markedly. Burns, who presided over most of the 1970s inflation, had a cost-push theory of inflation. He believed that inflation was caused primarily by large companies and trade unions, which used their market power to push up prices and wages even in a slow economy. He thought the Fed had little ability to counteract these forces, and as an alternative to raising interest rates, he helped persuade Nixon to set wage and price controls in 1971, which proved a spectacular failure.
  • today’s monetary policymakers understand that as we wait for supply constraints to ease, which they will eventually, the Fed can help reduce inflation by slowing growth in demand. Drawing on the lessons of the past, they also understand that by doing what is needed to get inflation under control, they can help the economy and the job market avoid much more serious instability in the future.
  • Markets and the public appear to understand how the Fed’s approach has changed from the earlier era I described
  • they suggest continued confidence that, over the longer term, the Fed will be able to bring inflation down close to its 2 percent target.
  • This confidence in turn makes the Fed’s job easier, by limiting the risk of an “inflationary psychology,” as Burns once put it, on the part of the public.
  • The degree to which the central bank will have to tighten monetary policy to control our currently high inflation, and the associated risk of an economic slowdown or recession, depends on several factors: how quickly the supply-side problems (high oil prices, supply-chain snarls) subside, how aggregate spending reacts to the tighter financial conditions engineered by the Fed and whether the Fed retains its credibility as an inflation fighter even if inflation takes a while to subside.
Javier E

You Are Going to Get COVID Again … And Again … And Again - The Atlantic - 0 views

  • You’re not just likely to get the coronavirus. You’re likely to get it again and again and again.
  • “I personally know several individuals who have had COVID in almost every wave,” says Salim Abdool Karim, a clinical infectious-diseases epidemiologist and the director of the Center for the AIDS Program of Research in South Africa, which has experienced five meticulously tracked surges, and where just one-third of the population is vaccinated.
  • Her best guess for the future has the virus infiltrating each of us, on average, every three years or so. “Barring some intervention that really changes the landscape,” she said, “we will all get SARS-CoV-2 multiple times in our life.”
  • that would be on par with what we experience with flu viruses, which scientists estimate hit us about every two to five years, less often in adulthood. It also matches up well with the documented cadence of the four other coronaviruses that seasonally trouble humans, and cause common colds.
  • For now, every infection, and every subsequent reinfection, remains a toss of the dice. “Really, it’s a gamble,” says Ziyad Al-Aly, a clinical epidemiologist and long-COVID researcher at Washington University in St. Louis. Vaccination and infection-induced immunity may load the dice against landing on severe disease, but that danger will never go away completely, and scientists don’t yet know what happens to people who contract “mild” COVID over and over again
  • Or maybe not. This virus seems capable of tangling into just about every tissue in the body, affecting organs such as the heart, brain, liver, kidneys, and gut; it has already claimed the lives of millions, while saddling countless others with symptoms that can linger for months or years.
  • considering our current baseline, “less dangerous” could still be terrible—and it’s not clear exactly where we’re headed. When it comes to reinfection, we “just don’t know enough,”
  • Perhaps, as several experts have posited since the pandemic’s early days, SARS-CoV-2 will just become the fifth cold-causing coronavirus.
  • A third or fourth bout might be more muted still; the burden of individual diseases may be headed toward an asymptote of mildness that holds for many years
  • Future versions of SARS-CoV-2 could continue to shape-shift out of existing antibodies’ reach, as coronaviruses often do. But the body is flush with other fighters that are much tougher to bamboozle—among them, B cells and T cells that can quash a growing infection before it spirals out of control
  • Those protections tend to build iteratively, as people see pathogens or vaccines more often. People vaccinated three times over, for instance, seem especially well equipped to duke it out with all sorts of SARS-CoV-2 variants, including Omicron and its offshoots.
  • promising patterns: Second infections and post-vaccination infections “are significantly less severe,” she told me, sometimes to the point where people don’t notice them at all
  • Bodies, wised up to the virus’s quirks, can now react more quickly, clobbering it with sharper and speedier strikes.
  • “There are still very good reasons” to keep exposures few and far between, Landon, of the University of Chicago, told me. Putting off reinfection creates fewer opportunities for harm: The dice are less likely to land on severe disease (or chronic illness) when they’re rolled less often overall. It also buys us time to enhance our understanding of the virus, and improve our tools to fight it.
  • Immunity, though, is neither binary nor permanent. Even if SARS-CoV-2’s assaults are blunted over time, there are no guarantees about the degree to which that happens, or how long it lasts.
  • A slew of factors could end up weighting the dice toward severe disease—among them, a person’s genetics, age, underlying medical conditions, health-care access, and frequency or magnitude of exposure to the virus.
  • for everyone else, no amount of viral dampening can totally eliminate the chance, however small it may be, of getting very sick.
  • Long COVID, too, might remain a possibility with every discrete bout of illness. Or maybe the effects of a slow-but-steady trickle of minor, fast-resolving infections would sum together, and bring about the condition.
  • Every time the body’s defenses are engaged, it “takes a lot of energy, and causes tissue damage,” Thomas told me. Should that become a near-constant barrage, “that’s probably not great for you.”
  • Bodies are resilient, especially when they’re offered time to rest, and she doubts that reinfection with a typically ephemeral virus such as SARS-CoV-2 would cause mounting damage. “The cumulative effect is more likely to be protective than detrimental,” she said, because of the immunity that’s laid down each time.
  • people who have caught the virus twice or thrice may be more likely to become long-haulers than those who have had it just once.
  • Some other microbes, when they reinvade us, can fire up the immune system in unhelpful ways, driving bad bouts of inflammation that burn through the body, or duping certain defensive molecules into aiding, rather than blocking, the virus’s siege. Researchers don’t think SARS-CoV-2 will do the same. But this pathogen is “much more formidable than even someone working on coronaviruses would have expected,
  • Seasonal encounters with pathogens other than SARS-CoV-2 don’t often worry us—but perhaps that’s because we’re still working to understand their toll. “Have we been underestimating long-term consequences from other repeat infections?” Thomas said. “The answer is probably, almost certainly, yes.”
  • the rhythm of reinfection isn’t just about the durability of immunity or the pace of viral evolution. It’s also about our actions and policies, and whether they allow the pathogen to transmit and evolve. Strategies to avoid infection—to make it as infrequent as possible, for as many people as possible—remain options, in the form of vaccination, masking, ventilation, paid sick leave, and more.
  • Gordon and Swartz are both hopeful that the slow accumulation of immunity will also slash people’s chances of developing long COVID.
  • The outlooks of the experts I spoke with spanned the range from optimism to pessimism, though all agreed that uncertainty loomed. Until we know more, none were keen to gamble with the virus—or with their own health. Any reinfection will likely still pose a threat, “even if it’s not the worst-case scenario,” Abdool Karim told me. “I wouldn’t want to put myself in that position.”
Javier E

Dispute Within Art Critics Group Over Diversity Reveals a Widening Rift - The New York ... - 0 views

  • The need for change in museums was pointed out in the 2022 Burns Halperin Report, published by Artnet News in December, that analyzed more than a decade of data from over 30 cultural institutions. It found that just 11 percent of acquisitions at U.S. museums were by female artists and only 2.2 percent were by Black American artists
  • Julia Halperin, one of the study’s organizers, who recently left her position as Artnet’s executive editor, said that the industry has an asymmetric approach to diversity. “The pool of artists is diversifying somewhat, but the pool of staff critics has not,” she said.
  • the matter of diversity in criticism is compounded by the fact that opportunities for all critics have been diminished.
  • ...12 more annotations...
  • While most editors recognize the importance of criticism in helping readers decipher contemporary art, and the multibillion-dollar industry it has created, venues for such writing are shrinking. Over the years, newspapers including The Philadelphia Inquirer and The Miami Herald have trimmed critics’ jobs.
  • In December, the Penske Media Corporation announced that it had acquired Artforum, a contemporary art journal, and was bringing the title under the same ownership as its two competitors, ARTnews and Art in America. Its sister publication, Bookforum, was not acquired and ceased operations. Through the pandemic, other outlets have shuttered, including popular blogs run by SFMOMA and the Walker Art Center in Minneapolis as well as smaller magazines called Astra and Elephant.
  • (National newspapers with art critics on staff include The New York Times, The Los Angeles Times, The Boston Globe and The Washington Post. )
  • David Velasco, editor in chief of Artforum, said in an interview that he hoped the magazine’s acquisition would improve the publication’s financial picture. The magazine runs nearly 700 reviews a year, Velasco said; about half of those run online and pay $50 for roughly 250 words. “Nobody I know who knows about art does it for the money,” Velasco said, “but I would love to arrive at a point where people could.”
  • Noah Dillon, who was on the AICA-USA board until he resigned last year, has been reluctant to recommend that anyone follow his path to become a critic. Not that they could. The graduate program in art writing that he attended at the School of Visual Arts in Manhattan also closed during the pandemic.
  • “It’s crazy that the ideal job nowadays is producing catalog essays for galleries, which are basically just sales pitches,” Dillon said in a phone interview. “Critical thinking about art is not valued financially.”
  • Large galleries — including Gagosian, Hauser & Wirth, and Pace Gallery — now produce their own publications with interviews and articles sometimes written by the same freelance critics who simultaneously moonlight as curators and marketers. Within its membership, AICA-USA has a number of writers who belong to all three categories.
  • According to Lilly Wei, a longtime AICA-USA board member who recently resigned, the group explored different ways of protecting writers in the industry. There were unrealized plans to turn the organization into a union; others hoped to create a permanent emergency fund to keep financially struggling critics afloat. She said the organization has instead canceled initiatives, including an awards program for the best exhibitions across the country.
  • “It just came down to not having enough money,” said Terence Trouillot, a senior editor at Frieze, a contemporary art magazine. He spent nearly three years on the AICA-USA board, resigning in 2022. He said that initiatives to re-energize the group “were just moving too slowly.”
  • The organization has yearly dues of $115 and provides free access to many museums. But some members complained that the fee was too expensive for young critics, yet not enough to support significant programming.
  • Efforts to revive AICA-USA are continuing. In January, Jasmine Amussen joined the organization’s board to help rethink the meaning of criticism for a younger generation.
  • Amussen, 33, is the editor of Burnaway, which focuses on criticism in the American South and often features young Black artists. (The magazine started in 2008 in response to layoffs at the Atlanta Journal-Constitution’s culture section and now runs as a nonprofit with four full-time employees and a budget that mostly consists of grants.)
Javier E

Did the First Americans Arrive via Land Bridge? This Geneticist Says No. - The New York... - 0 views

  • In her new book, “Origin: A Genetic History of the Americas,” Raff beautifully integrates new data from different sciences (archaeology, genetics, linguistics) and different ways of knowing, including Indigenous oral traditions, in a masterly retelling of the story of how, and when, people reached the Americas.
  • Raff skillfully reveals how well-dated archaeological sites, including recently announced 22,000-year-old human footprints from White Sands, N.M., are at odds with the Clovis first hypothesis.
  • the path to the Americas was coastal (the Kelp Highway hypothesis) rather than inland, and that Beringia was not a bridge but a homeland — twice the size of Texas — inhabited for millenniums by the ancestors of the First Peoples of the Americas.
  • ...4 more annotations...
  • Raff effectively models how science is done, how hypotheses are tested, and how new data are used to refute old ideas and generate new ones.
  • Raff takes the reader from underground caverns in Belize to a clean lab at the University of Kansas where ancient DNA is tediously teased from old bones. She explains difficult-to-understand concepts — geoarchaeology, coalescence times, biodistance — with well-placed sidebars. The book is richly referenced, and informative footnotes and endnotes give readers an opportunity to take a deeper dive if they wish.
  • Given the fast and furious pace of discovery in this field, Raff is clear that not everyone will agree with her interpretations of the data. “All scientists must hold themselves open to the possibility that we could be wrong, and it may very well be that in five, 10 or 20 years, this book will be as out of date as any other,” she writes. “That possibility is what makes working in this field so rewarding.” That, she explains, is how science is done.
  • Jennifer Raff is a well-published scholar and accomplished scientific communicator who has contributed important insights into the genetic history and movement patterns of Indigenous Americans. She is at the forefront of a culture change in our science. And now she has written the book anyone interested in the peopling of the Americas must read.
Javier E

Ukraine Crisis Kicks Off New Superpower Struggle Among U.S., Russia and China - WSJ - 0 views

  • Russia’s audacious military mobilization in and around Ukraine is the first major skirmish of a new order in international politics, with three major powers jostling for position in ways that threaten America’s primacy.
  • Russia and China have built a thriving partnership based in part on a shared interest in diminishing U.S. power. Unlike the Sino-Soviet bloc of the 1950s, Russia is a critical gas supplier to Europe, while China isn’t an impoverished, war-ravaged partner but the world’s manufacturing powerhouse with an expanding military.
  • To do this, Mr. Putin shifted military units from Russia’s border with China, showing confidence in his relations with Beijing. The two powers, in effect, are coordinating to reshape the global order to their advantage, though their ties stop short of a formal alliance.
  • ...18 more annotations...
  • Russian President Vladimir Putin is demanding that the West rewrite the post-Cold War security arrangements for Europe and demonstrated that Russia has the military capability to impose its will despite Western objections and economic sanctions.
  • “We all thought we were looking at a Europe whole, free and at peace indefinitely,” said Michele Flournoy, who served as the Pentagon’s top policy official during the Obama administration. “We knew that Russia would conduct gray zone operations and that Putin would use his KGB playbook to create instability on his periphery. But a wholesale invasion of a sovereign country to reorient its government is a different moment.”
  • “And we’re seeing that while Beijing doesn’t really like Putin’s tactics, they’re willing to band together as authoritarian states against the Western democracies,” Ms. Flournoy added. “We are going to see more and more of that in the future.”
  • China’s Communist Party leadership also saw pro-democracy protest movements in former Soviet republics as U.S.-engineered plots that could ultimately be used against Beijing.
  • For much of the past decade, the U.S. security establishment has been taking note of what the Pentagon in 2015 called the “re-emergence of great power competition” and has shifted its emphasis away from counterterrorism operations in the Middle East and Southwest Asia.
  • Defense Secretary Lloyd Austin has repeatedly cast China as the “pacing challenge” while Russia was seen as the lesser longer-term danger.
  • Even with annual defense budgets that soared over $700 billion, coping with an urgent Russian-generated crisis while preparing for a Chinese threat whose peak is still years away presents an enormous challenge for the Pentagon.
  • ”The United States is particularly at risk of being overwhelmed should its military be forced to fight on two or more fronts simultaneously,” said a Congressionally mandated study of the Pentagon’s strategy that was issued in 2018
  • The era of nuclear reductions may come to an end as the U.S. military establishment argues for a large enough nuclear arsenal to deter both Russia’s formidable nuclear weaponry and China’s rapidly growing nuclear forces, which aren’t limited by any arms-control agreement.
  • “The United States is going to have to get used again to operating in multiple theaters simultaneously—not just militarily, but in terms of psychology and foreign-policy making,”
  • Already, debates are emerging among U.S. defense experts on whether the Pentagon should give equal weight to the twin challenges from Beijing and Moscow or focus more on the Pacific.
  • Should the West impose crippling sanctions on Russian banks and major companies, Moscow is likely to become more reliant on Beijing, which has issued a digital currency and is building a payments system separate from the West’s.
  • “It is already ending the amnesia about the importance of energy security,” said Daniel Yergin, vice chairman of research firm IHS Markit. “It means a new emphasis on diversification of energy sources for Europe and a new look at U.S. domestic and international energy policies.”
  • Advocates of using energy as a geopolitical tool say Washington should promote investment in U.S. oil and natural gas and approve new LNG export terminals and pipelines in the United States.
  • The 1997 NATO-Russia Founding Act precludes the alliance from permanently stationing additional substantial combat forces on the territory of its new Eastern and Central European members, but could now be repealed.
  • A recent poll by the European Council on Foreign Relations noted most Europeans see the Ukraine crisis as a broader threat to Europe. Some current and former officials, however, worry that the alliance’s solidarity could fray in the years ahead as it debates the need for greater military spending and wrestles with whether its military ties with Georgia might stir new confrontations with Moscow.
  • a report from the Alphen Group, written by former officials and other experts, urges that European members of the alliance and Canada provide for 50% of NATO’s minimum military requirements by 2030 so the U.S. can focus more on deterring China.
  • “Everybody’s unified right now and outraged about what the Russians are doing,” said Alexander Vershbow, a former U.S. ambassador to NATO who also served as the alliance’s deputy secretary-general from 2012 to 2016. “But when we get down to making longer-term commitments to strengthen NATO’s defense posture and potentially revisit nuclear issues, it could become very divisive.”
criscimagnael

Amazon Rainforest May Be Approaching a Critical Tipping Point, Study Finds - The New Yo... - 0 views

  • The Amazon is losing its ability to recover from disturbances like droughts and land-use changes, scientists reported Monday, adding to concern that the rainforest is approaching a critical threshold beyond which much of it will be replaced by grassland, with vast consequences for biodiversity and climate change.
  • The scientists said their research did not pinpoint when this threshold, which they described as a tipping point, might be reached.
  • Losing the rainforest could result in up to 90 billion tons of heat-trapping carbon dioxide getting put back into the atmosphere, he said, equivalent to several years of global emissions. That would make limiting global warming more difficult.
  • ...7 more annotations...
  • But some research has concluded that deforestation, drying and other factors could lead to substantial forest dieback in the Amazon by the end of this century.
  • Covering more than two million square miles in Brazil and neighboring countries, the Amazon is the world’s largest rainforest, and serves a crucial role in mitigating climate change in most years by taking in more carbon dioxide from the atmosphere than it releases.
  • “That lack of resilience shows that, indeed, there is only so much of a beating that this forest can take,”
  • But climate change, together with widespread deforestation and burning for agriculture and ranching, has taken a toll on the Amazon, making it warmer and drier. The region, one of the wettest on Earth, has experienced three droughts since 2000.
  • “It’s reducing the ability to bounce back.”
  • The researchers found that more than three-quarters of the untouched rainforest lost resiliency over that time, and that the loss was greatest in areas that were drier or closer to human activities like logging. The study was published in the journal Nature Climate Change.
  • About 17 percent of the Amazon has been deforested over the past half-century, and while the pace of deforestation slowed for some years in Brazil, it has picked up again more recently