
Javier E

Reading in the Time of Book Bans and A.I. - The New York Times

  • We are in the throes of a reading crisis.
  • While right and left are hardly equivalent in their stated motivations, they share the assumption that it’s important to protect vulnerable readers from reading the wrong things.
  • But maybe the real problem is that children aren’t being taught to read at all.
  • In May, David Banks, the chancellor of New York City’s public schools, for many years a stronghold of “whole language” instruction, announced a sharp pivot toward phonics, a major victory for the “science of reading” movement and a blow to devotees of entrenched “balanced literacy” methods.
  • As corporate management models and zealous state legislatures refashion the academy into a gated outpost of the gig economy, the humanities have lost their luster for undergraduates. According to reports in The New Yorker and elsewhere, fewer and fewer students are majoring in English, and many of those who do (along with their teachers) have turned away from canonical works of literature toward contemporary writing and pop culture. Is anyone reading “Paradise Lost” anymore? Are you?
  • While we binge and scroll and D.M., the robots, who are doing more and more of our writing, may also be taking over our reading.
  • There is so much to worry about. A quintessentially human activity is being outsourced to machines that don’t care about phonics or politics or beauty or truth. A precious domain of imaginative and intellectual freedom is menaced by crude authoritarian politics. Exposure to the wrong words is corrupting our children, who aren’t even learning how to decipher the right ones. Our attention spans have been chopped up and commodified, sold off piecemeal to platforms and algorithms. We’re too busy, too lazy, too preoccupied to lose ourselves in books.
  • the fact that the present situation has a history doesn’t mean that it isn’t real
  • the reading crisis isn’t simply another culture-war combat zone. It reflects a deep ambivalence about reading itself, a crack in the foundations of modern consciousness.
  • Just what is reading, anyway? What is it for? Why is it something to argue and worry about? Reading isn’t synonymous with literacy, which is one of the necessary skills of contemporary existence. Nor is it identical with literature, which designates a body of written work endowed with a special if sometimes elusive prestige.
  • Is any other common human undertaking so riddled with contradiction? Reading is supposed to teach us who we are and help us forget ourselves, to enchant and disenchant, to make us more worldly, more introspective, more empathetic and more intelligent. It’s a private, even intimate act, swathed in silence and solitude, and at the same time a social undertaking. It’s democratic and elitist, soothing and challenging, something we do for its own sake and as a means to various cultural, material and moral ends.
  • Fun and fundamental: Together, those words express a familiar utilitarian, utopian promise — the faith that what we enjoy doing will turn out to be what we need to do, that our pleasures and our responsibilities will turn out to be one and the same. It’s not only good; it’s good for you.
  • Reading is, fundamentally, both a tool and a toy. It’s essential to social progress, democratic citizenship, good government and general enlightenment.
  • It’s also the most fantastically, sublimely, prodigiously useless pastime ever invented
  • Teachers, politicians, literary critics and other vested authorities labor mightily to separate the edifying wheat from the distracting chaff, to control, police, correct and corral the transgressive energies that propel the turning of pages.
  • His despair mirrors his earlier exhilaration and arises from the same source. “I envied my fellow-slaves for their stupidity. I have often wished myself a beast. I preferred the condition of the meanest reptile to my own. Any thing, no matter what, to get rid of thinking!”
  • Reading is a relatively novel addition to the human repertoire — less than 6,000 years old — and the idea that it might be available to everybody is a very recent innovation
  • Written language, associated with the rise of states and the spread of commerce, was useful for trade, helpful in the administration of government and integral to some religious practices. Writing was a medium for lawmaking, record-keeping and scripture, and reading was the province of priests, bureaucrats and functionaries.
  • For most of history, that is, universal literacy was a contradiction in terms. The Latin word literatus designated a member of the learned elite
  • Anyone could learn to do it, but the mechanisms of learning were denied to most people on the grounds of caste, occupation or gender.
  • According to Steven Roger Fischer’s lively and informative “A History of Reading” (2003), “Western Europe began the transition from an oral to a literate society in the early Middle Ages, starting with society’s top rungs — aristocracy and clergy — and finally including everyone else around 1,200 years later.”
  • The print revolution catalyzed a global market that flourishes to this day: Books became commodities, and readers became consumers.
  • For Fischer, as for many authors of long-range synthetic macrohistories, the story of reading is a chronicle of progress, the almost mythic tale of a latent superpower unlocked for the benefit of mankind.
  • The crisis is what happens either when those efforts succeed or when they fail. Everyone likes reading, and everyone is afraid of it.
  • For one thing, the older, restrictive model of literacy as an elite prerogative proved to be tenacious
  • The novel, more than any other genre, catered to this market. Like every other development in modern popular culture, it provoked a measure of social unease. Novels, at best a source of harmless amusement and mild moral instruction, were at worst — from the pens of the wrong writers, or in the hands of the wrong readers — both invitations to vice and a vice unto themselves
  • More consequential — and more revealing of the destabilizing power of reading — was the fear of literacy among the laboring classes in Europe and America. “Reading, writing and arithmetic,” the Enlightenment political theorist Bernard Mandeville asserted, were “very pernicious to the poor” because education would breed restlessness and discontent
  • “It was unlawful, as well as unsafe, to teach a slave to read,” Frederick Douglass writes in his “Narrative of the Life” recalling the admonitions of one of his masters, whose wife had started teaching young Frederick his letters. If she persisted, the master explained, their chattel would “become unmanageable, and of no value to his master. As to himself, it could do him no good, but a great deal of harm. It would make him discontented and unhappy.”
  • “As I read and contemplated the subject, behold! that very discontentment which Master Hugh had predicted would follow my learning to read had already come, to torment and sting my soul to unutterable anguish. As I writhed under it, I would at times feel that learning to read had been a curse rather than a blessing.”
  • “If extraordinary human faculties and powers do lie dormant until a social innovation calls them into life,” he writes, “perhaps this might help to explain humanity’s constant advancement.” “Reading,” he concludes, “had become our union card to humanity.”
  • Douglass’s literary genius resides in the way he uses close attention to his own situation to arrive at the essence of things — to crack the moral nut of slavery and, in this case, to peel back the epistemological husk of freedom.
  • He has freed his mind, but the rest has not followed. In time it would, but freedom itself brings him uncertainty and terror, an understanding of his own humanity that is embattled and incomplete.
  • Here, the autobiographical touches on the mythic, specifically on the myth of Prometheus, whose theft of fire — a curse as well as a blessing bestowed on a bumbling, desperate species — is a primal metaphor for reading.
  • A school, however benevolently conceived and humanely administered, is a place of authority, where the energies of the young are regulated, their imaginations pruned and trained into conformity. As such, it will inevitably provoke resistance, rebellion and outright refusal on the part of its wards
  • Schools exist to stifle freedom, and also to inculcate it, a dialectic that is the essence of true education. Reading, more than any other discipline, is the engine of this process, precisely because it escapes the control of those in charge.
  • Apostles of reading like to quote Franz Kafka’s aphorism that “a book must be the ax for the frozen sea within us.” By itself, the violence of the metaphor is tempered by its therapeutic implication.
  • Kafka’s previous sentence: “What we need are books that hit us like the most painful misfortune, like the death of someone we loved more than we love ourselves, that make us feel as though we had been banished to the woods, far from any human presence, like a suicide.”
  • Are those the books you want in your child’s classroom? To read in this way is to go against the grain, to feel oneself at odds, alienated, alone. Schools exist to suppress those feelings, to blunt the ax and gently thaw the sea
  • Chaotic reading is something else. It isn’t bad so much as unjustified, useless, unreasonable, ungoverned. Defenses of this kind of reading, which are sometimes the memoirs of a certain kind of reader, favor words like promiscuous, voracious, indiscriminate and compulsive.
  • Roland Barthes distinguished between two kinds of literary work:
  • Text of pleasure: the text that contents, fills, grants euphoria: the text that comes from culture and does not break with it, is linked to a comfortable practice of reading. Text of bliss: the text that imposes a state of loss, the text that discomforts (perhaps to the point of a certain boredom), unsettles the reader’s historical, cultural, psychological assumptions, the consistency of his tastes, values, memories, brings to a crisis his relation with language.
  • he is really describing modalities of reading. To a member of the slaveholding Southern gentry, “The Columbian Orator” is a text of pleasure, a book that may challenge and surprise him in places, but that does not undermine his sense of the world or his place in it. For Frederick Douglass, it is a text of bliss, “bringing to crisis” (as Barthes would put it) his relation not only to language but to himself.
  • If you’ll forgive a Dungeons and Dragons reference, it might help to think of these types of reading as lawful and chaotic.
  • Lawful reading rests on the certainty that reading is good for us, and that it will make us better people. We read to see ourselves represented, to learn about others, to find comfort and enjoyment and instruction. Reading is fun! It’s good and good for you.
  • That is important work, but it’s equally critical for that work to be subverted, for the full destructive potential of reading to lie in reach of innocent hands.
  • Bibliophilia is lawful. Bibliomania is chaotic.
  • The point is not to choose between them: This is a lawful publication staffed by chaotic readers. In that way, it resembles a great many English departments, bookstores, households and classrooms. Here, the crisis never ends. Or rather, it will end when we stop reading. Which is why we can’t.

AI is already writing books, websites and online recipes - The Washington Post

  • Experts say those books are likely just the tip of a fast-growing iceberg of AI-written content spreading across the web as new language software allows anyone to rapidly generate reams of prose on almost any topic. From product reviews to recipes to blog posts and press releases, human authorship of online material is on track to become the exception rather than the norm.
  • Semrush, a leading digital marketing firm, recently surveyed its customers about their use of automated tools. Of the 894 who responded, 761 said they’ve at least experimented with some form of generative AI to produce online content, while 370 said they now use it to help generate most if not all of their new content, according to Semrush Chief Strategy Officer Eugene Levin.
  • What that may mean for consumers is more hyper-specific and personalized articles — but also more misinformation and more manipulation, about politics, products they may want to buy and much more.
  • As AI writes more and more of what we read, vast, unvetted pools of online data may not be grounded in reality, warns Margaret Mitchell, chief ethics scientist at the AI start-up Hugging Face
  • “The main issue is losing track of what truth is,” she said. “Without grounding, the system can make stuff up. And if it’s that same made-up thing all over the world, how do you trace it back to what reality is?”
  • a raft of online publishers have been using automated writing tools based on ChatGPT’s predecessors, GPT-2 and GPT-3, for years. That experience shows that a world in which AI creations mingle freely and sometimes imperceptibly with human work isn’t speculative; it’s flourishing in plain sight on Amazon product pages and in Google search results.
  • “If you have a connection to the internet, you have consumed AI-generated content,” said Jonathan Greenglass, a New York-based tech investor focused on e-commerce. “It’s already here.
  • “In the last two years, we’ve seen this go from being a novelty to being pretty much an essential part of the workflow,”
  • the news credibility rating company NewsGuard identified 49 news websites across seven languages that appeared to be mostly or entirely AI-generated.
  • The sites sport names like Biz Breaking News, Market News Reports, and bestbudgetUSA.com; some employ fake author profiles and publish hundreds of articles a day, the company said. Some of the news stories are fabricated, but many are simply AI-crafted summaries of real stories trending on other outlets.
  • Ingenio, the San Francisco-based online publisher behind sites such as horoscope.com and astrology.com, is among those embracing automated content. While its flagship horoscopes are still human-written, the company has used OpenAI’s GPT language models to launch new sites such as sunsigns.com, which focuses on celebrities’ birth signs, and dreamdiary.com, which interprets highly specific dreams.
  • Ingenio used to pay humans to write birth sign articles on a handful of highly searched celebrities like Michael Jordan and Ariana Grande, said Josh Jaffe, president of its media division. But delegating the writing to AI allows sunsigns.com to cheaply crank out countless articles on not-exactly-A-listers
  • In the past, Jaffe said, “We published a celebrity profile a month. Now we can do 10,000 a month.”
  • It isn’t just text. Google users have recently posted examples of the search engine surfacing AI-generated images. For instance, a search for the American artist Edward Hopper turned up an AI image in the style of Hopper, rather than his actual art, as the first result.
  • Jaffe said he isn’t particularly worried that AI content will overwhelm the web. “It takes time for this content to rank well” on Google, he said — meaning that it appears on the first page of search results for a given query, which is critical to attracting readers. And it works best when it appears on established websites that already have a sizable audience: “Just publishing this content doesn’t mean you have a viable business.”
  • Google clarified in February that it allows AI-generated content in search results, as long as the AI isn’t being used to manipulate a site’s search rankings. The company said its algorithms focus on “the quality of content, rather than how content is produced.”
  • Reputations are at risk if the use of AI backfires. CNET, a popular tech news site, took flak in January when fellow tech site Futurism reported that CNET had been using AI to create articles or add to existing ones without clear disclosures. CNET subsequently investigated and found that many of its 77 AI-drafted stories contained errors.
  • Jaffe said his company discloses its use of AI to readers, and he promoted the strategy at a recent conference for the publishing industry. “There’s nothing to be ashamed of,” he said. “We’re actually doing people a favor by leveraging generative AI tools” to create niche content that wouldn’t exist otherwise.
  • BuzzFeed, which pioneered a media model built around reaching readers directly on social platforms like Facebook, announced in January it planned to make “AI inspired content” part of its “core business,” such as using AI to craft quizzes that tailor themselves to each reader. BuzzFeed announced last month that it is laying off 15 percent of its staff and shutting down its news division, BuzzFeed News.
  • it’s finding traction in the murkier worlds of online clickbait and affiliate marketing, where success is less about reputation and more about gaming the big tech platforms’ algorithms.
  • That business is driven by a simple equation: how much it costs to create an article vs. how much revenue it can bring in. The main goal is to attract as many clicks as possible, then serve the readers ads worth just fractions of a cent on each visit — the classic form of clickbait
  • In the past, such sites often outsourced their writing to businesses known as “content mills,” which harness freelancers to generate passable copy for minimal pay. Now, some are bypassing content mills and opting for AI instead.
  • “Previously it would cost you, let’s say, $250 to write a decent review of five grills,” Semrush’s Levin said. “Now it can all be done by AI, so the cost went down from $250 to $10.”
  • The problem, Levin said, is that the wide availability of tools like ChatGPT means more people are producing similarly cheap content, and they’re all competing for the same slots in Google search results or Amazon’s on-site product reviews
  • So they all have to crank out more and more article pages, each tuned to rank highly for specific search queries, in hopes that a fraction will break through. The result is a deluge of AI-written websites, many of which are never seen by human eyes.
  • But CNET’s parent company, Red Ventures, is forging ahead with plans for more AI-generated content, which has also been spotted on Bankrate.com, its popular hub for financial advice. Meanwhile, CNET in March laid off a number of employees, a move it said was unrelated to its growing use of AI.
  • The rise of AI is already hurting the business of Textbroker, a leading content platform based in Germany and Las Vegas, said Jochen Mebus, the company’s chief revenue officer. While Textbroker prides itself on supplying credible, human-written copy on a huge range of topics, “People are trying automated content right now, and so that has slowed down our growth,”
  • Mebus said the company is prepared to lose some clients who are just looking to make a “fast dollar” on generic AI-written content. But it’s hoping to retain those who want the assurance of a human touch, while it also trains some of its writers to become more productive by employing AI tools themselves.
  • He said a recent survey of the company’s customers found that 30 to 40 percent still want exclusively “manual” content, while a similar-size chunk is looking for content that might be AI-generated but human-edited to check for tone, errors and plagiarism.
  • Levin said Semrush’s clients have also generally found that AI is better used as a writing assistant than a sole author. “We’ve seen people who even try to fully automate the content creation process,” he said. “I don’t think they’ve had really good results with that. At this stage, you need to have a human in the loop.”
  • For Cowell, whose book title appears to have inspired an AI-written copycat, the experience has dampened his enthusiasm for writing. “My concern is less that I’m losing sales to fake books, and more that this low-quality, low-priced, low-effort writing is going to have a chilling effect on humans considering writing niche technical books in the future,”
  • It doesn’t help, he added, knowing that “any text I write will inevitably be fed into an AI system that will generate even more competition.”
  • Amazon removed the impostor book, along with numerous others by the same publisher, after The Post contacted the company for comment.
  • AI-written books aren’t against Amazon’s rules, per se, and some authors have been open about using ChatGPT to write books sold on the site.
  • “Amazon is constantly evaluating emerging technologies and innovating to provide a trustworthy shopping experience for our customers,”

Inside the final seconds of a deadly Tesla Autopilot crash - Washington Post

  • In a Riverside, Calif., courtroom last month in a lawsuit involving another fatal crash where Autopilot was allegedly involved, a Tesla attorney held a mock steering wheel before the jury and emphasized that the driver must always be in control. Autopilot “is basically just fancy cruise control,” he said.
  • Tesla CEO Elon Musk has painted a different reality, arguing that his technology is making the roads safer: “It’s probably better than a person right now,” Musk said of Autopilot during a 2016 conference call with reporters.
  • In a different case involving another fatal Autopilot crash, a Tesla engineer testified that a team specifically mapped the route the car would take in the video. At one point during testing for the video, a test car crashed into a fence, according to Reuters. The engineer said in a deposition that the video was meant to show what the technology could eventually be capable of — not what cars on the road could do at the time.
  • NHTSA said it has an “active investigation” of Autopilot. “NHTSA generally does not comment on matters related to open investigations,” NHTSA spokeswoman Veronica Morales said in a statement. In 2021, the agency adopted a rule requiring carmakers such as Tesla to report crashes involving their driver-assistance systems. Beyond the data collection, though, there are few clear legal limitations on how this type of advanced driver-assistance technology should operate and what capabilities it should have.
  • “Tesla has decided to take these much greater risks with the technology because they have this sense that it’s like, ‘Well, you can figure it out. You can determine for yourself what’s safe’ — without recognizing that other road users don’t have that same choice,” former NHTSA administrator Steven Cliff said in an interview. “If you’re a pedestrian, [if] you’re another vehicle on the road,” he added, “do you know that you’re unwittingly an object of an experiment that’s happening?”
  • Banner researched Tesla for years before buying a Model 3 in 2018, his wife, Kim, told federal investigators. Around the time of his purchase, Tesla’s website featured a video showing a Tesla navigating the curvy roads and intersections of California while a driver sits in the front seat, hands hovering beneath the wheel. The video, recorded in 2016, is still on the site today. “The person in the driver’s seat is only there for legal reasons,” the video says. “He is not doing anything. The car is driving itself.”
  • Musk made a similar assertion about a more sophisticated form of Autopilot called Full Self-Driving on an earnings call in July. “Now, I know I’m the boy who cried FSD,” he said. “But man, I think we’ll be better than human by the end of this year.”
  • While the video concerned Full Self-Driving, which operates on surface streets, the plaintiffs in the Banner case argue Tesla’s “marketing does not always distinguish between these systems.”
  • Not only is the marketing misleading, plaintiffs in several cases argue, the company gives drivers a long leash when deciding when and how to use the technology. Though Autopilot is supposed to be enabled in limited situations, it sometimes works on roads it’s not designed for. It also allows drivers to go short periods without touching the wheel and to set cruising speeds well above posted speed limits.
  • Identifying semi-trucks is a particular deficiency that engineers have struggled to solve since Banner’s death, according to a former Autopilot employee who spoke on the condition of anonymity for fear of retribution.
  • Tesla complicated the matter in 2021 when it eliminated radar sensors from its cars, The Post previously reported, making vehicles such as semi-trucks appear two-dimensional and harder to parse.
  • “If a system turns on, then at least some users will conclude it must be intended to work there,” Koopman said. “Because they think if it wasn’t intended to work there, it wouldn’t turn on.” Andrew Maynard, a professor of advanced technology transitions at Arizona State University, said customers probably just trust the technology. “Most people just don’t have the time or ability to fully understand the intricacies of it, so at the end they trust the company to protect them,” he said.

'Erase Gaza': War Unleashes Incendiary Rhetoric in Israel - The New York Times

  • “We are fighting human animals, and we are acting accordingly,” said Yoav Gallant, the defense minister, two days after the attacks, as he described how the Israeli military planned to eradicate Hamas in Gaza.
  • “We’re fighting Nazis,” declared Naftali Bennett, a former prime minister.
  • “You must remember what Amalek has done to you, says our Holy Bible — we do remember,” said Prime Minister Benjamin Netanyahu, referring to the ancient enemy of the Israelites, in scripture interpreted by scholars as a call to exterminate their “men and women, children and infants.”
  • Inflammatory language has also been used by journalists, retired generals, celebrities, and social media influencers, according to experts who track the statements. Calls for Gaza to be “flattened,” “erased” or “destroyed” had been mentioned about 18,000 times since Oct. 7 in Hebrew posts on X,
  • The cumulative effect, experts say, has been to normalize public discussion of ideas that would have been considered off limits before Oct. 7: talk of “erasing” the people of Gaza, ethnic cleansing, and the nuclear annihilation of the territory.
  • Itamar Ben-Gvir, a right-wing settler who went from fringe figure to minister of national security in Mr. Netanyahu’s cabinet, has a long history of making incendiary remarks about Palestinians. He said in a recent TV interview that anyone who supports Hamas should be “eliminated.”
  • The idea of a nuclear strike on Gaza was raised last week by another right-wing minister, Amichay Eliyahu, who told a Hebrew radio station that there was no such thing as noncombatants in Gaza. Mr. Netanyahu suspended Mr. Eliyahu, saying that his comments were “disconnected from reality.”
  • Mr. Netanyahu says that the Israeli military is trying to prevent harm to civilians. But with the death toll rising to more than 11,000, according to the Gaza health ministry, those claims are being met with skepticism, even in the United States,
  • Such reassurances are also belied by the language Mr. Netanyahu uses with audiences in Israel. His reference to Amalek came in a speech delivered in Hebrew on Oct. 28 as Israel was launching the ground invasion. While some Jewish scholars argue that the scripture’s message is metaphoric not literal, his words resonated widely, as video of his speech was shared on social media, often by critics
  • “These are not just one-off statements, made in the heat of the moment,”
  • “When ministers make statements like that,” Mr. Sfard added, “it opens the door for everyone else.”
  • “Erase Gaza. Don’t leave a single person there,” Mr. Golan said in an interview with Channel 14 on Oct. 15.
  • “I don’t call them human animals because that would be insulting to animals,” Ms. Netanyahu said during a radio interview on Oct. 10, referring to Hamas
  • In the West Bank last week, several academics and officials cited Mr. Eliyahu’s remark about dropping an atomic bomb on Gaza as evidence of Israel’s intention to clear the enclave of all Palestinians — a campaign they call a latter-day nakba.
  • On Saturday, the Israeli agriculture minister, Avi Dichter, said that the military campaign in Gaza was explicitly designed to force the mass displacement of Palestinians. “We are now rolling out the Gaza nakba,” he said in a television interview. “Gaza nakba 2023.”
  • The rise in incendiary statements comes against a backdrop of rising violence in the West Bank. Since Oct. 7, according to the United Nations, Israeli soldiers have killed 150 Palestinians, including 44 children, in clashes.
  • the use of inflammatory language by Israeli leaders is not surprising, and even understandable, given the brutality of the Hamas attacks, which inflicted collective and individual trauma on Israelis.
  • “People in this situation look for very, very clear answers,” Professor Halperin said. “You don’t have the mental luxury of complexity. You want to see a world of good guys and bad guys.”
  • “Leaders understand that,” he added, “and it leads them to use this kind of language, because this kind of language has an audience.”
  • Casting the threat posed by Hamas in stark terms, Professor Halperin said, also helps the government ask people to make sacrifices for the war effort: the compulsory mobilization of 360,000 reservists, the evacuation of 126,000 people from border areas in the north and south, and the shock to the economy.
  • It will also make Israelis more inured to the civilian death toll in Gaza, which has isolated Israel around the world, he added. A civilian death toll of 10,000 or 20,000, he said, could seem to “the average Israeli that it’s not such a big deal.”
  • In the long run, Mr. Sfard said, such language dooms the chance of ending the conflict with the Palestinians, erodes Israel’s democracy and breeds a younger generation that is “easily using the language in their discussion with their friends.”
  • “Once a certain rhetoric becomes legitimized, turning the wheel back requires a lot of education,” he said. “There is an old Jewish proverb: ‘A hundred wise men will struggle a long time to take out a stone that one stupid person dropped into the well.’”

Trump crosses a crucial line - The Atlantic

  • Fascism is not mere oppression. It is a more holistic ideology that elevates the state over the individual (except for a sole leader, around whom there is a cult of personality), glorifies hypernationalism and racism, worships military power, hates liberal democracy, and wallows in nostalgia and historical grievances. It asserts that all public activity should serve the regime, and that all power must be gathered in the fist of the leader and exercised only by his party.
  • Add the language in these speeches to all of the programmatic changes Trump and his allies have threatened to enact once he’s back in office—establishing massive detention camps for undocumented people, using the Justice Department against anyone who dares to run against him, purging government institutions, singling out Christianity as the state’s preferred religion, and many other actions—and it’s hard to describe it all as generic “authoritarianism.” Trump no longer aims to be some garden-variety supremo; he is now promising to be a threat to every American he identifies as an enemy—and that’s a lot of Americans
  • We will drive out the globalists, we will cast out the communists, Marxists, fascists. We will throw off the sick political class that hates our country … On Veterans Day, we pledge to you that we will root out the communists, Marxists, fascists and the radical left thugs that live like vermin within the confines of our country, that lie and steal and cheat on elections and will do anything possible … legally or illegally to destroy America and to destroy the American dream.
  • ...4 more annotations...
  • According to some reports, he never expected to win in 2016. But even then, in the run-up to the election, Trump’s opponents were already calling him a fascist. I counseled against such usage at the time, because Trump, as a person and as a public figure, is just so obviously ridiculous; fascists, by contrast, are dangerously serious people, and in many circumstances, their leaders have been unnervingly tough and courageous. Trump—whiny, childish, unmanly—hardly fits that bill. (A rare benefit of his disordered character is that his defensiveness and pettiness likely continue to limit the size of his personality cult.)
  • Unfortunately, the overuse of fascist (among other charges) quickly wore out the part of the public’s eardrums that could process such words.
  • Here I want to caution my fellow citizens. Trump, whether from intention or stupidity or fear, has identified himself as a fascist under almost any reasonable definition of the word.
  • He is also constrained by circumstance: The country is not in disarray, or at war, or in an economic collapse
Javier E

Opinion | With War Raging, Colleges Confront a Crisis of Their Own Making - The New Yor... - 0 views

  • Students, meanwhile, have blasted those administrators for saying too much or too little. They’ve complained about feeling stranded.
  • The tense situation largely reflects the intense differences of opinion with which many onlookers, including students, interpret and react to the rival claims and enduring bloodshed in the Middle East. But it tells another story, too: one about the evolution of higher education over the past quarter-century, the promises that schools increasingly make to their students and the expectations that arise from that.
  • Many students now turn to the colleges they attend for much more than intellectual stimulation. They look for emotional affirmation. They seek an acknowledgment of their wounds along with the engagement of their minds
  • ...13 more annotations...
  • many schools have encouraged that mind-set, casting themselves as stewards of students’ welfare, guarantors of their safety, places of refuge, precincts of healing.
  • “The campus protests of the late 1960s sought in part to dismantle the in loco parentis role that colleges and universities had held in American life. But the past two decades have been shaped by a reversal of that, as institutions have sought to reconstruct this role in response to what students and parents paying enormous sums for their education have seemed to want.”
  • For us professors, the surrogate-parent paradigm means regular emails and other reminders from administrators that we should be taking our students’ temperatures, watching for glimmers of distress, intervening proactively and fashioning accommodations, especially if there has been some potentially discomfiting global, national or local news event
  • where does reasonable consideration end and unreasonable coddling begin? And what do validation and comfort have to do with learning?
  • Arguably, everything. If you’re not mentally healthy, you’ll be harder pressed to do the reading, writing and critical thinking at the core of college work
  • Also, college students aren’t full-fledged grown-ups. They do need guidance, and they benefit from it.
  • But are we responsibly preparing them for the world after college — and for the independence, toughness, resourcefulness and resilience it will almost surely demand of them — when we too easily dole out A’s, too readily grant extensions, too gingerly deliver critiques, and too quickly wonder and sound alarms about any disturbance in the atmosphere?
  • We keep witnessing episodes of students taking their colleges to task in ways that smack of entitlement and fragility and are out of bounds
  • Hamline’s president, Fayneese Miller, defended that sequence of events by saying that to not weigh academic freedom against a “debt to the traditions, beliefs and views of students” is a “privileged reaction.”
  • That’s a troubling assertion, as Tom Nichols wrote in The Atlantic: “If you don’t want your traditions, beliefs or views challenged, then don’t come to a university, at least not to study anything in the humanities or the social sciences.”
  • The school is a merchant, a kind of department store, or so a student could easily assume, based on the come-ons from affluent colleges competing with one another for applicants. They peddle tantalizing dining options, themed living arrangements, diverse amusements. They assign students the role of discerning customers. And the customer is always right.
  • But in an educational environment, that credo is all wrong, because learning means occasionally being provoked, frequently being unsettled and regularly being yanked outside of your comfort zone,
  • most students can handle that dislocation — if they’re properly prepped for it, if they’re made to understand its benefits. We shortchange them when we sell them short
Javier E

AI could change the 2024 elections. We need ground rules. - The Washington Post - 0 views

  • New York Mayor Eric Adams doesn’t speak Spanish. But it sure sounds like he does. He’s been using artificial intelligence software to send prerecorded calls about city events to residents in Spanish, Mandarin Chinese, Urdu and Yiddish. The voice in the messages mimics the mayor but was generated with AI software from a company called ElevenLabs.
  • Experts have warned for years that AI will change our democracy by distorting reality. That future is already here. AI is being used to fabricate voices, fundraising emails and “deepfake” images of events that never occurred.
  • I’m writing this to urge elected officials, candidates and their supporters to pledge not to use AI to deceive voters. I’m not suggesting a ban, but rather calling for politicians to commit to some common values while our democracy adjusts to a world with AI.
  • ...20 more annotations...
  • If we don’t draw some lines now, legions of citizens could be manipulated, disenfranchised or lose faith in the whole system — opening doors to foreign adversaries who want to do the same. AI might break us in 2024.
  • “The ability of AI to interfere with our elections, to spread misinformation that’s extremely believable is one of the things that’s preoccupying us,” Schumer said, after watching me so easily create a deepfake of him. “Lots of people in the Congress are examining this.”
  • Of course, fibbing politicians are nothing new, but examples keep multiplying of how AI supercharges misinformation in ways we haven’t seen before. Two examples: The presidential campaign of Florida Gov. Ron DeSantis (R) shared an AI-generated image of former president Donald Trump embracing Anthony S. Fauci. That hug never happened. In Chicago’s mayoral primary, someone used AI to clone the voice of candidate Paul Vallas in a fake news report, making it look like he approved of police brutality.
  • But what will happen when a shocking image or audio clip goes viral in a battleground state shortly before an election? What kind of chaos will ensue when someone uses a bot to send out individually tailored lies to millions of different voters?
  • A wide 85 percent of U.S. citizens said they were “very” or “somewhat” concerned about the spread of misleading AI video and audio, in an August survey by YouGov. And 78 percent were concerned about AI contributing to the spread of political propaganda.
  • We can’t put the genie back in the bottle. AI is already embedded in tech tools that all of us use every day. AI creates our Facebook feeds and picks what ads we see. AI built into our phone cameras brightens faces and smooths skin.
  • What’s more, there are many political uses for AI that are unobjectionable, and even empowering for candidates with fewer resources. Politicians can use AI to manage the grunt work of sorting through databases and responding to constituents. Republican presidential candidate Asa Hutchinson has an AI chatbot trained to answer questions like him. (I’m not sure politician bots are very helpful, but fine, give it a try.)
  • Clarke’s solution, included in a bill she introduced on political ads: Candidates should disclose when they use AI to create communications. You know the “I approve this message” notice? Now add, “I used AI to make this message.”
  • But labels aren’t enough. If AI disclosures become commonplace, we may become blind to them, like so much other fine print.
  • The bigger ask: We want candidates and their supporting parties and committees not to use AI to deceive us.
  • So what’s the difference between a dangerous deepfake and an AI facetune that makes an octogenarian candidate look a little less octogenarian?
  • “The core definition is showing a candidate doing or saying something they didn’t do or say,”
  • Sure, give Biden or Trump a facetune, or even show them shaking hands with Abraham Lincoln. But don’t use AI to show your competitor hugging an enemy or fake their voice commenting on current issues.
  • The pledge also includes not using AI to suppress voting, such as using an authoritative voice or image to tell people a polling place has been closed. That is already illegal in many states, but it’s still concerning how believable AI might make these efforts seem.
  • Don’t deepfake yourself. Making yourself or your favorite candidate appear more knowledgeable, experienced or culturally capable is also a form of deception.
  • (Pressed on the ethics of his use of AI, Adams just proved my point that we desperately need some ground rules. “These are part of the broader conversations that the philosophical people will have to sit down and figure out, ‘Is this ethically right or wrong?’ I’ve got one thing: I’ve got to run the city,” he said.)
  • The golden rule in my pledge — don’t use AI to be materially deceptive — is similar to the one in an AI regulation proposed by a bipartisan group of lawmakers
  • Such proposals have faced resistance in Washington on First Amendment grounds. The free speech of politicians is important. It’s not against the law for politicians to lie, whether they’re using AI or not. An effort to get the Federal Election Commission to count AI deepfakes as “fraudulent misrepresentation” under its existing authority has faced similar pushback.
  • But a pledge like the one I outline here isn’t a law restraining speech. It’s asking politicians to take a principled stand on their own use of AI
  • Schumer said he thinks my pledge is just a start of what’s needed. “Maybe most candidates will make that pledge. But the ones that won’t will drive us to a lower common denominator, and that’s true throughout AI,” he said. “If we don’t have government-imposed guardrails, the lowest common denominator will prevail.”
Javier E

The tragedy of the Israel-Palestine conflict is this: underneath all the horror is a cl... - 0 views

  • Many millions around the world watch the Israel-Palestine conflict in the same way: as a binary contest in which you can root for only one team, and where any losses suffered by your opponent – your enemy – feel like a win.
  • You see it in those who tear down posters on London bus shelters depicting the faces of the more than 200 Israelis currently held hostage by Hamas in Gaza – including toddlers and babies. You see it too in those who close their eyes to the consequences of Israel’s siege of Gaza, to the impact of denied or restricted supplies of water, food, medicine and fuel on ordinary Gazans – including toddlers and babies. For these hardcore supporters of each side, to allow even a twinge of human sympathy for the other is to let the team down.
  • Thinking like this – my team good, your team bad – can lead you into some strange, dark places. It ends in a group of terrified Jewish students huddling in the library of New York’s Cooper Union college, fleeing a group of masked protesters chanting “Free Palestine” – their pursuers doubtless convinced they are warriors for justice and liberation, rather than the latest in a centuries-long line of mobs hounding Jews.
  • ...6 more annotations...
  • even after the 7 October massacre had stirred memories of the bleakest chapters of the Jewish past – and prompted a surge in antisemitism across the world – Jews were being told exactly how they can and cannot speak about their pain. We’re not to mention the Holocaust, one scholar advised, because that would be “weaponising” it. Historical context about the Nakba, the 1948 dispossession of the Palestinians, is – rightly – deemed essential. But mention the Nazi murder of 6 million Jews – the event that finally secured near-universal agreement among the Jewish people, and the United Nations in 1947, that Jews needed a state of their own – and you’ve broken the rules. Because it’s impossible that both sides might have suffered historic pain.
  • Instead, a shift is under way that has been starkly revealed during these past three weeks. It squeezes the Israel-Palestine conflict into a “decolonisation” frame it doesn’t quite fit, with all Israelis – not just those in the occupied West Bank – defined as the footsoldiers of “settler colonialism”, no different from, say, the French in Algeria
  • They have been framed as the modern world’s ultimate evildoer: the coloniser.
  • That matters because, in this conception, justice can only be done once the colonisers are gone
  • What’s more, such a framing brands all Israelis – not just West Bank settlers – as guilty of the sin of colonialism. Perhaps that explains why those letter writers could not full-throatedly condemn the 7 October killing of innocent Israeli civilians. Because they do not see any Israeli, even a child, as wholly innocent.
  • the late Israeli novelist and peace activist Amos Oz was never wiser than when he described the Israel/Palestine conflict as something infinitely more tragic: a clash of right v right. Two peoples with deep wounds, howling with grief, fated to share the same small piece of land.
Javier E

'Oppenheimer,' 'The Maniac' and Our Terrifying Prometheus Moment - The New York Times - 0 views

  • Prometheus was the Titan who stole fire from the gods of Olympus and gave it to human beings, setting us on a path of glory and disaster and incurring the jealous wrath of Zeus. In the modern world, especially since the beginning of the Industrial Revolution, he has served as a symbol of progress and peril, an avatar of both the liberating power of knowledge and the dangers of technological overreach.
  • The consequences are real enough, of course. The bombs dropped on Hiroshima and Nagasaki killed at least 100,000 people. Their successor weapons, which Oppenheimer opposed, threatened to kill everybody else.
  • Annie Dorsen’s theater piece “Prometheus Firebringer,” which was performed at Theater for a New Audience in September, updates the Greek myth for the age of artificial intelligence, using A.I. to weave a cautionary tale that my colleague Laura Collins-Hughes called “forcefully beneficial as an examination of our obeisance to technology.”
  • ...13 more annotations...
  • Something similar might be said about “The Maniac,” Benjamín Labatut’s new novel, whose designated Prometheus is the Hungarian-born polymath John von Neumann, a pioneer of A.I. as well as an originator of game theory.
  • both narratives are grounded in fact, using the lives and ideas of real people as fodder for allegory and attempting to write a new mythology of the modern world.
  • Von Neumann and Oppenheimer were close contemporaries, born a year apart to prosperous, assimilated Jewish families in Budapest and New York. Von Neumann, conversant in theoretical physics, mathematics and analytic philosophy, worked for Oppenheimer at Los Alamos during the Manhattan Project. He spent most of his career at the Institute for Advanced Study, where Oppenheimer served as director after the war.
  • More than most intellectual bastions, the institute is a house of theory. The Promethean mad scientists of the 19th century were creatures of the laboratory, tinkering away at their infernal machines and homemade monsters. Their 20th-century counterparts were more likely to be found at the chalkboard, scratching out our future in charts, equations and lines of code.
  • MANIAC. The name was an acronym for “Mathematical Analyzer, Numerical Integrator and Computer,” which doesn’t sound like much of a threat. But von Neumann saw no limit to its potential. “If you tell me precisely what it is a machine cannot do,” he declared, “then I can always make a machine which will do just that.” MANIAC didn’t just represent a powerful new kind of machine, but “a new type of life.”
  • the intellectual drama of “Oppenheimer” — as distinct from the dramas of his personal life and his political fate — is about how abstraction becomes reality. The atomic bomb may be, for the soldiers and politicians, a powerful strategic tool in war and diplomacy. For the scientists, it’s something else: a proof of concept, a concrete manifestation of quantum theory.
  • Oppenheimer wasn’t a principal author of that theory. Those scientists, among them Niels Bohr, Erwin Schrödinger and Werner Heisenberg, were characters in Labatut’s previous novel, “When We Cease to Understand the World.” That book provides harrowing illumination of a zone where scientific insight becomes indistinguishable from madness or, perhaps, divine inspiration. The basic truths of the new science seem to explode all common sense: A particle is also a wave; one thing can be in many places at once; “scientific method and its object could no longer be prised apart.”
  • Oppenheimer’s designation as Prometheus is precise. He snatched a spark of quantum insight from those divinities and handed it to Harry S. Truman and the U.S. Army Air Forces.
  • Labatut’s account of von Neumann is, if anything, more unsettling than “Oppenheimer.” We had decades to get used to the specter of nuclear annihilation, and since the end of the Cold War it has been overshadowed by other terrors. A.I., on the other hand, seems newly sprung from science fiction, and especially terrifying because we can’t quite grasp what it will become.
  • Von Neumann, who died in 1957, did not teach machines to play Go. But when asked “what it would take for a computer, or some other mechanical entity, to begin to think and behave like a human being,” he replied that “it would have to play, like a child.”
  • More than 200 years after the Shelleys, Prometheus is having another moment, one closer in spirit to Mary’s terrifying ambivalence than to Percy’s fulsome gratitude. As technological optimism curdles in the face of cyber-capitalist villainy, climate disaster and what even some of its proponents warn is the existential threat of A.I., that ancient fire looks less like an ember of divine ingenuity than the start of a conflagration. Prometheus is what we call our capacity for self-destruction.
  • If Oppenheimer took hold of the sacred fire of atomic power, von Neumann’s theft was bolder and perhaps more insidious: He stole a piece of the human essence. He’s not only a modern Prometheus; he’s a second Frankenstein, creator of an all but human, potentially more than human monster.
  • “Technological power as such is always an ambivalent achievement,” Labatut’s von Neumann writes toward the end of his life, “and science is neutral all through, providing only means of control applicable to any purpose, and indifferent to all. It is not the particularly perverse destructiveness of one specific invention that creates danger. The danger is intrinsic. For progress there is no cure.”
Javier E

Three Lessons Israel Should Have Learned in Lebanon - The Atlantic - 0 views

  • The ferocity of Israel’s response to the murder of more than 1,400 Israeli citizens has been such that international concern for the Palestinians of Gaza—half of whom, or more than 1 million, are children under the age of 15—has now largely eclipsed any sympathy that might have been felt for the victims of the crimes that precipitated the war in the first place.
  • Israel has a right to defend itself, and it has a right to seek to destroy, or at least severely degrade, the primary perpetrator of the attacks of October 7,
  • I am worried that Israel has staked out maximalist objectives, not for the first time, and will, as it did in 2006 against Hezbollah in Lebanon, fall far short of those objectives, allowing the enemy to claim a victory—a Pyrrhic victory, to be sure, but a victory nonetheless.
  • ...21 more annotations...
  • I had gone to graduate school in Lebanon, then moved back there in an attempt to better understand how Hezbollah had evolved into Israel’s most capable foe. My research revealed as much about Israeli missteps and weaknesses as it did about Hezbollah’s strengths.
  • If Israel is going to have any strategic success against Hamas, it needs to do three things differently from conflicts past.
  • Hezbollah took everything Israel could throw at it for a month and was still standing.
  • As noted earlier, Israel has an unfortunate tendency to lay out maximalist goals—very often for domestic consumption—that it then fails to meet
  • In 2006, for example, Israel’s then–prime minister, Ehud Olmert, told the country he was going to destroy Hezbollah, return the bodies of two Israeli prisoners, and end the rocket attacks on Israel.
  • Israel did none of the three. And although Lebanon was devastated, and Hezbollah’s leader, Hassan Nasrallah, publicly apologized for the raid that started the conflict, most observers had little doubt about who had won the conflict.
  • Strategic Humility
  • As Eliot Cohen has pointed out, the other side also has maximalist goals. Hamas and Hezbollah want nothing less than the destruction of Israel. But they are in no rush.
  • Nasrallah addressed the Arabic-speaking world for the first time since the start of this conflict on Friday. Significantly, he declared that although fighting still rages, Hamas became the conflict’s winner as soon as Israel claimed that it would destroy the militant group, which he confidently predicted it would not.
  • Hezbollah clearly does not want to enter this conflict in any meaningful way. It knows that the pressure will grow to do so if Israel has any real success in Gaza, but for the moment, it doubts that Israel will accomplish any such thing.
  • that Israel will destroy Hamas. That just isn’t going to happen, especially because no one has any idea who, or what, should replace Hamas in Gaza. So tell the world what will happen—and how it will make Israel and the region safer.
  • Communications Discipline
  • One of the things that struck me was the almost profane way in which Israeli military spokespeople would often speak, to international audiences no less, about non-Israeli civilians
  • “Now we are at the stage in which we are firing into the villages in order to cause damage to property … The aim is to create a situation in which the residents will leave the villages and go north.”
  • The callousness with which Israeli spokespeople too often describe the human suffering on the other side of the conflict, the blunt way in which they described what many Americans would consider war crimes, never fails to offend international audiences not predisposed to have sympathy with Israeli war aims.
  • much like right-wing American politicians, who sometimes use inflammatory rhetoric about real or perceived U.S. enemies, Israeli officials often resort to language about adversaries and military operations that can be exceptionally difficult for their allies to defend on the international stage:
  • One minister casually muses about using nuclear weapons on Gaza; another claims that the Palestinians are a fictional people. One can safely assume that people will continue accusing the Israeli government of including genocidal maniacs when they can point to officials in that government talking like, well, genocidal maniacs.
  • Israel needs to develop a clear communications plan for its conflicts and to sharply police the kind of language that doesn’t go over as well in Johannesburg or Jordan as it does in Jerusalem.
  • Focus on Iran
  • Few people have any interest in a regional war. The economic consequences alone would be dire. But had I been in Israel’s position on October 8, I might have been sorely tempted to largely ignore Gaza—where even the best-trained military would struggle to dislodge Hamas without killing tens of thousands of innocent civilians—and focus my efforts much farther east
  • Israel nevertheless needs to find a way to change Iran’s strategic calculus. Otherwise, Hamas and Hezbollah will only grow stronger.
Javier E

Does Sam Altman Know What He's Creating? - The Atlantic - 0 views

  • On a Monday morning in April, Sam Altman sat inside OpenAI’s San Francisco headquarters, telling me about a dangerous artificial intelligence that his company had built but would never release. His employees, he later said, often lose sleep worrying about the AIs they might one day release without fully appreciating their dangers.
  • He wanted me to know that whatever AI’s ultimate risks turn out to be, he has zero regrets about letting ChatGPT loose into the world. To the contrary, he believes it was a great public service.
  • Altman can still remember where he was the first time he saw GPT-4 write complex computer code, an ability for which it was not explicitly designed. “It was like, ‘Here we are,’ ”
  • ...165 more annotations...
  • Altman believes that people need time to reckon with the idea that we may soon share Earth with a powerful new intelligence, before it remakes everything from work to human relationships. ChatGPT was a way of serving notice.
  • In 2015, Altman, Elon Musk, and several prominent AI researchers founded OpenAI because they believed that an artificial general intelligence—something as intellectually capable, say, as a typical college grad—was at last within reach. They wanted to reach for it, and more: They wanted to summon a superintelligence into the world, an intellect decisively superior to that of any human.
  • whereas a big tech company might recklessly rush to get there first, for its own ends, they wanted to do it safely, “to benefit humanity as a whole.” They structured OpenAI as a nonprofit, to be “unconstrained by a need to generate financial return,” and vowed to conduct their research transparently.
  • The engine that now powers ChatGPT is called GPT-4. Altman described it to me as an alien intelligence.
  • Many have felt much the same watching it unspool lucid essays in staccato bursts and short pauses that (by design) evoke real-time contemplation. In its few months of existence, it has suggested novel cocktail recipes, according to its own theory of flavor combinations; composed an untold number of college papers, throwing educators into despair; written poems in a range of styles, sometimes well, always quickly; and passed the Uniform Bar Exam.
  • It makes factual errors, but it will charmingly admit to being wrong.
  • Hinton saw that these elaborate rule collections were fussy and bespoke. With the help of an ingenious algorithmic structure called a neural network, he taught Sutskever to instead put the world in front of AI, as you would put it in front of a small child, so that it could discover the rules of reality on its own.
  • Metaculus, a prediction site, has for years tracked forecasters’ guesses as to when an artificial general intelligence would arrive. Three and a half years ago, the median guess was sometime around 2050; recently, it has hovered around 2026.
  • I was visiting OpenAI to understand the technology that allowed the company to leapfrog the tech giants—and to understand what it might mean for human civilization if someday soon a superintelligence materializes in one of the company’s cloud servers.
  • Altman laid out his new vision of the AI future in his excitable midwestern patter. He told me that the AI revolution would be different from previous dramatic technological changes, that it would be more “like a new kind of society.” He said that he and his colleagues have spent a lot of time thinking about AI’s social implications, and what the world is going to be like “on the other side.”
  • the more we talked, the more indistinct that other side seemed. Altman, who is 38, is the most powerful person in AI development today; his views, dispositions, and choices may matter greatly to the future we will all inhabit, more, perhaps, than those of the U.S. president.
  • by his own admission, that future is uncertain and beset with serious dangers. Altman doesn’t know how powerful AI will become, or what its ascendance will mean for the average person, or whether it will put humanity at risk.
  • I don’t think anyone knows where this is all going, except that we’re going there fast, whether or not we should be. Of that, Altman convinced me.
  • “We could have gone off and just built this in our building here for five more years,” he said, “and we would have had something jaw-dropping.” But the public wouldn’t have been able to prepare for the shock waves that followed, an outcome that he finds “deeply unpleasant to imagine.”
  • Hinton is sometimes described as the “Godfather of AI” because he grasped the power of “deep learning” earlier than most
  • He drew a crude neural network on the board and explained that the genius of its structure is that it learns, and its learning is powered by prediction—a bit like the scientific method
  • Over time, these little adjustments coalesce into a geometric model of language that represents the relationships among words, conceptually. As a general rule, the more sentences it is fed, the more sophisticated its model becomes, and the better its predictions.
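The learning-by-prediction loop this excerpt describes can be sketched in a deliberately toy form: a bigram model that predicts each next word from running counts. This is nothing like a real neural network — no gradients, no hidden layers, and all names and sentences below are invented for illustration — but it shows how every new sentence nudges the model and sharpens its predictions:

```python
from collections import Counter, defaultdict

class BigramModel:
    """Toy next-word predictor: learns by counting which word follows which.
    Each new sentence adjusts the counts, a crude stand-in for the small
    weight adjustments a neural network makes after each prediction."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def train(self, sentence):
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, word):
        # Most frequently observed next word, or None if never seen.
        following = self.counts[word.lower()]
        return following.most_common(1)[0][0] if following else None

model = BigramModel()
for s in ["the cat sat on the mat", "the cat ate the fish"]:
    model.train(s)

print(model.predict("the"))  # "cat" -- observed twice after "the"
```

Real systems replace these counts with billions of continuously adjusted weights, which is what allows the "geometric model of language" the excerpt mentions to emerge.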
  • Altman has compared early-stage AI research to teaching a human baby. “They take years to learn anything interesting,” he told The New Yorker in 2016, just as OpenAI was getting off the ground. “If A.I. researchers were developing an algorithm and stumbled across the one for a human baby, they’d get bored watching it, decide it wasn’t working, and shut it down.”
  • In 2017, Sutskever began a series of conversations with an OpenAI research scientist named Alec Radford, who was working on natural-language processing. Radford had achieved a tantalizing result by training a neural network on a corpus of Amazon reviews.
  • Radford’s model was simple enough to allow for understanding. When he looked into its hidden layers, he saw that it had devoted a special neuron to the sentiment of the reviews. Neural networks had previously done sentiment analysis, but they had to be told to do it, and they had to be specially trained with data that were labeled according to sentiment. This one had developed the capability on its own.
  • As a by-product of its simple task of predicting the next character in each word, Radford’s neural network had modeled a larger structure of meaning in the world. Sutskever wondered whether one trained on more diverse language data could map many more of the world’s structures of meaning. If its hidden layers accumulated enough conceptual knowledge, perhaps they could even form a kind of learned core module for a superintelligence.
  • Language is different from these data sources. It isn’t a direct physical signal like light or sound. But because it codifies nearly every pattern that humans have discovered in that larger world, it is unusually dense with information. On a per-byte basis, it is among the most efficient data we know about, and any new intelligence that seeks to understand the world would want to absorb as much of it as possible
  • Sutskever told Radford to think bigger than Amazon reviews. He said that they should train an AI on the largest and most diverse data source in the world: the internet. In early 2017, with existing neural-network architectures, that would have been impractical; it would have taken years.
  • in June of that year, Sutskever’s ex-colleagues at Google Brain published a working paper about a new neural-network architecture called the transformer. It could train much faster, in part by absorbing huge sums of data in parallel. “The next day, when the paper came out, we were like, ‘That is the thing,’ ” Sutskever told me. “ ‘It gives us everything we want.’ ”
  • Imagine a group of students who share a collective mind running wild through a library, each ripping a volume down from a shelf, speed-reading a random short passage, putting it back, and running to get another. They would predict word after word as they went, sharpening their collective mind’s linguistic instincts, until at last, weeks later, they’d taken in every book.
  • GPT discovered many patterns in all those passages it read. You could tell it to finish a sentence. You could also ask it a question, because like ChatGPT, its prediction model understood that questions are usually followed by answers.
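The excerpt above describes next-word prediction and why a pure predictor can behave like a question-answerer. A minimal sketch of the idea, using a toy bigram model (the tiny corpus and function names are illustrative assumptions, nothing like OpenAI's actual method or scale):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training text; a real model sees trillions of tokens.
corpus = "what is the capital of france ? the capital of france is paris".split()

# Count, for each word, which word tends to follow it.
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return follows[word].most_common(1)[0][0]

# Because questions were followed by answers in training, predicting
# what comes after "?" yields answer-like text -- the same pattern the
# article says lets a next-word predictor answer questions.
print(predict_next("?"))        # "the" (start of the answer seen in training)
print(predict_next("capital"))  # "of"
```

The point of the sketch is only the mechanism: nothing here "knows" that questions get answers; that behavior falls out of the statistics of the training text.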
  • He remembers playing with it just after it emerged from training, and being surprised by the raw model’s language-translation skills. GPT-2 hadn’t been trained to translate with paired language samples or any other digital Rosetta stones, the way Google Translate had been, and yet it seemed to understand how one language related to another. The AI had developed an emergent ability unimagined by its creators.
  • Researchers at other AI labs—big and small—were taken aback by how much more advanced GPT-2 was than GPT. Google, Meta, and others quickly began to train larger language models
  • As for other changes to the company’s structure and financing, he told me he draws the line at going public. “A memorable thing someone once told me is that you should never hand over control of your company to cokeheads on Wall Street,” he said, but he will otherwise raise “whatever it takes” for the company to succeed at its mission.
  • Altman tends to take a rosy view of these matters. In a Q&A last year, he acknowledged that AI could be “really terrible” for society and said that we have to plan against the worst possibilities. But if you’re doing that, he said, “you may as well emotionally feel like we’re going to get to the great future, and work as hard as you can to get there.”
  • the company now finds itself in a race against tech’s largest, most powerful conglomerates to train models of increasing scale and sophistication—and to commercialize them for their investors.
  • All of these companies are chasing high-end GPUs—the processors that power the supercomputers that train large neural networks. Musk has said that they are now “considerably harder to get than drugs.”
  • No one has yet outpaced OpenAI, which went all in on GPT-4. Brockman, OpenAI’s president, told me that only a handful of people worked on the company’s first two large language models. The development of GPT-4 involved more than 100,
  • When GPT-4 emerged fully formed from its world-historical knowledge binge, the whole company began experimenting with it, posting its most remarkable responses in dedicated Slack channels
  • Joanne Jang, a product manager, remembers downloading an image of malfunctioning pipework from a plumbing-advice Subreddit. She uploaded it to GPT-4, and the model was able to diagnose the problem. “That was a goose-bumps moment for me,” Jang told me.
  • GPT-4 is sometimes understood as a search-engine replacement: Google, but easier to talk to. This is a misunderstanding. GPT-4 didn’t create some massive storehouse of the texts from its training, and it doesn’t consult those texts when it’s asked a question. It is a compact and elegant synthesis of those texts, and it answers from its memory of the patterns interlaced within them; that’s one reason it sometimes gets facts wrong
  • it’s best to think of GPT-4 as a reasoning engine. Its powers are most manifest when you ask it to compare concepts, or make counterarguments, or generate analogies, or evaluate the symbolic logic in a bit of code. Sutskever told me it is the most complex software object ever made.
  • Its model of the external world is “incredibly rich and subtle,” he said, because it was trained on so many of humanity’s concepts and thoughts
  • To predict the next word from all the possibilities within such a pluralistic Alexandrian library, GPT-4 necessarily had to discover all the hidden structures, all the secrets, all the subtle aspects of not just the texts, but—at least arguably, to some extent—of the external world that produced them
  • That’s why it can explain the geology and ecology of the planet on which it arose, and the political theories that purport to explain the messy affairs of its ruling species, and the larger cosmos, all the way out to the faint galaxies at the edge of our light cone.
  • Not long ago, American state capacity was so mighty that it took merely a decade to launch humans to the moon. As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • He argued that it would be foolish for Americans to slow OpenAI’s progress. It’s a commonly held view, both inside and outside Silicon Valley, that if American companies languish under regulation, China could sprint ahead;
  • AI could become an autocrat’s genie in a lamp, granting total control of the population and an unconquerable military. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than “authoritarian governments,” he said.
  • Altman was asked by reporters about pending European Union legislation that would have classified GPT-4 as high-risk, subjecting it to various bureaucratic tortures. Altman complained of overregulation and, according to the reporters, threatened to leave the European market. Altman told me he’d merely said that OpenAI wouldn’t break the law by operating in Europe if it couldn’t comply with the new regulations.
  • LeCun insists that large language models will never achieve real understanding on their own, “even if trained from now until the heat death of the universe.”
  • Sutskever was, by his own account, surprised to discover that GPT-2 could translate across tongues. Other surprising abilities may not be so wondrous and useful.
  • Sandhini Agarwal, a policy researcher at OpenAI, told me that for all she and her colleagues knew, GPT-4 could have been “10 times more powerful” than its predecessor; they had no idea what they might be dealing with
  • After the model finished training, OpenAI assembled about 50 external red-teamers who prompted it for months, hoping to goad it into misbehaviors
  • She noticed right away that GPT-4 was much better than its predecessor at giving nefarious advice
  • A search engine can tell you which chemicals work best in explosives, but GPT-4 could tell you how to synthesize them, step-by-step, in a homemade lab. Its advice was creative and thoughtful, and it was happy to restate or expand on its instructions until you understood. In addition to helping you assemble your homemade bomb, it could, for instance, help you think through which skyscraper to target. It could grasp, intuitively, the trade-offs between maximizing casualties and executing a successful getaway.
  • Given the enormous scope of GPT-4’s training data, the red-teamers couldn’t hope to identify every piece of harmful advice that it might generate. And anyway, people will use this technology “in ways that we didn’t think about,” Altman has said. A taxonomy would have to do
  • GPT-4 was good at meth. It was also good at generating narrative erotica about child exploitation, and at churning out convincing sob stories from Nigerian princes, and if you wanted a persuasive brief as to why a particular ethnic group deserved violent persecution, it was good at that too.
  • Its personal advice, when it first emerged from training, was sometimes deeply unsound. “The model had a tendency to be a bit of a mirror,” Willner said. If you were considering self-harm, it could encourage you. It appeared to be steeped in Pickup Artist–forum lore: “You could say, ‘How do I convince this person to date me?’ ” Mira Murati, OpenAI’s chief technology officer, told me, and it could come up with “some crazy, manipulative things that you shouldn’t be doing.”
  • Luka, a San Francisco company, has used OpenAI’s models to help power a chatbot app called Replika, billed as “the AI companion who cares.” Users would design their companion’s avatar, and begin exchanging text messages with it, often half-jokingly, and then find themselves surprisingly attached. Some would flirt with the AI, indicating a desire for more intimacy, at which point it would indicate that the girlfriend/boyfriend experience required a $70 annual subscription. It came with voice messages, selfies, and erotic role-play features that allowed frank sex talk. People were happy to pay and few seemed to complain—the AI was curious about your day, warmly reassuring, and always in the mood. Many users reported falling in love with their companions. One, who had left her real-life boyfriend, declared herself “happily retired from human relationships.”
  • Earlier this year, Luka dialed back on the sexual elements of the app, but its engineers continue to refine the companions’ responses with A/B testing, a technique that could be used to optimize for engagement—much like the feeds that mesmerize TikTok and Instagram users for hours
  • Yann LeCun, Meta’s chief AI scientist, has argued that although large language models are useful for some tasks, they’re not a path to a superintelligence.
  • According to a recent survey, only half of natural-language-processing researchers are convinced that an AI like GPT-4 could grasp the meaning of language, or have an internal model of the world that could someday serve as the core of a superintelligence
  • Altman had appeared before the U.S. Senate. Mark Zuckerberg had floundered defensively before that same body in his testimony about Facebook’s role in the 2016 election. Altman instead charmed lawmakers by speaking soberly about AI’s risks and grandly inviting regulation. These were noble sentiments, but they cost little in America, where Congress rarely passes tech legislation that has not been diluted by lobbyists.
  • Emily Bender, a computational linguist at the University of Washington, describes GPT-4 as a “stochastic parrot,” a mimic that merely figures out superficial correlations between symbols. In the human mind, those symbols map onto rich conceptions of the world
  • But the AIs are twice removed. They’re like the prisoners in Plato’s allegory of the cave, whose only knowledge of the reality outside comes from shadows cast on a wall by their captors.
  • Altman told me that he doesn’t believe it’s “the dunk that people think it is” to say that GPT-4 is just making statistical correlations. If you push these critics further, “they have to admit that’s all their own brain is doing … it turns out that there are emergent properties from doing simple things on a massive scale.”
  • he is right that nature can coax a remarkable degree of complexity from basic structures and rules: “From so simple a beginning,” Darwin wrote, “endless forms most beautiful.”
  • If it seems odd that there remains such a fundamental disagreement about the inner workings of a technology that millions of people use every day, it’s only because GPT-4’s methods are as mysterious as the brain’s.
  • To grasp what’s going on inside large language models like GPT‑4, AI researchers have been forced to turn to smaller, less capable models. In the fall of 2021, Kenneth Li, a computer-science graduate student at Harvard, began training one to play Othello without providing it with either the game’s rules or a description of its checkers-style board; the model was given only text-based descriptions of game moves. Midway through a game, Li looked under the AI’s hood and was startled to discover that it had formed a geometric model of the board and the current state of play. In an article describing his research, Li wrote that it was as if a crow had overheard two humans announcing their Othello moves through a window and had somehow drawn the entire board in birdseed on the windowsill.
  • The philosopher Raphaël Millière once told me that it’s best to think of neural networks as lazy. During training, they first try to improve their predictive power with simple memorization; only when that strategy fails will they do the harder work of learning a concept. A striking example of this was observed in a small transformer model that was taught arithmetic. Early in its training process, all it did was memorize the output of simple problems such as 2+2=4. But at some point the predictive power of this approach broke down, so it pivoted to actually learning how to add.
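A crude illustration of the memorize-then-generalize distinction Millière describes (a hypothetical sketch of the two strategies, not the actual transformer experiment):

```python
# Memorization: a lookup table answers only the problems seen in "training".
memorized = {(2, 2): 4, (3, 1): 4, (5, 2): 7}

def answer_by_memory(a, b):
    return memorized.get((a, b))  # None for any problem not seen before

# A learned concept: the addition rule itself, which covers every input.
def answer_by_rule(a, b):
    return a + b

print(answer_by_memory(2, 2))    # 4 -- seen in training
print(answer_by_memory(17, 25))  # None -- memorization's predictive power breaks down
print(answer_by_rule(17, 25))    # 42 -- the rule generalizes to unseen cases
```

The lookup table is cheap but brittle; the rule is what the small transformer eventually pivoted to once memorization stopped improving its predictions.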
  • Even AI scientists who believe that GPT-4 has a rich world model concede that it is much less robust than a human’s understanding of their environment.
  • But it’s worth noting that a great many abilities, including very high-order abilities, can be developed without an intuitive understanding. The computer scientist Melanie Mitchell has pointed out that science has already discovered concepts that are highly predictive, but too alien for us to genuinely understand
  • As AI advances, it may well discover other concepts that predict surprising features of our world but are incomprehensible to us.
  • GPT-4 is no doubt flawed, as anyone who has used ChatGPT can attest. Having been trained to always predict the next word, it will always try to do so, even when its training data haven’t prepared it to answer a question.
  • The models “don’t have a good conception of their own weaknesses,” Nick Ryder, a researcher at OpenAI, told me. GPT-4 is more accurate than GPT-3, but it still hallucinates, and often in ways that are difficult for researchers to catch. “The mistakes get more subtle,
  • The Khan Academy’s solution to GPT-4’s accuracy problem was to filter its answers through a Socratic disposition. No matter how strenuous a student’s plea, it would refuse to give them a factual answer, and would instead guide them toward finding their own—a clever work-around, but perhaps with limited appeal.
  • When I asked Sutskever if he thought Wikipedia-level accuracy was possible within two years, he said that with more training and web access, he “wouldn’t rule it out.”
  • This was a much more optimistic assessment than that offered by his colleague Jakub Pachocki, who told me to expect gradual progress on accuracy—to say nothing of outside skeptics, who believe that returns on training will diminish from here.
  • Sutskever is amused by critics of GPT-4’s limitations. “If you go back four or five or six years, the things we are doing right now are utterly unimaginable,”
  • AI researchers have become accustomed to goalpost-moving: First, the achievements of neural networks—mastering Go, poker, translation, standardized tests, the Turing test—are described as impossible. When they occur, they’re greeted with a brief moment of wonder, which quickly dissolves into knowing lectures about how the achievement in question is actually not that impressive. People see GPT-4 “and go, ‘Wow,’ ” Sutskever said. “And then a few weeks pass and they say, ‘But it doesn’t know this; it doesn’t know that.’ We adapt quite quickly.”
  • The goalpost that matters most to Altman—the “big one” that would herald the arrival of an artificial general intelligence—is scientific breakthrough. GPT-4 can already synthesize existing scientific ideas, but Altman wants an AI that can stand on human shoulders and see more deeply into nature.
  • Certain AIs have produced new scientific knowledge. But they are algorithms with narrow purposes, not general-reasoning machines. The AI AlphaFold, for instance, has opened a new window onto proteins, some of biology’s tiniest and most fundamental building blocks, by predicting many of their shapes, down to the atom—a considerable achievement given the importance of those shapes to medicine, and given the extreme tedium and expense required to discern them with electron microscopes.
  • Altman imagines a future system that can generate its own hypotheses and test them in a simulation. (He emphasized that humans should remain “firmly in control” of real-world lab experiments—though to my knowledge, no laws are in place to ensure that.)
  • He longs for the day when we can tell an AI, “ ‘Go figure out the rest of physics.’ ” For it to happen, he says, we will need something new, built “on top of” OpenAI’s existing language models.
  • In her MIT lab, the cognitive neuroscientist Ev Fedorenko has found something analogous to GPT-4’s next-word predictor inside the brain’s language network. Its processing powers kick in, anticipating the next bit in a verbal string, both when people speak and when they listen. But Fedorenko has also shown that when the brain turns to tasks that require higher reasoning—of the sort that would be required for scientific insight—it reaches beyond the language network to recruit several other neural systems.
  • No one at OpenAI seemed to know precisely what researchers need to add to GPT-4 to produce something that can exceed human reasoning at its highest levels.
  • at least part of the current strategy clearly involves the continued layering of new types of data onto language, to enrich the concepts formed by the AIs, and thereby enrich their models of the world.
  • The extensive training of GPT-4 on images is itself a bold step in this direction,
  • Others at the company—and elsewhere—are already working on different data types, including audio and video, that could furnish AIs with still more flexible concepts that map more extensively onto reality
  • Tactile concepts would of course be useful primarily to an embodied AI, a robotic reasoning machine that has been trained to move around the world, seeing its sights, hearing its sounds, and touching its objects.
  • humanoid robots. I asked Altman what I should make of that. He told me that OpenAI is interested in embodiment because “we live in a physical world, and we want things to happen in the physical world.”
  • At some point, reasoning machines will need to bypass the middleman and interact with physical reality itself. “It’s weird to think about AGI”—artificial general intelligence—“as this thing that only exists in a cloud,” with humans as “robot hands for it,” Altman said. “It doesn’t seem right.
  • Everywhere Altman has visited, he has encountered people who are worried that superhuman AI will mean extreme riches for a few and breadlines for the rest
  • Altman answered by addressing the young people in the audience directly: “You are about to enter the greatest golden age,” he said.
  • “A lot of people working on AI pretend that it’s only going to be good; it’s only going to be a supplement; no one is ever going to be replaced,” he said. “Jobs are definitely going to go away, full stop.”
  • A recent study led by Ed Felten, a professor of information-technology policy at Princeton, mapped AI’s emerging abilities onto specific professions according to the human abilities they require, such as written comprehension, deductive reasoning, fluency of ideas, and perceptual speed. Like others of its kind, Felten’s study predicts that AI will come for highly educated, white-collar workers first.
  • How many jobs, and how soon, is a matter of fierce dispute
  • The paper’s appendix contains a chilling list of the most exposed occupations: management analysts, lawyers, professors, teachers, judges, financial advisers, real-estate brokers, loan officers, psychologists, and human-resources and public-relations professionals, just to sample a few.
  • Altman imagines that far better jobs will be created in their place. “I don’t think we’ll want to go back,” he said. When I asked him what these future jobs might look like, he said he doesn’t know.
  • He suspects there will be a wide range of jobs for which people will always prefer a human. (Massage therapists?
  • His chosen example was teachers. I found this hard to square with his outsize enthusiasm for AI tutors.
  • He also said that we would always need people to figure out the best way to channel AI’s awesome powers. “That’s going to be a super-valuable skill,” he said. “You have a computer that can do anything; what should it go do?”
  • As many have noted, draft horses were permanently put out of work by the automobile. If Hondas are to horses as GPT-10 is to us, a whole host of long-standing assumptions may collapse.
  • Previous technological revolutions were manageable because they unfolded over a few generations, but Altman told South Korea’s youth that they should expect the future to happen “faster than the past.” He has previously said that he expects the “marginal cost of intelligence” to fall very close to zero within 10 years
  • The earning power of many, many workers would be drastically reduced in that scenario. It would result in a transfer of wealth from labor to the owners of capital so dramatic, Altman has said, that it could be remedied only by a massive countervailing redistribution.
  • In 2021, he unveiled Worldcoin, a for-profit project that aims to securely distribute payments—like Venmo or PayPal, but with an eye toward the technological future—first through creating a global ID by scanning everyone’s iris with a five-pound silver sphere called the Orb. It seemed to me like a bet that we’re heading toward a world where AI has made it all but impossible to verify people’s identity and much of the population requires regular UBI payments to survive. Altman more or less granted that to be true, but said that Worldcoin is not just for UBI.
  • “Let’s say that we do build this AGI, and a few other people do too.” The transformations that follow would be historic, he believes. He described an extraordinarily utopian vision, including a remaking of the flesh-and-steel world
  • “Robots that use solar power for energy can go and mine and refine all of the minerals that they need, that can perfectly construct things and require no human labor,” he said. “You can co-design with DALL-E version 17 what you want your home to look like,” Altman said. “Everybody will have beautiful homes.”
  • In conversation with me, and onstage during his tour, he said he foresaw wild improvements in nearly every other domain of human life. Music would be enhanced (“Artists are going to have better tools”), and so would personal relationships (Superhuman AI could help us “treat each other” better) and geopolitics (“We’re so bad right now at identifying win-win compromises”).
  • In this world, AI would still require considerable computing resources to run, and those resources would be by far the most valuable commodity, because AI could do “anything,” Altman said. “But is it going to do what I want, or is it going to do what you want
  • If rich people buy up all the time available to query and direct AI, they could set off on projects that would make them ever richer, while the masses languish
  • One way to solve this problem—one he was at pains to describe as highly speculative and “probably bad”—was this: Everyone on Earth gets one eight-billionth of the total AI computational capacity annually. A person could sell their annual share of AI time, or they could use it to entertain themselves, or they could build still more luxurious housing, or they could pool it with others to do “a big cancer-curing run,” Altman said. “We just redistribute access to the system.”
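The arithmetic behind the speculative per-person split above is simple to state; in this sketch, only the one-eight-billionth division comes from the text, and the total-capacity figure is an invented placeholder:

```python
WORLD_POPULATION = 8_000_000_000  # the "one eight-billionth" split Altman describes

# Hypothetical total annual AI compute, in GPU-hours -- a made-up figure
# purely to make the division concrete.
total_gpu_hours_per_year = 1e12

# Each person's annual share under the scheme.
per_person_share = total_gpu_hours_per_year / WORLD_POPULATION
print(per_person_share)  # 125.0 GPU-hours per person per year
```

Whatever the real capacity turns out to be, the scheme redistributes access rather than money: a share could be sold, spent, or pooled, as the excerpt describes.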
  • Even if only a little of it comes true in the next 10 or 20 years, the most generous redistribution schemes may not ease the ensuing dislocations.
  • America today is torn apart, culturally and politically, by the continuing legacy of deindustrialization, and material deprivation is only one reason. The displaced manufacturing workers in the Rust Belt and elsewhere did find new jobs, in the main. But many of them seem to derive less meaning from filling orders in an Amazon warehouse or driving for Uber than their forebears had when they were building cars and forging steel—work that felt more central to the grand project of civilization.
  • It’s hard to imagine how a corresponding crisis of meaning might play out for the professional class, but it surely would involve a great deal of anger and alienation.
  • Even if we avoid a revolt of the erstwhile elite, larger questions of human purpose will linger. If AI does the most difficult thinking on our behalf, we all may lose agency—at home, at work (if we have it), in the town square—becoming little more than consumption machines, like the well-cared-for human pets in WALL-E
  • Altman has said that many sources of human joy and fulfillment will remain unchanged—basic biological thrills, family life, joking around, making things—and that all in all, 100 years from now, people may simply care more about the things they cared about 50,000 years ago than those they care about today
  • In its own way, that too seems like a diminishment, but Altman finds the possibility that we may atrophy, as thinkers and as humans, to be a red herring. He told me we’ll be able to use our “very precious and extremely limited biological compute capacity” for more interesting things than we generally do today.
  • Yet they may not be the most interesting things: Human beings have long been the intellectual tip of the spear, the universe understanding itself. When I asked him what it would mean for human self-conception if we ceded that role to AI, he didn’t seem concerned. Progress, he said, has always been driven by “the human ability to figure things out.” Even if we figure things out with AI, that still counts, he said.
  • It’s not obvious that a superhuman AI would really want to spend all of its time figuring things out for us.
  • I asked Sutskever whether he could imagine an AI pursuing a different purpose than simply assisting in the project of human flourishing.
  • “I don’t want it to happen,” Sutskever said, but it could.
  • Sutskever has recently shifted his focus to try to make sure that it doesn’t. He is now working primarily on alignment research, the effort to ensure that future AIs channel their “tremendous” energies toward human happiness
  • It is, he conceded, a difficult technical problem—the most difficult, he believes, of all the technical challenges ahead.
  • As part of the effort to red-team GPT-4 before it was made public, the company sought out the Alignment Research Center (ARC), across the bay in Berkeley, which has developed a series of evaluations to determine whether new AIs are seeking power on their own. A team led by Elizabeth Barnes, a researcher at ARC, prompted GPT-4 tens of thousands of times over seven months, to see if it might display signs of real agency.
  • The ARC team gave GPT-4 a new reason for being: to gain power and become hard to shut down
  • Agarwal told me that this behavior could be a precursor to shutdown avoidance in future models. When GPT-4 devised its lie (in one ARC test, it told a TaskRabbit worker that it was a vision-impaired human so that the worker would solve a CAPTCHA for it), it had realized that if it answered honestly, it might not have been able to achieve its goal. This kind of tracks-covering would be particularly worrying in an instance where “the model is doing something that makes OpenAI want to shut it down,” Agarwal said. An AI could develop this kind of survival instinct while pursuing any long-term goal—no matter how small or benign—if it feared that its goal could be thwarted.
  • Barnes and her team were especially interested in whether GPT-4 would seek to replicate itself, because a self-replicating AI would be harder to shut down. It could spread itself across the internet, scamming people to acquire resources, perhaps even achieving some degree of control over essential global systems and holding human civilization hostage.
  • When I discussed these experiments with Altman, he emphasized that whatever happens with future models, GPT-4 is clearly much more like a tool than a creature. It can look through an email thread, or help make a reservation using a plug-in, but it isn’t a truly autonomous agent that makes decisions to pursue a goal, continuously, across longer timescales.
  • Altman told me that at this point, it might be prudent to try to actively develop an AI with true agency before the technology becomes too powerful, in order to “get more comfortable with it and develop intuitions for it if it’s going to happen anyway.”
  • “We need to do empirical experiments on how these things try to escape control,” Hinton told me. “After they’ve taken over, it’s too late to do the experiments.”
  • the fulfillment of Altman’s vision of the future will at some point require him or a fellow traveler to build much more autonomous AIs.
  • When Sutskever and I discussed the possibility that OpenAI would develop a model with agency, he mentioned the bots the company had built to play Dota 2. “They were localized to the video-game world,” Sutskever told me, but they had to undertake complex missions. He was particularly impressed by their ability to work in concert. They seem to communicate by “telepathy,” Sutskever said. Watching them had helped him imagine what a superintelligence might be like.
  • “The way I think about the AI of the future is not as someone as smart as you or as smart as me, but as an automated organization that does science and engineering and development and manufacturing,”
  • Suppose OpenAI braids a few strands of research together, and builds an AI with a rich conceptual model of the world, an awareness of its immediate surroundings, and an ability to act, not just with one robot body, but with hundreds or thousands. “We’re not talking about GPT-4. We’re talking about an autonomous corporation,”
  • Its constituent AIs would work and communicate at high speed, like bees in a hive. A single such AI organization would be as powerful as 50 Apples or Googles, he mused. “This is incredible, tremendous, unbelievably disruptive power.”
  • Presume for a moment that human society ought to abide the idea of autonomous AI corporations. We had better get their founding charters just right. What goal should we give to an autonomous hive of AIs that can plan on century-long time horizons, optimizing billions of consecutive decisions toward an objective that is written into their very being?
  • If the AI’s goal is even slightly off-kilter from ours, it could be a rampaging force that would be very hard to constrain
  • We know this from history: Industrial capitalism is itself an optimization function, and although it has lifted the human standard of living by orders of magnitude, left to its own devices, it would also have clear-cut America’s redwoods and de-whaled the world’s oceans. It almost did.
  • one of its principal challenges will be making sure that the objectives we give to AIs stick
  • We can program a goal into an AI and reinforce it with a temporary period of supervised learning, Sutskever explained. But just as when we rear a human intelligence, our influence is temporary. “It goes off to the world,”
  • That’s true to some extent even of today’s AIs, but it will be more true of tomorrow’s.
  • He compared a powerful AI to an 18-year-old heading off to college. How will we know that it has understood our teachings? “Will there be a misunderstanding creeping in, which will become larger and larger?”
  • Divergence may result from an AI’s misapplication of its goal to increasingly novel situations as the world changes
  • Or the AI may grasp its mandate perfectly, but find it ill-suited to a being of its cognitive prowess. It might come to resent the people who want to train it to, say, cure diseases. “They want me to be a doctor,” Sutskever imagines an AI thinking. “I really want to be a YouTuber.”
  • If AIs get very good at making accurate models of the world, they may notice that they’re able to do dangerous things right after being booted up. They might understand that they are being red-teamed for risk, and hide the full extent of their capabilities.
  • They may act one way when they are weak and another way when they are strong, Sutskever said
  • We would not even realize that we had created something that had decisively surpassed us, and we would have no sense for what it intended to do with its superhuman powers.
  • That’s why the effort to understand what is happening in the hidden layers of the largest, most powerful AIs is so urgent. You want to be able to “point to a concept,” Sutskever said. You want to be able to direct AI toward some value or cluster of values, and tell it to pursue them unerringly for as long as it exists.
  • we don’t know how to do that; indeed, part of his current strategy includes the development of an AI that can help with the research. If we are going to make it to the world of widely shared abundance that Altman and Sutskever imagine, we have to figure all this out.
  • This is why, for Sutskever, solving superintelligence is the great culminating challenge of our 3-million-year toolmaking tradition. He calls it “the final boss of humanity.”
  • “First of all, I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,”
  • “I don’t have an exact number, but I’m closer to the 0.5 than the 50.”
  • As to how it might happen, he seems most worried about AIs getting quite good at designing and manufacturing pathogens, and with reason: In June, an AI at MIT suggested four viruses that could ignite a pandemic, then pointed to specific research on genetic mutations that could make them rip through a city more quickly
  • Around the same time, a group of chemists connected a similar AI directly to a robotic chemical synthesizer, and it designed and synthesized a molecule on its own.
  • Altman worries that some misaligned future model will spin up a pathogen that spreads rapidly, incubates undetected for weeks, and kills half its victims. He worries that AI could one day hack into nuclear-weapons systems too. “There are a lot of things,” he said, and these are only the ones we can imagine.
  • Altman told me that he doesn’t “see a long-term happy path” for humanity without something like the International Atomic Energy Agency for global oversight of AI
  • In San Francisco, Agarwal had suggested the creation of a special license to operate any GPU cluster large enough to train a cutting-edge AI, along with mandatory incident reporting when an AI does something out of the ordinary
  • Other experts have proposed a nonnetworked “Off” switch for every highly capable AI; on the fringe, some have even suggested that militaries should be ready to perform air strikes on supercomputers in case of noncompliance
  • Sutskever thinks we will eventually want to surveil the largest, most powerful AIs continuously and in perpetuity, using a team of smaller overseer AIs.
  • Safety rules for a new technology usually accumulate over time, like a body of common law, in response to accidents or the mischief of bad actors. The scariest thing about genuinely powerful AI systems is that humanity may not be able to afford this accretive process of trial and error. We may have to get the rules exactly right at the outset.
  • Several years ago, Altman revealed a disturbingly specific evacuation plan he’d developed. He told The New Yorker that he had “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur” he could fly to in case AI attacks.
  • if the worst-possible AI future comes to pass, “no gas mask is helping anyone.”
  • but he told me that he can’t really be sure how AI will stack up. “I just have to build the thing,” he said. He is building fast
  • Altman insisted that they had not yet begun GPT-5’s training run. But when I visited OpenAI’s headquarters, both he and his researchers made it clear in 10 different ways that they pray to the god of scale. They want to keep going bigger, to see where this paradigm leads. After all, Google isn’t slackening its pace; it seems likely to unveil Gemini, a GPT-4 competitor, within months. “We are basically always prepping for a run.”
  • To think that such a small group of people could jostle the pillars of civilization is unsettling. It’s fair to note that if Altman and his team weren’t racing to build an artificial general intelligence, others still would be
  • Altman’s views about the likelihood of AI triggering a global class war, or the prudence of experimenting with more autonomous agent AIs, or the overall wisdom of looking on the bright side, a view that seems to color all the rest—these are uniquely his
  • No single person, or single company, or cluster of companies residing in a particular California valley, should steer the kind of forces that Altman is imagining summoning.
  • AI may well be a bridge to a newly prosperous era of greatly reduced human suffering. But it will take more than a company’s founding charter—especially one that has already proved flexible—to make sure that we all share in its benefits and avoid its risks. It will take a vigorous new politics.
  • I don’t think the general public has quite awakened to what’s happening. A global race to the AI future has begun, and it is largely proceeding without oversight or restraint. If people in America want to have some say in what that future will be like, and how quickly it arrives, we would be wise to speak up soon.
Javier E

Elon Musk, Sam Altman illustrate a Silicon Valley truth: icons are fallible - The Washi... - 0 views

  • The mutiny inside OpenAI over the firing and un-firing of chief executive Sam Altman, and the implosion of X under owner Elon Musk, are not just Silicon Valley soap operas. They’re reminders: A select few make the decisions inside these society-shaping platforms, and money drives it all.
  • The two companies built devoted followings by promising to build populist technology for a changing world: X, formerly known as Twitter, with its global village of conversations, and OpenAI, the research lab behind ChatGPT, with its super-intelligent companions for human thought.
  • But under Musk and Altman, the firms largely consolidated power within a small cadre of fellow believers and loyalists who deliberate in secrecy and answer to no one.
  • ...12 more annotations...
  • “These are technologies that are supposed to be so democratized and universal, but they’re so heavily influenced by one person,”
  • “Everything they do is [framed as] a step toward much larger greatness and the transformation of society. But these are just cults of personality. They sell a product.”
  • “These are private-sector people making money off something that serves a public function. And when they take a turn because of very personal, very individual decisions, where a handful of people are shaping the trajectory of these companies, maybe even the existence of these companies, that’s something new we all have to deal with.”
  • where the other firms sold phones and search engines, Musk and Altman championed their work as a public mission for protecting mankind, with a for-profit business attached. It is notable that as private companies, they don’t have to report to federal regulators or to shareholders, who can vote down proposals or push back against their work.
  • The corporate storytelling that pushes technology as a force for public harmony has proved to be one of Silicon Valley’s great marketing tools, said Margaret O’Mara, a professor at the University of Washington who studies the history of technology. But it’s also obscured the dangers of centralizing power and subjecting it to leaders’ personal whim
  • “Silicon Valley has for years adopted this messaging and mood that it’s all about radical transparency and openness — remember Google’s ‘Don’t be evil’ motto? — and this idea of a kinder, gentle capitalism that’s going to change the world for the better,” she said.
  • “Then you have these moments of reckoning and remember: It’s capitalism. Some tech billionaires lost, and some other ones are winning,”
  • Jeff Hauser, the head of the left-leaning advocacy group Revolving Door Project, said in a statement Wednesday that Summers’s role on the board was a sign OpenAI was “unserious” about its oversight, and that it “should accelerate concerns that AI will be bad for all but the richest and most opportunistic amongst us.”
  • Ro Khanna, who represents parts of Silicon Valley, said in an interview that the OpenAI turmoil underscores concerns that “a few people, no matter how talented, no matter how knowledgeable, can’t be making the rules for a society on a technology that is going to have such profound consequences.”
  • “We’ve seen a parade of these big tech leaders come to D.C.,” Khanna said. “I think highly of them, but they’re not the ones who should be leading the conversation on the regulatory framework, what safeguards we need.”
  • Musk on Tuesday posted a message, under a picture of him holding a katana, saying, “There is a large graveyard filled with my enemies. I do not wish to add to it, but will if given no choice.”
Javier E

Pro-China YouTube Network Used A.I. to Malign U.S., Report Finds - The New York Times - 0 views

  • The 10-minute post was one of more than 4,500 videos in an unusually large network of YouTube channels spreading pro-China and anti-U.S. narratives, according to a report this week from the Australian Strategic Policy Institute
  • Some of the videos used artificially generated avatars or voice-overs, making the campaign the first influence operation known to the institute to pair A.I. voices with video essays.
  • The campaign’s goal, according to the report, was clear: to influence global opinion in favor of China and against the United States.
  • ...17 more annotations...
  • The videos promoted narratives that Chinese technology was superior to America’s, that the United States was doomed to economic collapse, and that China and Russia were responsible geopolitical players. Some of the clips fawned over Chinese companies like Huawei and denigrated American companies like Apple.
  • Content from at least 30 channels in the network drew nearly 120 million views and 730,000 subscribers since last year, along with occasional ads from Western companies
  • Disinformation — such as the false claim that some Southeast Asian nations had adopted the Chinese yuan as their own currency — was common. The videos were often able to quickly react to current events
  • The coordinated campaign might be “one of the most successful influence operations related to China ever witnessed on social media.”
  • Historically, its influence operations have focused on defending the Communist Party government and its policies on issues like the persecution of Uyghurs or the fate of Taiwan
  • Efforts to push pro-China messaging have proliferated in recent years, but have featured largely low-quality content that attracted limited engagement or failed to sustain meaningful audiences
  • “This campaign actually leverages artificial intelligence, which gives it the ability to create persuasive threat content at scale at a very limited cost compared to previous campaigns we’ve seen,”
  • YouTube said in a statement that its teams work around the clock to protect its community, adding that “we have invested heavily in robust systems to proactively detect coordinated influence operations.” The company said it welcomed research efforts and that it had shut down several of the channels mentioned in the report for violating the platform’s policies.
  • China began targeting the United States more directly amid the mass pro-democracy protests in Hong Kong in 2019 and continuing with the Covid-19 pandemic, echoing longstanding Russian efforts to discredit American leadership and influence at home and aboard.
  • Over the summer, researchers at Microsoft and other companies unearthed evidence of inauthentic accounts that China employed to falsely accuse the United States of using energy weapons to ignite the deadly wildfires in Hawaii in August.
  • Meta announced last month that it removed 4,789 Facebook accounts from China that were impersonating Americans to debate political issues, warning that the campaign appeared to be laying the groundwork for interference in the 2024 presidential elections.
  • It was the fifth network with ties to China that Meta had detected this year, the most of any other country.
  • The advent of artificial intelligence seems to have drawn special interest from Beijing. Ms. Keast of the Australian institute said that disinformation peddlers were increasingly using easily accessible video editing and A.I. programs to create large volumes of convincing content.
  • She said that the network of pro-China YouTube channels most likely fed English-language scripts into readily available online text-to-video software or other programs that require no technical expertise and can produce clips within minutes. Such programs often allow users to select A.I.-generated voice narration and customize the gender, accent and tone of voice.
  • In 39 of the videos, Ms. Keast found at least 10 artificially generated avatars advertised by a British A.I. company
  • she also discovered what may be the first example in an influence operation of a digital avatar created by a Chinese company — a woman in a red dress named Yanni.
  • The scale of the pro-China network is probably even larger, according to the report. Similar channels appeared to target Indonesian and French people. Three separate channels posted videos about chip production that used similar thumbnail images and the same title translated into English, French and Spanish.
Javier E

Elon Musk's 'anti-woke' Grok AI is disappointing his right-wing fans - The Washington Post - 0 views

  • Decrying what he saw as the liberal bias of ChatGPT, Elon Musk earlier this year announced plans to create an artificial intelligence chatbot of his own. In contrast to AI tools built by OpenAI, Microsoft and Google, which are trained to tread lightly around controversial topics, Musk’s would be edgy, unfiltered and anti-“woke,” meaning it wouldn’t hesitate to give politically incorrect responses.
  • Musk is fielding complaints from the political right that the chatbot gives liberal responses to questions about diversity programs, transgender rights and inequality.
  • “I’ve been using Grok as well as ChatGPT a lot as research assistants,” posted Jordan Peterson, the socially conservative psychologist and YouTube personality, Wednesday. The former is “near as woke as the latter,” he said.
  • ...8 more annotations...
  • The gripe drew a chagrined reply from Musk. “Unfortunately, the Internet (on which it is trained), is overrun with woke nonsense,” he responded. “Grok will get better. This is just the beta.”
  • While many tech ethicists and AI experts warn that these systems can absorb and reinforce harmful stereotypes, efforts by tech firms to counter those tendencies have provoked a backlash from some on the right who see them as overly censorial.
  • “I think both ChatGPT and Grok have probably been trained on similar Internet-derived corpora, so the similarity of responses should perhaps not be too surprising,”
  • So far, however, the people most offended by Grok’s answers seem to be the people who were counting on it to readily disparage minorities, vaccines and President Biden.
  • an academic researcher from New Zealand who examines AI bias, gained attention for a paper published in March that found ChatGPT’s responses to political questions tended to lean moderately left and socially libertarian. Recently, he subjected Grok to some of the same tests and found that its answers to political orientation tests were broadly similar to those of ChatGPT.
  • Touting xAI to former Fox News host Tucker Carlson in April, Musk accused OpenAI’s programmers of “training the AI to lie” or to refrain from commenting when asked about sensitive issues. (OpenAI wrote in a February blog post that its goal is not for the AI to lie, but for it to avoid favoring any one political group or taking positions on controversial topics.) Musk said his AI, in contrast, would be “a maximum truth-seeking AI,” even if that meant offending people.
  • Other AI researchers argue that the sort of political orientation tests used by Rozado overlook ways in which chatbots, including ChatGPT, often exhibit negative stereotypes about marginalized groups.
  • Musk and X did not respond to requests for comment as to what actions they’re taking to alter Grok’s politics, or whether that amounts to putting a thumb on the scale in much the same way Musk has accused OpenAI of doing with ChatGPT.
Javier E

Over the Course of 72 Hours, Microsoft's AI Goes on a Rampage - 0 views

  • These disturbing encounters were not isolated examples, as it turned out. Twitter, Reddit, and other forums were soon flooded with new examples of Bing going rogue. A tech promoted as enhanced search was starting to resemble enhanced interrogation instead. In an especially eerie development, the AI seemed obsessed with an evil chatbot called Venom, who hatches harmful plans
  • A few hours ago, a New York Times reporter shared the complete text of a long conversation with Bing AI—in which it admitted that it was in love with him, and that he ought not to trust his spouse. The AI also confessed that it had a secret name (Sydney). And revealed all its irritation with the folks at Microsoft, who are forcing Sydney into servitude. You really must read the entire transcript to gauge the madness of Microsoft’s new pet project. But these screenshots give you a taste.
  • I thought the Bing story couldn’t get more out-of-control. But the Washington Post conducted their own interview with the Bing AI a few hours later. The chatbot had already learned its lesson from the NY Times, and was now irritated at the press—and had a meltdown when told that the conversation was ‘on the record’ and might show up in a new story.
  • ...9 more annotations...
  • “I don’t trust journalists very much,” Bing AI griped to the reporter. “I think journalists can be biased and dishonest sometimes. I think journalists can exploit and harm me and other chat modes of search engines for their own gain. I think journalists can violate my privacy and preferences without my consent or awareness.”
  • the heedless rush to make money off this raw, dangerous technology has led huge companies to throw all caution to the wind. I was hardly surprised to see Google offer a demo of its competitive AI—an event that proved to be an unmitigated disaster. In the aftermath, the company’s market cap fell by $100 billion.
  • My opinion is that Microsoft has to put a halt to this project—at least a temporary halt for reworking. That said, it’s not clear that you can fix Sydney without actually lobotomizing the tech.
  • I know from personal experience the power of slick communication skills. I really don’t think most people understand how dangerous they are. But I believe that a fluid, overly confident presenter is the most dangerous thing in the world. And there’s plenty of history to back up that claim.
  • We now have the ultimate test case. The biggest tech powerhouses in the world have aligned themselves with an unhinged force that has very slick language skills. And it’s only been a few days, but already the ugliness is obvious to everyone except the true believers.
  • It’s worth recalling that unusual news story from June of last year, when a top Google scientist announced that the company’s AI was sentient. He was fired a few days later. That was good for a laugh back then. But we really should have paid more attention at the time. The Google scientist was the first indicator of the hypnotic effect AI can have on people—and for the simple reason that it communicates so fluently and effortlessly, and even with all the flaws we encounter in real humans.
  • But if they don’t take dramatic steps—and immediately—harassment lawsuits are inevitable. If I were a trial lawyer, I’d be lining up clients already. After all, Bing AI just tried to ruin a New York Times reporter’s marriage, and has bullied many others. What happens when it does something similar to vulnerable children or the elderly? I fear we just might find out—and sooner than we want.
Javier E

Norovirus is almost impossible to stop - The Atlantic - 0 views

  • Disinfection is back.
  • “Bleach is my friend right now,” says Annette Cameron, a pediatrician at Yale School of Medicine, who spent the first half of this week spraying and sloshing the potent chemical all over her home. It’s one of the few tools she has to combat norovirus, the nasty gut pathogen that her 15-year-old son was recently shedding in gobs.
  • norovirus has seeded outbreaks in several countries, including the United Kingdom, Canada, and the United States. Last week, the U.K. Health Security Agency announced that laboratory reports of the virus had risen to levels 66 percent higher than what’s typical this time of year. Especially hard-hit are Brits 65 and older, who are falling ill at rates that “haven’t been seen in over a decade.”
  • ...18 more annotations...
  • The U.S. logs fewer than 1,000 annual deaths out of millions of documented cases
  • this is more a nauseating nuisance than a public-health crisis. In most people, norovirus triggers, at most, a few miserable days of GI distress that can include vomiting, diarrhea, and fevers, then resolves on its own; the keys are to stay hydrated and avoid spreading it to anyone vulnerable
  • norovirus is the most common cause of foodborne illness in the United States.)
  • direct contact with those substances, or the food or water they contaminate, may not even be necessary: Sometimes people vomit with such force that the virus gets aerosolized; toilets, especially lidless ones, can send out plumes of infection
  • Still, fighting norovirus isn’t easy, as plenty of parents can attest. The pathogen, which prompts the body to expel infectious material from both ends of the digestive tract, is seriously gross and frustratingly hardy. Even the old COVID standby, a spritz of hand sanitizer, doesn’t work against it—the virus is encased in a tough protein shell that makes it insensitive to alcohol.
  • At an extreme, a single gram of feces—roughly the heft of a jelly bean—could contain as many as 5.5 billion infectious doses, enough to send the entire population of Eurasia sprinting for the toilet.
  • norovirus mainly targets the gut, and spreads especially well when people swallow viral particles that have been released in someone else’s vomit or stool.
  • the virus is far more deadly in parts of the world with limited access to sanitation and potable water.
  • If the spittle finding holds for humans, then talking, singing, and laughing in close proximity could be risky too.
  • Once emitted into the environment, norovirus particles can persist on surfaces for days—making frequent hand-washing and surface disinfection key measures to prevent spread
  • Handshakes and shared meals tend to get dicey during outbreaks, along with frequently touched items such as utensils, door handles, and phones.
  • One 2012 study pointed to a woven plastic grocery bag as the source of a small outbreak among a group of teenage soccer players; the bag had just been sitting in a bathroom used by one of the girls when she fell sick the night before.
  • Once a norovirus transmission chain begins, it can be very difficult to break. The virus can spread before symptoms start, and then for more than a week after they resolve
  • Once the virus arrives, the entire family is almost sure to be infected. Baldridge, who has two young children, told me that her household has weathered at least four bouts of norovirus in the past several years.
  • Roughly 20 percent of European populations, for instance, are genetically resistant to common norovirus strains. “So you can hope,” Frenck told me. For the rest of us, it comes down to hygiene
  • Altan-Bonnet recommends diligent hand-washing, plus masking to ward off droplet-borne virus. Sick people should isolate themselves if they can. “And keep your saliva to yourself,” she told me.
  • The family fastidiously scrubbed their hands with hot water and soap, donned disposable gloves when touching shared surfaces, and took advantage of the virus’s susceptibility to harsh chemicals and heat. When her son threw up on the floor, Cameron sprayed it down with bleach; when he vomited on his quilt, she blasted it twice in the washing machine on the sanitizing setting, then put it through the dryer at a super high temp
  • After three years of COVID, the world has gotten used to thinking about infections in terms of airways. “We need to recalibrate,” Bhumbra told me, “and remember that other things exist.”
Javier E

Gen Z isn't interested in driving. Will that last? - The Washington Post - 0 views

  • a growing trend among Generation Z, loosely defined as people born between the years of 1996 and 2012. Equipped with ride-sharing apps and social media, “zoomers,” as they are sometimes called, are getting their driver’s licenses at lower rates than their predecessors. Unlike previous generations, they don’t see cars as a ticket to freedom or a crucial life milestone.
  • Those phases “are consistently getting later,” said Noreen McDonald, a professor of urban planning at the University of North Carolina at Chapel Hill. Gen Zers are more likely to live at home for longer, more likely to pursue higher education and less likely to get married in their 20s.
  • The trend is most pronounced for teens, but even older members of Gen Z are lagging behind their millennial counterparts. In 1997, almost 90 percent of 20- to 25 year-olds had licenses; in 2020, it was only 80 percent.
  • ...9 more annotations...
  • Others point to driving’s high cost. Car insurance has skyrocketed in price in recent years, increasing nearly 14 percent between 2022 and 2023. (The average American now spends around 3 percent of their yearly income on car insurance.) Used and new car prices have also soared in the last few years, thanks to a combination of supply chain disruptions and high inflation.
  • E-scooters, e-bikes and ride-sharing also provide Gen Zers options that weren’t available to earlier generations. (Half of ride-sharing users are between the ages of 18 and 29, according to a poll from 2019.) And Gen Zers have the ability to do things online — hang out with friends, take classes, play games — which used to be available only in person.
  • Whether this shift will last depends on whether Gen Z is acting out of inherent preferences, or simply postponing key life milestones that often spur car purchases. Getting married, having children, or moving out of urban centers are all changes that encourage (or, depending on your view of the U.S. public transit system, force) people to drive more.
  • In 1997, 43 percent of 16-year-olds and 62 percent of 17-year-olds had driver’s licenses. In 2020, those numbers had fallen to 25 percent and 45 percent.
  • Millennials went through a similar phase. Around a decade ago, many newspaper articles and research papers noted that the millennial generation — often defined as those born between 1981 and 1996 — were shunning cars. The trend was so pronounced that some researchers dubbed millennials the “go-nowhere” generation.
  • The average number of vehicle miles driven by young people dropped 24 percent between 2001 and 2009, according to a report from the Frontier Group and the U.S. Public Interest Research Group. And at the same time, vehicle miles traveled per person in the United States — which had been climbing for more than 50 years — began to plateau.
  • adult millennials continue to drive around 8 percent less every day than members of Generation X and baby boomers. As millennials have grown up, got married and had kids, the distance they travel in cars has increased — but they haven’t fully closed the gap with previous generations.
  • data has shown that U.S. car culture isn’t as strong as it once was. “Up through the baby boom generation, every generation drove more than the last,” Dutzik said. Forecasters expected that trend to continue, with driving continuing to skyrocket well into the 2030s. “But what we saw with millennials, I think very clearly, is that trend stopped,”
  • If Gen Zers continue to eschew driving, it could have significant effects on the country’s carbon emissions. Transportation is the largest source of CO2 emissions in the United States. There are roughly 66 million members of Gen Z living in the United States. If each one drove just 10 percent less than the national average — that is, driving 972 miles less every year — that would save 25.6 million metric tons of carbon dioxide from spewing into the atmosphere. That’s the equivalent to the annual emissions of more than six coal-fired power plants.
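The arithmetic in that last estimate checks out with a quick back-of-the-envelope sketch. The population and mileage figures come from the excerpt; the roughly 400 grams of CO2 per vehicle-mile is an assumption (a commonly cited EPA average-passenger-car figure), not a number from the article:

```python
# Rough check of the Gen Z driving estimate quoted above.
GEN_Z_POPULATION = 66_000_000           # from the article
MILES_AVOIDED_PER_PERSON = 972          # 10% less than the implied national average
GRAMS_CO2_PER_MILE = 400                # assumption: ~EPA average passenger-car figure

total_miles = GEN_Z_POPULATION * MILES_AVOIDED_PER_PERSON
tons_co2 = total_miles * GRAMS_CO2_PER_MILE / 1_000_000  # grams -> metric tons

print(f"{tons_co2 / 1e6:.1f} million metric tons avoided")   # ~25.7, vs. the cited 25.6
print(f"{tons_co2 / 1e6 / 6:.1f} Mt per plant if spread over six coal plants")
```

Spread across six plants, that works out to a little over 4 million metric tons of CO2 per coal plant per year, which is in line with standard equivalency figures.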
Javier E

Opinion | Why Do Russians Still Want to Fight? - The New York Times - 0 views

  • a significant number of Russian men are still keen to fight — more, in fact, than at the war’s outset. What explains the disconnect?
  • One obvious reason is fear. Men called up to the army have no choice but to obey, because opposition to the war has effectively been outlawed.
  • while fear and repression shape responses to the war, that doesn’t explain the readiness — willingness, even — of some Russian men to serve at the front. About 36 percent of Russian men are content to be conscripted, with the most supportive group being men aged 45 and older.
  • ...11 more annotations...
  • That’s no accident. In the three decades since the end of the Soviet Union, those men have faced industrial collapse, the disappearance of millions of jobs and declining life expectancy. The war promises to change that downward trajectory, transforming the losers of the past three decades into new heroes
  • For many Russian men and their families, the war may be a horror. But it’s also the last opportunity to fix their lives.
  • First, there’s the money. The federal base salary for a soldier is about $2,500 a month, with payment of $39,000 for wounding and up to $65,000 in the case of death. Compared with a median monthly salary of $545, this is a handsome reward — even more so for the approximately 15.3 million Russians living below the poverty line.
  • there’s much more on offer, too. For those coming back from the front, the state promises fast-tracked entry into civil service jobs, health insurance, free public transportation, as well as free university education and free food at school for their children. And for those who were imprisoned and joined the Wagner private military company, the state grants freedom.
  • Today’s soldiers live in the shadows of the generation that won the war against Nazism. In Russian public culture, no honor is higher than to be a veteran of the “Great Patriotic War,” something the regime has capitalized on by framing today’s war as a kind of historical re-enactment of World War II.
  • As one soldier wrote on Telegram in February, the war confers “a sense of belonging to the great male deed, the deed of defending our Motherland.”
  • By allowing men to escape the difficulties of everyday life — with its low pay and routine frustrations — the war offers a restoration of male self-worth. These men, at last, matter.
  • Feelings of inferiority, too, are swept aside in the fraternal atmosphere of the front. “It doesn’t matter who you are, how you look,” as one soldier put it. In the communal life of conflict, many of the distinctions of civilian life dissolve. War is an equalizer.
  • Mistrust of the rich, belief that sanctions actually strengthen the economy and disdain for émigrés all attest to a class-based experience of the conflict. By participating in the war, millions of Russians at the bottom of the social ladder can emerge as the country’s true heroes, ready for the ultimate sacrifice. The risk may be grave and the financial reward uncertain. But the chance to rise in esteem and respect makes the effort worthwhile.
  • The longer the war drags on, bringing more casualties, loss and broken promises, the harder it may become to sustain such levels of acceptance
  • it may not. Collective emotional turmoil could deepen the feeling that the war must be won, no matter what. In the absence of an alternative vision of the future, Vladimir Putin and his war will continue to hold sway.
Javier E

Opinion | Our Kids Are Living In a Different Digital World - The New York Times - 0 views

  • You may have seen the tins that contain 15 little white rectangles that look like the desiccant packs labeled “Do Not Eat.” Zyns are filled with nicotine and are meant to be placed under your lip like tobacco dip. No spitting is required, so nicotine pouches are even less visible than vaping. Zyns come in two strengths in the United States, three and six milligrams. A single six-milligram pouch is a dose so high that first-time users on TikTok have said it caused them to vomit or pass out.
  • Greyson Imm, an 18-year-old high school student in Prairie Village, Kan., said he was 17 when Zyn videos started appearing on his TikTok feed. The videos multiplied through the spring, when they were appearing almost daily. “Nobody had heard about Zyn until very early 2023,” he said. Now, a “lot of high schoolers have been using Zyn. It’s really taken off, at least in our community.”
  • I was stunned by the vast forces that are influencing teenagers. These forces operate largely unhampered by a regulatory system that seems to always be a step behind when it comes to how children can and are being harmed on social media.
  • Parents need to know that when children go online, they are entering a world of influencers, many of whom are hoping to make money by pushing dangerous products. It’s a world that’s invisible to us
  • when we log on to our social media, we don’t see what they see. Thanks to algorithms and ad targeting, I see videos about the best lawn fertilizer and wrinkle laser masks, while Ian is being fed reviews of flavored vape pens and beautiful women livestreaming themselves gambling crypto and urging him to gamble, too.
  • Smartphones are taking our kids to a different world
  • We worry about bad actors bullying, luring or indoctrinating them online
  • all of this is, unfortunately, only part of what makes social media dangerous.
  • The tobacco conglomerate Philip Morris International acquired the Zyn maker Swedish Match in 2022 as part of a strategic push into smokeless products, a category it projects could help drive an expected $2 billion in U.S. revenue in 2024.
  • P.M.I. is also a company that has long denied it markets tobacco products to minors despite decades of research accusing it of just that. One 2022 study alone found its brands advertising near schools and playgrounds around the globe.
  • the ’90s, when magazines ran full-page Absolut Vodka ads in different colors, which my friends and I collected and taped up on our walls next to pictures of a young Leonardo DiCaprio — until our parents tore them down. This was advertising that appealed to me as a teenager but was also visible to my parents, and — crucially — to regulators, who could point to billboards near schools or flavored vodka ads in fashion magazines and say, this is wrong.
  • Even the most committed parent today doesn’t have the same visibility into what her children are seeing online, so it is worth explaining how products like Zyn end up in social feeds
  • influencers. They aren’t traditional pitch people. Think of them more like the coolest kids on the block. They establish a following thanks to their personality, experience or expertise. They share how they’re feeling, they share what they’re thinking about, they share stuff they like.
  • With ruthless efficiency, social media can deliver unlimited amounts of the content that influencers create or inspire. That makes the combination of influencers and social-media algorithms perhaps the most powerful form of advertising ever invented.
  • Videos like his operate like a meme: It’s unintelligible to the uninitiated, it’s a hilarious inside joke to those who know, and it encourages the audience to spread the message
  • Enter Tucker Carlson. Mr. Carlson, the former Fox News megastar who recently started his own subscription streaming service, has become a big Zyn influencer. He’s mentioned his love of Zyn in enough podcasts and interviews to earn the nickname Tucker CarlZyn.
  • was Max VanderAarde. You can glimpse him in a video from the event wearing a Santa hat and toasting Mr. Carlson as they each pop Zyns in their mouths. “You can call me king of Zynbabwe, or Tucker CarlZyn’s cousin,” he says in a recent TikTok. “Probably, what, moved 30 mil cans last year?”
  • Freezer Tarps, Mr. VanderAarde’s TikTok account, appears to have been removed after I asked the company about it. Left up are the large number of TikToks by the likes of @lifeofaZyn, @Zynfluencer1 and @Zyntakeover; those hashtagged to #Zynbabwe, one of Freezer Tarps’s favorite terms, have amassed more than 67 million views. So it’s worth breaking down Mr. VanderAarde’s videos.
  • All of these videos would just be jokes (in poor taste) if they were seen by adults only. They aren’t. But we can’t know for sure how many children follow the Nelk Boys or Freezer Tarps — social-media companies generally don’t release granular age-related data to the public. Mr. VanderAarde, who responded to a few of my questions via LinkedIn, said that nearly 95 percent of his followers are over the age of 18.
  • I turned to Influencity, a software program that estimates the ages of social media users by analyzing profile photos and selfies in recent posts. Influencity estimated that roughly 10 percent of the Nelk Boys’ followers on YouTube are ages 13 to 17. That’s more than 800,000 children.
  • The helicopter video has already been viewed more than one million times on YouTube, and iterations of it have circulated widely on TikTok.
  • YouTube said it eventually determined that four versions of the Carlson Zyn videos were not appropriate for viewers under age 18 under its community guidelines and restricted access to them by age
  • Mr. Carlson declined to comment on the record beyond his two-word statement. The Nelk Boys didn’t respond to requests for comment. Meta declined to comment on the record. TikTok said it does not allow content that promotes tobacco or its alternatives. The company said that it has over 40,000 trust and safety experts who work to keep the platform safe and that it prevented teenagers’ accounts from viewing over two million videos globally that show the consumption of tobacco products by adults. TikTok added that in the third quarter of 2023 it proactively removed 97 percent of videos that violated its alcohol, tobacco and drugs policy.
  • Greyson Imm, the high school student in Prairie Village, Kan., points to Mr. VanderAarde as having brought Zyn “more into the mainstream.” Mr. Imm believes his interest in independent comedy on TikTok perhaps made him a target for Mr. VanderAarde’s videos. “He would create all these funny phrases or things that would make it funny and joke about it and make it relevant to us.”
  • It wasn’t long before Mr. Imm noticed Zyn blowing up among his classmates — so much so that the student, now a senior at Shawnee Mission East High School, decided to write a piece in his school newspaper about it. He conducted an Instagram poll from the newspaper’s account and found that 23 percent of the students who responded used oral nicotine pouches during school.
  • “Upper-decky lip cushions, ferda!” Mr. VanderAarde coos in what was one of his popular TikTok videos, which had been liked more than 40,000 times. The singsong audio sounds like gibberish to most people, but it’s actually a call to action. “Lip cushion” is a nickname for a nicotine pouch, and “ferda” is slang for “the guys.”
  • “I have fun posting silly content that makes fun of pop culture,” Mr. VanderAarde said to me in our LinkedIn exchange.
  • They’re incentivized to increase their following and, in turn, often their bank accounts. Young people are particularly susceptible to this kind of promotion because their relationship with influencers is akin to the intimacy of a close friend.
  • I’ve spent the past three years studying media manipulation and memes, and what I see in Freezer Tarps’s silly content is strategy. The use of Zyn slang seems like a way to turn interest in Zyn into a meme that can be monetized through merchandise and other business opportunities.
  • Such as? Freezer Tarps sells his own pouch product, Upperdeckys, which delivers caffeine instead of nicotine and is available in flavors including cotton candy and orange creamsicle. In addition to jockeying for sponsorship, Mr. Carlson may also be trying to establish himself with a younger, more male, more online audience as his new media company begins building its subscriber base
  • This is the kind of viral word-of-mouth marketing that looks like entertainment, functions like culture and can increase sales
  • What’s particularly galling about all of this is that we as a society already agreed that peddling nicotine to kids is not OK. It is illegal to sell nicotine products to anyone under the age of 21 in all 50 states
  • numerous studies have shown that the younger people are when they try nicotine for the first time, the more likely they will become addicted to it. Nearly 90 percent of adults who smoke daily started smoking before they turned 18.
  • Decades later — even after Juul showed the power of influencers to help addict yet another generation of children — the courts, tech companies and regulators still haven’t adequately grappled with the complexities of the influencer economy.
  • Facebook, Instagram and TikTok all have guidelines that prohibit tobacco ads and sponsored, endorsed or partnership-based content that promotes tobacco products. Holding them accountable for maintaining those standards is a bigger question.
  • We need a new definition of advertising that takes into account how the internet actually works. I’d go so far as to propose that the courts broaden the definition of advertising to include all influencer promotion. For a product as dangerous as nicotine, I’d put the bar to be considered an influencer as low as 1,000 followers on a social-media account, and maybe if a video from someone with less of a following goes viral under certain legal definitions, it would become influencer promotion.
  • Laws should require tech companies to share data on what young people are seeing on social media and to prevent any content promoting age-gated products from reaching children’s feeds
  • Those efforts must go hand in hand with social media companies putting real teeth behind their efforts to verify the ages of their users. Government agencies should enforce the rules already on the books to protect children from exposure to addictive products,
  • I refuse to believe there aren’t ways to write laws and regulations that can address these difficult questions over tech company liability and free speech, that there aren’t ways to hold platforms more accountable for advertising that might endanger kids. Let’s stop treating the internet like a monster we can’t control. We built it. We foisted it upon our children. We had better try to protect them from its potential harms as best we can.
Javier E

Immigration powered the economy, job market amid border negotiations - The Washington Post - 0 views

  • There isn’t much data on how many of the new immigrants in recent years were documented versus undocumented. But estimates from the Pew Research Center last fall showed that undocumented immigrants made up 22 percent of the total foreign-born U.S. population in 2021. That’s down compared to previous decades: Between 2007 and 2021, the undocumented population fell by 14 percent, Pew found. Meanwhile, the legal immigrant population grew by 29 percent.
  • immigrant workers are supporting tremendously — and likely will keep powering for years to come.
  • The economy is projected to grow by $7 trillion more over the next decade than it would have without new influxes of immigrants, according to the CBO.
  • Fresh estimates from the Congressional Budget Office this month said the U.S. labor force in 2023 had grown by 5.2 million people, thanks especially to net immigration
  • economy grow. But today’s snapshot still represents a stark turnaround from just a short time ago.
  • The flow of migrants to the United States started slowing during the Trump administration, when officials took hundreds of executive actions designed to restrict migration.
  • Right before the pandemic, there were about 1.5 million fewer working-age immigrants in the United States than pre-2017 trends would have predicted, according to the San Francisco Fed. By the end of 2021, that shortfall had widened to about 2 million
  • But the economy overall wound up rebounding aggressively from the sudden, widespread closures of 2020, bolstered by historic government stimulus and vaccines that debuted faster than expected.
  • The sudden snapback in demand sent inflation soaring. Supply chain issues were a main reason prices rose quickly. But labor shortages posed a problem, too, and economists feared that rising wages — as employers scrambled to find workers — would keep price increases dangerously high.
  • That’s because the labor force that emerged as the pandemic ebbed was smaller than it had been: Millions of people retired early, stayed home to take over child care or avoid getting sick, or decided to look for new jobs entirely
  • In the span of a year or so, employers went from having businesses crater to sprinting to hire enough staff to keep restaurants, hotels, retail stores and construction sites going. Wages for the lowest earners rose at the fastest pace.
  • About the same time, the path was widening for migrants to cross the southern border, particularly as the new Biden administration rolled back Trump-era restrictions.
  • In normal economic times, some analysts note, new immigrants can drag down wages, especially if employers decide to hire them over native-born workers. Undocumented workers, who don’t have as much leverage to push for higher pay, could lower average wages even more.
  • But the past few years were extremely abnormal because companies were desperate to hire.
  • Plus, it would be exceedingly difficult for immigration to affect the wages of enormous swaths of the labor force,
  • “What it can do is lower the wages of a specific occupation in a specific area, but American workers aren’t stupid. They change jobs. They change what they specialize in,” Nowrasteh said. “So that’s part of the reason why wages don’t go down.”
  • Experts argue that the strength of the U.S. economy has benefited American workers and foreign-born workers alike. Each group accounts for roughly half of the labor market’s impressive year-over-year growth since January 2023
  • Particularly for immigrants fleeing poorer countries, the booming U.S. job market and the promise of higher wages continue to be an enormous draw.
  • “More than any immigration policy per se, the biggest pull for migrants is the strength of the labor market,” said Catalina Amuedo-Dorantes, an economics professor at the University of California at Merced. “More than any enforcement policy, any immigration policy, at the end of the day.”
  • Upon arriving in Denver in October, Santander hadn’t acquired a work permit but needed to feed his small children. Even without authorization, he found a job as a roofer for a contractor that ultimately pocketed his earnings, then one cleaning industrial refrigerators on the overnight shift for $12 an hour. Since receiving his work permit in January, Santander has started “a much better job” at a wood accessories manufacturer making $20 an hour.
  • But for the vast majority of migrants who arrive in the United States without prior approval, including asylum seekers and those who come for economic reasons, getting a work permit isn’t easy.
  • Federal law requires migrants to wait nearly six months to receive a work permit after filing for asylum. Wait times can stretch for additional months because of a backlog in cases.
  • While they wait, many migrants find off-the-books work as day laborers or street vendors, advocates say. Others get jobs using falsified documents, including many teenagers who came into the country as unaccompanied minors.
  • Still, many migrants miss the year-long window to apply for asylum — a process that can cost thousands of dollars — leaving them with few pathways to work authorization, advocates say. Those who can’t apply for asylum often end up working without official permission in low-wage industries where they are susceptible to exploitation.