History Readings: group items tagged computing

Javier E

'Social Order Could Collapse' in AI Era, Two Top Japan Companies Say - WSJ

  • Japan’s largest telecommunications company and the country’s biggest newspaper called for speedy legislation to restrain generative artificial intelligence, saying democracy and social order could collapse if AI is left unchecked.
  • the manifesto points to rising concern among American allies about the AI programs U.S.-based companies have been at the forefront of developing.
  • The Japanese companies’ manifesto, while pointing to the potential benefits of generative AI in improving productivity, took a generally skeptical view of the technology
  • Without giving specifics, it said AI tools have already begun to damage human dignity because the tools are sometimes designed to seize users’ attention without regard to morals or accuracy.
  • Unless AI is restrained, “in the worst-case scenario, democracy and social order could collapse, resulting in wars,” the manifesto said.
  • It said Japan should take measures immediately in response, including laws to protect elections and national security from abuse of generative AI.
  • The Biden administration is also stepping up oversight, invoking emergency federal powers last October to compel major AI companies to notify the government when developing systems that pose a serious risk to national security. The U.S., U.K. and Japan have each set up government-led AI safety institutes to help develop AI guidelines.
  • NTT and Yomiuri said their manifesto was motivated by concern over public discourse. The two companies are among Japan’s most influential in policy. The government still owns about one-third of NTT, formerly the state-controlled phone monopoly.
  • Yomiuri Shimbun, which has a morning circulation of about six million copies according to industry figures, is Japan’s most widely read newspaper. Under the late Prime Minister Shinzo Abe and his successors, the newspaper’s conservative editorial line has been influential in pushing the ruling Liberal Democratic Party to expand military spending and deepen the nation’s alliance with the U.S.
  • The Yomiuri’s news pages and editorials frequently highlight concerns about artificial intelligence. An editorial in December, noting the rush of new AI products coming from U.S. tech companies, said “AI models could teach people how to make weapons or spread discriminatory ideas.” It cited risks from sophisticated fake videos purporting to show politicians speaking.
  • NTT is active in AI research, and its units offer generative AI products to business customers. In March, it started offering these customers a large-language model it calls “tsuzumi,” which is akin to OpenAI’s ChatGPT but is designed to use less computing power and work better in Japanese-language contexts.
Javier E

I tried out an Apple Vision Pro. It frightened me | Arwa Mahdawi | The Guardian

  • Despite all the marketed use cases, the most impressive aspect of it is the immersive video
  • Watching a movie, however, feels like you’ve been transported into the content.
  • that raises serious questions about how we perceive the world and what we consider reality. Big tech companies are desperate to rush this technology out but it’s not clear how much they’ve been worrying about the consequences.
  • it is clear that its widespread adoption is a matter of when, not if. There is no debate that we are moving towards a world where “real life” and digital technology seamlessly blur
  • Over the years there have been multiple reports of people being harassed and even “raped” in the metaverse: an experience that feels scarily real because of how immersive virtual reality is. As the lines between real life and the digital world blur to a point that they are almost indistinguishable, will there be a meaningful difference between online assault and an attack in real life?
  • more broadly, spatial computing is going to alter what we consider reality
  • Researchers from Stanford and the University of Michigan recently undertook a study on the Vision Pro and other “passthrough” headsets (that’s the technical term for the feature which brings VR content into your real-world surroundings so you see what’s around you while using the device) and emerged with some stark warnings about how this tech might rewire our brains and “interfere with social connection”.
  • These headsets essentially give us all our private worlds and rewrite the idea of a shared reality. The cameras through which you see the world can edit your environment – you can walk to the shops wearing it, for example, and it might delete all the homeless people from your view and make the sky brighter.
  • “What we’re about to experience is, using these headsets in public, common ground disappears,”
  • “People will be in the same physical place, experiencing simultaneous, visually different versions of the world. We’re going to lose common ground.”
  • It’s not just the fact that our perception of reality might be altered that’s scary: it’s the fact that a small number of companies will have so much control over how we see the world. Think about how much influence big tech already has when it comes to content we see, and then multiply that a million times over. You think deepfakes are scary? Wait until they seem even more realistic.
  • We’re seeing a global rise of authoritarianism. If we’re not careful this sort of technology is going to massively accelerate it.
  • Being able to suck people into an alternate universe, numb them with entertainment, and dictate how they see reality? That’s an authoritarian’s dream. We’re entering an age where people can be mollified and manipulated like never before
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ...

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • Character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS waits more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

How We Can Control AI - WSJ

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
Javier E

Yuval Noah Harari's Apocalyptic Vision - The Atlantic

  • He shares with Jared Diamond, Steven Pinker, and Slavoj Žižek a zeal for theorizing widely, though he surpasses them in his taste for provocative simplifications.
  • In medieval Europe, he explains, “Knowledge = Scriptures × Logic,” whereas after the scientific revolution, “Knowledge = Empirical Data × Mathematics.”
  • Silicon Valley’s recent inventions invite galaxy-brain cogitation of the sort Harari is known for. The larger you feel the disruptions around you to be, the further back you reach for fitting analogies
  • Have such technological leaps been good? Harari has doubts. Humans have “produced little that we can be proud of,” he complained in Sapiens. His next books, Homo Deus: A Brief History of Tomorrow (2015) and 21 Lessons for the 21st Century (2018), gazed into the future with apprehension
  • Harari has written another since-the-dawn-of-time overview, Nexus: A Brief History of Information Networks From the Stone Age to AI. It’s his grimmest work yet
  • Harari rejects the notion that more information leads automatically to truth or wisdom. But it has led to artificial intelligence, whose advent Harari describes apocalyptically. “If we mishandle it,” he warns, “AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness.”
  • Those seeking a precedent for AI often bring up the movable-type printing press, which inundated Europe with books and led, they say, to the scientific revolution. Harari rolls his eyes at this story. Nothing guaranteed that printing would be used for science, he notes
  • Copernicus’s On the Revolutions of the Heavenly Spheres failed to sell its puny initial print run of about 500 copies in 1543. It was, the writer Arthur Koestler joked, an “all-time worst seller.”
  • The book that did sell was Heinrich Kramer’s The Hammer of the Witches (1486), which ranted about a supposed satanic conspiracy of sexually voracious women who copulated with demons and cursed men’s penises. The historian Tamar Herzig describes Kramer’s treatise as “arguably the most misogynistic text to appear in print in premodern times.” Yet it was “a bestseller by early modern standards,”
  • Kramer’s book encouraged the witch hunts that killed tens of thousands. These murderous sprees, Harari observes, were “made worse” by the printing press.
  • Ampler information flows made surveillance and tyranny worse too, Harari argues. The Soviet Union was, among other things, “one of the most formidable information networks in history,”
  • Information has always carried this destructive potential, Harari believes. Yet up until now, he argues, even such hellish episodes have been only that: episodes
  • Demagogic manias like the ones Kramer fueled tend to burn bright and flame out.
  • States ruled by top-down terror have a durability problem too, Harari explains. Even if they could somehow intercept every letter and plant informants in every household, they’d still need to intelligently analyze all of the incoming reports. No regime has come close to managing this
  • for the 20th-century states that got nearest to total control, persistent problems managing information made basic governance difficult.
  • So it was, at any rate, in the age of paper. Collecting data is now much, much easier.
  • Some people worry that the government will implant a chip in their brain, but they should “instead worry about the smartphones on which they read these conspiracy theories,” Harari writes. Phones can already track our eye movements, record our speech, and deliver our private communications to nameless strangers. They are listening devices that, astonishingly, people are willing to leave by the bedside while having sex.
  • Harari’s biggest worry is what happens when AI enters the chat. Currently, massive data collection is offset, as it has always been, by the difficulties of data analysis
  • What defense could there be against an entity that recognized every face, knew every mood, and weaponized that information?
  • Today’s political deliriums are stoked by click-maximizing algorithms that steer people toward “engaging” content, which is often whatever feeds their righteous rage.
  • Imagine what will happen, Harari writes, when bots generate that content themselves, personalizing and continually adjusting it to flood the dopamine receptors of each user.
  • Kramer’s Hammer of the Witches will seem like a mild sugar high compared with the heroin rush of content the algorithms will concoct. If AI seizes command, it could make serfs or psychopaths of us all.
  • Harari regards AI as ultimately unfathomable—and that is his concern.
  • Although we know how to make AI models, we don’t understand them. We’ve blithely summoned an “alien intelligence,” Harari writes, with no idea what it will do.
  • Last year, Harari signed an open letter warning of the “profound risks to society and humanity” posed by unleashing “powerful digital minds that no one—not even their creators—can understand, predict, or reliably control.” It called for a pause of at least six months on training advanced AI systems,
  • cynics saw the letter as self-serving. It fed the hype by insisting that artificial intelligence, rather than being a buggy product with limited use, was an epochal development. It showcased tech leaders’ Oppenheimer-style moral seriousness
  • it cost them nothing, as there was no chance their research would actually stop. Four months after signing, Musk publicly launched an AI company.
  • The economics of the Information Age have been treacherous. They’ve made content cheaper to consume but less profitable to produce. Consider the effect of the free-content and targeted-advertising models on journalism
  • Since 2005, the United States has lost nearly a third of its newspapers and more than two-thirds of its newspaper jobs, to the point where nearly 7 percent of newspaper employees now work for a single organization, The New York Times
  • we speak of “news deserts,” places where reporting has essentially vanished.
  • AI threatens to exacerbate this. With better chatbots, platforms won’t need to link to external content, because they’ll reproduce it synthetically. Instead of a Google search that sends users to outside sites, a chatbot query will summarize those sites, keeping users within Google’s walled garden.
  • a Truman Show–style bubble: personally generated content, read by voices that sound real but aren’t, plus product placement
  • this would cut off writers and publishers—the ones actually generating ideas—from readers. Our intellectual institutions would wither, and the internet would devolve into a closed loop of “five giant websites, each filled with screenshots of the other four,” as the software engineer Tom Eastman puts it.
  • Harari is Silicon Valley’s ideal of what a chatbot should be. He raids libraries, detects the patterns, and boils all of history down to bullet points. (Modernity, he writes, “can be summarised in a single phrase: humans agree to give up meaning in exchange for power.”)
  • Individual AI models cost billions of dollars. In 2023, about a fifth of venture capital in North America and Europe went to AI. Such sums make sense only if tech firms can earn enormous revenues off their product, by monopolizing it or marketing it. And at that scale, the most obvious buyers are other large companies or governments. How confident are we that giving more power to corporations and states will turn out well?
  • He discusses it as something that simply happened. Its arrival is nobody’s fault in particular.
  • In Harari’s view, “power always stems from cooperation between large numbers of humans”; it is the product of society.
  • like a chatbot, he has a quasi-antagonistic relationship with his sources, an I’ll read them so you don’t have to attitude. He mines other writers for material—a neat quip, a telling anecdote—but rarely seems taken with anyone else’s view
  • Hand-wringing about the possibility that AI developers will lose control of their creation, like the sorcerer’s apprentice, distracts from the more plausible scenario that they won’t lose control, and that they’ll use or sell it as planned. A better German fable might be Richard Wagner’s The Ring of the Nibelung: A power-hungry incel forges a ring that will let its owner rule the world—and the gods wage war over it.
  • Harari’s eyes are more on the horizon than on Silicon Valley’s economics or politics.
  • In Nexus, he proposes four principles. The first is “benevolence,” explained thus: “When a computer network collects information on me, that information should be used to help me rather than manipulate me.”
  • Harari’s other three values are decentralization of informational channels, accountability from those who collect our data, and some respite from algorithmic surveillance
  • these are fine, but they are quick, unsurprising, and—especially when expressed in the abstract, as things that “we” should all strive for—not very helpful.
  • though his persistent first-person pluralizing (“decisions we all make”) softly suggests that AI is humanity’s collective creation rather than the product of certain corporations and the individuals who run them. This obscures the most important actors in the drama—ironically, just as those actors are sapping our intellectual life, hampering the robust, informed debates we’d need in order to make the decisions Harari envisions.
  • Taking AI seriously might mean directly confronting the companies developing it
  • Harari slots easily into the dominant worldview of Silicon Valley. Despite his oft-noted digital abstemiousness, he exemplifies its style of gathering and presenting information. And, like many in that world, he combines technological dystopianism with political passivity.
  • Although he thinks tech giants, in further developing AI, might end humankind, he does not treat thwarting them as an urgent priority. His epic narratives, told as stories of humanity as a whole, do not make much room for such us-versus-them clashes.
Javier E

The Warehouse Worker Who Became a Philosopher - The Atlantic - 0 views

  • Eleven years ago, Stephen West was stocking groceries at a Safeway warehouse in Seattle. He was 24, and had been working to support himself since dropping out of high school at 16. Homeless at times, he had mainly grown up in group homes and foster-care programs up and down the West Coast after being taken away from his family at 9. He learned to find solace in books.
  • He would tell himself to be grateful for the work: “It’s manual, physical labor, but it’s better than 99.9 percent of jobs that have ever existed in human history.” By the time most kids have graduated from college, he had consumed “the entire Western canon of philosophy.”
  • A notable advantage of packing boxes in a warehouse all day is that rote, solitary work can be accomplished with headphones on. “I would just queue up audio books and listen and pause and think about it and contextualize as much as I could,” he told me. “I was at work for eight hours a day. Seven hours of it would be spent reading philosophy, listening to philosophy; a couple hours interpreting it, just thinking about it. In the last hour of the day, I’d turn on a podcast.”
  • West started his podcast, Philosophize This, in 2013. Podcasting, he realized, was the one “technological medium where there’s no barrier to entry.” He “just turned on a microphone and started talking.”
  • Within months, he was earning enough from donations to quit his warehouse job and pursue philosophy full-time. Now he has some 2 million monthly listeners on Spotify and 150,000 subscribers on YouTube, and Philosophize This holds the No. 3 spot in the country for philosophy podcasts on Apple.
  • He treats the philosophical claims of any given thinker, however outdated, within the sense-making texture of their own time, oscillating adroitly between explanation and criticism and—this is rare—refusing to condescend from the privilege of the present
  • He is, as he once described the 10th-century Islamic scholar Al-Fārābī, “a peacemaker between different time periods.” All the episodes display the qualities that make West so compelling: unpretentious erudition, folksy delivery, subtle wit, and respect for a job well done.
  • “He’s coming at this stuff from the perspective of a person actually searching for interesting answers, not as someone who is seeking academic legitimacy,” Shapiro said. “Too much philosophy is directed toward the other philosophers in the walled garden. He’s doing the opposite.”
  • “Academic philosophy is cloistered and impenetrable, but it needn’t be,” he told me. West, he said, “doesn’t preen or preach or teach; he just talks to you like a smart, curious adult.”
  • I counted just six books on a shelf next to a pair of orange dumbbells: The Complete Essays of Montaigne; The Creative Act, by Rick Rubin; Richard Harland’s Literary Theory From Plato to Barthes; an anthology of feminist theory; And Yet, by Christopher Hitchens; and Foucault’s The Order of Things. The rest of his reading material lives on a Kindle. “If you look at the desktop of my computer, it’ll be a ton of tabs open,” he said, laughing. “Maybe it’s the clutter you’d be expecting.”
  • He just “always wanted to be wiser,” Alina said. “I mean, when he was younger, he literally Googled who was the wisest person.” (Here we can give Socrates his flowers once again.) “That’s how he got into philosophy.”
  • All of us are, as the Spanish philosopher José Ortega y Gasset observed, inexorably the combination of our innate, inimitable selves and the circumstances in which we are embedded. “Yo soy yo y mi circunstancia.”
  • We are captive to the economic, racial, and technological limits of our times, just as we may be propelled forward in unforeseen ways by the winds of innovation.
  • Now he can design any life he likes. “I could be in Bora Bora right now,” he told me. “But I don’t want to be.” He wants to be in Puyallup with his family, in a place “where I can read and do my work and pace around and think about stuff.”
Javier E

Opinion | Artificial Intelligence Requires Specific Safety Rules - The New York Times - 0 views

  • For about five years, OpenAI used a system of nondisclosure agreements to stifle public criticism from outgoing employees. Current and former OpenAI staffers were paranoid about talking to the press. In May, one departing employee refused to sign and went public in The Times. The company apologized and scrapped the agreements. Then the floodgates opened. Exiting employees began criticizing OpenAI’s safety practices, and a wave of articles emerged about its broken promises.
  • These stories came from people who were willing to risk their careers to inform the public. How many more are silenced because they’re too scared to speak out? Since existing whistle-blower protections typically cover only the reporting of illegal conduct, they are inadequate here. Artificial intelligence can be dangerous without being illegal
  • A.I. needs stronger protections — like those in place in parts of the public sector, finance and publicly traded companies — that prohibit retaliation and establish anonymous reporting channels.
  • The company’s chief executive was briefly fired after the nonprofit board lost trust in him.
  • OpenAI has spent the last year mired in scandal
  • Whistle-blowers alleged to the Securities and Exchange Commission that OpenAI’s nondisclosure agreements were illegal.
  • Safety researchers have left the company in droves
  • Now the firm is restructuring its core business as a for-profit, seemingly prompting the departure of more key leaders
  • On Friday, The Wall Street Journal reported that OpenAI rushed testing of a major model in May, attempting to undercut a rival’s publicity; after the release, employees found out the model exceeded the company’s standards for safety. (The company told The Journal the findings were the result of a methodological flaw.)
  • This behavior would be concerning in any industry, but according to OpenAI itself, A.I. poses unique risks. The leaders of the top A.I. firms and leading A.I. researchers have warned that the technology could lead to human extinction.
  • Since more comprehensive national A.I. regulations aren’t coming anytime soon, we need a narrow federal law allowing employees to disclose information to Congress if they reasonably believe that an A.I. model poses a significant safety risk
  • But McKinsey did not hold the majority of employees’ compensation hostage in exchange for signing lifetime nondisparagement agreements, as OpenAI did.
  • People reporting violations of the Atomic Energy Act have more robust whistle-blower protections than those in most fields, while those working in biological toxins for several government departments are protected by proactive, pro-reporting guidance. A.I. workers need similar rules.
  • Many companies maintain a culture of secrecy beyond what is healthy. I once worked at the consulting firm McKinsey on a team that advised Immigration and Customs Enforcement on implementing Donald Trump’s inhumane immigration policies. I was fearful of going public
  • Congress should establish a special inspector general to serve as a point of contact for these whistle-blowers. The law should mandate companies to notify staff about the channels available to them, which they can use without facing retaliation.
  • Earlier this month, OpenAI released a highly advanced new model. For the first time, experts concluded the model could aid in the construction of a bioweapon more effectively than internet research alone could. A third party hired by the company found that the new system demonstrated evidence of “power seeking” and “the basic capabilities needed to do simple in-context scheming.”
  • OpenAI decided to publish these results, but the company still chooses what information to share. It is possible the published information paints an incomplete picture of the model’s risks.
  • The A.I. safety researcher Todor Markov — who recently left OpenAI after nearly six years with the firm — suggested one hypothetical scenario. An A.I. company promises to test its models for dangerous capabilities, then cherry-picks results to make the model look safe. A concerned employee wants to notify someone, but doesn’t know who — and can’t point to a specific law being broken. The new model is released, and a terrorist uses it to construct a novel bioweapon. Multiple former OpenAI employees told me this scenario is plausible.
  • The United States’ current arrangement of managing A.I. risks through voluntary commitments places enormous trust in the companies developing this potentially dangerous technology. Unfortunately, the industry in general — and OpenAI in particular — has shown itself to be unworthy of that trust, time and again.
  • The fate of the first attempt to protect A.I. whistle-blowers rests with Governor Gavin Newsom of California. Mr. Newsom has hinted that he will veto a first-of-its-kind A.I. safety bill, called S.B. 1047, which mandates that the largest A.I. companies implement safeguards to prevent catastrophes. The bill features whistle-blower protections, a rare point of agreement between its supporters and its critics.
  • if those legislators are serious in their support for these protections, they should introduce a federal A.I. whistle-blower protection bill. They are well positioned to do so: The letter’s organizer, Representative Zoe Lofgren, is the ranking Democrat on the House Committee on Science, Space and Technology.
  • Last month, a group of leading A.I. experts warned that as the technology rapidly progresses, “we face growing risks that A.I. could be misused to attack critical infrastructure, develop dangerous weapons or cause other forms of catastrophic harm.” These risks aren’t necessarily criminal, but they are real — and they could prove deadly. If that happens, employees at OpenAI and other companies will be the first to know. But will they tell us?
Javier E

In Memoriam: Lewis H. Lapham (1935-2024), by Harper's Magazine - 0 views

  • By drawing upon the authority of Montaigne, who begins his essay “Of Books” with what would be regarded on both Wall Street and Capitol Hill as a career-ending display of transparency:
  • I have no doubt that I often speak of things which are better treated by the masters of the craft, and with more truth. This is simply a trial [essai] of my natural faculties, and not of my acquired ones. If anyone catches me in ignorance, he will score no triumph over me, since I can hardly be answerable to another for my reasonings, when I am not answerable for them to myself, and am never satisfied with them. . . .
  • When I was thirty I assumed that by the time I was fifty I would know what I was talking about. The notice didn’t arrive in the mail. At fifty I knew less than what I thought I knew at thirty, and so I figured that by the time I was seventy, then surely, this being America, where all the stories supposedly end in the key of C major, I would have come up with a reason to believe that I had been made wise. Now I’m seventy-five, and I see no sign of a dog with a bird in its mouth.
  • On the opening of a book or the looking into a manuscript, I listen for the sound of a voice in the first-person singular, and from authors whom I read more than once I learn to value the weight of words and to delight in their meter and cadence—in Gibbon’s polyphonic counterpoint and Guedalla’s command of the subjunctive, in Mailer’s hyperbole and Dillard’s similes, in Twain’s invectives and burlesques with which he set the torch of his ferocious wit to the hospitality tents of the world’s “colossal humbug.”
  • My object was to learn, not preach, which prevented my induction into the national college of pundits but encouraged my reading of history.
  • I soon discovered that I had as much to learn from the counsel of the dead as I did from the advice and consent of the living. The reading of history damps down the impulse to slander the trend and tenor of the times, instills a sense of humor, lessens our fear of what might happen tomorrow.
  • On listening to President Barack Obama preach the doctrine of freedom-loving military invasion to the cadets at West Point, I’m reminded of the speeches that sent the Athenian army to its destruction in Sicily in 415 BC, and I don’t have to wait for dispatches from Afghanistan to suspect that the shooting script for the Pax Americana is a tale told by an idiot.
  • The common store of our shared history is what Goethe had in mind when he said that the inability to “draw on three thousand years is living hand to mouth.”
  • It isn’t with symbolic icons that men make their immortality. They do so with what they’ve learned on their travels across the frontiers of the millennia, salvaging from the wreck of time what they find to be useful or beautiful or true.
  • What preserves the voices of the great authors from one century to the next is not the recording device (the clay tablet, the scroll, the codex, the book, the computer, the iPad) but the force of imagination and the power of expression. It is the strength of the words themselves, not their product placement, that invites the play of mind and induces a change of heart.
  • How do we know what we think we know? Why is it that the more information we collect the less likely we are to grasp what it means? Possibly because a montage is not a narrative, the ear is not the eye, a pattern recognition is not a figure or a form of speech.
  • The surfeit of new and newer news comes so quickly to hand that within the wind tunnels of the “innovative delivery strategies” the data blow away and shred. The time is always now, and what gets lost is all thought of what happened yesterday, last week, three months or three years ago. Unlike moths and fruit flies, human beings bereft of memory, even as poor a memory as Montaigne’s or my own, tend to become disoriented and confused.
  • I know no other way out of what is both the maze of the eternal present and the prison of the self except with a string of words.
Javier E

OpenAI Whistle-Blowers Describe Reckless and Secretive Culture - The New York Times - 0 views

  • A group of OpenAI insiders is blowing the whistle on what they say is a culture of recklessness and secrecy at the San Francisco artificial intelligence company, which is racing to build the most powerful A.I. systems ever created.
  • The group, which includes nine current and former OpenAI employees, has rallied in recent days around shared concerns that the company has not done enough to prevent its A.I. systems from becoming dangerous.
  • The members say OpenAI, which started as a nonprofit research lab and burst into public view with the 2022 release of ChatGPT, is putting a priority on profits and growth as it tries to build artificial general intelligence, or A.G.I., the industry term for a computer program capable of doing anything a human can.
  • They also claim that OpenAI has used hardball tactics to prevent workers from voicing their concerns about the technology, including restrictive nondisparagement agreements that departing employees were asked to sign.
  • “OpenAI is really excited about building A.G.I., and they are recklessly racing to be the first there,” said Daniel Kokotajlo, a former researcher in OpenAI’s governance division and one of the group’s organizers.
  • Other members include William Saunders, a research engineer who left OpenAI in February, and three other former OpenAI employees: Carroll Wainwright, Jacob Hilton and Daniel Ziegler. Several current OpenAI employees endorsed the letter anonymously because they feared retaliation from the company,
  • At OpenAI, Mr. Kokotajlo saw that even though the company had safety protocols in place — including a joint effort with Microsoft known as the “deployment safety board,” which was supposed to review new models for major risks before they were publicly released — they rarely seemed to slow anything down.
  • So was the departure of Dr. Leike, who along with Dr. Sutskever had led OpenAI’s “superalignment” team, which focused on managing the risks of powerful A.I. models. In a series of public posts announcing his departure, Dr. Leike said he believed that “safety culture and processes have taken a back seat to shiny products.”
  • “When I signed up for OpenAI, I did not sign up for this attitude of ‘Let’s put things out into the world and see what happens and fix them afterward,’” Mr. Saunders said.
  • Mr. Kokotajlo, 31, joined OpenAI in 2022 as a governance researcher and was asked to forecast A.I. progress. He was not, to put it mildly, optimistic.In his previous job at an A.I. safety organization, he predicted that A.G.I. might arrive in 2050. But after seeing how quickly A.I. was improving, he shortened his timelines. Now he believes there is a 50 percent chance that A.G.I. will arrive by 2027 — in just three years.
  • He also believes that the probability that advanced A.I. will destroy or catastrophically harm humanity — a grim statistic often shortened to “p(doom)” in A.I. circles — is 70 percent.
  • Last month, two senior A.I. researchers — Ilya Sutskever and Jan Leike — left OpenAI under a cloud. Dr. Sutskever, who had been on OpenAI’s board and voted to fire Mr. Altman, had raised alarms about the potential risks of powerful A.I. systems. His departure was seen by some safety-minded employees as a setback.
  • Mr. Kokotajlo said, he became so worried that, last year, he told Mr. Altman that the company should “pivot to safety” and spend more time and resources guarding against A.I.’s risks rather than charging ahead to improve its models. He said that Mr. Altman had claimed to agree with him, but that nothing much changed.
  • In April, he quit. In an email to his team, he said he was leaving because he had “lost confidence that OpenAI will behave responsibly” as its systems approach human-level intelligence.
  • “The world isn’t ready, and we aren’t ready,” Mr. Kokotajlo wrote. “And I’m concerned we are rushing forward regardless and rationalizing our actions.”
  • On his way out, Mr. Kokotajlo refused to sign OpenAI’s standard paperwork for departing employees, which included a strict nondisparagement clause barring them from saying negative things about the company, or else risk having their vested equity taken away.
  • Many employees could lose out on millions of dollars if they refused to sign. Mr. Kokotajlo’s vested equity was worth roughly $1.7 million, he said, which amounted to the vast majority of his net worth, and he was prepared to forfeit all of it.
  • Mr. Altman said he was “genuinely embarrassed” not to have known about the agreements, and the company said it would remove nondisparagement clauses from its standard paperwork and release former employees from their agreements.)
  • In their open letter, Mr. Kokotajlo and the other former OpenAI employees call for an end to using nondisparagement and nondisclosure agreements at OpenAI and other A.I. companies.
  • “Broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues,”
  • They also call for A.I. companies to “support a culture of open criticism” and establish a reporting process for employees to anonymously raise safety-related concerns.
  • They have retained a pro bono lawyer, Lawrence Lessig, the prominent legal scholar and activist
  • Mr. Kokotajlo and his group are skeptical that self-regulation alone will be enough to prepare for a world with more powerful A.I. systems. So they are calling for lawmakers to regulate the industry, too.
  • “There needs to be some sort of democratically accountable, transparent governance structure in charge of this process,” Mr. Kokotajlo said. “Instead of just a couple of different private companies racing with each other, and keeping it all secret.”
Javier E

Ocean Currents in the Atlantic Could Slow by Century's End, Research Shows - The New Yo... - 0 views

  • The last time there was a major slowdown in the mighty network of ocean currents that shapes the climate around the North Atlantic, it seems to have plunged Europe into a deep cold for over a millennium.
  • That was roughly 12,800 years ago, when not many people were around to experience it. But in recent decades, human-driven warming could be causing the currents to slow once more, and scientists have been working to determine whether and when they might undergo another great weakening, which would have ripple effects for weather patterns across a swath of the globe.
  • A pair of researchers in Denmark this week put forth a bold answer: A sharp weakening of the currents, or even a shutdown, could be upon us by century’s end.
  • Climate scientists generally agree that the Atlantic circulation will decline this century, but there’s no consensus on whether it will stall out before 2100.
  • the new findings were reason enough not to regard a shutdown as an abstract, far-off concern. “It’s now,” she said.
  • As humans warm the atmosphere, however, the melting of the Greenland ice sheet is adding large amounts of fresh water to the North Atlantic, which could be disrupting the balance of heat and salinity that keeps the overturning moving. A patch of the Atlantic south of Greenland has cooled conspicuously in recent years, creating a “cold blob” that some scientists see as a sign that the system is slowing.
  • Abrupt thawing of the Arctic permafrost. Loss of the Amazon rain forest. Collapse of the Greenland and West Antarctic ice sheets. Once the world warms past a certain point, these and other events could be set into swift motion, scientists warn, though the exact thresholds at which this would occur are still highly uncertain.
  • In the Atlantic, researchers have been searching for harbingers of tipping-point-like change in a tangle of ocean currents that goes by an unlovely name: the Atlantic Meridional Overturning Circulation, or AMOC (pronounced “AY-mock”).
  • These currents carry warm waters from the tropics through the Gulf Stream, past the southeastern United States, before bending toward northern Europe. When this water releases its heat into the air farther north, it becomes colder and denser, causing it to sink to the deep ocean and move back toward the Equator. This sinking effect, or “overturning,” allows the currents to transfer enormous amounts of heat around the planet, making them hugely influential for the climate around the Atlantic and beyond.
  • adds to a growing body of scientific work that describes how humankind’s continued emissions of heat-trapping gases could set off climate “tipping points,” or rapid and hard-to-reverse changes in the environment.
  • Much of the Northern Hemisphere could cool. The coastlines of North America and Europe could see faster sea-level rise. Northern Europe could experience stormier winters, while the Sahel in Africa and the monsoon regions of Asia would most likely get less rain.
  • Scientists’ uncertainty about the timing of an AMOC collapse shouldn’t be taken as an excuse for not reducing greenhouse-gas emissions to try to avoid it, said Hali Kilbourne, an associate research professor at the University of Maryland Center for Environmental Science.
  • Were the circulation to tip into a much weaker state, the effects on the climate would be far-reaching, though scientists are still examining their potential magnitude.
  • Dr. Ditlevsen’s new analysis focused on a simple metric, based on sea-surface temperatures, that is similar to ones other scientists have used as proxies for the strength of the Atlantic circulation. She conducted the analysis with Peter Ditlevsen, her brother, who is a climate scientist at the University of Copenhagen’s Niels Bohr Institute. They used data on their proxy measure from 1870 to 2020 to calculate statistical indicators that presage changes in the overturning.
  • “Not only do we see an increase in these indicators,” Peter Ditlevsen said, “but we see an increase which is consistent with this approaching a tipping point.”
  • They then used the mathematical properties of a tipping-point-like system to extrapolate from these trends. That led them to predict that the Atlantic circulation could collapse around midcentury, though it could potentially occur as soon as 2025 and as late as 2095.
  • Their analysis included no specific assumptions about how much greenhouse-gas emissions will rise in this century. It assumed only that the forces bringing about an AMOC collapse would continue at an unchanging pace — essentially, that atmospheric carbon dioxide concentrations would keep rising as they have since the Industrial Revolution.
  • they voiced reservations about some of its methods, and said more work was still needed to nail down the timing with greater certainty.
  • Susan Lozier, a physical oceanographer at Georgia Tech, said sea-surface temperatures in the North Atlantic near Greenland weren’t necessarily influenced by changes in the overturning alone, making them a questionable proxy for inferring those changes. She pointed to a study published last year showing that much of the cold blob’s development could be explained by shifts in wind and atmospheric patterns.
  • Scientists are now using sensors slung across the Atlantic to directly measure the overturning. Dr. Lozier is involved in one of these measurement efforts. The aim is to better understand what’s driving the changes beneath the waves, and to improve projections of future changes.
  • Still, the new study sent an urgent message about the need to keep collecting data on the changing ocean currents,
  • scientists’ most advanced computer models of the global climate have produced a wide range of predictions for how the currents might behave in the coming decades, in part because the mix of factors that shape them is so complex.
  • “It is very plausible that we’ve fallen off a cliff already and don’t know it,” Dr. Kilbourne said. “I fear, honestly, that by the time any of this is settled science, it’s way too late to act.”
  • the projects began collecting data in 2004 at the earliest, which isn’t enough time to draw firm long-term conclusions. “It is extremely difficult to look at a short record for the ocean overturning and say what it is going to do over 30, 40 or 50 years,”
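The “statistical indicators” the Ditlevsens computed from their sea-surface-temperature proxy are standard early-warning signals for a tipping point: as a system loses resilience, its fluctuations grow larger and more persistent (“critical slowing down”), so variance and lag-1 autocorrelation rise over time. A minimal sketch of the idea, on synthetic data rather than their actual proxy series (the function and the AR(1) toy process are illustrative assumptions, not the authors’ code):

```python
import numpy as np

def early_warning_indicators(series, window=50):
    """Sliding-window variance and lag-1 autocorrelation,
    two standard early-warning signals for an approaching tipping point."""
    variance, autocorr = [], []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        w = w - w.mean()  # crude detrending: remove the window mean
        variance.append(w.var())
        autocorr.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(variance), np.array(autocorr)

# Synthetic proxy: an AR(1) process whose memory slowly strengthens,
# mimicking the loss of resilience expected near a tipping point.
rng = np.random.default_rng(0)
n = 1500
phi = np.linspace(0.2, 0.95, n)  # lag-1 coefficient drifts upward
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi[t] * x[t - 1] + rng.normal()

var, ac1 = early_warning_indicators(x, window=200)
print("variance rising:", var[-1] > var[0])
print("autocorrelation rising:", ac1[-1] > ac1[0])
```

In a real analysis, a sustained upward trend in both indicators, tested against a null model, is what would be read as the system “approaching a tipping point”; the extrapolation to a collapse date requires further assumptions about the underlying dynamics.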
Javier E

Do They Really Believe That Stuff? | The New Yorker - 0 views

  • A central roadblock, the psychologist Keith Payne writes, is that people employ “flexible reasoning.” By conceding here and asserting there, they evade our queries, leading us into mazes of rationalization. Once we’re in the maze, it can seem as though these people don’t have stable beliefs, or don’t believe things in the usual way.
  • In “Good Reasonable People: The Psychology Behind America’s Dangerous Divide,” Payne recounts arguing with his brother, who supported Trump, about whether the 2020 election was stolen. “I didn’t know how I could relate to him if he embraced Trump’s lie,” Payne recalls. To Payne’s great relief, his brother rejected Trump’s denialism, writing, on Facebook, that “by the letter of the law, yes, Biden won.” Yet his brother went on to say, “I think there was some malfeasance there in areas, I do. But it can’t be proven.” Like many people, Payne concludes, his brother had arrived at a kind of semi-belief, which allowed him both to acknowledge reality and “to hold on to the larger feeling that Biden’s victory was, deep down, illegitimate.”
  • It’s tempting to assume that only one’s political opponents are this slippery. But flexible reasoning, in Payne’s view, is “a bipartisan affair.
  • So, who are we? Payne argues that, although our identities are infinitely variable, we share a “psychological bottom line”: the conviction that we are “good and reasonable people.”
  • We have “psychological immune systems,” Payne concludes, and they keep us feeling good. Really, they do more than that—they help us maintain a stable sense of who we are.
  • According to Payne, a professor at the University of North Carolina, Chapel Hill, flexible reasoning is a fundamental part of our mental tool kit. We reason flexibly in all sorts of nonpolitical situations. A young scholar might dread being denied tenure; a girlfriend might fear being dumped. But, when disaster strikes, they find ways of reasoning themselves back to happiness—as do we all
  • After taking the surveys away, the researchers secretly altered some of the answers that the respondents had given, then handed the surveys back and asked people to explain their views. Those surveyed only noticed that the answers had been changed twenty-two per cent of the time. “Astonishingly, on the majority of switched questions, participants then proceeded to explain why they chose an answer that they had in fact rejected,” Payne writes. “And the explanations they gave were every bit as sincere and compelling as the explanations they gave to answers that they actually had chosen.”
  • our determination to see ourselves as good, reasonable people extends to our tribes: we pledge our strongest loyalties to those groups that can “create and sustain our sense of identity as a good and valuable person.”
  • studies have shown that most people are pretty disorganized in their political thinking: very few of us hold a suite of positions that’s intellectually coherent or consistent over time
  • the uncomfortable reality we face, he argues, is that psychological drama is of national importance. Journalists and policy experts focus on the issues, and our changing views of them. But “the reasoning loops we go through are less like the linear thinking of a computer and more like painting,”
  • We desperately want a stable sense of ourselves, yet our views are profoundly unstable. What this adds up to, Payne argues, is the near-total subordination of political discourse to group identities.
  • most people are “winging it,” saying and thinking what they need to do in order to “preserve the bottom line that they are good and reasonable people and their group is good and reasonable.”
  • despite our missteps, we still see ourselves as basically decent, and decades of work in psychology have affirmed that we freely rewrite history to maintain this view. When psychologists convince people that they’re wrong about an issue, for instance, those people often later misremember their prior stance, forgetting that they ever thought differently.
  • In Payne’s account, we’re far more likely to try seeing ourselves as the good guys; we might accomplish this most efficiently by further dehumanizing those who have accused us of being bad.
  • The group affiliations that necessitate our ad-hoc beliefs are often “thrust upon us by accidents of history,” Payne writes. He points to the experience of Southern whites during and after slavery: having been born into a group that was perpetrating a heinous crime, many found it almost impossible not to believe that racism was in some sense justifiable.
  • For Payne, the divisions in our society are baked in, and we don’t really choose to belong to one tribe or another. Moreover, whether we are actually good and reasonable people depends on much more than our political opinions. Our lives are wider and deeper than our votes.
  • Still, politics is powerfully magnetic; it’s easy (and perhaps convenient) to experience it as the central moral arena of our lives, and so to invest extraordinary energy on the tending of our political identities
  • What if a group does things that aren’t good and reasonable? What if—say—its leader encourages people to invade the United States Capitol and overturn an election? And what if that group’s opponents say, loud and clear, that what happened was bad and crazy? In that case, winging it goes into overdrive. The insurrectionist group may even find it necessary to “say that the other side are fascists or socialists bent on destroying America,” Payne suggests. This is extreme behavior—but it’s in keeping with perfectly ordinary mental habits. In fact, Payne insists, it reflects a genuine desire to be good, giving one’s zany improvisations the feeling of moral force.
  • “If something doesn’t feel right, you can always go back and change it. News channels and social media are constantly serving up an assortment of arguments to fill your palette. If one combination doesn’t work you can keep mixing and shading, until everything feels right.” Our pictures alter from day to day, but a troubling status quo is preserved.
  • Payne’s analysis points to a different, more troubling level of irrationality. In his version of our political life, our deepest and most ineradicable habits of mind push some of us to indulge in radical fantasies about the rest of us
  • Irrespective of the underlying reality, these fantasies shape our collective life
  • “We need more humanizing, because people in our country have been dehumanizing one another a lot,” he writes. “Democrats call Trump supporters MAGAts. Republicans call Democrats demon rats.” And “decades of research have found that dehumanizing words and images are a strong predictor that political violence is around the corner.”
  • Democrats dream of a time when Republicans turn their backs on Donald Trump, and when all of America views him as a baddie. But is this really possible? If there’s a path out of our current political hellscape, it may very well involve the cultivation of a vast, exculpatory fiction in which the extremities of Trumpism are either forgotten or framed as understandable.
  • Maybe, looking back, it will all be seen as part of some larger and largely innocent semi-mistake—a good-faith effort, undertaken for decent reasons, by people who were ultimately good and reasonable. This fiction will be galling to some people, but deeply reassuring to others. It could be that living with it will be the price we’ll have to pay to live with each other.
Javier E

An 'Interview' With a Dead Luminary Exposes the Pitfalls of A.I. - 0 views

  • he said he was appalled at being replaced by a machine-generated substitute. “I was very angry that real, deep talks and real interviews with real people were replaced with something totally fake.”
  • An online petition drafted by Mr. Zaleski, the terminated culture show host, and Mateusz Demski, a fellow presenter who also lost his job, warned that “the case of Off Radio Krakow is an important reminder for the entire industry” and a “dangerous precedent that hits us all.”
  • Felix Simon, the author of a report published in February on the effect of A.I. on journalism, said the Polish experiment had not altered his view that technology “aids news workers rather than replaces them.” For the moment, he added, “there is still reason to believe it will not bring the big jobs wipeout some people fear.”
  • But he said the interview “was horrible” and put words in the poet’s mouth that she would never have used, making her sound “bland,” “naïve” and of “no interest whatsoever.” But that, he added, was heartening because “it shows that A.I. does not yet work” as well as humans. “If the interview had been really good,” he said, “it would be terrifying.”
  • In a Facebook post, he said the use of A.I. to fake an interview with the dead Nobel Prize winner had left him speechless. “If that is not a breach of journalistic ethics,” he said, “I don’t know what is.”
  • Among those outraged by Mr. Pulit’s experiment was Jaroslaw Juszkiewicz, a radio journalist whose voice was used for more than a decade to guide drivers using the Polish version of Google Maps. His replacement by a metallic computer-generated voice in 2020 stirred fury on social media, prompting Google to restore Mr. Juszkiewicz, at least for a time.
Javier E

'The Magic Mountain' Saved My Life - The Atlantic - 0 views

  • I had never noticed the void before, because I had never been moved to ask the questions Who am I? What is life for? Now I couldn’t seem to escape them, and I received no answers from an empty sky.
  • a “moist spot” on one of his lungs. That and a slight fever suggest tuberculosis, requiring him to remain for an indeterminate time. Both diagnosis and treatment are dubious, but they thrill Hans Castorp: This hermetic world has begun to cast a spell on him and provoke questions “about the meaning and purpose of life” that he’d never asked down in the flatlands. Answered at first with “hollow silence,” they demand extended contemplation that’s possible only on the magic mountain.
  • I fell under the spell of Hans Castorp’s quest story, as the Everyman hero is transformed by his explorations of time, illness, sciences and séances, politics and religion and music.
  • The climactic chapter, “Snow,” felt as though it were addressed to me. Hans Castorp, lost in a snowstorm, falls asleep and then awakens from a mesmerizing and monstrous dream with an insight toward which the entire story has led him: “For the sake of goodness and love, man shall grant death no dominion over his thoughts.”
  • Hans Castorp remains on the mountain for seven years—a mystical number. The Magic Mountain is an odyssey confined to one place, a novel of ideas like no other, and a masterpiece of literary modernism.
  • Mann analyzes the nature of time philosophically and also conveys the feeling of its passage, slowing down his narrative in some spots to take in “the entire world of ideas”—a day can fill 100 pages—and elsewhere omitting years
  • As I made my way through the novel by kerosene lamplight, I took Mann’s bildungsroman as a guide to my own education among the farmers, teachers, children, and market women who became my closest companions, hoping to find myself on a journey toward enlightenment as rich and meaningful as its hero’s
  • Mann has something important to tell us as a civilization. The Mann who began writing the novel was an aristocrat of art, hostile to democracy—a reactionary aesthete. Working on The Magic Mountain was a transformative experience, turning him—as it turned his protagonist—into a humanist
  • What Hans Castorp arrives at, lost and asleep in the snow, “is the idea of the human being,” Mann later wrote, “the conception of a future humanity that has passed through and survived the profoundest knowledge of disease and death.”
  • In our age of brutal wars, authoritarian politics, cultures of contempt, and technology that promises to replace us with machines, what is left of the idea of the human being? What can it mean to be a humanist?
  • For Mann, the Great War was more than a contest among rival European powers or a patriotic cause. It was a struggle between “civilization” and “culture”—between the rational, politicized civilization of the West and Germany’s deeper culture of art, soul, and “genius,” which Mann associated with the irrational in human nature: sex, aggression, mythical belief.
  • The kaiser’s Germany—strong in arms, rich in music and philosophy, politically authoritarian—embodied Mann’s ideal. The Western powers “want to make us happy,” he wrote in the fall of 1914—that is, to turn Germany into a liberal democracy. Mann was more drawn to death’s mystery and profundity than to reason and progress, which he considered facile values
  • This sympathy wasn’t simply a fascination with human evil—with a death instinct—but an attraction to a deeper freedom, a more intense form of life than parliaments and pamphleteering offered.
  • Mann scorned the notion of the writer as political activist. The artist should remain apart from politics and society, he believed, free to represent the deep and contradictory truths of reality rather than using art as a means to advance a particular view
  • Settembrini, like Heinrich, is a “humanist”—but in Mann’s usage, the term has an ironic sound. As he wrote elsewhere, it implies “a repugnant shallowness and castration of the concept of humanity,” pushed by “the politician, the humanitarian revolutionary and radical literary man, who is a demagogue in the grand style, namely a flatterer of mankind.”
  • As an artist above politics, Mann didn’t want simply to criticize “civilization’s literary man,” but to show him as “equally right and wrong.” He intended to create an intellectual opponent to Settembrini in a conservative Protestant character named Pastor Bunge—but the war intruded.
  • He published his wartime writings in the genre-defying Reflections of a Nonpolitical Man in October 1918, one month before the armistice. Katia Mann later wrote, “In the course of writing the book, Thomas Mann gradually freed himself from the ideas which had held sway over him … He wrote Reflections in all sincerity and, in doing so, ended by getting over what he had advocated in the book.”
  • The war that had just ended enlarged the novel’s theme into “a worldwide festival of death”; the devastation, he would go on to write in the book’s last pages, was “the thunderbolt that bursts open the magic mountain and rudely sets its entranced sleeper outside the gates,” soon to become a German soldier. It also confronted Mann himself with a new world to which he had to respond.
  • Some German conservatives, in their hatred of the Weimar Republic and the Treaty of Versailles, embraced right-wing mass politics. Mann, nearing 50, vacillated, hoping to salvage the old conservatism from the new extremism. In early 1922, he and Heinrich reconciled, and, as Mann later wrote, he began “to accept the European-democratic religion of humanity within my moral horizon, which so far had been bounded solely by late German romanticism, by Schopenhauer, Nietzsche, Wagner.”
  • in a review of a German translation of Walt Whitman’s selected poetry and prose, he associated the American poet’s mystical notion of democracy with “the same thing that we in our old-fashioned way call ‘humanity’ … I am convinced there is no more urgent task for Germany today than to fill out this word, which has been debased into a hollow shell.”
  • when ultranationalists in Berlin murdered his friend Walther Rathenau, the Weimar Republic’s Jewish foreign minister. Shocked into taking a political stand, Mann turned a birthday speech in honor of the Nobel Prize–winning author Gerhart Hauptmann into a stirring call for democracy. To the amazement of his audience and the German press, Mann ended with the cry “Long live the republic!”
  • Abandoning Pastor Bunge as outmoded, he created a new counterpart to Settembrini who casts a sinister shadow over the second half of the novel: an ugly, charismatic, and (of course) tubercular Jesuit of Jewish origin named Leo Naphta. The intellectual combat between him and Settembrini—which ends physically, in a duel—provides some of the most dazzling passages in The Magic Mountain.
  • Naphta is neither conservative nor liberal. Against capitalist modernity, whose godless greed and moral vacuity he hates with a sulfurous rage, Naphta offers a synthesis of medieval Catholicism and the new ideology of communism. Both place “anonymous and communal” authority over the individual, and both are intent on saving humanity from Settembrini’s soft, rational humanism.
  • Naphta argues that love of freedom and pleasure is weaker than the desire to obey. “The mystery and precept of our age is not liberation and development of the ego,” he says. “What our age needs, what it demands, what it will create for itself, is—terror.” Mann understood the appeal of totalitarianism early on.
  • It’s Naphta, a truly demonic figure—not Settembrini, the voice of reason—who precipitates the end of the hero’s romance with death. His jarring arrival allows Hans Castorp to loosen himself from its grip and begin a journey toward—what? Not toward Settembrini’s international republic of letters, and not back toward his simple bourgeois life down in the flatlands
  • Hans Castorp puts on a new pair of skis and sets out for a few hours of exercise that lead him into the fateful blizzard and “a very enchanting, very dreadful dream.”
  • In it, he encounters a landscape of human beings in all their kindness and beauty, and all their hideous evil. “I know everything about humankind,” he thinks, still dreaming, and he resolves to reject both Settembrini and Naphta—or rather, to reject the stark choice between life and death, illness and health, recognizing that “man is the master of contradictions, they occur through him, and so he is more noble than they.”
  • He’s become one of death’s intimates, and his initiation into its mysteries has immeasurably deepened his understanding of life—but he won’t let death rule his thoughts. He won’t let reason either, which seems weak and paltry before the power of destruction. “Love stands opposed to death,” he dreams; “it alone, and not reason, is stronger than death.”
  • We succumb to the impulse to escape our humanness. That urge, ubiquitous today, thrives in the utopian schemes of technologists who want to upload our minds into computers; in the pessimism of radical environmentalists who want us to disappear from the Earth in order to save it; in the longing of apocalyptic believers for godly retribution and cleansing; in the daily sense of inadequacy, of shame and sin, that makes us disappear into our devices.
  • the vision of “love” that Hans Castorp embraces just before waking up is “brotherly love”—the bond that unites all human beings.
  • he emerged as the preeminent German spokesman against Hitler who, in lectures across the United States in 1938, warned Americans of the rising threat to democracy, which for him was inseparable from humanism: “We must define democracy as that form of government and of society which is inspired above every other with the feeling and consciousness of the dignity of man.”
  • Mann urged his audiences to resist the temptation to deride humanity. “Despite so much ridiculous depravity, we cannot forget the great and the honorable in man,” he said, “which manifest themselves as art and science, as passion for truth, creation of beauty, and the idea of justice.”
  • Could anyone utter these lofty words today without courting a chorus of snickers, a social-media immolation? We live in an age of human self-contempt. We’re hardly surprised when our leaders debase themselves with vile behavior and lies, when combatants desecrate the bodies of their enemies, when free people humiliate themselves under the spell of a megalomaniacal fraud
  • In driving our democracy into hatred, chaos, and violence we, too, grant death dominion over our thoughts.
  • Mann now recognized political freedom as necessary to ensure the freedom of art, and he became a sworn enemy of the Nazis.
  • The need for political reconstruction, in this country and around the world, is as obvious as it was in Thomas Mann’s time.
  • Mann also knew that, to withstand our attraction to death, a decent society has to be built on a foundation deeper than politics: the belief that, somewhere between matter and divinity, we human beings, made of water, protein, and love, share a common destiny.
Javier E

The AI Boom Has an Expiration Date - The Atlantic - 0 views

  • Demis Hassabis, the head of Google DeepMind, repeated in August his suggestion from earlier this year that AGI could arrive by 2030, adding that “we could cure most diseases within the next decade or two.”
  • A month later, even Meta’s more typically grounded chief AI scientist, Yann LeCun, said he expected powerful and all-knowing AI assistants within years, or perhaps a decade
  • Dario Amodei, the chief executive of the rival AI start-up Anthropic, wrote in a sprawling self-published essay last week that such ultra-powerful AI “could come as early as 2026.” He predicts that the technology will end disease and poverty and bring about “a renaissance of liberal democracy and human rights,” and that “many will be literally moved to tears” as they behold these accomplishments.
  • Then the CEO of OpenAI, Sam Altman, wrote a blog post stating that “it is possible that we will have superintelligence in a few thousand days,” which would in turn make such dreams as “fixing the climate” and “establishing a space colony” reality
  • All of this infrastructure will be extraordinarily expensive, requiring perhaps trillions of dollars of investment in the next few years. Over the summer, The Information reported that Anthropic expects to lose nearly $3 billion this year. And last month, the same outlet reported that OpenAI projects that its losses could nearly triple to $14 billion in 2026 and that it will lose money until 2029, when, it claims, revenue will reach $100 billion
  • Microsoft and Google are spending more than $10 billion every few months on data centers and AI infrastructure.
  • Amodei’s and Hassabis’s visions that omniscient computer programs will soon end all disease is worth any amount of spending today. With such tight competition among the top AI firms, if a rival executive makes a grand claim, there is pressure to reciprocate.
  • All of this financial and technological speculation has, however, created something a bit more solid: self-imposed deadlines. In 2026, 2030, or a few thousand days, it will be time to check in with all the AI messiahs. Generative AI—boom or bubble—finally has an expiration date.
Javier E

Opinion | The Elites Had It Coming - The New York Times - 0 views

  • Twenty years ago I published a book about politics in my home state of Kansas where white, working-class voters seemed to be drifting into the arms of right-wing movements. I attributed this, in large part, to the culture wars, which the right framed in terms of working-class agony. Look at how these powerful people insult our values!, went the plaint, whether they were talking about the theory of evolution or the war on Christmas.
  • This was worth pointing out because working people were once the heart and soul of left-wing parties all over the world
  • I also wrote about the way the Democrats were gradually turning away from working people and their concerns. Just think of all those ebullient Democratic proclamations in the ’90s about trade and tech and globalization and financial innovation.
  • it felt like every rising leader in the Democratic Party was making those points. That was the way to win voters in what they called “the center,” the well-educated suburbanites and computer-literate professionals whom everybody admired.
  • Vance is the vice president-elect, and what I hope you will understand, what I want you to mull over and take to heart and remember for the rest of your life, is that he got there by mimicking the language that Americans used to associate with labor, with liberals, with Democrats.
  • By comparison, here is Barack Obama in 2016, describing to Bloomberg Businessweek his affinity for the private sector: “Just to bring things full circle about innovation — the conversations I have with Silicon Valley and with venture capital pull together my interests in science and organization in a way I find really satisfying.”
  • It would have been nice if the Democrats could have triangulated their way into the hearts of enough educated and affluent suburbanites to make up for the working class voters they’ve lost over the years, but somehow that strategy rarely works out
  • For a short time in the last few years, it looked as if the Democrats might actually have understood all this. What the Biden administration did on antitrust and manufacturing and union organizing was never really completed but it was inspiring. Framed the right way, it might have formed the nucleus of a strong appeal to the voters
  • Speaker after speaker at the gathering in Chicago blasted the Republicans for their hostility to working people. There was even a presentation about the meaning of the word “populism.” At times it felt like they were speaking to me personally.
  • The administration’s achievements on antitrust were barely mentioned.
  • Then, once Ms. Harris’s campaign got rolling, it largely dropped economic populism, wheeled out another billionaire and embraced Liz Cheney.
  • Mr. Trump, meanwhile, put together a remarkable coalition of the disgruntled. He reached out to everyone with a beef, from Robert Kennedy Jr. to Elon Musk. From free-speech guys to book-banners. From Muslims in Michigan to anti-immigration zealots everywhere. “Trump Will Fix It,” declared the signs they waved at his rallies, regardless of which “It” you had in mind.
  • clucking liberal pundits would sometimes respond to all this by mocking the very concept of “grievance,” as though discontent itself was the product of a diseased mind.
  • Mr. Trump is a con man straight out of Mark Twain; he will say anything, promise anything, do nothing. But his movement baffled the party of education and innovation. Their most brilliant minds couldn’t figure him out.
  • I fear that ’90s-style centrism will march on, by a sociological force of its own, until the parties have entirely switched their social positions and the world is given over to Trumpism.
  • Can anything reverse it? Only a resolute determination by the Democratic Party to rededicate itself to the majoritarian vision of old: a Great Society of broad, inclusive prosperity. This means universal health care and a higher minimum wage.
  • It means robust financial regulation and antitrust enforcement. It means unions and a welfare state and higher taxes on billionaires, even the cool ones. It means, above all, liberalism as a social movement, as a coming-together of ordinary people — not a series of top-down reforms by well-meaning professionals.
Javier E

Francis Fukuyama: what Trump unleashed means for America - 0 views

  • the significance of the election extends way beyond these specific issues, and represents a decisive rejection by American voters of liberalism and the particular way that the understanding of a “free society” has evolved since the 1980s.
  • Following Tuesday’s vote, it now seems that it was the Biden presidency that was the anomaly, and that Trump is inaugurating a new era in US politics and perhaps for the world as a whole. Americans were voting with full knowledge of who Trump was and what he represented. Not only did he win a majority of votes and is projected to take every single swing state, but the Republicans retook the Senate and look like holding on to the House of Representatives. Given their existing dominance of the Supreme Court, they are now set to hold all the major branches of government.
  • All of these groups were unhappy with a free-trade system that eliminated their livelihoods even as it created a new class of super-rich, and were unhappy as well with progressive parties that seemingly cared more for foreigners and the environment than their own condition.
  • Classical liberalism is a doctrine built around respect for the equal dignity of individuals through a rule of law that protects their rights, and through constitutional checks on the state’s ability to interfere with those rights
  • But over the past half century that basic impulse underwent two great distortions. The first was the rise of “neoliberalism”, an economic doctrine that sanctified markets and reduced the ability of governments to protect those hurt by economic change. The world got a lot richer in the aggregate, while the working class lost jobs and opportunity. Power shifted away from the places that hosted the original industrial revolution to Asia and other parts of the developing world.
  • The second distortion was the rise of identity politics or what one might call “woke liberalism”, in which progressive concern for the working class was replaced by targeted protections for a narrower set of marginalised groups: racial minorities, immigrants, sexual minorities and the like. State power was increasingly used not in the service of impartial justice, but rather to promote specific social outcomes for these groups.
  • In the meantime, labour markets were shifting into an information economy. In a world in which most workers sat in front of a computer screen rather than lifted heavy objects off factory floors, women experienced a more equal footing. This transformed power within households and led to the perception of a seemingly constant celebration of female achievement.
  • The rise of these distorted understandings of liberalism drove a major shift in the social basis of political power. The working class felt that leftwing political parties were no longer defending their interests, and began voting for parties of the right.
  • Thus the Democrats lost touch with their working-class base and became a party dominated by educated urban professionals. The former chose to vote Republican. In Europe, Communist party voters in France and Italy defected to Marine Le Pen and Giorgia Meloni
  • With regard to immigration, Trump no longer simply wants to close the border; he wants to deport as many of the 11mn undocumented immigrants already in the country as possible. Administratively, this is such a huge task that it will require years of investment in the infrastructure needed to carry it out — detention centres, immigration control agents, courts and so on.
  • The Republican victory was built around white working-class voters, but Trump succeeded in peeling off significantly more Black and Hispanic working-class voters compared with the 2020 election. This was especially true of the male voters within these groups.
  • There is no particular reason why a working-class Latino, for example, should be particularly attracted to a woke liberalism that favours recent undocumented immigrants and focuses on advancing the interests of women.
  • It is also clear that the vast majority of working-class voters simply did not care about the threat to the liberal order, both domestic and international, posed specifically by Trump.
  • what is the underlying nature of this new phase of American history?
  • The real question at this point is not the malignity of his intentions, but rather his ability to actually carry out what he threatens. Many voters simply don’t take his rhetoric seriously, while mainstream Republicans argue that the checks and balances of the American system will prevent him from doing his worst. This is a mistake: we should take his stated intentions very seriously.
  • Trump is a self-proclaimed protectionist, who says that “tariff” is the most beautiful word in the English language. He has proposed 10 or 20 per cent tariffs against all goods produced abroad, by friends and enemies alike, and does not need the authority of Congress to do so.
  • As a large number of economists have pointed out, this level of protectionism will have extremely negative effects on inflation, productivity and employment.
  • Donald Trump not only wants to roll back neoliberalism and woke liberalism, but is a major threat to classical liberalism itself.
  • It will have devastating effects on any number of industries that rely on immigrant labour, particularly construction and agriculture. It will also be monumentally challenging in moral terms, as parents are taken away from their citizen children, and would set the scene for civil conflict, since many of the undocumented live in blue jurisdictions
  • He has vowed to use the justice system to go after everyone from Liz Cheney and Joe Biden to former Joint Chiefs of Staff chair Mark Milley and Barack Obama. He wants to silence media critics by taking away their licences or imposing penalties on them.
  • Whether Trump will have the power to do any of this is uncertain: the court system was one of the most resilient barriers to his excesses during his first term. But the Republicans have been working steadily to insert sympathetic justices into the system, such as Judge Aileen Cannon in Florida, who threw out the strong classified documents case against him.
  • Trump has privately threatened to pull out of Nato, but even if he doesn’t, he can gravely weaken the alliance by failing to follow through on its Article 5 mutual defence guarantee. There are no European champions that can take the place of America as the alliance’s leader, so its future ability to stand up to Russia and China is in grave doubt. On the contrary, Trump’s victory will inspire other European populists such as the Alternative for Germany and the National Rally in France.
  • East Asian allies and friends of the US are in no better position. While Trump has talked tough on China, he also greatly admires Xi Jinping for the latter’s strongman characteristics, and might be willing to make a deal with him over Taiwan
  • At the end of his term, he issued an executive order creating a new “Schedule F” that would strip all federal workers of their job protections and allow him to fire any bureaucrat he wanted. A revival of Schedule F is at the core of the plans for a second Trump term, and conservatives have been busy compiling lists of potential officials whose main qualification is personal loyalty to Trump. This is why he is more likely to carry out his plans this time around.
  • critics including Kamala Harris accused Trump of being a fascist. This was misguided insofar as he was not about to implement a totalitarian regime in the US. Rather, there would be a gradual decay of liberal institutions, much as occurred in Hungary after Viktor Orbán’s return to power in 2010.
  • This decay has already started, and Trump has done substantial damage. He has deepened an already substantial polarisation within society, and turned the US from a high-trust to a low-trust society; he has demonised the government and weakened belief that it represents the collective interests of Americans; he has coarsened political rhetoric and given permission for overt expressions of bigotry and misogyny; and he has convinced a majority of Republicans that his predecessor was an illegitimate president who stole the 2020 election.
Javier E

Stanford's top disinformation research group collapses under pressure - The Washington ... - 0 views

  • The collapse of the five-year-old Observatory is the latest and largest of a series of setbacks to the community of researchers who try to detect propaganda and explain how false narratives are manufactured, gather momentum and become accepted by various groups
  • It follows Harvard’s dismissal of misinformation expert Joan Donovan, who in a December whistleblower complaint alleged the university’s close and lucrative ties with Facebook parent Meta led the university to clamp down on her work, which was highly critical of the social media giant’s practices.
  • Starbird said that while most academic studies of online manipulation look backward from much later, the Observatory’s “rapid analysis” helped people around the world understand what they were seeing on platforms as it happened.
  • ...9 more annotations...
  • Brown University professor Claire Wardle said the Observatory had created innovative methodology and trained the next generation of experts.
  • “Closing down a lab like this would always be a huge loss, but doing so now, during a year of global elections, makes absolutely no sense,” said Wardle, who previously led research at anti-misinformation nonprofit First Draft. “We need universities to use their resources and standing in the community to stand up to criticism and headlines.”
  • The study of misinformation has become increasingly controversial, and Stamos, DiResta and Starbird have been besieged by lawsuits, document requests and threats of physical harm. Leading the charge has been Rep. Jim Jordan (R-Ohio), whose House subcommittee alleges the Observatory improperly worked with federal officials and social media companies to violate the free-speech rights of conservatives.
  • In a joint statement, Stamos and DiResta said their work involved much more than elections, and that they had been unfairly maligned.
  • “The politically motivated attacks against our research on elections and vaccines have no merit, and the attempts by partisan House committee chairs to suppress First Amendment-protected research are a quintessential example of the weaponization of government,” they said.
  • Stamos founded the Observatory after publicizing that Russia had attempted to influence the 2016 election by sowing division on Facebook, causing a clash with the company’s top executives. Special counsel Robert S. Mueller III later cited the Facebook operation in indicting a Kremlin contractor. At Stanford, Stamos and his team deepened his study of influence operations from around the world, including one it traced to the Pentagon.
  • Stamos told associates he stepped back from leading the Observatory last year in part because the political pressure had taken a toll. Stamos had raised most of the money for the project, and the remaining faculty have not been able to replicate his success, as many philanthropic groups shift their focus to artificial intelligence and other, fresher topics.
  • By supporting the project further, the university would have risked alienating conservative donors, Silicon Valley figures, and members of Congress, who have threatened to stop all federal funding for disinformation research or cut back general support.
  • The Observatory’s non-election work has included developing curriculum for teaching college students about how to handle trust and safety issues on social media platforms and launching the first peer-reviewed journal dedicated to that field. It has also investigated rings publishing child sexual exploitation material online and flaws in the U.S. system for reporting it, helping to prepare platforms to handle an influx of computer-generated material.
Javier E

Opinion | Why Can't College Grads Find Jobs? Here Are Some Theories - and Fixes. - The ... - 0 views

  • simply tossing your résumé and cover letter into a company’s job portal has a low probability of success, especially now. It’s so easy to submit applications that companies are being bombarded with thousands of them. Human beings can’t possibly review all of them, so they’re reviewed by computers, which simply search for keywords. They don’t understand in any deep way either the applicant’s qualities or the employer’s needs.
  • “The better writer you are, the greater your chance of getting rejected, because you won’t use keywords” the way the evaluation algorithm wants,
  • Personal contact is crucial, he said. Rather than spraying applications far and wide, he recommends focusing on a handful of companies, researching them in depth and contacting a wide range of people connected with them, even their suppliers and customers.
  • ...1 more annotation...
  • Du Bois’s quote, “Either the United States will destroy ignorance or ignorance will destroy the United States,” serves as a poignant reminder of the importance of education, understanding and open discourse in addressing societal challenges.
Javier E

Silicon Valley's Trillion-Dollar Leap of Faith - The Atlantic - 0 views

  • Tech companies like to make two grand pronouncements about the future of artificial intelligence. First, the technology is going to usher in a revolution akin to the advent of fire, nuclear weapons, and the internet.
  • And second, it is going to cost almost unfathomable sums of money.
  • Silicon Valley has already triggered tens or even hundreds of billions of dollars of spending on AI, and companies only want to spend more.
  • ...22 more annotations...
  • Their reasoning is straightforward: These companies have decided that the best way to make generative AI better is to build bigger AI models. And that is really, really expensive, requiring resources on the scale of moon missions and the interstate-highway system to fund the data centers and related infrastructure that generative AI depends on
  • “If we’re going to justify a trillion or more dollars of investment, [AI] needs to solve complex problems and enable us to do things we haven’t been able to do before.” Today’s flagship AI models, he said, largely cannot.
  • Now a number of voices in the finance world are beginning to ask whether all of this investment can pay off. OpenAI, for its part, may lose up to $5 billion this year, almost 10 times more than what the company lost in 2022,
  • Dario Amodei, the CEO of the rival start-up Anthropic, has predicted that a single AI model (such as, say, GPT-6) could cost $100 billion to train by 2027. The global data-center buildup over the next few years could require trillions of dollars from tech companies, utilities, and other industries, according to a July report from Moody’s Ratings.
  • Over the past few weeks, analysts and investors at some of the world’s most influential financial institutions—including Goldman Sachs, Sequoia Capital, Moody’s, and Barclays—have issued reports that raise doubts about whether the enormous investments in generative AI will be profitable.
  • generative AI has already done extraordinary things, of course—advancing drug development, solving challenging math problems, generating stunning video clips. But exactly what uses of the technology can actually make money remains unclear
  • At present, AI is generally good at doing existing tasks—writing blog posts, coding, translating—faster and cheaper than humans can. But efficiency gains can provide only so much value, boosting the current economy but not creating a new one.
  • Right now, Silicon Valley might just functionally be replacing some jobs, such as customer service and form-processing work, with historically expensive software, which is not a recipe for widespread economic transformation.
  • McKinsey has estimated that generative AI could eventually add almost $8 trillion to the global economy every year
  • Tony Kim, the head of technology investment at BlackRock, the world’s largest money manager, told me he believes that AI will trigger one of the most significant technological upheavals ever. “Prior industrial revolutions were never about intelligence,”
  • “Here, we can manufacture intelligence.”
  • this future is not guaranteed. Many of the productivity gains expected from AI could be both greatly overestimated and very premature, Daron Acemoglu, an economist at MIT, has found
  • AI products’ key flaws, such as a tendency to invent false information, could make them unusable, or deployable only under strict human oversight, in certain settings—courts, hospitals, government agencies, schools
  • AI as a truly epoch-shifting technology, it may well be more akin to blockchain, a very expensive tool destined to fall short of promises to fundamentally transform society and the economy.
  • Researchers at Barclays recently calculated that tech companies are collectively paying for enough AI-computing infrastructure to eventually power 12,000 different ChatGPTs. Silicon Valley could very well produce a whole host of hit generative-AI products like ChatGPT, “but probably not 12,000 of them,
  • even if it did, there would be nowhere near enough demand to use all those apps and actually turn a profit.
  • Some of the largest tech companies’ current spending on AI data centers will require roughly $600 billion of annual revenue to break even, of which they are currently about $500 billion short.
  • Tech proponents have responded to the criticism that the industry is spending too much, too fast, with something like religious dogma. “I don’t care” how much we spend, Altman has said. “I genuinely don’t.”
  • the industry is asking the world to engage in something like a trillion-dollar tautology: AI’s world-transformative potential justifies spending any amount of resources, because its evangelists will spend any amount to make AI transform the world.
  • in the AI era in particular, a lack of clear evidence for a healthy return on investment may not even matter. Unlike the companies that went bust in the dot-com bubble in the early 2000s, Big Tech can spend exorbitant sums of money and be largely fine
  • perhaps even more important in Silicon Valley than a messianic belief in AI is a terrible fear of missing out. “In the tech industry, what drives part of this is nobody wants to be left behind. Nobody wants to be seen as lagging.”
  • Go all in on AI, the thinking goes, or someone else will. Their actions evince “a sense of desperation,” Cahn writes. “If you do not move now, you will never get another chance.” Enormous sums of money are likely to continue flowing into AI for the foreseeable future, driven by a mix of unshakable confidence and all-consuming fear.