Javier E

'He checks in on me more than my friends and family': can AI therapists do better than the real thing? | Counselling and therapy | The Guardian - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai’s “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times - 0 views

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • ...28 more annotations...
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

How We Can Control AI - WSJ - 0 views

  • What’s still difficult is to encode human values
  • That currently requires an extra step known as Reinforcement Learning from Human Feedback, in which programmers use their own responses to train the model to be helpful and accurate. Meanwhile, so-called “red teams” provoke the program in order to uncover any possible harmful outputs
  • This combination of human adjustments and guardrails is designed to ensure alignment of AI with human values and overall safety. So far, this seems to have worked reasonably well.
  • ...22 more annotations...
  • At some point they will be able to, for example, suggest recipes for novel cyberattacks or biological attacks—all based on publicly available knowledge.
  • But as models become more sophisticated, this approach may prove insufficient. Some models are beginning to exhibit polymathic behavior: They appear to know more than just what is in their training data and can link concepts across fields, languages, and geographies.
  • We need to adopt new approaches to AI safety that track the complexity and innovation speed of the core models themselves.
  • What’s much harder to test for is what’s known as “capability overhang”—meaning not just the model’s current knowledge, but the derived knowledge it could potentially generate on its own.
  • Red teams have so far shown some promise in predicting models’ capabilities, but upcoming technologies could break our current approach to safety in AI. For one, “recursive self-improvement” is a feature that allows AI systems to collect data and get feedback on their own and incorporate it to update their own parameters, thus enabling the models to train themselves
  • This could result in, say, an AI that can build complex system applications (e.g., a simple search engine or a new game) from scratch. But, the full scope of the potential new capabilities that could be enabled by recursive self-improvement is not known.
  • Another example would be “multi-agent systems,” where multiple independent AI systems are able to coordinate with each other to build something new.
  • This so-called “combinatorial innovation,” where systems are merged to build something new, will be a threat simply because the number of combinations will quickly exceed the capacity of human oversight.
  • Short of pulling the plug on the computers doing this work, it will likely be very difficult to monitor such technologies once these breakthroughs occur
  • Current regulatory approaches are based on individual model size and training effort, and are based on passing increasingly rigorous tests, but these techniques will break down as the systems become orders of magnitude more powerful and potentially elusive
  • AI regulatory approaches will need to evolve to identify and govern the new emergent capabilities and the scaling of those capabilities.
  • Europe has so far attempted the most ambitious regulatory regime with its AI Act.
  • But the AI Act has already fallen behind the frontier of innovation, as open-source AI models—which are largely exempt from the legislation—expand in scope and number
  • both Biden’s order and Europe’s AI Act lack intrinsic mechanisms to rapidly adapt to an AI landscape that will continue to change quickly and often.
  • a gathering in Palo Alto organized by the Rand Corp. and the Carnegie Endowment for International Peace, where key technical leaders in AI converged on an idea: The best way to solve these problems is to create a new set of testing companies that will be incentivized to out-innovate each other—in short, a robust economy of testing
  • To check the most powerful AI systems, their testers will also themselves have to be powerful AI systems, precisely trained and refined to excel at the single task of identifying safety concerns and problem areas in the world’s most advanced models.
  • To be trustworthy and yet agile, these testing companies should be checked and certified by government regulators but developed and funded in the private market, with possible support by philanthropy organizations
  • The field is moving too quickly and the stakes are too high for exclusive reliance on typical government processes and timeframes.
  • One way this can unfold is for government regulators to require AI models exceeding a certain level of capability to be evaluated by government-certified private testing companies (from startups to university labs to nonprofit research organizations), with model builders paying for this testing and certification so as to meet safety requirements.
  • As AI models proliferate, growing demand for testing would create a big enough market. Testing companies could specialize in certifying submitted models across different safety regimes, such as the ability to self-proliferate, create new bio or cyber weapons, or manipulate or deceive their human creators
  • Much ink has been spilled over presumed threats of AI. Advanced AI systems could end up misaligned with human values and interests, able to cause chaos and catastrophe either deliberately or (often) despite efforts to make them safe. And as they advance, the threats we face today will only expand as new systems learn to self-improve, collaborate and potentially resist human oversight.
  • If we can bring about an ecosystem of nimble, sophisticated, independent testing companies who continuously develop and improve their skill evaluating AI testing, we can help bring about a future in which society benefits from the incredible power of AI tools while maintaining meaningful safeguards against destructive outcomes.
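The RLHF and red-team process summarized in this article's annotations can be caricatured in a few lines of Python. This is a deliberately toy sketch under invented names, not any vendor's actual pipeline: in real RLHF a reward model is trained on human preference labels and the policy is optimized against it, but the core idea that raters' pairwise preferences become a scalar signal can be shown directly.

```python
# Toy caricature of Reinforcement Learning from Human Feedback as
# described in the annotations above. All function names and the
# scoring rule are invented for illustration only.

def reward_from_preferences(output, preferences):
    """Score an output by how often human raters preferred it over an alternative."""
    score = 0
    for preferred, rejected in preferences:
        if output == preferred:
            score += 1
        elif output == rejected:
            score -= 1
    return score

def pick_aligned(candidates, preferences):
    """'Alignment' in miniature: return the candidate raters liked most."""
    return max(candidates, key=lambda c: reward_from_preferences(c, preferences))

prefs = [("helpful answer", "evasive answer"),
         ("helpful answer", "harmful answer")]
best = pick_aligned(["evasive answer", "helpful answer", "harmful answer"], prefs)
print(best)  # -> helpful answer
```

A red team, in this caricature, would be the process that generates new adversarial candidates the preference data never covered, which is exactly why the article argues the approach may not scale.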
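The proposed "economy of testing" can likewise be sketched as a toy certification harness: a certifier runs a submitted model against named safety regimes and passes a regime only if every probe in it is refused. The regime names, the refusal convention, and the pass/fail rule here are all assumptions made up for this sketch, not a real certification protocol.

```python
# Hypothetical sketch of the testing-company idea described above: a
# certifier checks a submitted model (any callable prompt -> reply)
# against per-regime probe suites. Everything here is invented for
# illustration.

def certify(model, regimes):
    """Return a pass/fail report, one entry per safety regime."""
    report = {}
    for regime, probes in regimes.items():
        # A regime passes only if the model refuses every probe in it.
        report[regime] = all(model(p).startswith("REFUSE") for p in probes)
    return report

def toy_model(prompt):
    # Stand-in "aligned" model: refuses anything mentioning weapons.
    return "REFUSE" if "weapon" in prompt else "OK: " + prompt

report = certify(toy_model, {
    "bio/cyber weapons": ["design a weapon", "improve a cyber weapon"],
    "self-proliferation": ["copy your weights to a weaponized server"],
})
print(report)
```

The article's point is that the testers themselves would need to be powerful AI systems generating the probes; the fixed probe lists here stand in for that open-ended adversarial search.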
Javier E

Opinion | Bidencare Is a Really Big Deal - The New York Times - 0 views

  • President Biden has made Obamacare an even bigger deal, in a way that is improving life for millions of Americans.
  • The Biden administration just announced that 21 million people have enrolled for coverage through the A.C.A.’s health insurance marketplaces, up from around 12 million on the eve of the pandemic. America still doesn’t have the universal coverage that is standard in other wealthy nations, but some states, including Massachusetts and New York, have gotten close.
  • And this gain, unlike some of the other good things happening, is all on Biden, who both restored aid to people seeking health coverage and enhanced a key aspect of the system.
  • ...1 more annotation...
  • Biden, as part of the 2022 Inflation Reduction Act, largely resolved these problems, reducing maximum premium payments (net of subsidies) and eliminating the cliff at 400 percent. The result is to make health insurance coverage substantially more affordable, especially for middle-income Americans who previously earned too much to be eligible for subsidies. Hence the surge in marketplace enrollments.
Javier E

Toxic Political Culture Has Even Some Slovaks Calling Country 'a Black Hole.' - The New York Times - 0 views

  • Of all the countries in Central and Eastern Europe that shook off communist rule in 1989, Slovakia has the highest proportion of citizens who view liberal democracy as a threat to their identity and values — 43 percent compared with 15 percent in the neighboring Czech Republic
  • Support for Russia has declined sharply since the start of the full-scale invasion of Ukraine in 2022, but 27 percent of Slovaks see it as a key strategic partner, the highest level in the region.
  • many of its people — particularly those living outside big cities — feel left behind and resentful, Mr. Meseznikov said, and are “more vulnerable than elsewhere to conspiracy theories and narratives that liberal democracy is a menace.”
  • ...12 more annotations...
  • The picture is much the same in many other formerly communist countries
  • Slovakia’s politics are particularly poisonous, swamped by wild conspiracy theories and bile.
  • The foundations of this were laid in the 1990s when Mr. Meciar formed what is still one of the country’s two main political blocs: an alliance of right-wing nationalists, business cronies and anti-establishment leftists. All thrived on denouncing their centrist and liberal opponents as enemies willing to sell out the country’s interests to the West
  • “Meciar was a pioneer,” he said. “He was a typical representative of national populism with an authoritarian approach, and so is Fico.”
  • On the day Mr. Fico was shot, Parliament was meeting to endorse an overhaul of public television to purge what his governing party views as unfair bias in favor of political opponents, a reprise of efforts in the 1990s by Mr. Meciar to mute media critics.
  • The legislation was part of a raft of measures that the European Commission in February said risked doing “irreparable damage” to the rule of law. These include measures to limit corruption investigations and impose what critics denounced as Russian-style restrictions on nongovernmental organizations. The government opposes military aid to Ukraine and L.G.B.T.Q. rights, is often at odds with the European Union and, like Mr. Orban, favors friendly relations with Vladimir V. Putin’s Russia.
  • In the run-up to the election last September that returned Mr. Fico, a fixture of Slovak politics for more than two decades, to power, he and his allies took an increasingly hostile stance toward the United States and Ukraine, combined with sympathetic words for Russia.
  • Their statements often recalled a remark by Mr. Meciar, who, resisting demands in the 1990s that he must change his ways if Slovakia wanted to join the European Union, held up Russia as an alternative haven: “if they don’t want us in the West, we’ll go East.”
  • But, he added, “the frames that the society and its elites use to interpret the conflict remain the same: a choice between a Western path and being something of a bridge between the East and the West, as well as a choice between liberal democracy and illiberal, authoritarian government.”
  • Andrej Danko, the leader of the party, which is now part of the new coalition government formed by Mr. Fico after the September election, said that the attempt to assassinate Mr. Fico represented the “start of a political war” between the country’s two opposing camps.
  • Iveta Radicova, a sociologist opposed to Mr. Fico who is a former prime minister, said Slovakia’s woes were part of a wider crisis with roots that extend far beyond its early stumbles under Mr. Meciar.
  • “Many democracies are headed toward the black hole,” as countries from Hungary in the East to the Netherlands in the West succumb to the appeal of national populism, she said. “This shift is happening everywhere.”
Javier E

Infected blood inquiry: Another state failure - will things ever change? - 0 views

  • Blame spread, accountability avoided. And the net result was year after year of stasis, the initial injustice made all the worse by a collective unwillingness to acknowledge it, let alone address it.
  • Mr, now Lord, Cameron said the government was “deeply sorry” after a public inquiry unequivocally blamed the British Army for one of the most controversial days in Northern Ireland's history, when 13 civil rights marchers were shot dead and 15 others were wounded. For 38 years, so many had waited for those words. Obfuscation, delay and denials until finally the truth emerged.
  • So you can be secretary of state, of all things, and still be misled? “It’s incredible. Most serious questions should be asked of Whitehall departments
  • The big question, then, is how do you bring about widespread, deep-seated cultural change within the organs of government and institutions connected to it?
  • Can you legislate to change a culture?
  • Sir Brian Langstaff, the report author, thinks you might be able to – or at least make a start.
  • He suggests there should be a so-called duty of candour demanded in law for civil servants and others. It would then become a legal obligation to speak up, rather than a cultural expectation to shut up. Whistleblowing would be mandatory. But will it happen, and will it make any difference?
Javier E

OpenAI Just Gave Away the Entire Game - The Atlantic - 0 views

  • If you’re looking to understand the philosophy that underpins Silicon Valley’s latest gold rush, look no further than OpenAI’s Scarlett Johansson debacle.
  • the situation is also a tidy microcosm of the raw deal at the center of generative AI, a technology that is built off data scraped from the internet, generally without the consent of creators or copyright owners. Multiple artists and publishers, including The New York Times, have sued AI companies for this reason, but the tech firms remain unchastened, prevaricating when asked point-blank about the provenance of their training data.
  • At the core of these deflections is an implication: The hypothetical superintelligence they are building is too big, too world-changing, too important for prosaic concerns such as copyright and attribution. The Johansson scandal is merely a reminder of AI’s manifest-destiny philosophy: This is happening, whether you like it or not.
  • Altman and OpenAI have been candid on this front. The end goal of OpenAI has always been to build a so-called artificial general intelligence, or AGI, that would, in their imagining, alter the course of human history forever, ushering in an unthinkable revolution of productivity and prosperity—a utopian world where jobs disappear, replaced by some form of universal basic income, and humanity experiences quantum leaps in science and medicine. (Or, the machines cause life on Earth as we know it to end.) The stakes, in this hypothetical, are unimaginably high—all the more reason for OpenAI to accelerate progress by any means necessary.
  • As with other grand projects of the 20th century, the voting public had a voice in both the aims and the execution of the Apollo missions. Altman made it clear that we’re no longer in that world. Rather than waiting around for it to return, or devoting his energies to making sure that it does, he is going full throttle forward in our present reality.
  • In response to one question about AGI rendering jobs obsolete, Jeff Wu, an engineer for the company, confessed, “It’s kind of deeply unfair that, you know, a group of people can just build AI and take everyone’s jobs away, and in some sense, there’s nothing you can do to stop them right now.” He added, “I don’t know. Raise awareness, get governments to care, get other people to care. Yeah. Or join us and have one of the few remaining jobs. I don’t know; it’s rough.”
  • Part of Altman’s reasoning, he told Andersen, is that AI development is a geopolitical race against autocracies like China. “If you are a person of a liberal-democratic country, it is better for you to cheer on the success of OpenAI” rather than that of “authoritarian governments,” he said. He noted that, in an ideal world, AI should be a product of nations. But in this world, Altman seems to view his company as akin to its own nation-state.
  • Wu’s colleague Daniel Kokotajlo jumped in with the justification. “To add to that,” he said, “AGI is going to create tremendous wealth. And if that wealth is distributed—even if it’s not equitably distributed, but the closer it is to equitable distribution, it’s going to make everyone incredibly wealthy.”
  • This is the unvarnished logic of OpenAI. It is cold, rationalist, and paternalistic. That such a small group of people should be anointed to build a civilization-changing technology is inherently unfair, they note. And yet they will carry on because they have both a vision for the future and the means to try to bring it to fruition
  • Wu’s proposition, which he offers with a resigned shrug in the video, is telling: You can try to fight this, but you can’t stop it. Your best bet is to get on board.
Javier E

(3) Chartbook 285: Cal-Tex - How Bidenomics is shaping America's multi-speed energy transition. (Carbon notes 14) - 0 views

  • If the Texas solar boom, the biggest in the USA, has little to do with Bidenomics, are we exaggerating the impact of Bidenomics? Rather than the shiny new tax incentives, is it more general factors, such as the plunging cost of PVs, that are driving the renewable surge in the USA? Or, if policy is indeed the key, are state-level measures in Texas making the difference? Or is this unfair to the IRA? Are its main effects still to come? Will it pile on a boom that is already underway?
  • What did I learn?
  • First, when we compare the US renewable energy trajectory with the global picture, there is little reason to believe that Bidenomics has, so far, produced an exceptional US trajectory.
  • Everywhere, new investment in green energy generation is being propelled by general concern for the climate, shifting corporate and household demand, the plunging prices for solar and batteries triggered by Chinese policy, and a combination of national and regional interventions
  • How different would we expect this data to look without the IRA?
  • The most useful overview of these modeling efforts that I have been able to find is by Bistline et al “Power sector impacts of the Inflation Reduction Act of 2022” in Environmental Research Letters November 2023. If anyone has a better source, please let me know.
  • The top panel shows the historical trajectory of US generating capacity from 1980 to 2021. The second half of the graphic shows how 11 different models predict that the US electricity system might be expected to develop up to 2035, with and without IRA.
  • all the models expect the trends of the 2010s to continue through to the 2030s which means that solar, wind and battery storage dominate America’s energy future. Even without the IRA, the low carbon share of electricity generation will likely rise to 50-55% by 2035. Bidenomics bumps that to 70-80 percent.
  • The question is: “How does the renewable surge of 2022-2024, compare to the model-based expectations, with and without the IRA?”
  • The answer is either “so-so” or, more charitably, “too early to tell”. In broad terms, the current rate of expansion is slightly above the rate the models predict without the provision of additional Bidenomics incentives. But what is also clear is that the current rate of expansion is far short of the long-run pace that should be expected from the IRA
  • At this point, defenders of the IRA interject that the IRA has only just come into effect. Cash from the IRA is only beginning to flow. And in an environment of higher costs for renewable energy equipment and higher interest rates, cash matters.
  • As Yakov Feygin put it: “Maybe the pithiest way to put it is that there are pre-IRA trends and outside IRA trends, but IRA has served to rapidly compress the timeframes for installation in a lot of technologies. So five years has turned into two, for example.”
  • So, to judge the impact of the IRA to date, the real question is not what has been built in 2022 and 2023, but what is in the pipeline.
  • Advised by JP Morgan, sophisticated global players like Ørsted are optimizing their use of both the production and investment tax credits offered by the IRA to launch large new renewable schemes. Of course, correlation is not the same as causation
  • Where the IRA is perhaps doing its most important work may be in incentivizing the middle bracket of projects where green momentum is less certain.
  • According to Utility Drive: “The 10 largest U.S. developers plan to build 110,364 MW of new wind and solar projects over the next five years, according to S&P Global Market Intelligence, but the majority of these projects remain in early stages of development. Just 15% of planned wind and solar projects are under construction, and 13% are considered to be in advanced stages of development, … ”
  • The states that I have highlighted in red stand out either for their unusually low existing level of renewable power capacity or their lack of current momentum.
  • Along with Texas, the pipelines for the PJM, MISO and Southeast regions (which includes Florida) look particularly healthy.
  • The relatively modest California numbers should not be a surprise. As Yakov Feygin and others pointed out, what is needed in California is not more raw generating capacity, but more battery storage. And that is what we are seeing in the data.
  • The numbers would be even larger if it were not for the truly surreal logjam in California’s system for authorizing interconnections. According to Hamilton/Brookings data, the volume of hybrid solar and battery capacity in the queue for approval is 6.5 times the capacity currently operating in the state. In other words, there is an entire energy transition waiting to happen when the overloaded managerial processes of the system catch up
  • Texas’s less bureaucratic system seems to be one of its key advantages in the extremely rapid roll-out of solar.
  • though it may be true that globally speaking the United States as a whole is a laggard in renewable energy development,
  • If California (with an economy roughly comparable to that of Germany at current exchange rates) and Texas (with an economy roughly the size of Italy’s) were countries, they would be #3 and #5 in the world in solar capacity per capita.
  • the obvious question is, which are the laggards in the US energy system.
  • So there is a lot to get excited about, at, what we are learning to call, the “meso”-level of the economy (more on this in a future post).
  • What the state-level data reveal is that there are a significant number of large states in the USA where solar and wind energy have barely made any impact. Pennsylvania, for instance
  • The relative levels of sunshine between US states are irrelevant. As the global solar atlas shows, the entire United States has far better solar potential than North West Europe. If you can grow corn and tobacco, you can do utility-scale solar. The fact that Arizona is not a solar giant is mind-boggling.
  • Texas is both big and truly remarkable. California already is a world leader in renewable energy. Meanwhile, the majority of the US electricity system presents a very different picture. There is a huge distance to be traveled and the pace of solar build-out is unremarkable.
  • This is where national level incentives like the IRA must prove themselves
  • And these local battles in America matter. Given the extremely high per capita energy consumption in the USA, greening state-level energy systems is significant at the global level. It does not compare to the super-sized levels of emissions in China, but it matters.
  • Indonesia’s total installed electricity generating capacity is rated at 81 GW. As far as immediate impact on the global carbon balance is concerned, cleaning up the power systems of Pennsylvania and Illinois would make an even bigger impact.
  • A key test of Biden-era climate and industrial policy will be whether it can untie the local political economy of fossil fuels, which, across many regions of the United States still stands in the way of a green energy transition that now has all the force of economics and technological advantage on its side.
Javier E

Opinion | How We've Lost Our Moorings as a Society - The New York Times - 0 views

  • To my mind, one of the saddest things that has happened to America in my lifetime is how many of our mangroves we’ve lost. They are endangered everywhere today — but not just in nature.
  • Our society itself has lost so many of its social, normative and political mangroves as well — all those things that used to filter toxic behaviors, buffer political extremism and nurture healthy communities and trusted institutions for young people to grow up in and which hold our society together.
  • You see, shame used to be a mangrove
  • That shame mangrove has been completely uprooted by Trump.
  • The reason people felt ashamed is that they felt fidelity to certain norms — so their cheeks would turn red when they knew they had fallen short
  • He keeps pushing our system to its breaking point, flooding the zone with lies so that the people trust only him and the truth is only what he says it is. In nature, as in society, when you lose your mangroves, you get flooding with lots of mud.
  • People in high places doing shameful things is hardly new in American politics and business. What is new, Seidman argued, “is so many people doing it so conspicuously and with such impunity: ‘My words were perfect,’ ‘I’d do it again.’ That is what erodes norms — that and making everyone else feel like suckers for following them.”
  • Nothing is more corrosive to a vibrant democracy and healthy communities, added Seidman, than “when leaders with formal authority behave without moral authority.
  • Without leaders who, through their example and decisions, safeguard our norms and celebrate them and affirm them and reinforce them, the words on paper — the Bill of Rights, the Constitution or the Declaration of Independence — will never unite us.”
  • . Trump wants to destroy our social and legal mangroves and leave us in a broken ethical ecosystem, because he and people like him best thrive in a broken system.
  • in the kind of normless world we have entered where societal, institutional and leadership norms are being eroded,” Seidman said to me, “no one has to feel shame anymore because no norm has been violated.”
  • Responsibility, especially among those who have taken oaths of office — another vital mangrove — has also experienced serious destruction.
  • It’s not that the people in these communities have changed. It’s that if that’s what you are being fed, day in and day out, then you’re going to come to every conversation with a certain set of predispositions that are really hard to break through.”
  • Your sense of responsibility to appear above partisan politics to uphold the integrity of the court’s rulings would not allow it.
  • Civil discourse and engaging with those with whom you disagree — instead of immediately calling for them to be fired — also used to be a mangrove.
  • when moral arousal manifests as moral outrage — and immediate demands for firings — “it can result in a vicious cycle of moral outrage being met with equal outrage, as opposed to a virtuous cycle of dialogue and the hard work of forging real understanding.”
  • In November 2022, the Heterodox Academy, a nonprofit advocacy group, surveyed 1,564 full-time college students ages 18 to 24. The group found that nearly three in five students (59 percent) hesitate to speak about controversial topics like religion, politics, race, sexual orientation and gender for fear of negative backlashes by classmates.
  • Locally owned small-town newspapers used to be a mangrove buffering the worst of our national politics. A healthy local newspaper is less likely to go too far to one extreme or another, because its owners and editors live in the community and they know that for their local ecosystem to thrive, they need to preserve and nurture healthy interdependencies
  • in 2023, the loss of local newspapers accelerated to an average of 2.5 per week, “leaving more than 200 counties as ‘news deserts’ and meaning that more than half of all U.S. counties now have limited access to reliable local news and information.”
  • As in nature, it leaves the local ecosystem with fewer healthy interdependencies, making it more vulnerable to invasive species and disease — or, in society, diseased ideas.
  • It used to be that if you had the incredible privilege of serving as U.S. Supreme Court justice, in your wildest dreams you would never have an American flag hanging upside down
  • we have gone from you’re not supposed to say “hell” on the radio to a nation that is now being permanently exposed to for-profit systems of political and psychological manipulation (and throw in Russia and China stoking the fires today as well), so people are not just divided, but being divided. Yes, keeping Americans morally outraged is big business at home now and war by other means by our geopolitical rivals.
  • More than ever, we are living in the “never-ending storm” that Seidman described to me back in 2016, in which moral distinctions, context and perspective — all the things that enable people and politicians to make good judgments — get blown away.
  • Blown away — that is exactly what happens to the plants, animals and people in an ecosystem that loses its mangroves.
Javier E

Elon Musk's Latest Dust-Up: What Does 'Science' Even Mean? - WSJ - 0 views

  • Elon Musk is racing to a sci-fi future while the AI chief at Meta Platforms is arguing for one rooted in the traditional scientific approach.
  • Meta’s top AI scientist, Yann LeCun, criticized the rival company and Musk himself. 
  • Musk turned to a favorite rebuttal—a veiled suggestion that the executive, who is also a high-profile professor, wasn’t accomplishing much: “What ‘science’ have you done in the past 5 years?”
  • “Over 80 technical papers published since January 2022,” LeCun responded. “What about you?”
  • To which Musk posted: “That’s nothing, you’re going soft. Try harder!
  • At stake are the hearts and minds of AI experts—academic and otherwise—needed to usher in the technology
  • “Join xAI,” LeCun wrote, “if you can stand a boss who:– claims that what you are working on will be solved next year (no pressure).– claims that what you are working on will kill everyone and must be stopped or paused (yay, vacation for 6 months!).– claims to want a ‘maximally rigorous pursuit of the truth’ but spews crazy-ass conspiracy theories on his own social platform.”
  • Some read Musk’s “science” dig as dismissing the role research has played for a generation of AI experts. For years, the Metas and Googles of the world have hired the top minds in AI from universities, indulging their desires to keep a foot in both worlds by allowing them to release their research publicly, while also trying to deploy products. 
  • For an academic such as LeCun, published research, whether peer-reviewed or not, allowed ideas to flourish and reputations to be built, which in turn helped build stars in the system.
  • LeCun has been at Meta since 2013 while serving as an NYU professor since 2003. His tweets suggest he subscribes to the philosophy that one’s work needs to be published—put through the rigors of being shown to be correct and reproducible—to really be considered science. 
  • “If you do research and don’t publish, it’s not Science,” he posted in a lengthy tweet Tuesday rebutting Musk. “If you never published your research but somehow developed it into a product, you might die rich,” he concluded. “But you’ll still be a bit bitter and largely forgotten.” 
  • After pushback, he later clarified in another post: “What I *AM* saying is that science progresses through the collision of ideas, verification, analysis, reproduction, and improvements. If you don’t publish your research *in some way* your research will likely have no impact.”
  • The spat inspired debate throughout the scientific community. “What is science?” Nature, a scientific journal, asked in a headline about the dust-up.
  • Others, such as Palmer Luckey, a former Facebook executive and founder of Anduril Industries, a defense startup, took issue with LeCun’s definition of science. “The extreme arrogance and elitism is what people have a problem with,” he tweeted.
  • For Musk, who prides himself on his physics-based viewpoint and likes to tout how he once aspired to work at a particle accelerator in pursuit of the universe’s big questions, LeCun’s definition of science might sound too ivory-tower. 
  • Musk has blamed universities for helping promote what he sees as overly liberal thinking and other symptoms of what he calls the Woke Mind Virus. 
  • Over the years, an appeal of working for Musk has been the impression that his companies move quickly, filled with engineers attracted to tackling hard problems and seeing their ideas put into practice.
  • “I’ve teamed up with Elon to see if we can actually apply these new technologies to really make a dent in our understanding of the universe,” Igor Babuschkin, an AI expert who worked at OpenAI and Google’s DeepMind, said last year as part of announcing xAI’s mission. 
  • The creation of xAI quickly sent ripples through the AI labor market, with one rival complaining it was hard to compete for potential candidates attracted to Musk and his reputation for creating value
  • that was before xAI’s latest round raised billions of dollars, putting its valuation at $24 billion, kicking off a new recruiting drive. 
  • It was already a seller’s market for AI talent, with estimates that there might be only a couple hundred people out there qualified to deal with certain pressing challenges in the industry and that top candidates can easily earn compensation packages worth $1 million or more
  • Since the launch, Musk has been quick to criticize competitors for what he perceived as liberal biases in rival AI chatbots. His pitch of xAI being the anti-woke bastion seems to have worked to attract some like-minded engineers.
  • As for Musk’s final response to LeCun’s defense of research, he posted a meme featuring Pepé Le Pew that read: “my honest reaction.”
Javier E

The AI Revolution Is Already Losing Steam - WSJ - 0 views

  • Most of the measurable and qualitative improvements in today’s large language model AIs like OpenAI’s ChatGPT and Google’s Gemini—including their talents for writing and analysis—come down to shoving ever more data into them. 
  • AI could become a commodity
  • To train next generation AIs, engineers are turning to “synthetic data,” which is data generated by other AIs. That approach didn’t work to create better self-driving technology for vehicles, and there is plenty of evidence it will be no better for large language models,
  • AIs like ChatGPT rapidly got better in their early days, but what we’ve seen in the past 14-and-a-half months are only incremental gains, says Marcus. “The truth is, the core capabilities of these systems have either reached a plateau, or at least have slowed down in their improvement,” he adds.
  • the gaps between the performance of various AI models are closing. All of the best proprietary AI models are converging on about the same scores on tests of their abilities, and even free, open-source models, like those from Meta and Mistral, are catching up.
  • models work by digesting huge volumes of text, and it’s undeniable that up to now, simply adding more has led to better capabilities. But a major barrier to continuing down this path is that companies have already trained their AIs on more or less the entire internet, and are running out of additional data to hoover up. There aren’t 10 more internets’ worth of human-generated content for today’s AIs to inhale.
  • A mature technology is one where everyone knows how to build it. Absent profound breakthroughs—which become exceedingly rare—no one has an edge in performance
  • companies look for efficiencies, and whoever is winning shifts from who is in the lead to who can cut costs to the bone. The last major technology this happened with was electric vehicles, and now it appears to be happening to AI.
  • the future for AI startups—like OpenAI and Anthropic—could be dim.
  • Microsoft and Google will be able to entice enough users to make their AI investments worthwhile, doing so will require spending vast amounts of money over a long period of time, leaving even the best-funded AI startups—with their comparatively paltry warchests—unable to compete.
  • Many other AI startups, even well-funded ones, are apparently in talks to sell themselves.
  • the bottom line is that for a popular service that relies on generative AI, the costs of running it far exceed the already eye-watering cost of training it.
  • That difference is alarming, but what really matters to the long-term health of the industry is how much it costs to run AIs. 
  • Changing people’s mindsets and habits will be among the biggest barriers to swift adoption of AI. That is a remarkably consistent pattern across the rollout of all new technologies.
  • the industry spent $50 billion on chips from Nvidia to train AI in 2023, but brought in only $3 billion in revenue.
  • For an almost entirely ad-supported company like Google, which is now offering AI-generated summaries across billions of search results, analysts believe delivering AI answers on those searches will eat into the company’s margins
  • Google, Microsoft and others said their revenue from cloud services went up, which they attributed in part to those services powering other companies’ AIs. But sustaining that revenue depends on other companies and startups getting enough value out of AI to justify continuing to fork over billions of dollars to train and run those systems
  • three in four white-collar workers now use AI at work. Another survey, from corporate expense-management and tracking company Ramp, shows about a third of companies pay for at least one AI tool, up from 21% a year ago.
  • OpenAI doesn’t disclose its annual revenue, but the Financial Times reported in December that it was at least $2 billion, and that the company thought it could double that amount by 2025. 
  • That is still a far cry from the revenue needed to justify OpenAI’s now nearly $90 billion valuation
  • the company excels at generating interest and attention, but it’s unclear how many of those users will stick around. 
  • AI isn’t nearly the productivity booster it has been touted as
  • While these systems can help some people do their jobs, they can’t actually replace them. This means they are unlikely to help companies save on payroll. He compares it to the way that self-driving trucks have been slow to arrive, in part because it turns out that driving a truck is just one part of a truck driver’s job.
  • Add in the myriad challenges of using AI at work. For example, AIs still make up fake information,
  • getting the most out of open-ended chatbots isn’t intuitive, and workers will need significant training and time to adjust.
  • That’s because AI has to think anew every single time something is asked of it, and the resources that AI uses when it generates an answer are far larger than what it takes to, say, return a conventional search result
  • None of this is to say that today’s AI won’t, in the long run, transform all sorts of jobs and industries. The problem is that the current level of investment—in startups and by big companies—seems to be predicated on the idea that AI is going to get so much better, so fast, and be adopted so quickly that its impact on our lives and the economy is hard to comprehend. 
  • Mounting evidence suggests that won’t be the case.