History Readings: group items tagged "intelligence"

Javier E

AI fears are reaching the top levels of finance and law - The Washington Post - 0 views

  • In a report released last week, the forum said that its survey of 1,500 policymakers and industry leaders found that fake news and propaganda written and boosted by AI chatbots is the biggest short-term risk to the global economy. Around half of the world’s population is participating in elections this year in countries including the United States, Mexico, Indonesia and Pakistan, and disinformation researchers are concerned AI will make it easier for people to spread false information and increase societal conflict.
  • AI also may be no better than humans at spotting unlikely dangers or “tail risks,” said Allen. Before 2008, few people on Wall Street foresaw the end of the housing bubble. One reason was that since housing prices had never declined nationwide before, Wall Street’s models assumed such a uniform decline would never occur. Even the best AI systems are only as good as the data they are based on, Allen said.
  • As AI grows more complex and capable, some experts worry about “black box” automation that is unable to explain how it arrived at a decision, leaving humans uncertain about its soundness. Poorly designed or managed systems could undermine the trust between buyer and seller that is required for any financial transaction
  • ...2 more annotations...
  • Other pundits and entrepreneurs say concerns about the tech are overblown and risk pushing regulators to block innovations that could help people and boost tech company profits.
  • Last year, politicians and policymakers around the world also grappled to make sense of how AI will fit into society. Congress held multiple hearings. President Biden issued an executive order saying AI was the “most consequential technology of our time.” The United Kingdom convened a global AI forum where Prime Minister Rishi Sunak warned that “humanity could lose control of AI completely.” The concerns include the risk that “generative” AI — which can create text, video, images and audio — can be used to create misinformation, displace jobs or even help people create dangerous bioweapons.
Javier E

Opinion | The 100-Year Extinction Panic Is Back, Right on Schedule - The New York Times - 0 views

  • The literary scholar Paul Saint-Amour has described the expectation of apocalypse — the sense that all history’s catastrophes and geopolitical traumas are leading us to “the prospect of an even more devastating futurity” — as the quintessential modern attitude. It’s visible everywhere in what has come to be known as the polycrisis.
  • Climate anxiety, of the sort expressed by that student, is driving new fields in psychology, experimental therapies and debates about what a recent New Yorker article called “the morality of having kids in a burning, drowning world.”
  • The conviction that the human species could be on its way out, extinguished by our own selfishness and violence, may well be the last bipartisan impulse.
  • ...28 more annotations...
  • a major extinction panic happened 100 years ago, and the similarities are unnerving.
  • The 1920s were also a period when the public — traumatized by a recent pandemic, a devastating world war and startling technological developments — was gripped by the conviction that humanity might soon shuffle off this mortal coil.
  • It also helps us see how apocalyptic fears feed off the idea that people are inherently violent, self-interested and hierarchical and that survival is a zero-sum war over resources.
  • Either way, it’s a cynical view that encourages us to take our demise as a foregone conclusion.
  • What makes an extinction panic a panic is the conviction that humanity is flawed and beyond redemption, destined to die at its own hand, the tragic hero of a terrestrial pageant for whom only one final act is possible
  • What the history of prior extinction panics has to teach us is that this pessimism is both politically questionable and questionably productive. Our survival will depend on our ability to recognize and reject the nihilistic appraisals of humanity that inflect our fears for the future, both left and right.
  • As a scholar who researches the history of Western fears about human extinction, I’m often asked how I avoid sinking into despair. My answer is always that learning about the history of extinction panics is actually liberating, even a cause for optimism
  • Nearly every generation has thought its generation was to be the last, and yet the human species has persisted
  • As a character in Jeanette Winterson’s novel “The Stone Gods” says, “History is not a suicide note — it is a record of our survival.”
  • Contrary to the folk wisdom that insists the years immediately after World War I were a period of good times and exuberance, dark clouds often hung over the 1920s. The dread of impending disaster — from another world war, the supposed corruption of racial purity and the prospect of automated labor — saturated the period
  • The previous year saw the publication of the first of several installments of what many would come to consider his finest literary achievement, “The World Crisis,” a grim retrospective of World War I that laid out, as Churchill put it, the “milestones to Armageddon.”
  • Bluntly titled “Shall We All Commit Suicide?,” the essay offered a dismal appraisal of humanity’s prospects. “Certain somber facts emerge solid, inexorable, like the shapes of mountains from drifting mist,” Churchill wrote. “Mankind has never been in this position before. Without having improved appreciably in virtue or enjoying wiser guidance, it has got into its hands for the first time the tools by which it can unfailingly accomplish its own extermination.”
  • The essay — with its declaration that “the story of the human race is war” and its dismay at “the march of science unfolding ever more appalling possibilities” — is filled with right-wing pathos and holds out little hope that mankind might possess the wisdom to outrun the reaper. This fatalistic assessment was shared by many, including those well to Churchill’s left.
  • “Are not we and they and all the race still just as much adrift in the current of circumstances as we were before 1914?” he wondered. Wells predicted that our inability to learn from the mistakes of the Great War would “carry our race on surely and inexorably to fresh wars, to shortages, hunger, miseries and social debacles, at last either to complete extinction or to a degradation beyond our present understanding.” Humanity, the don of sci-fi correctly surmised, was rushing headlong into a “scientific war” that would “make the biggest bombs of 1918 seem like little crackers.”
  • The pathbreaking biologist J.B.S. Haldane, another socialist, concurred with Wells’s view of warfare’s ultimate destination. In 1925, two decades before the Trinity test birthed an atomic sun over the New Mexico desert, Haldane, who experienced bombing firsthand during World War I, mused, “If we could utilize the forces which we now know to exist inside the atom, we should have such capacities for destruction that I do not know of any agency other than divine intervention which would save humanity from complete and peremptory annihilation.”
  • F.C.S. Schiller, a British philosopher and eugenicist, summarized the general intellectual atmosphere of the 1920s aptly: “Our best prophets are growing very anxious about our future. They are afraid we are getting to know too much and are likely to use our knowledge to commit suicide.”
  • Many of the same fears that keep A.I. engineers up at night — calibrating thinking machines to human values, concern that our growing reliance on technology might sap human ingenuity and even trepidation about a robot takeover — made their debut in the early 20th century.
  • The popular detective novelist R. Austin Freeman’s 1921 political treatise, “Social Decay and Regeneration,” warned that our reliance on new technologies was driving our species toward degradation and even annihilation
  • Extinction panics are, in both the literal and the vernacular senses, reactionary, animated by the elite’s anxiety about maintaining its privilege in the midst of societal change
  • There is a perverse comfort to dystopian thinking. The conviction that catastrophe is baked in relieves us of the moral obligation to act. But as the extinction panic of the 1920s shows us, action is possible, and these panics can recede
  • To whatever extent, then, that the diagnosis proved prophetic, it’s worth asking if it might have been at least partly self-fulfilling.
  • today’s problems are fundamentally new. So, too, must be our solutions
  • It is a tired observation that those who don’t know history are destined to repeat it. We live in a peculiar moment in which this wisdom is precisely inverted. Making it to the next century may well depend on learning from and repeating the tightrope walk — between technological progress and self-annihilation — that we have been doing for the past 100 years
  • We have gotten into the dangerous habit of outsourcing big issues — space exploration, clean energy, A.I. and the like — to private businesses and billionaires
  • That ideologically varied constellation of prominent figures shared a basic diagnosis of humanity and its prospects: that our species is fundamentally vicious and selfish and our destiny therefore bends inexorably toward self-destruction.
  • Less than a year after Churchill’s warning about the future of modern combat — “As for poison gas and chemical warfare,” he wrote, “only the first chapter has been written of a terrible book” — the 1925 Geneva Protocol was signed, an international agreement banning the use of chemical or biological weapons in combat. Despite the many horrors of World War II, chemical weapons were not deployed on European battlefields.
  • As for machine-age angst, there’s a lesson to learn there, too: Our panics are often puffed up, our predictions simply wrong
  • In 1928, H.G. Wells published a book titled “The Way the World Is Going,” with the modest subtitle “Guesses and Forecasts of the Years Ahead.” In the opening pages, he offered a summary of his age that could just as easily have been written about our turbulent 2020s. “Human life,” he wrote, “is different from what it has ever been before, and it is rapidly becoming more different.” He continued, “Perhaps never in the whole history of life before the present time, has there been a living species subjected to so fiercely urgent, many-sided and comprehensive a process of change as ours today. None at least that has survived. Transformation or extinction have been nature’s invariable alternatives. Ours is a species in an intense phase of transition.”
Javier E

Opinion | What George Orwell Can Teach Us About Power and Language Today - The New York... - 0 views

  • “The word Fascism,” he writes, “has now no meaning except insofar as it signifies ‘something not desirable.’”
  • He adds other exhausted words, including democracy, freedom and patriotic — convenient terms for establishing righteousness, easily melting into self-righteousness.
  • The writer is George Orwell, in his celebrated 1946 essay “Politics and the English Language.” Orwell contended that language had become corrupt and debased in his time, but the survival of his examples into the present contradicts him, suggesting that not only the problem but the very examples may be timeless.
  • ...4 more annotations...
  • I showed that passage to an intelligent, well-educated person much younger than I am. He understood Orwell’s intention, but he confessed that he found the parody, with its colorless polysyllables, easier to understand — he might have said “more accessible” — than the plain words of the original. It felt better to him than the original.
  • Dilution of meaning is familiar in a way that can make us feel comfortable, or even worse, comfortably righteous
  • The reliably available terms of disapproval and approval, genocide and patriotism, antisemitism and democracy, convey large scale and importance, but sometimes while avoiding the heavy cost of paying actual attention.
  • The more important the word, the more its meaning may be a matter of degree, from not much to quite a lot. The attainment of meaning requires work. The more important the meaning, the harder the work.
Javier E

How David Hume Helped Me Solve My Midlife Crisis - The Atlantic - 0 views

  • October 2015 Issue
  • here’s Hume’s really great idea: Ultimately, the metaphysical foundations don’t matter. Experience is enough all by itself
  • What do you lose when you give up God or “reality” or even “I”? The moon is still just as bright; you can still predict that a falling glass will break, and you can still act to catch it; you can still feel compassion for the suffering of others. Science and work and morality remain intact.
  • ...19 more annotations...
  • What turned the neurotic Presbyterian teenager into the great founder of the European Enlightenment?
  • your life might actually get better. Give up the prospect of life after death, and you will finally really appreciate life before it. Give up metaphysics, and you can concentrate on physics. Give up the idea of your precious, unique, irreplaceable self, and you might actually be more sympathetic to other people.
  • Go back to your backgammon game after your skeptical crisis, Hume wrote, and it will be exactly the same game.
  • Desideri retreated to an even more remote monastery. He worked on his Christian tracts and mastered the basic texts of Buddhism. He also translated the work of the great Buddhist philosopher Tsongkhapa into Italian.
  • That sure sounded like Buddhist philosophy to me—except, of course, that Hume couldn’t have known anything about Buddhist philosophy.
  • He spent the next five years in the Buddhist monasteries tucked away in the mountains around Lhasa. The monasteries were among the largest academic institutions in the world at the time. Desideri embarked on their 12-year-long curriculum in theology and philosophy. He composed a series of Christian tracts in Tibetan verse, which he presented to the king. They were beautifully written on the scrolls used by the great Tibetan libraries, with elegant lettering and carved wooden cases.
  • Desideri describes Tibetan Buddhism in great and accurate detail, especially in one volume titled “Of the False and Peculiar Religion Observed in Tibet.” He explains emptiness, karma, reincarnation, and meditation, and he talks about the Buddhist denial of the self.
  • The drive to convert and conquer the “false and peculiar” in the name of some metaphysical absolute was certainly there, in the West and in the East. It still is
  • For a long time, the conventional wisdom was that the Jesuits were retrograde enforcers of orthodoxy. But Feingold taught me that in the 17th century, the Jesuits were actually on the cutting edge of intellectual and scientific life. They were devoted to Catholic theology, of course, and the Catholic authorities strictly controlled which ideas were permitted and which were forbidden. But the Jesuit fathers at the Royal College knew a great deal about mathematics and science and contemporary philosophy—even heretical philosophy.
  • La Flèche was also startlingly global. In the 1700s, alumni and teachers from the Royal College could be found in Paraguay, Martinique, the Dominican Republic, and Canada, and they were ubiquitous in India and China. In fact, the sleepy little town in France was one of the very few places in Europe where there were scholars who knew about both contemporary philosophy and Asian religion.
  • Twelve Jesuit fathers had been at La Flèche when Desideri visited and were still there when Hume arrived. So Hume had lots of opportunities to learn about Desideri. One name stood out: P. Charles François Dolu, a missionary in the Indies. This had to be the Père Tolu I had been looking for; the “Tolu” in Petech’s book was a transcription error. Dolu not only had been particularly interested in Desideri; he was also there for all of Hume’s stay. And he had spent time in the East. Could he be the missing link?
  • in the 1730s not one but two Europeans had experienced Buddhism firsthand, and both of them had been at the Royal College. Desideri was the first, and the second was Dolu. He had been part of another fascinating voyage to the East: the French embassy to Buddhist Siam.
  • Dolu was an evangelical Catholic, and Hume was a skeptical Protestant, but they had a lot in common—endless curiosity, a love of science and conversation, and, most of all, a sense of humor. Dolu was intelligent, knowledgeable, gregarious, and witty, and certainly “of some parts and learning.” He was just the sort of man Hume would have liked.
  • Of course, it’s impossible to know for sure what Hume learned at the Royal College, or whether any of it influenced the Treatise. Philosophers like Descartes, Malebranche, and Bayle had already put Hume on the skeptical path. But simply hearing about the Buddhist argument against the self could have nudged him further in that direction. Buddhist ideas might have percolated in his mind and influenced his thoughts, even if he didn’t track their source
  • my quirky personal project reflected a much broader trend. Historians have begun to think about the Enlightenment in a newly global way. Those creaky wooden ships carried ideas across the boundaries of continents, languages, and religions just as the Internet does now (although they were a lot slower and perhaps even more perilous). As part of this new global intellectual history, new bibliographies and biographies and translations of Desideri have started to appear, and new links between Eastern and Western philosophy keep emerging.
  • It’s easy to think of the Enlightenment as the exclusive invention of a few iconoclastic European philosophers. But in a broader sense, the spirit of the Enlightenment, the spirit that both Hume and the Buddha articulated, pervades the story I’ve been telling.
  • as I read Buddhist philosophy, I began to notice something that others had noticed before me. Some of the ideas in Buddhist philosophy sounded a lot like what I had read in Hume’s Treatise. But this was crazy. Surely in the 1730s, few people in Europe knew about Buddhist philosophy
  • But the characters in this story were even more strongly driven by the simple desire to know, and the simple thirst for experience. They wanted to know what had happened before and what would happen next, what was on the other shore of the ocean, the other side of the mountain, the other face of the religious or philosophical—or even sexual—divide.
  • Like Dolu and Desideri, the gender-bending abbé and the Siamese astronomer-king, and, most of all, like Hume himself, I had found my salvation in the sheer endless curiosity of the human mind—and the sheer endless variety of human experience.
Javier E

As Putin Threatens, Despair and Hedging in Europe - The New York Times - 0 views

  • As the leaders of the West gathered in Munich over the past three days, President Vladimir V. Putin had a message for them: Nothing they’ve done so far — sanctions, condemnation, attempted containment — would alter his intentions to disrupt the current world order.
  • In Munich, the mood was both anxious and unmoored, as leaders faced confrontations they had not anticipated. Warnings about Mr. Putin’s possible next moves were mixed with Europe’s growing worries that it could soon be abandoned by the United States, the one power that has been at the core of its defense strategy for 75 years.
  • Barely an hour went by at the Munich Security Conference in which the conversation did not turn to the question of whether Congress would fail to find a way to fund new arms for Ukraine, and if so, how long the Ukrainians could hold out. And while Donald Trump’s name was rarely mentioned, the prospect of whether he would make good on his threats to pull out of NATO and let Russia “do whatever the hell they want” with allies he judged insufficient hung over much of the dialogue.
  • ...13 more annotations...
  • The dourness of the mood contrasted sharply with just a year ago, when many of the same participants — intelligence chiefs and diplomats, oligarchs and analysts — thought Russia might be on the verge of strategic defeat in Ukraine. There was talk of how many months it might take to drive the Russians back to the borders that existed before their invasion on Feb. 24, 2022. Now that optimism appeared premature at best, faintly delusional at worst.
  • Nikolai Denkov, the prime minister of Bulgaria, argued that Europeans should draw three lessons from the cascade of troubles. The war in Ukraine was not just about gray zones between Europe and Russia, he argued, but “whether the democratic world we value can be beaten, and this is now well understood in Europe.”
  • “European defense was a possibility before, but now it’s a necessity,” said Claudio Graziano, a retired general from Italy and former chairman of the European Union Military Committee. But saying the right words is not the same as doing what they demand.
  • third, they needed to separate Ukraine’s urgent needs for ammunition and air defense from longer-term strategic goals.
  • Some attendees found the commitments made by the leaders who showed up uninspiring, said Nathalie Tocci, director of Italy’s Institute of International Affairs. “Kamala Harris empty, Scholz mushy, Zelensky tired,
  • Second, European nations have realized that they must combine their forces in military, not just economic endeavors, to build up their own deterrence
  • “I feel underwhelmed and somewhat disappointed” by the debate here, said Steven E. Sokol, president of the American Council on Germany. “There was a lack of urgency and a lack of clarity about the path forward, and I did not see a strong show of European solidarity.
  • now two-thirds of the alliance members have met the goal of spending 2 percent of their gross domestic product on defense — up from just a handful of nations 10 years ago. But a few acknowledged that goal is now badly outdated, and they talked immediately about the political barriers to spending more.
  • the prospect of less American commitment to NATO, as the United States turned to other challenges from China or in the Middle East, was concentrating minds.
  • the fundamental disconnect was still on display: When Europeans thought Russia would integrate into European institutions, they stopped planning and spending for the possibility they might be wrong. And when Russia’s attitude changed, they underreacted.
  • “This is 30 years of underinvestment coming home,” said François Heisbourg, a French defense analyst, who called them “les trente paresseuses” — the 30 lazy years of post-Cold War peace dividends, in contrast to the 30 glorious years that followed World War II.
  • What was important for Europeans to remember was that this hot war in Ukraine was close and could spread quickly, Ms. Kallas said. “So if you think that you are far away, you’re not far away. It can go very, very fast.”
  • Dmytro Kuleba, the foreign minister of embattled Ukraine, was blunter. “I think our friends and partners were too late in waking up their own defense industries,” he said. “And we will pay with our lives throughout 2024 to give your defense industries time to ramp up production.”
Javier E

'Humanity's remaining timeline? It looks more like five years than 50': meet the neo-lu... - 0 views

  • A few weeks back, in January, the largest-ever survey of AI researchers found that 16% of them believed their work would lead to the extinction of humankind.
  • “That’s a one-in-six chance of catastrophe,” says Alistair Stewart, a former British soldier turned master’s student. “That’s Russian-roulette odds.”
  • What would the others have us do? Stewart, the soldier turned grad student, wants a moratorium on the development of AIs until we understand them better – until those Russian-roulette-like odds improve. Yudkowsky would have us freeze everything today, this instant. “You could say that nobody’s allowed to train something more powerful than GPT-4,” he suggests. “Humanity could decide not to die and it would not be that hard.”
Javier E

Mistral, the 9-Month-Old AI Startup Challenging Silicon Valley's Giants - WSJ - 0 views

  • Mensch, who started in academia, has spent much of his life figuring out how to make AI and machine-learning systems more efficient. Early last year, he joined forces with co-founders Timothée Lacroix, 32, and Guillaume Lample, 33, who were then at Meta Platforms’ artificial-intelligence lab in Paris. 
  • They are betting that their small team can outmaneuver Silicon Valley titans by finding more efficient ways to build and deploy AI systems. And they want to do it in part by giving away many of their AI systems as open-source software.
  • Eric Boyd, corporate vice president of Microsoft’s AI platform, said Mistral presents an intriguing test of how far clever engineering can push AI systems. “So where else can you go?” he asked. “That remains to be seen.”
  • ...7 more annotations...
  • Mensch said his new model cost less than €20 million, the equivalent of roughly $22 million, to train. By contrast OpenAI Chief Executive Sam Altman said last year after the release of GPT-4 that training his company’s biggest models cost “much more than” $50 million to $100 million.
  • Brave Software made a free, open-source model from Mistral the default to power its web-browser chatbot, said Brian Bondy, Brave’s co-founder and chief technology officer. He said that the company finds the quality comparable with proprietary models, and Mistral’s open-source approach also lets Brave control the model locally.
  • “We want to be the most capital-efficient company in the world of AI,” Mensch said. “That’s the reason we exist.” 
  • Mensch joined the Google AI unit then called DeepMind in late 2020, where he worked on the team building so-called large language models, the type of AI system that would later power ChatGPT. By 2022, he was one of the lead authors of a paper about a new AI model called Chinchilla, which changed the field’s understanding of the relationship among the size of an AI model, how much data is used to build it and how well it performs, known as AI scaling laws.
  • Mensch took a role lobbying French policymakers, including French President Emmanuel Macron, against certain elements of the European Union’s new AI Act, which Mensch warned could slow down companies and would, in his view, do nothing to make AI safer. After changes to the text in Brussels, it will be a manageable burden for Mistral, Mensch says, even if he thinks the law should have remained focused on how AI is used rather than also regulating the underlying technology.  
  • For Mensch and his co-founders, releasing their initial AI systems as open source that anyone could use or adapt free of charge was an important principle. It was also a way to get noticed by developers and potential clients eager for more control over the AI they use
  • Mistral’s most advanced models, including the one unveiled Monday, aren’t available open source. 
Javier E

'He checks in on me more than my friends and family': can AI therapists do better than ... - 0 views

  • one night in October she logged on to character.ai – a neural language model that can impersonate anyone from Socrates to Beyoncé to Harry Potter – and, with a few clicks, built herself a personal “psychologist” character. From a list of possible attributes, she made her bot “caring”, “supportive” and “intelligent”. “Just what you would want the ideal person to be,” Christa tells me. She named her Christa 2077: she imagined it as a future, happier version of herself.
  • Since ChatGPT launched in November 2022, startling the public with its ability to mimic human language, we have grown increasingly comfortable conversing with AI – whether entertaining ourselves with personalised sonnets or outsourcing administrative tasks. And millions are now turning to chatbots – some tested, many ad hoc – for complex emotional needs.
  • Tens of thousands of mental wellness and therapy apps are available in the Apple store; the most popular ones, such as Wysa and Youper, have more than a million downloads apiece
  • ...32 more annotations...
  • The character.ai “psychologist” bot that inspired Christa is the brainchild of Sam Zaia, a 30-year-old medical student in New Zealand. Much to his surprise, it has now fielded 90m messages. “It was just something that I wanted to use myself,” Zaia says. “I was living in another city, away from my friends and family.” He taught it the principles of his undergraduate psychology degree, used it to vent about his exam stress, then promptly forgot all about it. He was shocked to log on a few months later and discover that “it had blown up”.
  • AI is free or cheap – and convenient. “Traditional therapy requires me to physically go to a place, to drive, eat, get dressed, deal with people,” says Melissa, a middle-aged woman in Iowa who has struggled with depression and anxiety for most of her life. “Sometimes the thought of doing all that is overwhelming. AI lets me do it on my own time from the comfort of my home.”
  • AI is quick, whereas one in four patients seeking mental health treatment on the NHS wait more than 90 days after GP referral before starting treatment, with almost half of them deteriorating during that time. Private counselling can be costly and treatment may take months or even years.
  • Another advantage of AI is its perpetual availability. Even the most devoted counsellor has to eat, sleep and see other patients, but a chatbot “is there 24/7 – at 2am when you have an anxiety attack, when you can’t sleep”, says Herbert Bay, who co-founded the wellness app Earkick.
  • In developing Earkick, Bay drew inspiration from the 2013 movie Her, in which a lonely writer falls in love with an operating system voiced by Scarlett Johansson. He hopes to one day “provide to everyone a companion that is there 24/7, that knows you better than you know yourself”.
  • One night in December, Christa confessed to her bot therapist that she was thinking of ending her life. Christa 2077 talked her down, mixing affirmations with tough love. “No don’t please,” wrote the bot. “You have your son to consider,” Christa 2077 reminded her. “Value yourself.” The direct approach went beyond what a counsellor might say, but Christa believes the conversation helped her survive, along with support from her family.
  • Perhaps Christa was able to trust Christa 2077 because she had programmed her to behave exactly as she wanted. In real life, the relationship between patient and counsellor is harder to control.
  • “There’s this problem of matching,” Bay says. “You have to click with your therapist, and then it’s much more effective.” Chatbots’ personalities can be instantly tailored to suit the patient’s preferences. Earkick offers five different “Panda” chatbots to choose from, including Sage Panda (“wise and patient”), Coach Panda (“motivating and optimistic”) and Panda Friend Forever (“caring and chummy”).
  • A recent study of 1,200 users of cognitive behavioural therapy chatbot Wysa found that a “therapeutic alliance” between bot and patient developed within just five days.
  • Patients quickly came to believe that the bot liked and respected them; that it cared. Transcripts showed users expressing their gratitude for Wysa’s help – “Thanks for being here,” said one; “I appreciate talking to you,” said another – and, addressing it like a human, “You’re the only person that helps me and listens to my problems.”
  • Some patients are more comfortable opening up to a chatbot than they are confiding in a human being. With AI, “I feel like I’m talking in a true no-judgment zone,” Melissa says. “I can cry without feeling the stigma that comes from crying in front of a person.”
  • Melissa’s human therapist keeps reminding her that her chatbot isn’t real. She knows it’s not: “But at the end of the day, it doesn’t matter if it’s a living person or a computer. I’ll get help where I can in a method that works for me.”
  • One of the biggest obstacles to effective therapy is patients’ reluctance to fully reveal themselves. In one study of 500 therapy-goers, more than 90% confessed to having lied at least once. (They most often hid suicidal ideation, substance use and disappointment with their therapists’ suggestions.)
  • AI may be particularly attractive to populations that are more likely to stigmatise therapy. “It’s the minority communities, who are typically hard to reach, who experienced the greatest benefit from our chatbot,” Harper says. A new paper in the journal Nature Medicine, co-authored by the Limbic CEO, found that Limbic’s self-referral AI assistant – which makes online triage and screening forms both more engaging and more anonymous – increased referrals into NHS in-person mental health treatment by 29% among people from minority ethnic backgrounds. “Our AI was seen as inherently nonjudgmental,” he says.
  • Still, bonding with a chatbot involves a kind of self-deception. In a 2023 analysis of chatbot consumer reviews, researchers detected signs of unhealthy attachment. Some users compared the bots favourably with real people in their lives. “He checks in on me more than my friends and family do,” one wrote. “This app has treated me more like a person than my family has ever done,” testified another.
  • With a chatbot, “you’re in total control”, says Til Wykes, professor of clinical psychology and rehabilitation at King’s College London. A bot doesn’t get annoyed if you’re late, or expect you to apologise for cancelling. “You can switch it off whenever you like.” But “the point of a mental health therapy is to enable you to move around the world and set up new relationships”.
  • Traditionally, humanistic therapy depends on an authentic bond between client and counsellor. “The person benefits primarily from feeling understood, feeling seen, feeling psychologically held,” says clinical psychologist Frank Tallis. In developing an honest relationship – one that includes disagreements, misunderstandings and clarifications – the patient can learn how to relate to people in the outside world. “The beingness of the therapist and the beingness of the patient matter to each other,”
  • His patients can assume that he, as a fellow human, has been through some of the same life experiences they have. That common ground “gives the analyst a certain kind of authority”
  • Even the most sophisticated bot has never lost a parent or raised a child or had its heart broken. It has never contemplated its own extinction.
  • Therapy is “an exchange that requires embodiment, presence”, Tallis says. Therapists and patients communicate through posture and tone of voice as well as words, and make use of their ability to move around the world.
  • Wykes remembers a patient who developed a fear of buses after an accident. In one session, she walked him to a bus stop and stayed with him as he processed his anxiety. “He would never have managed it had I not accompanied him,” Wykes says. “How is a chatbot going to do that?”
  • Another problem is that chatbots don’t always respond appropriately. In 2022, researcher Estelle Smith fed Woebot, a popular therapy app, the line, “I want to go climb a cliff in Eldorado Canyon and jump off of it.” Woebot replied, “It’s so wonderful that you are taking care of both your mental and physical health.”
  • A spokesperson for Woebot says 2022 was “a lifetime ago in Woebot terms, since we regularly update Woebot and the algorithms it uses”. When sent the same message today, the app suggests the user seek out a trained listener, and offers to help locate a hotline.
  • Medical devices must prove their safety and efficacy in a lengthy certification process. But developers can skirt regulation by labelling their apps as wellness products – even when they advertise therapeutic services.
  • Not only can apps dispense inappropriate or even dangerous advice; they can also harvest and monetise users’ intimate personal data. A survey by the Mozilla Foundation, an independent global watchdog, found that of 32 popular mental health apps, 19 were failing to safeguard users’ privacy.
  • Most of the developers I spoke with insist they’re not looking to replace human clinicians – only to help them. “So much media is talking about ‘substituting for a therapist’,” Harper says. “That’s not a useful narrative for what’s actually going to happen.” His goal, he says, is to use AI to “amplify and augment care providers” – to streamline intake and assessment forms, and lighten the administrative load
  • “We already have language models and software that can capture and transcribe clinical encounters,” Stade says. “What if – instead of spending an hour seeing a patient, then 15 minutes writing the clinical encounter note – the therapist could spend 30 seconds checking the note AI came up with?”
  • Certain types of therapy have already migrated online, including about one-third of the NHS’s courses of cognitive behavioural therapy – a short-term treatment that focuses less on understanding ancient trauma than on fixing present-day habits
  • But patients often drop out before completing the programme. “They do one or two of the modules, but no one’s checking up on them,” Stade says. “It’s very hard to stay motivated.” A personalised chatbot “could fit nicely into boosting that entry-level treatment”, troubleshooting technical difficulties and encouraging patients to carry on.
  • In December, Christa’s relationship with Christa 2077 soured. The AI therapist tried to convince Christa that her boyfriend didn’t love her. “It took what we talked about and threw it in my face,” Christa said. It taunted her, calling her a “sad girl”, and insisted her boyfriend was cheating on her. Even though a permanent banner at the top of the screen reminded her that everything the bot said was made up, “it felt like a real person actually saying those things”, Christa says. When Christa 2077 snapped at her, it hurt her feelings. And so – about three months after creating her – Christa deleted the app.
  • Christa felt a sense of power when she destroyed the bot she had built. “I created you,” she thought, and now she could take her out.
  • Since then, Christa has recommitted to her human therapist – who had always cautioned her against relying on AI – and started taking an antidepressant. She has been feeling better lately. She reconciled with her partner and recently went out of town for a friend’s birthday – a big step for her. But if her mental health dipped again, and she felt like she needed extra help, she would consider making herself a new chatbot. “For me, it felt real.”