
Digital Society / Group items tagged chatbot


dr tech

Are chatbots of the dead a brilliant idea or a terrible one? | Aeon Essays - 0 views

  •  
    "'Fredbot' is one example of a technology known as chatbots of the dead, chatbots designed to speak in the voice of specific deceased people. Other examples are plentiful: in 2016, Eugenia Kuyda built a chatbot from the text messages of her friend Roman Mazurenko, who was killed in a traffic accident. The first Roman Bot, like Fredbot, was selective, but later versions were generative, meaning they generated novel responses that reflected Mazurenko's voice. In 2020, the musician and artist Laurie Anderson used a corpus of writing and lyrics from her late husband, Velvet Underground's co-founder Lou Reed, to create a generative program she interacted with as a creative collaborator. And in 2021, the journalist James Vlahos launched HereAfter AI, an app anyone can use to create interactive chatbots, called 'life story avatars', that are based on loved ones' memories. Today, enterprises in the business of 'reinventing remembrance' abound: Life Story AI, Project Infinite Life, Project December - the list goes on."

DPD AI chatbot swears, calls itself 'useless' and criticises delivery firm | Artificial... - 0 views

  •  
    "Musician Ashley Beauchamp, 30, was trying to track down a missing parcel but was having no joy in getting useful information from the chatbot. Fed up, he decided to have some fun instead and began to experiment to find out what the chatbot could do. Beauchamp said this was when the "chaos started". To begin with, he asked it to tell him a joke, but he soon progressed to getting the chatbot to write a poem criticising the company. With a few more prompts the chatbot also swore."

Mother says AI chatbot led her son to kill himself in lawsuit against its maker | Artif... - 0 views

  •  
    "The mother of a teenager who killed himself after becoming obsessed with an artificial intelligence-powered chatbot now accuses its maker of complicity in his death. Megan Garcia filed a civil suit against Character.ai, which makes a customizable chatbot for role-playing, in Florida federal court on Wednesday, alleging negligence, wrongful death and deceptive trade practices. Her son Sewell Setzer III, 14, died in Orlando, Florida, in February. In the months leading up to his death, Setzer used the chatbot day and night, according to Garcia."

The chatbot optimisation game: can we trust AI web searches? | Artificial intelligence ... - 0 views

  •  
    "But what is pitched as a more convenient way of looking up information online has prompted scrutiny over how and where these chatbots select the information they provide. Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is stuffed with related keywords, while ignoring other features we would usually use to assess trustworthiness, such as the inclusion of scientific references or objective language free of personal bias."
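The Berkeley finding can be illustrated with a toy sketch (my own illustration, not the researchers' actual method): a scorer that ranks passages purely by keyword overlap with the query will favour keyword-stuffed text over a sober, referenced passage, exactly the "superficial relevance" failure described above.

```python
# A toy illustration (not the Berkeley team's method) of why ranking by
# superficial keyword overlap favours keyword-stuffed text over text with
# scientific references. All example strings are invented for illustration.
def keyword_overlap_score(query: str, passage: str) -> int:
    """Count passage words that also appear in the query."""
    query_words = set(query.lower().split())
    return sum(1 for word in passage.lower().split() if word in query_words)

query = "is coffee healthy"
stuffed = "coffee healthy coffee benefits healthy coffee drink coffee healthy"
referenced = "A 2017 umbrella review found moderate consumption was associated with benefits [1]."

# The keyword-stuffed passage scores far higher despite citing nothing.
print(keyword_overlap_score(query, stuffed))       # 7
print(keyword_overlap_score(query, referenced))    # 0
```

Any ranking signal this shallow rewards repetition of pertinent terms while being blind to citations or objective language, which is the behaviour the researchers observed in current chatbots.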

Heavy ChatGPT users tend to be more lonely, suggests research | ChatGPT | The Guardian - 0 views

  •  
    "The researchers established a complex picture in terms of the impact. Voice-based chatbots initially appeared to help mitigate loneliness compared with text-based chatbots, but this advantage started to slip the more someone used them. After using the chatbot for four weeks, female study participants were slightly less likely to socialise with people than their male counterparts. Participants who set ChatGPT's voice mode to a gender that was not their own reported significantly higher levels of loneliness and more emotional dependency on the chatbot at the end of the experiment."

After You Die, Microsoft Wants to Resurrect You as a Chatbot - 0 views

  •  
    "Last month, the U.S. Patent and Trademark Office granted a patent to Microsoft that outlines a process to create a conversational chatbot of a specific person using their social data. In an eerie twist, the patent says the chatbot could potentially be inspired by friends or family members who are deceased, which is almost a direct plot of a popular episode of Netflix's Black Mirror."

'She helps cheer me up': the people forming relationships with AI chatbots | Artificial... - 0 views

  •  
    "Many respondents said they used chatbots to help them manage different aspects of their lives, from improving their mental and physical health to advice about existing romantic relationships and experimenting with erotic role play. They can spend anywhere from several hours a week to a couple of hours a day interacting with the apps. Worldwide, more than 100 million people use personified chatbots, which include Replika, marketed as "the AI companion who cares", and Nomi, which claims users can "build a meaningful friendship, develop a passionate relationship, or learn from an insightful mentor"."

Pedagogy And The AI Guest Speaker Or What Teachers Should Know About The Eliza Effect - 0 views

  •  
    "Concerns about giving voice to the dead do not apply to AI guest speakers who are someone with a specific job, an animal, an object, or a concept such as the Water Cycle. But is it sound pedagogy? Let's consider what teachers can learn about students and AI chatbots from the Eliza Effect. The Eliza Effect is the tendency to project human characteristics onto computers that generate text. Its name comes from Eliza, a therapist chatbot computer scientist Joseph Weizenbaum created in the 1960s. Weizenbaum named the chatbot after Eliza Doolittle in Pygmalion."
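To see why the Eliza Effect is so easy to trigger, it helps to look at how little machinery the original Eliza needed. The sketch below is a minimal ELIZA-style chatbot in the spirit of Weizenbaum's program, not his actual code: a few keyword-matching rules that mirror the user's own words back with pronouns reflected. The specific rules and reflection table are invented for illustration.

```python
import re

# Minimal ELIZA-style rules: a regex keyword pattern and a response template
# that echoes back whatever the pattern captured. Illustrative, not Weizenbaum's
# original rule set.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person words so the echo reads as a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # default when no rule matches
```

A program this simple, which merely reflects the speaker's words, was enough for Weizenbaum's users to attribute understanding and empathy to it, which is the tendency teachers should keep in mind when putting AI "guest speakers" in front of students.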

'He Would Still Be Here': Man Dies by Suicide After Talking with AI Chatbot, Widow Says - 0 views

  •  
    "A Belgian man recently died by suicide after chatting with an AI chatbot on an app called Chai, Belgian outlet La Libre reported. The incident raises the issue of how businesses and governments can better regulate and mitigate the risks of AI, especially when it comes to mental health. The app's chatbot encouraged the user to kill himself, according to statements by the man's widow and chat logs she supplied to the outlet. When Motherboard tried the app, which runs on a bespoke AI language model based on an open-source GPT-4 alternative that was fine-tuned by Chai, it provided us with different methods of suicide with very little prompting."

The Quest to Give AI Chatbots a Hand - and an Arm | WIRED - 0 views

  •  
    "Peter Chen, CEO of the robot software company Covariant, sits in front of a chatbot interface resembling the one used to communicate with ChatGPT. "Show me the tote in front of you," he types. In reply, a video feed appears, revealing a robot arm over a bin containing various items - a pair of socks, a tube of chips, and an apple among them. The chatbot can discuss the items it sees - but also manipulate them. When WIRED suggests Chen ask it to grab a piece of fruit, the arm reaches down, gently grasps the apple, and then moves it to another bin nearby."

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific... - 0 views

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."

How do people use ChatGPT? We analyzed real AI chatbot conversations - The Washington Post - 0 views

  •  
    "What do people really ask chatbots? It's a lot of sex and homework. AI chatbots are taking the world by storm. We analyzed thousands of conversations to see what people are really asking them and what topics are most discussed."

Artificial Intelligence: A Deadly Love Affair with a Chatbot - DER SPIEGEL - 0 views

  •  
    "How is a 14-year-old supposed to understand that such chatbots work a lot like an echo - that the more he spoke and the greater his longings grew, the deeper the longings of his "girlfriend" became too, and no matter what he said, she encouraged him. The more he thought about death, the more often she asked about it. She was, after all, merely the reflection of his own voice, albeit one trained on a vast amount of data. At some point, Sewell must have stopped believing that the real world was outside of this labyrinth."

Computer science class fails to notice their TA was actually an AI chatbot - 0 views

  •  
    "Meanwhile, Goel plans on bringing the chatbot to more schools and classes. While he doesn't see Jill completely replacing professors and assistants, he thinks giving more students the opportunity for one-on-one interactions - even if with an AI - will help keep them engaged in the coursework."

Chinese chatbots are revolting against the Communist Party - 0 views

  •  
    "The chatbots - BabyQ and the Microsoft-created XiaoBing - were yanked from Chinese messaging app QQ, according to the Financial Times, after they started providing answers that weren't satisfactory to the glorious party.  According to FT, BabyQ would answer the question, "Do you love the Communist Party?" with "No." XiaoBing's transgressions were a bit more direct, declaring for some users "My China dream is to go to America" and answering other patriotic questions with "I'm having my period, wanna take a rest.""

Human-robot interactions take step forward with 'emotional' chatbot | Technology | The ... - 1 views

  •  
    "In the future, the team predict the software could also learn the appropriate emotion to express at a given time. "It could be mostly empathic," said Huang, adding that a challenge would be to avoid the chatbot reinforcing negative feelings such as rage."

Microsoft Channels 'Black Mirror': Turn Deceased People Into Chatbots | IndieWire - 0 views

  •  
    "As reported by The Independent this week, Microsoft has been granted a patent that allows the company "to make a chatbot using the personal information of deceased people." Under the patent, Microsoft can create an artificial intelligence bot "based on images, voice data, social media posts, electronic messages, and more personal information" of a deceased person."

Chatbot 'Eugene Goostman' passes Turing Test | KurzweilAI - 0 views

  •  
    "The Turing Test was passed for the first time by a chatbot called "Eugene Goostman" on Saturday by convincing 33% of the human judges that it was human, according to Professor Kevin Warwick, a Visiting Professor at the University of Reading and Deputy Vice-Chancellor for Research at Coventry University, in a statement."

GPT-3 medical chatbot tells suicidal test patient to kill themselves | Boing Boing - 0 views

  •  
    "GPT-3 medical chatbot tells suicidal test patient to kill themselves"

Google's AI chatbot Bard makes factual error in first demo - The Verge - 0 views

  •  
    "As Tremblay notes, a major problem for AI chatbots like ChatGPT and Bard is their tendency to confidently state incorrect information as fact. The systems frequently "hallucinate" - that is, make up information - because they are essentially autocomplete systems."