Digital Society / Group items tagged "algorithm technology"


dr tech

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific...

  • "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
dr tech

Meta documents show 100,000 children sexually harassed daily on its platforms | Meta | ...

  • "Meta estimates about 100,000 children using Facebook and Instagram receive online sexual harassment each day, including "pictures of adult genitalia", according to internal company documents made public late Wednesday."
dr tech

ChatGPT is bullshit | Ethics and Information Technology

  • "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
dr tech

Google's emissions climb nearly 50% in five years due to AI energy demand | Google | Th...

  • "Google's goal of reducing its climate footprint is in jeopardy as it relies on more and more energy-hungry data centres to power its new artificial intelligence products. The tech giant revealed Tuesday that its greenhouse gas emissions have climbed 48% over the past five years. Google said electricity consumption by data centres and supply chain emissions were the primary cause of the increase. It also revealed in its annual environmental report that its emissions in 2023 had risen 13% compared with the previous year, hitting 14.3m metric tons."
dr tech

If AI can provide a better diagnosis than a doctor, what's the prognosis for medics? | ...

  • "Or, as the New York Times summarised it, "doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers' surprise, ChatGPT alone outperformed the doctors." More interesting, though, were two other revelations: the experiment demonstrated doctors' sometimes unwavering belief in a diagnosis they had made, even when ChatGPT suggested a better one; and it also suggested that at least some of the physicians didn't really know how best to exploit the tool's capabilities. Which in turn revealed what AI advocates such as Ethan Mollick have been saying for aeons: that effective "prompt engineering" - knowing what to ask an LLM to get the most out of it - is a subtle and poorly understood art."
dr tech

AI mediation tool may help reduce culture war rifts, say researchers | Artificial intel...

  • "Writing in the journal Science, Summerfield and colleagues from Google DeepMind report how they built the "Habermas Machine" - an AI system named after the German philosopher Jürgen Habermas. The system works by taking written views of individuals within a group and using them to generate a set of group statements designed to be acceptable to all. Group members can then rate these statements, a process that not only trains the system but allows the statement with the greatest endorsement to be selected."
dr tech

'An AI Fukushima is inevitable': scientists discuss technology's immense potential and ...

  • "The climate crisis could prove AI's greatest challenge. While Google publicises AI-driven advances in flooding, wildfire and heatwave forecasts, like many big tech companies, it uses more energy than many countries. Today's large models are a major culprit. It can take 10 gigawatt-hours of power to train a single large language model like OpenAI's ChatGPT, enough to supply 1,000 US homes for a year."
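A quick back-of-the-envelope check of the figure quoted above. The ~10,000 kWh annual consumption of an average US household is an assumption of mine (close to published EIA estimates), not a number from the article:

```python
# Sanity check of the quoted claim: training one large LLM can take
# ~10 GWh, said to be "enough to supply 1,000 US homes for a year".
# Assumption (not from the article): an average US household uses
# roughly 10,000 kWh of electricity per year.
training_energy_kwh = 10 * 1_000_000    # 10 GWh expressed in kWh
household_kwh_per_year = 10_000         # assumed average annual use
homes_for_a_year = training_energy_kwh / household_kwh_per_year
print(round(homes_for_a_year))          # -> 1000, consistent with the claim
```

Under that assumption the quoted comparison holds almost exactly: 10 GWh divided across 10,000 kWh per home gives 1,000 home-years.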
dr tech

'I was misidentified as shoplifter by facial recognition tech'

  • "Sara needed some chocolate - she had had one of those days - so wandered into a Home Bargains store. "Within less than a minute, I'm approached by a store worker who comes up to me and says, 'You're a thief, you need to leave the store'." Sara - who wants to remain anonymous - was wrongly accused after being flagged by a facial-recognition system called Facewatch."
dr tech

The ChatGPT secret: is that text message from your friend, your lover - or a robot? | C...

  • "ChatGPT can help with reframing thoughts and situations, similar to cognitive behavioural therapy - but "some clients can start to use it as a substitute for therapy", Masterson says. "I've had clients telling me they've already processed on their own, because of what they've read - it's incredibly dangerous." She has had to ask some clients to cease their self-experiments while in treatment with her. "It's about you and me in the room," she says. "You just cannot have that with text - let alone a conglomeration of lots of other people's texts." Self-directed chatbot therapy also risks being counterproductive, shrinking the area of inquiry. "It's quite affirmative; I challenge clients," says Masterson. ChatGPT could actually cement patterns as it draws, over and again, from the same database: "The more you try to refine it, the more refined the message becomes.""
dr tech

'Profiting from misery': how TikTok makes money from child begging livestreams | TikTok...

  • "Exploitation fears as people in extreme poverty perform stunts and beg for virtual gifts"
dr tech

'She helps cheer me up': the people forming relationships with AI chatbots | Artificial...

  • "Many respondents said they used chatbots to help them manage different aspects of their lives, from improving their mental and physical health to advice about existing romantic relationships and experimenting with erotic role play. They can spend between several hours a week to a couple of hours a day interacting with the apps. Worldwide, more than 100 million people use personified chatbots, which include Replika, marketed as "the AI companion who cares" and Nomi, which claims users can "build a meaningful friendship, develop a passionate relationship, or learn from an insightful mentor"."
dr tech

Should AI systems behave like people? | AISI Work

  • "Most people agree that AI should transparently reveal itself not to be human, but many were happy for AI to talk in human-realistic ways. A majority (approximately 60%) felt that AI systems should refrain from expressing emotions, unless they were idiomatic expressions (like "I'm happy to help")."
dr tech

'Just the start': X's new AI software driving online racist abuse, experts warn | X | T...

  • "A rise in online racism driven by fake images is "just the start of a coming problem" after the latest release of X's AI software, online abuse experts have warned. Concerns were raised after computer-generated images created using Grok, X's generative artificial intelligence chatbot, flooded the social media site in December last year. Signify, an organisation that works with prominent groups and clubs in sports to track and report online hate, said it has seen an increase in reports of abuse since Grok's latest update, and believes the introduction of photorealistic AI will make it far more prevalent."