
Digit_al Society
dr tech

AI tries to cheat at chess when it's losing | Popular Science - 0 views

  •  
    "Despite all the industry hype and genuine advances, generative AI models are still prone to odd, inexplicable, and downright worrisome quirks. There's also a growing body of research suggesting that the overall performance of many large language models (LLMs) may degrade over time. According to recent evidence, the industry's newer reasoning models may already possess the ability to manipulate and circumvent their human programmers' goals. Some AI will even attempt to cheat their way out of losing in games of chess. This poor sportsmanship is documented in a preprint study from Palisade Research, an organization focused on risk assessments of emerging AI systems."
dr tech

Microsoft Dragon Copilot provides the healthcare industry's first unified voice AI assi... - 0 views

  •  
    ""At Microsoft, we have long believed that AI has the incredible potential to free clinicians from much of the administrative burden in healthcare and enable them to refocus on taking care of patients," said Joe Petro, corporate vice president of Microsoft Health and Life Sciences Solutions and Platforms. "With the launch of our new Dragon Copilot, we are introducing the first unified voice AI experience to the market, drawing on our trusted, decades-long expertise that has consistently enhanced provider wellness and improved clinical and financial outcomes for provider organizations and the patients they serve." "With Dragon Copilot, we're not just enhancing how we work in the EHR - we're tapping into a Microsoft-powered ecosystem where AI assistance extends across our organization, delivering a consistent and intelligent experience everywhere we work," said Dr. R. Hal Baker, senior vice president and chief digital and chief information officer, WellSpan Health. "It's this ability to enhance the patient experience while streamlining clinician workflows that makes Dragon Copilot such a game-changer.""
dr tech

Meta apologises over flood of gore, violence and dead bodies on Instagram | Meta | The ... - 0 views

  •  
    " Meta apologises over flood of gore, violence and dead bodies on Instagram Users of Reels report feeds dominated by violent and graphic footage after apparent algorithm malfunction Dan Milmo Global technology editor Fri 28 Feb 2025 15.01 GMT Share Mark Zuckerberg's Meta has apologised after Instagram users were subjected to a flood of violence, gore, animal abuse and dead bodies on their Reels feeds. Users reported the footage after an apparent malfunction in Instagram's algorithm, which curates what people see on the app. Reels is a feature on the social media platform that allows users to share short videos, similar to TikTok."
dr tech

How accurate are the viral TikTok AI POV lab history videos? - 0 views

  •  
    "Murky and misty streets, coughing townsfolk, and the distant toll of a plague doctor's bell all feature in Hogne's most-watched video, which has racked up 53 million views. It has sparked fascination among many, but historian Dr Amy Boyington describes the medieval-themed video as "amateurish" and "evocative and sensational" rather than historically accurate. "It looks like something from a video game as it shows a world that is meant to look real but is actually fake." She points out inaccuracies like the depiction of houses with large glazed windows and a train track running through the town which wouldn't have existed in the 1300s. Historian and archaeologist Dr Hannah Platts has also noticed significant inaccuracies in a video depicting the eruption of Mount Vesuvius at Pompeii. "Due to Pliny the Younger's eyewitness account of the eruption, we know that it didn't start with lava spewing everywhere so to not use that wealth of historical information available to us feels cheap and lazy.""
dr tech

Parents do have favorites - by Jacqueline Nesi, PhD - 0 views

  •  
    "But what about social media posts that offer stories of hope and recovery? Could these types of posts actually prevent suicide? For this experimental study, researchers in Austria created 10 suicide-prevention social media posts from a fictitious influencer. The posts offered stories about recovery from suicidal crises, mental health tips, and life-affirming messages. A total of 354 adult participants were randomly assigned to view these posts, or to view 10 posts totally unrelated to mental health. As expected, participants who were exposed to the suicide-prevention posts reported decreased suicidal thoughts and greater intentions to seek help (e.g., from friends, family, or a professional). This was especially true for those who were already struggling with suicidal thoughts."
dr tech

AI cracks superbug problem in two days that took scientists years - 0 views

  •  
    "A complex problem that took microbiologists a decade to get to the bottom of has been solved in just two days by a new artificial intelligence (AI) tool. Professor José R Penadés and his team at Imperial College London had spent years working out and proving why some superbugs are immune to antibiotics. He gave "co-scientist" - a tool made by Google - a short prompt asking it about the core problem he had been investigating and it reached the same conclusion in 48 hours."
dr tech

The Technium: The Handoff to Bots - 0 views

  •  
    "The purpose of handing the economy off to the synths is so that we can do the kinds of tasks that every human would wake up in the morning eager to do. There should not be any human doing a task they find a waste of their talent. If it is a job where productivity matters, a human should not be doing it. Productivity is for robots. Humans should be doing the jobs where inefficiency reigns - art, exploration, invention, innovation, small talk, adventure, companionship. All the productive chores should be handled by the billions of AIs we make. Therefore our task right now - as humans - is to make sure that in the following decades as our biological numbers start to shrink on this planet, that we can repopulate it with a sufficient number of synthetic agents, bots, and robots with sufficient intelligence, grit, perseverance, and moral training to take over the economy in time to keep our living standards rising. We are not replacing existing humans with bots, nor are we replacing unborn humans with bots. Rather we are replacing never-to-be-born humans with bots, and the relationship that we have with those synthetic agents and ems, will be highly mutual. We build an economy around their needs, and propelled by their labor, and rewarding their work, but all of this is in service of our own definition of progress and human success."
dr tech

Are chatbots of the dead a brilliant idea or a terrible one? | Aeon Essays - 0 views

  •  
    "'Fredbot' is one example of a technology known as chatbots of the dead, chatbots designed to speak in the voice of specific deceased people. Other examples are plentiful: in 2016, Eugenia Kuyda built a chatbot from the text messages of her friend Roman Mazurenko, who was killed in a traffic accident. The first Roman Bot, like Fredbot, was selective, but later versions were generative, meaning they generated novel responses that reflected Mazurenko's voice. In 2020, the musician and artist Laurie Anderson used a corpus of writing and lyrics from her late husband, Velvet Underground's co-founder Lou Reed, to create a generative program she interacted with as a creative collaborator. And in 2021, the journalist James Vlahos launched HereAfter AI, an app anyone can use to create interactive chatbots, called 'life story avatars', that are based on loved ones' memories. Today, enterprises in the business of 'reinventing remembrance' abound: Life Story AI, Project Infinite Life, Project December - the list goes on."
dr tech

Copy link - 0 views

  •  
    60% of TikTok users and 46% of Instagram users say they feel worse off because of these platforms. 57% and 58% of college students (users and non-users) prefer to live in a world without TikTok and Instagram, respectively. Users would even pay money to see them disappear: $24 on average to eliminate TikTok and $6 to eliminate Instagram.
dr tech

Northampton boy with leukaemia sends his robot double to school - 0 views

  •  
    "Boy with leukaemia sends robot double to school"
dr tech

Study Finds That People Who Entrust Tasks to AI Are Losing Critical Thinking Skills - 0 views

  •  
    "The findings from those examples were striking: overall, those who trusted the accuracy of the AI tools found themselves thinking less critically, while those who trusted the tech less used more critical thought when going back over AI outputs. "The data shows a shift in cognitive effort as knowledge workers increasingly move from task execution to oversight when using GenAI," the researchers wrote. "Surprisingly, while AI can improve efficiency, it may also reduce critical engagement, particularly in routine or lower-stakes tasks in which users simply rely on AI, raising concerns about long-term reliance and diminished independent problem-solving." This isn't enormously surprising. Something we've observed in many domains, from self-driving vehicles to scrutinizing news articles produced by AI, is that humans quickly go on autopilot when they're supposed to be overseeing an automated system, often allowing mistakes to slip past."
dr tech

AI chatbots' greenwash and bothsidesism about Big Oil | Global Witness - 0 views

  •  
    "Generative AI chatbots fail to adequately reflect fossil fuel companies' complicity in the climate crisis, a Global Witness investigation has found"
dr tech

From Hiring to Firing: Entire HR team terminated after manager's own resume fails autom... - 0 views

  •  
    "A recent incident at a company has led to the dismissal of half its HR team after a manager discovered a significant flaw in the applicant tracking system (ATS) used for hiring. This system, intended to improve the recruitment process, was automatically rejecting all job candidates, including the manager's own application."
dr tech

Google owner drops promise not to use AI for weapons | Alphabet | The Guardian - 0 views

  •  
    "The Google owner, Alphabet, has dropped its promise not to use artificial intelligence for purposes such as developing weapons and surveillance tools. The US technology company said on Tuesday, just before it reported lower-than-forecast earnings, that it had updated its ethical guidelines around AI, and they no longer referred to not pursuing technologies that could "cause or are likely to cause overall harm". Google's AI head, Demis Hassabis, said the guidelines were being overhauled in a changing world and that AI should protect "national security"."
dr tech

Meta's crackdown on adult content fails to stop AI nudify apps from flourishing - Tech - 0 views

  •  
    "According to a new report from 404Media, as originally reported in Alexios Mantzarlis' Faked Up newsletter, the Crush AI modifier app receives the vast majority of its traffic from Meta platforms such as Facebook and Instagram. According to Similarweb's data, last month Crush AI received more than a quarter of a million visits to its service - and roughly 90 percent of that traffic was from Meta's platforms. Crush AI makes it quite clear what the purpose of its service is in its advertisements. The company's ads include photos of real-life individuals like Instagram influencer and model Mikayla Demaiter and OnlyFans creator Sophie Rain. The ads boast that users can "upload a photo" and "erase anyone's clothes.""
dr tech

Your phone buzzes with a news alert. But what if AI wrote it - and it's not true? | Arc... - 0 views

  •  
    "Some might scoff at this, and point out that news organisations make their own mistakes all the time - more consequential than my physicist/physician howler, if less humiliating. But cases of bad journalism are almost always warped representations of the real world, rather than missives from an imaginary one. Crucially, if an outlet gets big things wrong a lot, its reputation will suffer, and its audience are likely to vote with their feet, or other people will publish stories that air the mistake. And all of it will be out in the open. You may also note that journalists are increasingly likely to use AI in the production of stories - and there is no doubt that it is a phenomenally powerful tool, allowing investigative reporters to find patterns in vast financial datasets that reveal corruption, or analyse satellite imagery for evidence of bombing attacks in areas designated safe for civilians. There is a legitimate debate over the extent of disclosure required in such cases: on the one hand, if the inputs and outputs are being properly vetted, it might be a bit like flagging the use of Excel; on the other, AI is still new enough that readers may expect you to err on the side of caution. Still, the fundamental difference is not in what you're telling your audience, but what degree of supervision you're exercising over the machine."
dr tech

AI company Anthropic's ironic warning to job candidates: 'Please do not use AI' - 0 views

  •  
    "Anthropic has an "AI policy" for job candidates that discourages the technology from being used during the application process. The company says it wants to field candidates' human communication skills. Anthropic is known for its AI innovations-but the company doesn't want job candidates using the technology."