Digit_al Society: Group items tagged "knowledge"


dr tech

Need medicine in hospital? Our study finds how often IT flaws lead to the wrong drug or... - 0 views

  •  
    "But as a growing body of research shows, these electronic systems are not perfect. Our new study shows how often these technology-related errors occur and what they mean for patient safety. Often they occur due to programming errors or poor design and are less to do with the health workers using the system."
dr tech

ChatGPT is bullshit | Ethics and Information Technology - 0 views

  •  
    "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
dr tech

'You could single-handedly push it to extinction': how social media is putting our rare... - 0 views

  •  
    "'You could single-handedly push it to extinction': how social media is putting our rarest wildlife at risk"
dr tech

When robots can't riddle: What puzzles reveal about the depths of our own minds - 0 views

  •  
    "That's why the best systems may come from a combination of AI and human work; we can play to the machine's strengths, Ilievski says. But when we want to compare AI and the human mind, it's important to remember "there is no conclusive research providing evidence that humans and machines approach puzzles in a similar vein", he says. In other words, understanding AI may not give us any direct insight into the mind, or vice versa."
dr tech

Microsoft unveils 'trustworthy AI' features to fix hallucinations and boost privacy | V... - 0 views

  •  
    "One of the key features introduced is a "Correction" capability in Azure AI Content Safety. This tool aims to address the problem of AI hallucinations - instances where AI models generate false or misleading information. "When we detect there's a mismatch between the grounding context and the response… we give that information back to the AI system," Bird explained. "With that additional information, it's usually able to do better the second try.""
smilingoldman

How to Lead an Army of Digital Sleuths in the Age of AI | WIRED - 0 views

  • Yeah, and a lot of the stuff we find is actually from Israeli soldiers who’re misbehaving and doing stuff that I would say are definitely violations of international laws. But that’s coming on their social media accounts—they post it themselves. Another issue is: Because of the lack of electricity there, you actually get a lot of stuff happening at night that you can’t really see in the videos. Like the convoy attack that Israel had the drone footage of—there’s lots of footage of that, but it’s just all at night and it’s pitch-black. But there was a good piece of analysis I saw recently where they used the audio and could actually start establishing what weapons were being used. Just the sound itself makes it very distinct …
skibidirizzler

Voiced | Every Voice Matters - 0 views

  •  
    "Meeting your next significant other naturally is slowly becoming unheard of, especially amid a pandemic. Plus, with online apps like Bumble, Tinder, OK Cupid, Plenty of Fish, and Hinge gaining popularity, it's no wonder people are willing to give virtual dating a try. In fact, I bet most of your single friends are swiping right and left while you're reading this. I even gave it a try or two, but it never worked out where I found my Prince Charming. "
smilingoldman

'Disinformation on steroids': is the US prepared for AI's influence on the election? | ... - 0 views

  • Already this year, a robocall generated using artificial intelligence targeted New Hampshire voters in the January primary, purporting to be President Joe Biden and telling them to stay home in what officials said could be the first attempt at using AI to interfere with a US election. The “deepfake” calls were linked to two Texas companies, Life Corporation and Lingo Telecom.
  • It’s not clear if the deepfake calls actually prevented voters from turning out, but that doesn’t really matter, said Lisa Gilbert, executive vice-president of Public Citizen, a group that’s been pushing for federal and state regulation of AI’s use in politics.
  • Examples of what could be ahead for the US are happening all over the world. In Slovakia, fake audio recordings may have swayed an election in what serves as a “frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election”, CNN reported. In Indonesia, an AI-generated avatar of a military commander helped rebrand the country’s defense minister as a “chubby-cheeked” man who “makes Korean-style finger hearts and cradles his beloved cat, Bobby, to the delight of Gen Z voters”, Reuters reported. In India, AI versions of dead politicians have been brought back to compliment elected officials, according to Al Jazeera.
  • she said, “what if AI could do all this? Then maybe I shouldn’t be trusting everything that I’m seeing.”
dr tech

- 0 views

  •  
    "Exposure to false and inflammatory content is remarkably low, with just 1% of Twitter users accounting for 80% of exposure to dubious websites during the 2016 U.S. election. This is heavily concentrated among a small fringe of users actively seeking it out. Examples: 6.3% of YouTube users were responsible for 79.8% of exposure to extremist channels from July to December 2020, 85% of vaccine-sceptical content was consumed by less than 1% of US citizens in the 2016-2019 period. Conventional wisdom blames platform algorithms for spreading misinformation. However, evidence suggests user preferences play an outsized role. For instance, a mere 0.04% of YouTube's algorithmic recommendations directed users to extremist content. It's tempting to draw a straight line between social media usage and societal ills. But studies rigorously designed to untangle cause and effect often come up short. "
dr tech

Scientists should use AI as a tool, not an oracle - 0 views

  •  
    "A core selling point of machine learning is discovery without understanding, which is why errors are particularly common in machine-learning-based science. Three years ago, we compiled evidence revealing that an error called leakage - the machine learning version of teaching to the test - was pervasive, affecting hundreds of papers from 17 disciplines. Since then, we have been trying to understand the problem better and devise solutions.  This post presents an update. In short, we think things will get worse before they get better, although there are glimmers of hope on the horizon."
dr tech

Writers condemn startup's plans to publish 8,000 books next year using AI | Books | The... - 0 views

  •  
    "The company, Spines, will charge authors between $1,200 and $5,000 to have their books edited, proofread, formatted, designed and distributed with the help of AI. Independent publisher Canongate said "these dingbats … don't care about writing or books", in a Bluesky post. Spines is charging "hopeful would-be authors to automate the process of flinging their book out into the world, with the least possible attention, care or craft". "These aren't people who care about books or reading or anything remotely related," said author Suyi Davies Okungbowa, whose most recent book is Lost Ark Dreaming, in a post on Bluesky. "These are opportunists and extractive capitalists.""
dr tech

If AI can provide a better diagnosis than a doctor, what's the prognosis for medics? | ... - 0 views

  •  
    "Or, as the New York Times summarised it, "doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers' surprise, ChatGPT alone outperformed the doctors." More interesting, though, were two other revelations: the experiment demonstrated doctors' sometimes unwavering belief in a diagnosis they had made, even when ChatGPT suggested a better one; and it also suggested that at least some of the physicians didn't really know how best to exploit the tool's capabilities. Which in turn revealed what AI advocates such as Ethan Mollick have been saying for aeons: that effective "prompt engineering" - knowing what to ask an LLM to get the most out of it - is a subtle and poorly understood art."
dr tech

Microsoft's AI speech generator VALL-E 2 'reaches human parity' - but it's too dangerou... - 0 views

  •  
    "Microsoft researchers said VALL-E 2 was capable of generating "accurate, natural speech in the exact voice of the original speaker, comparable to human performance," in a paper that appeared June 17 on the pre-print server arXiv. In other words, the new AI voice generator is convincing enough to be mistaken for a real person - at least, according to its creators."
dr tech

OpenAI's Project Strawberry Said to Be Building AI That Reasons and Does 'Deep Research' - 0 views

  •  
    "A source told Reuters that OpenAI has tested a model internally that achieved a 90 percent score on a challenging test of AI math skills, though it again couldn't confirm if this was related to project Strawberry. But another two sources reported seeing demos from the Q* project that involved models solving math and science questions that would be beyond today's leading commercial AIs."
dr tech

"We are basically the last generation": An interview with Thomas Ramge on writing - Goe... - 0 views

  •  
    "Yes of course. We are basically the last generation, or maybe there will be one more after us, who grew up without strong AI writing assistants. But these AI assistants are here now, especially in English. In German the systems are following suit, even though they're still much stronger in English. You get to a stage where someone who cannot write very well, can be pulled to a decent level of writing through machine assistance. And this raises important questions: Are we no longer learning the basics? In order to step up and really improve your writing, you will probably always need to be deeply proficient in the cultural practice of writing. But we need to ask, what proportion of low and medium level writers will be raised with the help from machines to a very decent level? And what repercussions does this have on teaching and learning, and the proficient use of language and writing? We shouldn't neglect our writing skills, because we believe machines will get us there. Anyone who has children can clearly see the dangers autocorrect and autocomplete will have for the future of writing."
dr tech

What opposition to delivery drones shows about big tech's disrespect for democracy | Jo... - 0 views

  •  
    "Tech determinism is an ideology, really; it's what determines how you think when you don't even know that you're thinking. And it feeds on a narrative of technological inevitability, which says that new stuff is coming down the line whether you like it or not. As the writer LM Sacasas puts it, "all assertions of inevitability have agendas, and narratives of technological inevitability provide convenient cover for tech companies to secure their desired ends, minimise resistance, and convince consumers that they are buying into a necessary, if not necessarily desirable future"."
dr tech

Students using artificial intelligence did worse on tests, experiment shows | EdSource - 0 views

  •  
    "Students using ChatGPT solved 48% more of the problems correctly, and those with the AI tutor solved 127% more problems correctly, according to the report. But their peers who did not use ChatGPT outscored them on the related tests. In fact, students using ChatGPT scored 17% worse on tests.  Kids working on their own performed the same on practice assignments and tests.  Researchers told The Hechinger Report that students are using the chatbot as a "crutch" and that it can "substantially inhibit learning.""