Digital Society / Group items tagged: blackbox

dr tech

Hackers Used to Be Humans. Soon, AIs Will Hack Humanity | WIRED

  • "In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor can either trust or ignore the computer, but that trust will remain blind."
dr tech

Study explores inner life of AI with robot that 'thinks' out loud | Robots | The Guardian

  • "The researchers programmed a robot called Pepper, made by SoftBank Robotics, with the ability to vocalise its thought processes. This means the robot is no longer a "black box" and its underlying decision-making is more transparent to the user."
dr tech

New Tool Reveals How AI Makes Decisions - Scientific American

  • "Most AI programs function like a "black box." "We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat," said computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany to the German-language newspaper Handelsblatt. That dilemma prompted Kersting, along with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany-based AI company Aleph Alpha, to introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs."
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge

  • "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

This ChatGPT Plugin is Truly Groundbreaking | by Reid Elliot | Predict | Apr, 2023 | Me...

  • "In combining these factors, we arrive at a civilization built upon a technological infrastructure that we fundamentally cannot understand. The same systems that promise us technological emancipation put the whole of society at risk. I vaguely recall a wise man once saying that only the fool builds his house upon sand. And so, how can a society maintain itself if the stones of its foundation are black boxes? Before we answer this question, let's examine the current state of affairs."