Digit_al Society - Group items matching "algorithm,model" in title, tags, annotations or url
dr tech

DeepMind is developing one algorithm to rule them all | VentureBeat - 0 views

  • "DeepMind is trying to combine deep learning and algorithms, creating the one algorithm to rule them all: a deep learning model that can learn how to emulate any algorithm, generating an algorithm-equivalent model that can work with real-world data."
dr tech

Predicting crime, LAPD-style | Cities | theguardian.com - 0 views

  • "The algorithm at play is performing what's commonly referred to as predictive policing. Using years - and sometimes decades - worth of crime reports, the algorithm analyses the data to identify areas with high probabilities for certain types of crime, placing little red boxes on maps of the city that are streamed into patrol cars."
dr tech

Algorithmic cruelty: when Gmail adds your harasser to your speed-dial / Boing Boing - 0 views

  • "It's not that Google wants to do this, it's that they didn't anticipate this outcome, and compounded that omission by likewise omitting a way to overrule the algorithm's judgment. As with other examples of algorithmic cruelty, it's not so much this specific example as what it presages: a future in which more and more of our external reality is determined by models derived from machine learning systems whose workings we're not privy to and have no say in."
dr tech

I Tried Predictim AI That Scans for 'Risky' Babysitters - 0 views

  • "The founders of Predictim want to be clear with me: Their product - an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents - is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  • "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Advanced AI suffers 'complete accuracy collapse' in face of complex problems, study finds | Artificial intelligence (AI) | The Guardian - 0 views

  • "For higher-complexity problems, however, the models would enter "collapse", failing to generate any correct solutions. In one case, even when provided with an algorithm that would solve the problem, the models failed. The paper said: "Upon approaching a critical threshold - which closely corresponds to their accuracy collapse point - models counterintuitively begin to reduce their reasoning effort despite increasing problem difficulty." The Apple experts said this indicated a "fundamental scaling limitation in the thinking capabilities of current reasoning models"."
dr tech

Values in the wild: Discovering and analyzing values in real-world language model interactions \ Anthropic - 0 views

  • "AI models will inevitably have to make value judgments. If we want those judgments to be congruent with our own values (which is, after all, the central goal of AI alignment research) then we need to have ways of testing which values a model expresses in the real world. Our method provides a new, data-focused way of doing this, and of seeing where we might've succeeded - or indeed failed - at aligning our models' behavior."
dr tech

Artificial intelligence creates sound effects for silent videos that fool humans / Boing Boing - 0 views

  • "This algorithm uses a recurrent neural network to predict sound features from videos and then produces a waveform from these features with an example-based synthesis procedure. We show that the sounds predicted by our model are realistic enough to fool participants in a "real or fake" psychophysical experiment, and that they convey significant information about material properties and physical interactions."
dr tech

Algorithm 'identifies future trolls from just five posts' | Technology | The Guardian - 0 views

  • "With all the information together, they created a prediction model which, from just their first five posts, can guess with 80% accuracy whether or not a user will go on to be banned."
dr tech

Ethics committee raises alarm over 'predictive policing' tool | UK news | The Guardian - 0 views

  • "Amid mounting financial pressure, at least a dozen police forces are using or considering predictive analytics, despite warnings from campaigners that use of algorithms and "predictive policing" models risks locking discrimination into the criminal justice system."
dr tech

Algorithm finds hidden connections between paintings at the Met | MIT CSAIL - 0 views

  • "What Hamilton and his colleagues found surprising was that this approach could also be applied to helping find problems with existing deep networks, related to the surge of "deepfakes" that have recently cropped up. They applied this data structure to find areas where probabilistic models, such as the generative adversarial networks (GANs) that are often used to create deepfakes, break down. They coined these problematic areas "blind spots," and note that they give us insight into how GANs can be biased. Such blind spots further show that GANs struggle to represent particular areas of a dataset, even if most of their fakes can fool a human."
dr tech

NVIDIA's latest AI model helps robots perform pen spinning tricks as well as humans - 0 views

  • "The use for humans in the world of robotics, even as teachers, is shrinking thanks to AI. NVIDIA Research has announced the creation of Eureka, an AI agent powered by GPT-4 that has trained robots to perform tasks using reward algorithms. Notably, Eureka taught a robotic hand to do pen spinning tricks as well as a human can (honestly, as you can see in the YouTube video below, better than many of us)."
dr tech

ChatGPT is bullshit | Ethics and Information Technology - 0 views

  • "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
dr tech

Unleashing Chaos: Hackers 'Jailbreak' Powerful AI Models - Fusion Chat - 0 views

  • "Pliny the Prompter is known for his ability to disrupt the world's most robust artificial intelligence models within approximately thirty minutes. This pseudonymous hacker has managed to manipulate Meta's Llama 3 into sharing instructions on creating napalm and even caused Elon Musk's Grok to praise Adolf Hitler. One of his own modified versions of OpenAI's latest GPT-4o model, named "Godmode GPT," was banned by the startup after it started providing advice on illegal activities."
dr tech

New Tool Reveals How AI Makes Decisions - Scientific American - 0 views

  • "Most AI programs function like a "black box." "We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat," said computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany to the German-language newspaper Handelsblatt. That dilemma prompted Kersting - along with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany-based AI company Aleph Alpha - to introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs."
dr tech

How bad were Ofqual's grades - by Huy Duong - HEPI - 0 views

  • "Therefore even Ofqual's best model significantly worsened grade accuracy for most A-level subjects when the cohort size is below 50, which is common (almost 62% of the total in 2019). For GCSEs, even with larger cohorts, the best model would have worsened the grade accuracy for Maths and Sciences. A very conservative figure of 25% of wrong grades would have amounted to 180,000 wrong A-level grades and 1.25 million wrong GCSE grades."
dr tech

An algorithm for detecting face-swaps in videos / Boing Boing - 0 views

  • "So they trained a deep-learning neural net on tons of examples of deepfaked videos, and produced a model that's better than any previous automated technique at spotting hoaxery. (Their paper documenting the work is here.)"
dr tech

Why machine learning struggles with causality | VentureBeat - 0 views

  • "In a paper titled "Towards Causal Representation Learning," researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations."
dr tech

AI is making literary leaps - now we need the rules to catch up | Opinion | The Guardian - 0 views

  • "If true, this would be a big deal. But, said OpenAI, "due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.""
dr tech

A Brain Scanner Combined with an AI Language Model Can Provide a Glimpse into Your Thoughts - Scientific American - 0 views

  • "Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy."