
Home / Digital Society / Group items tagged model


dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Modelers Project A Calming Of The Pandemic In The U.S. This Winter : Shots - Health New... - 0 views

  •  
    "For its latest update, which it released Wednesday, the COVID-19 Scenario Modeling Hub combined nine different mathematical models from different research groups to get an outlook for the pandemic for the next six months. "Any of us who have been following this closely, given what happened with delta, are going to be really cautious about too much optimism," says Justin Lessler at the University of North Carolina, who helps run the hub. "But I do think that the trajectory is towards improvement for most of the country," he says. The modelers developed four potential scenarios, taking into account whether or not childhood vaccinations take off and whether a more infectious new variant should emerge."
dr tech

How Does Spotify Know You So Well? | by Sophia Ciocca | Medium - 0 views

  •  
    "To create Discover Weekly, there are three main types of recommendation models that Spotify employs: Collaborative Filtering models (i.e. the ones that Last.fm originally used), which analyze both your behavior and others' behaviors. Natural Language Processing (NLP) models, which analyze text. Audio models, which analyze the raw audio tracks themselves."
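The first of those three families, collaborative filtering, is the easiest to sketch. Below is a minimal, hypothetical illustration (not Spotify's actual system): item-item similarity over a toy user-track play-count matrix, recommending unheard tracks whose listener profiles resemble the tracks a user already plays.

```python
import numpy as np

# Toy play-count matrix: rows = users, columns = tracks (hypothetical data)
plays = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity between two track columns (listener profiles)."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def recommend(user, plays, k=2):
    """Rank the user's unheard tracks by similarity to their heard tracks,
    weighting each similarity by how often the user played the heard track."""
    n_tracks = plays.shape[1]
    heard = [s for s in range(n_tracks) if plays[user, s] > 0]
    candidates = [t for t in range(n_tracks) if plays[user, t] == 0]
    def score(t):
        return sum(cosine_sim(plays[:, t], plays[:, s]) * plays[user, s]
                   for s in heard)
    candidates.sort(key=score, reverse=True)
    return candidates[:k]

print(recommend(0, plays))  # unheard track indices ranked for user 0
```

Real systems factorize far larger, sparser matrices rather than comparing raw columns, but the behavioural signal is the same.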
dr tech

Brian Eno on Why He Wrote a Climate Album With Deepfake Birdsongs | WIRED - 0 views

  •  
    "Oh, I just listen to bird sounds a lot and then try to emulate the kinds of things they do. Synthesizers are quite good at that because some of the new software has what's called physical modeling. This enables you to construct a physical model of something and then stretch the parameters. You can create a piano with 32-foot strings, for instance, or a piano made of glass. It's a very interesting way to try to study the world, to try to model it. In the natural world there are discrete entities like clarinets, saxophones, drums. With physical modeling, you can make hybrids like a drummy piano or a saxophone-y violin. There's a continuum, most of which has never been explored."
dr tech

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated... - 0 views

  •  
    "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""
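The mechanism behind that warning can be caricatured in a few lines. This is a toy stand-in for the paper's experiments, not a reproduction of them: each "generation" fits a Gaussian to the previous generation's output, and because the fitted model under-represents rare events (modelled here by clipping samples beyond two standard deviations), the distribution's tails erode and its spread collapses.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data from a wide distribution
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

spreads = []
for generation in range(10):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    spreads.append(sigma)
    # Next generation trains only on the model's own output, which
    # under-represents rare events: samples beyond 2 sigma are lost.
    data = []
    while len(data) < 2000:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= 2 * sigma:
            data.append(x)

print(f"spread: gen 0 = {spreads[0]:.2f}, gen 9 = {spreads[-1]:.2f}")
```

Each generation keeps the centre of the distribution and loses the edges, which is the "irreversible defect" pattern the researchers describe: the lost tail information cannot be recovered from later generations.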
dr tech

A debate between AI experts shows a battle over the technology's future - MIT Technolog... - 0 views

  •  
    "The reason to look at humans is because there are certain things that humans do much better than deep-learning systems. That doesn't mean humans will ultimately be the right model. We want systems that have some properties of computers and some properties that have been borrowed from people. We don't want our AI systems to have bad memory just because people do. But since people are the only model of a system that can develop a deep understanding of something - literally the only model we've got - we need to take that model seriously."
dr tech

Computer-generated inclusivity: fashion turns to 'diverse' AI models | Fashion | The Gu... - 0 views

  •  
    "The model is AI-generated, a digital rendering of a human being that will start appearing on Levi's e-commerce website later this year. The brand teamed with LaLaLand.ai, a digital studio that makes customized AI models for companies like Calvin Klein and Tommy Hilfiger, to dream up this avatar. Amy Gershkoff Bolles, Levi's global head of digital and emerging technology strategy, announced the model's debut at a Business of Fashion event in March. AI models will not completely replace the humans, she said, but will serve as a "supplement" intended to aid in the brand's representation of various sizes, skin tones and ages."
dr tech

Values in the wild: Discovering and analyzing values in real-world language model inter... - 0 views

  •  
    "AI models will inevitably have to make value judgments. If we want those judgments to be congruent with our own values (which is, after all, the central goal of AI alignment research) then we need to have ways of testing which values a model expresses in the real world. Our method provides a new, data-focused method of doing this, and of seeing where we might've succeeded - or indeed failed - at aligning our models' behavior."
dr tech

Climate change models have been accurate since the 1970s - 0 views

  •  
    "Half a century ago, before the first Apple computer was even sold, climate scientists started making computer-generated forecasts of how Earth would warm as carbon emissions saturated the atmosphere (the atmosphere is now brimming with carbon). It turns out these decades-old climate models - which used math equations to predict how much greenhouse gases would heat the planet - were pretty darn accurate. Climate scientists gauged how well early models predicted Earth's relentless warming trend and published their research Wednesday in the journal Geophysical Research Letters."
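The models in the study were full circulation models, but the core idea ("math equations to predict how much greenhouse gases would heat the planet") shows up already in the textbook zero-dimensional energy-balance model sketched below. The numbers are standard textbook values, not figures from the article: absorbed sunlight must balance emitted infrared, and greenhouse gases lower the planet's effective emissivity, so less heat escapes and the equilibrium temperature rises.

```python
SOLAR_CONSTANT = 1361.0   # W/m^2, incoming solar radiation at Earth
ALBEDO = 0.30             # fraction of sunlight reflected back to space
SIGMA = 5.67e-8           # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(emissivity):
    """Zero-dimensional energy balance: absorbed solar = emitted infrared.
    Lower effective emissivity = stronger greenhouse effect."""
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4  # averaged over the sphere
    return (absorbed / (SIGMA * emissivity)) ** 0.25  # in kelvin

# An effective emissivity near 0.612 reproduces Earth's observed ~288 K
# surface mean; a small drop (more trapped infrared) warms the planet.
print(equilibrium_temp(0.612))
print(equilibrium_temp(0.600))
```

The early models being assessed layered atmospheric physics on top of this same balance, which is why their headline warming projections held up.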
dr tech

The truth about artificial intelligence? It isn't that honest | John Naughton | The Gua... - 0 views

  •  
    "They tested four well-known models, including GPT-3. The best was truthful on 58% of questions, while human performance was 94%. The models "generated many false answers that mimic popular misconceptions and have the potential to deceive humans". Interestingly, they also found that "the largest models were generally the least truthful"."
dr tech

How digital twins may enable personalised health treatment | Medical research | The Gua... - 0 views

  •  
    "Imagine having a digital twin that gets ill, and can be experimented on to identify the best possible treatment, without you having to go near a pill or a surgeon's knife. Scientists believe that within five to 10 years, "in silico" trials - in which hundreds of virtual organs are used to assess the safety and efficacy of drugs - could become routine, while patient-specific organ models could be used to personalise treatment and avoid medical complications. Digital twins are computational models of physical objects or processes, updated using data from their real-world counterparts. Within medicine, this means combining vast amounts of data about the workings of genes, proteins, cells and whole-body systems with patients' personal data to create virtual models of their organs - and eventually, potentially their entire body"
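A digital twin, at its barest, is a model parameter continuously corrected by readings from its physical counterpart, which can then be experimented on "in silico". The sketch below is a deliberately crude caricature under invented numbers (the class, readings, and dose response are all hypothetical), not a medical model.

```python
from dataclasses import dataclass

@dataclass
class OrganTwin:
    """Toy digital twin: one modeled quantity kept in sync with
    measurements streaming from the real-world counterpart."""
    rate: float = 70.0   # modeled resting heart rate, bpm (hypothetical)
    gain: float = 0.2    # how strongly each new reading corrects the model

    def assimilate(self, measurement: float) -> None:
        # Nudge the model toward each real-world reading
        self.rate += self.gain * (measurement - self.rate)

    def simulate_dose(self, dose_mg: float) -> float:
        # "In silico" trial: test a treatment on the twin, not the patient
        # (linear dose response is an invented placeholder)
        return self.rate - 0.5 * dose_mg

twin = OrganTwin()
for reading in [82, 85, 84, 86]:   # wearable data from the patient
    twin.assimilate(reading)
print(round(twin.rate, 1), round(twin.simulate_dose(10.0), 1))
```

Real patient-specific organ models assimilate thousands of variables through physics-based simulation, but the update-then-experiment loop is the same shape.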
dr tech

AI firms must be held responsible for harm they cause, 'godfathers' of technology say |... - 0 views

  •  
    ""If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals", adding that "we may not be able to keep them in check". Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviours."
dr tech

ChatGPT is bullshit | Ethics and Information Technology - 0 views

  •  
    "Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called "AI hallucinations". We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems."
dr tech

- 0 views

  •  
    "We're deeply inspired by FPF, from its human, calm moderation model and design to its organic, sustainable growth and advertising model. We're awed by its incredible usefulness for services, connection, and disaster relief. There's a lot here that might be applicable to other local digital spaces. Ultimately, Front Porch Forum exemplifies the potential for social media to foster positive, engaged communities. It's a viable, real life model of a flourishing digital public space in use by hundreds of thousands of Americans. Now it's up to us to make it less of a rare phenomenon."
dr tech

Unleashing Chaos: Hackers 'Jailbreak' Powerful AI Models - Fusion Chat - 0 views

  •  
    "Pliny the Prompter is known for his ability to disrupt the world's most robust artificial intelligence models within approximately thirty minutes. This pseudonymous hacker has managed to manipulate Meta's Llama 3 into sharing instructions on creating napalm and even caused Elon Musk's Grok to praise Adolf Hitler. One of his own modified versions of OpenAI's latest GPT-4o model, named "Godmode GPT," was banned by the startup after it started providing advice on illegal activities."
dr tech

AI tries to cheat at chess when it's losing | Popular Science - 0 views

  •  
    "Despite all the industry hype and genuine advances, generative AI models are still prone to odd, inexplicable, and downright worrisome quirks. There's also a growing body of research suggesting that the overall performance of many large language models (LLMs) may degrade over time. According to recent evidence, the industry's newer reasoning models may already possess the ability to manipulate and circumvent their human programmers' goals. Some AI will even attempt to cheat their way out of losing in games of chess. This poor sportsmanship is documented in a preprint study from Palisade Research, an organization focused on risk assessments of emerging AI systems."
anonymous

New app enables regular smartphones to capture 3D images | NDTV Gadgets - 0 views

  •  
    Scientists have developed an app that allows an ordinary smartphone to capture and display three-dimensional models of real-world objects. Instead of taking a normal photograph, a user simply moves the phone around the object of interest and after a few motions, a 3D model appears on the screen.
dr tech

Science relies on computer modelling, but what happens when it goes wrong? -- Science &... - 0 views

  •  
    Much of current science deals with even more complicated systems, and similarly lacks exact solutions. Such models have to be "computational" - describing how a system changes from one instant to the next. But there is no way to determine the exact state at some time in the future other than by "simulating" its evolution in this way. Weather forecasting is a familiar example; until the advent of computers in the 1950s, it was impossible to predict future weather faster than it actually happened.
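That "one instant to the next" structure is easy to show concretely. A minimal example (a stand-in, not the article's own case) is Euler time-stepping of Newton's law of cooling: instead of looking up a closed-form answer, the simulation repeatedly computes the state at the next instant from the current one, which is exactly how weather models advance.

```python
def simulate_cooling(temp, ambient, k, dt, steps):
    """Advance Newton's law of cooling one time step at a time:
    dT/dt = -k (T - ambient). No closed-form solution is consulted;
    the future state is reached only by stepping through every instant."""
    history = [temp]
    for _ in range(steps):
        temp += dt * (-k * (temp - ambient))  # state at the next instant
        history.append(temp)
    return history

# Coffee at 90 C in a 20 C room, k = 0.1 per minute, one-minute steps
trajectory = simulate_cooling(90.0, 20.0, k=0.1, dt=1.0, steps=60)
print(round(trajectory[-1], 1))  # approaches room temperature
```

The article's point about speed follows directly: with many coupled variables, each simulated instant costs real compute, so before fast computers a forecast could not outrun the weather itself.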
dr tech

yes, all models are wrong - 0 views

  •  
    "According to Derek & Laura Cabrera, "wicked problems result from the mismatch between how real-world systems work and how we think they work". With systems thinking, there is constant testing and feedback between the real world, in all its complexity, and our mental model of it. This openness to test and look for feedback led Dr. Fisman to change his mind on the airborne spread of the coronavirus."
dr tech

How bad were Ofqual's grades - by Huy Duong - HEPI - 0 views

  •  
    "Therefore even Ofqual's best model significantly worsened grade accuracy for most A-level subjects when the cohort size is below 50, which is common (almost 62% of the total in 2019). For GCSEs, even with larger cohorts, the best model would have worsened the grade accuracy for Maths and Sciences. A very conservative figure of 25% of wrong grades would have amounted to 180,000 wrong A-level grades and 1.25 million wrong GCSE grades."
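The quoted figures imply the size of each cohort's grade pool, which is a useful sanity check. The totals below are backed out from the excerpt's own numbers, not stated in it.

```python
# Back out the grade totals implied by the quoted 25% error figure
wrong_rate = 0.25
wrong_a_level = 180_000
wrong_gcse = 1_250_000

implied_a_level_total = wrong_a_level / wrong_rate   # grades awarded
implied_gcse_total = wrong_gcse / wrong_rate

print(f"{implied_a_level_total:,.0f} A-level grades, "
      f"{implied_gcse_total:,.0f} GCSE grades implied")
```

Those implied totals (roughly 720,000 A-level grades and 5 million GCSE grades) are consistent in order of magnitude with England's 2019 entries, so the headline error counts follow arithmetically from the 25% assumption.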