Digit_al Society - Group items tagged "model"

dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Modelers Project A Calming Of The Pandemic In The U.S. This Winter : Shots - Health New... - 0 views

  •  
    "or its latest update, which it released Wednesday, the COVID-19 Scenario Modeling Hub combined nine different mathematical models from different research groups to get an outlook for the pandemic for the next six months. "Any of us who have been following this closely, given what happened with delta, are going to be really cautious about too much optimism," says Justin Lessler at the University of North Carolina, who helps run the hub. "But I do think that the trajectory is towards improvement for most of the country," he says. The modelers developed four potential scenarios, taking into account whether or not childhood vaccinations take off and whether a more infectious new variant should emerge. "
dr tech

How Does Spotify Know You So Well? | by Sophia Ciocca | Medium - 0 views

  •  
    "To create Discover Weekly, there are three main types of recommendation models that Spotify employs: Collaborative Filtering models (i.e. the ones that Last.fm originally used), which analyze both your behavior and others' behaviors. Natural Language Processing (NLP) models, which analyze text. Audio models, which analyze the raw audio tracks themselves."
dr tech

Brian Eno on Why He Wrote a Climate Album With Deepfake Birdsongs | WIRED - 0 views

  •  
    "Oh, I just listen to bird sounds a lot and then try to emulate the kinds of things they do. Synthesizers are quite good at that because some of the new software has what's called physical modeling. This enables you to construct a physical model of something and then stretch the parameters. You can create a piano with 32-foot strings, for instance, or a piano made of glass. It's a very interesting way to try to study the world, to try to model it. In the natural world there are discrete entities like clarinets, saxophones, drums. With physical modeling, you can make hybrids like a drummy piano or a saxophone-y violin. There's a continuum, most of which has never been explored."
dr tech

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated... - 0 views

  •  
    "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""
dr tech

A debate between AI experts shows a battle over the technology's future - MIT Technolog... - 0 views

  •  
    "The reason to look at humans is because there are certain things that humans do much better than deep-learning systems. That doesn't mean humans will ultimately be the right model. We want systems that have some properties of computers and some properties that have been borrowed from people. We don't want our AI systems to have bad memory just because people do. But since people are the only model of a system that can develop a deep understanding of something-literally the only model we've got-we need to take that model seriously."
dr tech

Computer-generated inclusivity: fashion turns to 'diverse' AI models | Fashion | The Gu... - 0 views

  •  
    "The model is AI-generated, a digital rendering of a human being that will start appearing on Levi's e-commerce website later this year. The brand teamed with LaLaLand.ai, a digital studio that makes customized AI models for companies like Calvin Klein and Tommy Hilfiger, to dream up this avatar. Amy Gershkoff Bolles, Levi's global head of digital and emerging technology strategy, announced the model's debut at a Business of Fashion event in March. AI models will not completely replace the humans, she said, but will serve as a "supplement" intended to aid in the brand's representation of various sizes, skin tones and ages."
dr tech

Climate change models have been accurate since the 1970s - 0 views

  •  
    "Half a century ago, before the first Apple computer was even sold, climate scientists started making computer-generated forecasts of how Earth would warm as carbon emissions saturated the atmosphere (the atmosphere is now brimming with carbon). It turns out these decades-old climate models - which used math equations to predict how much greenhouse gases would heat the planet - were pretty darn accurate. Climate scientists gauged how well early models predicted Earth's relentless warming trend and published their research Wednesday in the journal Geophysical Research Letters."
dr tech

The truth about artificial intelligence? It isn't that honest | John Naughton | The Gua... - 0 views

  •  
    "They tested four well-known models, including GPT-3. The best was truthful on 58% of questions, while human performance was 94%. The models "generated many false answers that mimic popular misconceptions and have the potential to deceive humans". Interestingly, they also found that "the largest models were generally the least truthful"."
dr tech

How digital twins may enable personalised health treatment | Medical research | The Gua... - 0 views

  •  
    "Imagine having a digital twin that gets ill, and can be experimented on to identify the best possible treatment, without you having to go near a pill or a surgeon's knife. Scientists believe that within five to 10 years, "in silico" trials - in which hundreds of virtual organs are used to assess the safety and efficacy of drugs - could become routine, while patient-specific organ models could be used to personalise treatment and avoid medical complications. Digital twins are computational models of physical objects or processes, updated using data from their real-world counterparts. Within medicine, this means combining vast amounts of data about the workings of genes, proteins, cells and whole-body systems with patients' personal data to create virtual models of their organs - and eventually, potentially their entire body"
dr tech

AI firms must be held responsible for harm they cause, 'godfathers' of technology say |... - 0 views

  •  
    ""If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals", adding that "we may not be able to keep them in check". Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviours."
anonymous

New app enables regular smartphones to capture 3D images | NDTV Gadgets - 0 views

  •  
    Scientists have developed an app that allows an ordinary smartphone to capture and display three-dimensional models of real-world objects. Instead of taking a normal photograph, a user simply moves the phone around the object of interest and after a few motions, a 3D model appears on the screen.
dr tech

Science relies on computer modelling, but what happens when it goes wrong? -- Science &... - 0 views

  •  
    Much of current science deals with even more complicated systems, and similarly lacks exact solutions. Such models have to be "computational" - describing how a system changes from one instant to the next. But there is no way to determine the exact state at some time in the future other than by "simulating" its evolution in this way. Weather forecasting is a familiar example; until the advent of computers in the 1950s, it was impossible to predict future weather faster than it actually happened.
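
A "computational" model in this sense simply advances the state of a system one small time step at a time, because no exact formula for its future state is available. The sketch below does this for a swinging pendulum, a system with no elementary closed-form solution for large swings; the constants and the time step are arbitrary, and the method is deliberately the simplest possible.

```python
import math

# A tiny "computational" model: the state is advanced one small time step at a
# time rather than solved exactly. All constants are illustrative.
g, length = 9.81, 1.0            # gravity (m/s^2), pendulum length (m)
theta, omega = 1.0, 0.0          # initial angle (rad) and angular velocity (rad/s)
dt = 0.001                       # time step (s)

for step in range(int(10 / dt)):              # simulate 10 seconds
    alpha = -(g / length) * math.sin(theta)   # how the system changes right now
    omega += alpha * dt                       # update velocity...
    theta += omega * dt                       # ...then position, instant by instant

print(f"angle after 10 s: {theta:.3f} rad")
```
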
dr tech

yes, all models are wrong - 0 views

  •  
    "According to Derek & Laura Cabrera, "wicked problems result from the mismatch between how real-world systems work and how we think they work". With systems thinking, there is constant testing and feedback between the real world, in all its complexity, and our mental model of it. This openness to test and look for feedback led Dr. Fisman to change his mind on the airborne spread of the coronavirus."
dr tech

How bad were Ofqual's grades - by Huy Duong - HEPI - 0 views

  •  
    "Therefore even Ofqual's best model significantly worsened grade accuracy for most A-level subjects when the cohort size is below 50, which is common (almost 62% of the total in 2019). For GCSEs, even with larger cohorts, the best model would have worsened the grade accuracy for Maths and Sciences. A very conservative figure of 25% of wrong grades would have amounted to 180,000 wrong A-level grades and 1.25 million wrong GCSE grades."
dr tech

What is AI chatbot phenomenon ChatGPT and could it replace humans? | Artificial intelli... - 0 views

  •  
    "ChatGPT can also give entirely wrong answers and present misinformation as fact, writing "plausible-sounding but incorrect or nonsensical answers", the company concedes. OpenAI says that fixing this issue is difficult because there is no source of truth in the data they use to train the model and supervised training can also be misleading "because the ideal answer depends on what the model knows, rather than what the human demonstrator knows"."
dr tech

Millions of Workers Are Training AI Models for Pennies | WIRED - 0 views

  •  
    "Some experts see platforms like Appen as a new form of data colonialism, says Saiph Savage, director of the Civic AI lab at Northeastern University. "Workers in Latin America are labeling images, and those labeled images are going to feed into AI that will be used in the Global North," she says. "While it might be creating new types of jobs, it's not completely clear how fulfilling these types of jobs are for the workers in the region." Due to the ever moving goal posts of AI, workers are in a constant race against the technology, says Schmidt. "One workforce is trained to three-dimensionally place bounding boxes around cars very precisely, and suddenly it's about figuring out if a large language model has given an appropriate answer," he says, regarding the industry's shift from self-driving cars to chatbots. Thus, niche labeling skills have a "very short half-life." "From the clients' perspective, the invisibility of the workers in microtasking is not a bug but a feature," says Schmidt. Economically, because the tasks are so small, it's more feasible to deal with contractors as a crowd instead of individuals. This creates an industry of irregular labor with no face-to-face resolution for disputes if, say, a client deems their answers inaccurate or wages are withheld. The workers WIRED spoke to say it's not low fees but the way platforms pay them that's the key issue. "I don't like the uncertainty of not knowing when an assignment will come out, as it forces us to be near the computer all day long," says Fuentes, who would like to see additional compensation for time spent waiting in front of her screen. Mutmain, 18, from Pakistan, who asked not to use his surname, echoes this. He says he joined Appen at 15, using a family member's ID, and works from 8 am to 6 pm, and another shift from 2 am to 6 am. "I need to stick to these platforms at all times, so that I don't lose work," he says, but he struggles to earn more than $50
dr tech

Say what: AI can diagnose type 2 diabetes in 10 seconds from your voice - 0 views

  •  
    "Researchers involved in a recent study trained an artificial intelligence (AI) model to diagnose type 2 diabetes in patients after six to 10 seconds of listening to their voice. Canadian medical researchers trained the machine-learning AI to recognise 14 vocal differences in the voice of someone with type 2 diabetes compared to someone without diabetes. The auditory features that the AI focussed on included slight changes in pitch and intensity, which human ears cannot distinguish. This was then paired with basic health data gathered by the researchers, such as age, sex, height and weight. Researchers believe that the AI model will drastically lower the cost for people with diabetes to be diagnosed."
dr tech

Model says her face was edited with AI to look white: 'It's very dehumanizing' | Fashio... - 0 views

  •  
    "A Taiwanese American model says a well-known fashion designer uploaded a digitally altered runway photo that made her appear white. In a TikTok about the incident that has been viewed 1.8m times in the last week, Shereen Wu says Michael Costello, a designer who has worked with Beyoncé, Jennifer Lopez, and Celine Dion, posted a photo to his Instagram from a recent Los Angeles fashion show. The photo depicts Wu in the slinky black ballgown that she walked the runway in - but her face has been changed, made to appear as if she is a white woman."
dr tech

Cory Doctorow: What Kind of Bubble is AI? - Locus Online - 0 views

  •  
    "Do the potential paying customers for these large models add up to enough money to keep the servers on? That's the 13 trillion dollar question, and the answer is the difference between WorldCom and Enron, or dotcoms and cryptocurrency. Though I don't have a certain answer to this question, I am skeptical. AI decision support is potentially valuable to practitioners. Accountants might value an AI tool's ability to draft a tax return. Radiologists might value the AI's guess about whether an X-ray suggests a cancerous mass. But with AIs' tendency to "hallucinate" and confabulate, there's an increasing recognition that these AI judgments require a "human in the loop" to carefully review their judgments. In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate. But that's not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is "companies will buy our products so they can do more with less." It's not "business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.""