Digital Society: Group items tagged "machine learning modelling"

dr tech

Why machine learning struggles with causality | VentureBeat

  •  
    "In a paper titled "Towards Causal Representation Learning," researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations."
dr tech

A machine-learning system that guesses whether text was produced by machine-learning sy...

  •  
    "Automatically produced texts use language models derived from statistical analysis of vast corpuses of human-generated text to produce machine-generated texts that can be very hard for a human to distinguish from text produced by another human. These models could help malicious actors in many ways, including generating convincing spam, reviews, and comments -- so it's really important to develop tools that can help us distinguish between human-generated and machine-generated texts."
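The detection idea in this excerpt can be sketched with a toy statistic: sampled (machine) text tends to be less "surprising" to a language model than human text, so the average per-token log-likelihood separates the two. The unigram log-probabilities and example token lists below are invented for illustration; real detectors (GLTR is one published example) use a full neural language model and per-token ranks.

```python
# Toy unigram "language model": per-token log-probabilities.
# All numbers here are invented for illustration only.
logp = {
    "the": -2.0, "on": -3.0, "cat": -6.0, "sat": -6.5,
    "mat": -7.0, "quantum": -12.0, "zeitgeist": -13.0,
}

def avg_log_likelihood(tokens, model, floor=-15.0):
    """Mean per-token log-likelihood; unknown tokens get a floor value.
    Machine-generated text tends to score higher (less surprising)."""
    return sum(model.get(t, floor) for t in tokens) / len(tokens)

machine_like = ["the", "cat", "sat", "on", "the", "mat"]
human_like = ["the", "quantum", "zeitgeist", "sat", "on", "mat"]

# A simple detector thresholds on this statistic: the common-word
# sequence scores higher than the one with rare word choices.
```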
dr tech

Full Page Reload

  •  
    "These experiments in computational creativity are enabled by the dramatic advances in deep learning over the past decade. Deep learning has several key advantages for creative pursuits. For starters, it's extremely flexible, and it's relatively easy to train deep-learning systems (which we call models) to take on a wide variety of tasks."
dr tech

A debate between AI experts shows a battle over the technology's future - MIT Technolog...

  •  
    "The reason to look at humans is because there are certain things that humans do much better than deep-learning systems. That doesn't mean humans will ultimately be the right model. We want systems that have some properties of computers and some properties that have been borrowed from people. We don't want our AI systems to have bad memory just because people do. But since people are the only model of a system that can develop a deep understanding of something -- literally the only model we've got -- we need to take that model seriously."
dr tech

Say what: AI can diagnose type 2 diabetes in 10 seconds from your voice

  •  
    "Researchers involved in a recent study trained an artificial intelligence (AI) model to diagnose type 2 diabetes in patients after six to 10 seconds of listening to their voice. Canadian medical researchers trained the machine-learning AI to recognise 14 vocal differences in the voice of someone with type 2 diabetes compared to someone without diabetes. The auditory features that the AI focussed on included slight changes in pitch and intensity, which human ears cannot distinguish. This was then paired with basic health data gathered by the researchers, such as age, sex, height and weight. Researchers believe that the AI model will drastically lower the cost for people with diabetes to be diagnosed."
dr tech

How Does Spotify Know You So Well? | by Sophia Ciocca | Medium

  •  
    "To create Discover Weekly, there are three main types of recommendation models that Spotify employs: Collaborative Filtering models (i.e. the ones that Last.fm originally used), which analyze both your behavior and others' behaviors. Natural Language Processing (NLP) models, which analyze text. Audio models, which analyze the raw audio tracks themselves."
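The first model family in the excerpt, collaborative filtering, is the easiest to show in code. Below is a minimal user-based sketch in Python: score a listener's unheard tracks by the play counts of similar listeners. Users, tracks, and play counts are invented, and Spotify's production models are far larger and more sophisticated than this.

```python
from math import sqrt

# Toy play-count matrix: invented users and tracks for illustration.
ratings = {
    "alice": {"track_a": 5, "track_b": 3, "track_c": 0},
    "bob":   {"track_a": 4, "track_b": 3, "track_c": 1},
    "carol": {"track_a": 0, "track_b": 1, "track_c": 5},
}

def cosine(u, v):
    """Cosine similarity between two users' play-count vectors."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user, k=1):
    """Rank the user's unheard tracks by similarity-weighted play
    counts of every other user."""
    scores = {}
    for name, r in ratings.items():
        if name == user:
            continue
        sim = cosine(ratings[user], r)
        for track, count in r.items():
            if ratings[user].get(track, 0) == 0 and count > 0:
                scores[track] = scores.get(track, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Here `recommend("alice")` surfaces `track_c`, heavily played by carol, alice's closest match among users who have heard it.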
dr tech

The AI feedback loop: Researchers warn of 'model collapse' as AI trains on AI-generated...

  •  
    "Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content? A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work in the open access journal arXiv. What they found is worrisome for current generative AI technology and its future: "We find that use of model-generated content in training causes irreversible defects in the resulting models.""
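A toy numerical analogue of this feedback loop (not the paper's actual setup): fit a simple "model" to data, sample from it with a bias toward the well-covered centre, refit on those samples, and repeat. The spread of the data collapses across generations, a crude stand-in for the lost diversity the researchers describe.

```python
import random
import statistics

random.seed(42)

def train(samples):
    """'Train' a toy generative model: estimate mean and spread."""
    return statistics.fmean(samples), statistics.pstdev(samples)

def generate(mu, sigma, n):
    """Sample from the fitted model, but, like a lossy learner,
    only reproduce the well-covered centre: tail values are dropped."""
    out = []
    while len(out) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= sigma:
            out.append(x)
    return out

# Generation 0: "human" data from a wide distribution.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]
spreads = []
for generation in range(8):
    mu, sigma = train(data)
    spreads.append(sigma)
    # The next generation trains only on the previous model's output.
    data = generate(mu, sigma, 1000)

# spreads falls generation after generation: each model progressively
# forgets the tails of the original distribution.
```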
dr tech

DeepMind is developing one algorithm to rule them all | VentureBeat

  •  
    "DeepMind is trying to combine deep learning and algorithms, creating the one algorithm to rule them all: a deep learning model that can learn how to emulate any algorithm, generating an algorithm-equivalent model that can work with real-world data."
dr tech

ChatGPT isn't a great leap forward, it's an expensive deal with the devil | John Naught...

  •  
    "The intriguing echo of Eliza in thinking about ChatGPT is that people regard it as magical even though they know how it works - as a "stochastic parrot" (in the words of Timnit Gebru, a well-known researcher) or as a machine for "hi-tech plagiarism" (Noam Chomsky). But actually we do not know the half of it yet - not the CO2 emissions incurred in training its underlying language model or the carbon footprint of all those delighted interactions people are having with it. Or, pace Chomsky, that the technology only exists because of its unauthorised appropriation of the creative work of millions of people that just happened to be lying around on the web? What's the business model behind these tools? And so on. Answer: we don't know."
dr tech

I Tried Predictim AI That Scans for 'Risky' Babysitters

  •  
    "The founders of Predictim want to be clear with me: Their product -- an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents -- is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
dr tech

Google's AI stoplight program is now calming traffic in a dozen cities worldwide

  •  
    "Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It's all part of Google's goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030."
dr tech

The truth about artificial intelligence? It isn't that honest | John Naughton | The Gua...

  •  
    "They tested four well-known models, including GPT-3. The best was truthful on 58% of questions, while human performance was 94%. The models "generated many false answers that mimic popular misconceptions and have the potential to deceive humans". Interestingly, they also found that "the largest models were generally the least truthful"."
dr tech

This company is building AI for African languages | MIT Technology Review

  •  
    "Abbott's experience mirrors the situation faced by Africans who don't speak English. Many language models like ChatGPT do not perform well for languages with smaller numbers of speakers, especially African ones. But a new venture called Lelapa AI, a collaboration between Abbott and a biomedical engineer named Pelonomi Moiloa, is trying to use machine learning to create tools that specifically work for Africans."
dr tech

What is AI chatbot phenomenon ChatGPT and could it replace humans? | Artificial intelli...

  •  
    "ChatGPT can also give entirely wrong answers and present misinformation as fact, writing "plausible-sounding but incorrect or nonsensical answers", the company concedes. OpenAI says that fixing this issue is difficult because there is no source of truth in the data they use to train the model and supervised training can also be misleading "because the ideal answer depends on what the model knows, rather than what the human demonstrator knows"."
dr tech

Artificial Intelligence Is a House Divided

  •  
    "A natural alternative to symbolic AI came to prominence: Instead of modeling high-level reasoning processes, why not instead model the brain? After all, brains are the only things that we know for certain can produce intelligent behavior. Why not start with them?"
dr tech

Algorithmic cruelty: when Gmail adds your harasser to your speed-dial / Boing Boing

  •  
    "It's not that Google wants to do this, it's that they didn't anticipate this outcome, and compounded that omission by likewise omitting a way to overrule the algorithm's judgment. As with other examples of algorithmic cruelty, it's not so much this specific example as what it presages for a future in which more and more of our external reality is determined by models derived from machine learning systems whose workings we're not privy to and have no say in."
dr tech

Deepfake detectors can be defeated, computer scientists show for the first time | Eurek...

  •  
    "Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed."
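The adversarial-example idea in the excerpt can be shown on a toy linear classifier. The weights and inputs below are invented for illustration; the attack in the article perturbs video frames fed to deep networks, not a three-feature linear model. The principle is the same: nudge each input a small amount in the direction that most changes the decision.

```python
# Toy linear "deepfake detector": a positive score means "fake".
# Weights, bias, and inputs are invented for illustration.
w = [0.9, -0.5, 0.3]
b = -0.1

def score(x):
    """Linear decision function: sum(w_i * x_i) + b."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def perturb(x, eps):
    """FGSM-style attack: move each feature at most eps in the
    direction that lowers the score. For a linear model the gradient
    of the score with respect to x is simply w."""
    return [xi - eps * (1.0 if wi > 0 else -1.0) for xi, wi in zip(x, w)]

x = [0.5, 0.4, 0.2]        # detector says "fake": score(x) > 0
adv = perturb(x, eps=0.2)  # small change to every feature
# score(adv) < 0: the same detector now says "real"
```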
dr tech

AI suggested 40,000 new possible chemical weapons in just six hours - The Verge

  •  
    "Researchers put AI normally used to search for helpful drugs into a kind of "bad actor" mode to show how easily it could be abused at a biological arms control conference. All the researchers had to do was tweak their methodology to seek out, rather than weed out toxicity. The AI came up with tens of thousands of new substances, some of which are similar to VX, the most potent nerve agent ever developed. Shaken, they published their findings this month in the journal Nature Machine Intelligence."
dr tech

The future, soon: what I learned from Bing's AI

  •  
    "I have been working with generative AI and, even though I have been warning that these tools are improving rapidly, I did not expect them to really be improving that rapidly. On every dimension, Bing's AI, which does not actually represent a technological leap over ChatGPT, far outpaces the earlier AI - which is less than three months old! There are many larger, more capable models on their way in the coming months, and we are not really ready."
dr tech

The AI future for lesson plans is already here | EduResearch Matters

  •  
    "What do today's AI-generated lesson plans look like? AI-generated lesson plans are already better than many people realise. Here's an example generated through the GPT-3 deep learning language model:"