
Home / Digital Society / Group items matching "system,model" in title, tags, annotations or url

dr tech

AI now surpasses humans in almost all performance benchmarks - 0 views

  •  
    "The new AI Index report notes that in 2023, AI still struggled with complex cognitive tasks like advanced math problem-solving and visual commonsense reasoning. However, 'struggled' here might be misleading; it certainly doesn't mean AI did badly. Performance on MATH, a dataset of 12,500 challenging competition-level math problems, improved dramatically in the two years since its introduction. In 2021, AI systems could solve only 6.9% of problems. By contrast, in 2023, a GPT-4-based model solved 84.3%. The human baseline is 90%. "
dr tech

How digital twins may enable personalised health treatment | Medical research | The Guardian - 0 views

  •  
    "Imagine having a digital twin that gets ill, and can be experimented on to identify the best possible treatment, without you having to go near a pill or a surgeon's knife. Scientists believe that within five to 10 years, "in silico" trials - in which hundreds of virtual organs are used to assess the safety and efficacy of drugs - could become routine, while patient-specific organ models could be used to personalise treatment and avoid medical complications. Digital twins are computational models of physical objects or processes, updated using data from their real-world counterparts. Within medicine, this means combining vast amounts of data about the workings of genes, proteins, cells and whole-body systems with patients' personal data to create virtual models of their organs - and eventually, potentially their entire body"
dr tech

NVIDIA's latest AI model helps robots perform pen spinning tricks as well as humans - 0 views

  •  
    "The use for humans in the world of robotics, even as teachers, is shrinking thanks to AI. NVIDIA Research has announced the creation of Eureka, an AI agent powered by GPT-4 that has trained robots to perform tasks using reward algorithms. Notably, Eureka taught a robotic hand to do pen spinning tricks as well as a human can (honestly, as you can see in the YouTube video below, better than many of us)."
dr tech

AI firms must be held responsible for harm they cause, 'godfathers' of technology say | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    ""If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals", adding that "we may not be able to keep them in check". Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviours."
dr tech

Generative AI like Midjourney creates images full of stereotypes - Rest of World - 0 views

  •  
    ""Essentially what this is doing is flattening descriptions of, say, 'an Indian person' or 'a Nigerian house' into particular stereotypes which could be viewed in a negative light," Amba Kak, executive director of the AI Now Institute, a U.S.-based policy research organization, told Rest of World. Even stereotypes that are not inherently negative, she said, are still stereotypes: They reflect a particular value judgment, and a winnowing of diversity. Midjourney did not respond to multiple requests for an interview or comment for this story."
dr tech

Google's AI stoplight program is now calming traffic in a dozen cities worldwide - 0 views

  •  
    "Green Light uses machine learning systems to comb through Maps data to calculate the amount of traffic congestion present at a given light, as well as the average wait times of vehicles stopped there. That information is then used to train AI models that can autonomously optimize the traffic timing at that intersection, reducing idle times as well as the amount of braking and accelerating vehicles have to do there. It's all part of Google's goal to help its partners collectively reduce their carbon emissions by a gigaton by 2030."
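The excerpt describes a two-stage loop: measure congestion and average wait per approach from traffic data, then retime the signal. A toy sketch of that idea (illustrative only, not Google's actual Green Light system — the function names and the proportional-split rule are assumptions):

```python
# Toy sketch of the Green Light idea: estimate average waits per approach
# from observed stop times, then give longer-waiting approaches a larger
# share of a fixed signal cycle. Not Google's actual algorithm.

def average_wait(stop_times):
    """Mean seconds vehicles spent stopped at one approach."""
    return sum(stop_times) / len(stop_times)

def split_green(cycle_seconds, waits_by_approach):
    """Divide a fixed cycle in proportion to each approach's average wait."""
    total = sum(waits_by_approach.values())
    return {a: cycle_seconds * w / total
            for a, w in waits_by_approach.items()}

waits = {"north-south": average_wait([30, 50, 40]),   # 40 s average
         "east-west": average_wait([10, 20, 30])}     # 20 s average
plan = split_green(90, waits)
# north-south gets 60 s of the 90 s cycle, east-west gets 30 s
```

The real system replaces the fixed proportional rule with a learned model, but the input (wait-time estimates) and output (new timings) are the same shape.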
dr tech

The world's biggest AI models aren't very transparent, Stanford study says - The Verge - 0 views

  •  
    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."
dr tech

Google says AI systems should be able to mine publishers' work unless companies opt out | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "The company has called for Australian policymakers to promote "copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems". The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google."
dr tech

New Tool Reveals How AI Makes Decisions - Scientific American - 0 views

  •  
    "Most AI programs function like a "black box." "We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat," said computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany to the German-language newspaper Handelsblatt. That dilemma prompted Kersting-along with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany-based AI company Aleph Alpha-to introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs."
dr tech

A Brain Scanner Combined with an AI Language Model Can Provide a Glimpse into Your Thoughts - Scientific American - 0 views

  •  
    "Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy."
dr tech

Could AI save the Amazon rainforest? | Artificial intelligence (AI) | The Guardian - 0 views

  •  
    "The model takes a two-pronged approach. First, it focuses on trends present in the region, looking at geostatistics and historical data from Prodes, the annual government monitoring system for deforestation in the Amazon. Understanding what has happened can help make predictions more precise. When already deforested areas are recent, this indicates gangs are operating in the area, so there's a higher risk that nearby forest will soon be wiped out. Second, it looks at variables that put the brakes on deforestation - land protected by Indigenous and quilombola (descendent of rebel slaves) communities, and areas with bodies of water, or other terrain that doesn't lend itself to agricultural expansion, for instance - and variables that make deforestation more likely, including higher population density, the presence of settlements and rural properties, and higher density of road infrastructure, both legal and illegal."
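The article's two-pronged structure — variables that accelerate deforestation and variables that brake it — amounts to a weighted risk score. A minimal sketch under stated assumptions (the feature names, weights, and linear scoring are all illustrative; the real model is trained on Prodes data, not hand-weighted):

```python
# Illustrative two-pronged risk score: "accelerator" features raise the
# predicted deforestation risk, "brake" features lower it. Weights and
# feature names are made up for the sketch.

ACCELERATORS = {"recent_clearing_nearby": 0.4,
                "population_density": 0.2,
                "road_density": 0.3}
BRAKES = {"protected_land": 0.5,
          "water_bodies": 0.2}

def risk_score(features):
    """features: dict mapping feature name -> value in [0, 1]."""
    score = sum(w * features.get(f, 0.0) for f, w in ACCELERATORS.items())
    score -= sum(w * features.get(f, 0.0) for f, w in BRAKES.items())
    return max(0.0, min(1.0, score))

frontier = risk_score({"recent_clearing_nearby": 1.0, "road_density": 1.0})
reserve = risk_score({"protected_land": 1.0, "water_bodies": 1.0})
# frontier scores 0.7 (high risk); the protected reserve scores 0.0
```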
dr tech

Pause Giant AI Experiments: An Open Letter - Future of Life Institute - 0 views

  •  
    "Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now."
dr tech

ChatGPT, artificial intelligence, and the future of education - Vox - 0 views

  •  
    "The technology certainly has its flaws. While the system is theoretically designed not to cross some moral red lines - it's adamant that Hitler was bad - it's not difficult to trick the AI into sharing advice on how to engage in all sorts of evil and nefarious activities, particularly if you tell the chatbot that it's writing fiction. The system, like other AI models, can also say biased and offensive things. As my colleague Sigal Samuel has explained, an earlier version of GPT generated extremely Islamophobic content, and also produced some pretty concerning talking points about the treatment of Uyghur Muslims in China."
dr tech

Digital twin: How a virtual representation of a system boosts effectivity - 0 views

  •  
    "A digital twin is a virtual representation of a real system - a building, the power grid, a city, even a human being - that mimics the characteristics of the system. A digital twin is more than just a computer model, however. It receives data from sensors in the real system to constantly parallel the system's state."
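The definition above has two parts: a live sensor feed that keeps the virtual state synchronized, and the ability to experiment on the copy without touching the real system. A minimal sketch of that interface (a toy; real twins use physics- or ML-based models rather than a state dictionary):

```python
# Minimal digital-twin sketch: a virtual state kept in step with the real
# system via sensor updates, plus simulation on a copy so experiments
# never touch the live state.

class DigitalTwin:
    def __init__(self, initial_state):
        self.state = dict(initial_state)

    def ingest(self, sensor_reading):
        """Update the virtual state from a real-world sensor reading."""
        self.state.update(sensor_reading)

    def simulate(self, intervention):
        """Try an intervention on a copy; the twin's state is untouched."""
        trial = dict(self.state)
        trial.update(intervention)
        return trial

twin = DigitalTwin({"temperature": 37.0, "heart_rate": 72})
twin.ingest({"heart_rate": 95})              # new data from the real patient
outcome = twin.simulate({"drug_dose_mg": 5}) # experiment on the copy only
```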
dr tech

yes, all models are wrong - 0 views

  •  
    "According to Derek & Laura Cabrera, "wicked problems result from the mismatch between how real-world systems work and how we think they work". With systems thinking, there is constant testing and feedback between the real world, in all its complexity, and our mental model of it. This openness to test and look for feedback led Dr. Fisman to change his mind on the airborne spread of the coronavirus."
dr tech

Why machine learning struggles with causality | VentureBeat - 0 views

  •  
    "In a paper titled "Towards Causal Representation Learning," researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations."
dr tech

SoundCloud announces overhaul of royalties model to 'fan-powered' system | Soundcloud | The Guardian - 0 views

  •  
    "SoundCloud announced on Tuesday it would become the first major streaming service to start directing subscribers' fees only to the artists they listen to, a move welcomed by musicians campaigning for fairer pay. Current practice for streaming services including Spotify, Deezer and Apple is to pool royalty payments and dish them out based on which artists have the most global plays. Many artists and unions have criticised this system, saying it disproportionately favours megastars and leaves very little for musicians further down the pecking order."
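The two payout models the article contrasts differ only in where the division happens: the pooled ("pro-rata") model splits one global pot by total play share, while the fan-powered ("user-centric") model splits each subscriber's own fee among only the artists that subscriber played. A sketch with made-up numbers:

```python
# Pro-rata vs fan-powered royalty split, in miniature. Fees and play
# counts are invented to show how the two models diverge.

from collections import defaultdict

def pro_rata(fees, plays_by_user):
    """One pot, divided by each artist's share of ALL plays."""
    pot = sum(fees.values())
    totals = defaultdict(int)
    for plays in plays_by_user.values():
        for artist, n in plays.items():
            totals[artist] += n
    all_plays = sum(totals.values())
    return {a: pot * n / all_plays for a, n in totals.items()}

def fan_powered(fees, plays_by_user):
    """Each subscriber's fee split only among artists they played."""
    payout = defaultdict(float)
    for user, plays in plays_by_user.items():
        user_total = sum(plays.values())
        for artist, n in plays.items():
            payout[artist] += fees[user] * n / user_total
    return dict(payout)

fees = {"ana": 10.0, "ben": 10.0}
plays = {"ana": {"megastar": 100},   # ana streams the megastar heavily
         "ben": {"indie": 1}}        # ben plays one indie track
# pro_rata: the indie artist gets ~$0.20 of the $20 pot;
# fan_powered: the indie artist gets all $10 of ben's fee
```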
dr tech

Deepfake detectors can be defeated, computer scientists show for the first time | EurekAlert! Science News - 0 views

  •  
    "Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs which cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed."
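The attack relies on the standard adversarial-example trick: nudge each input feature a small step in the direction that most changes the model's score. A toy illustration on a linear classifier (this is a generic FGSM-style step on an invented model, not the deepfake-detector attack from the paper):

```python
# Adversarial example on a toy linear classifier: a small per-feature
# perturbation, following the sign of the score's gradient (for a linear
# model the gradient is just the weight vector), flips the decision.

def score(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

def fgsm_step(weights, x, eps):
    """Shift each feature by +/- eps toward a higher score."""
    sign = lambda w: 1 if w > 0 else -1 if w < 0 else 0
    return [xi + eps * sign(w) for w, xi in zip(weights, x)]

w = [0.8, -0.5, 0.3]           # classifier says "fake" when score > 0
x = [-0.2, 0.3, -0.1]          # score = -0.34 -> classified "real"
x_adv = fgsm_step(w, x, eps=0.5)
# score(w, x_adv) = 0.46 -> the barely-changed input is now "fake"
```

The same logic, applied per video frame and kept small enough to survive compression, is what defeats the detectors in the study.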
dr tech

Full Page Reload - 0 views

  •  
    "These experiments in computational creativity are enabled by the dramatic advances in deep learning over the past decade. Deep learning has several key advantages for creative pursuits. For starters, it's extremely flexible, and it's relatively easy to train deep-learning systems (which we call models) to take on a wide variety of tasks."
dr tech

A debate between AI experts shows a battle over the technology's future - MIT Technology Review - 0 views

  •  
    "The reason to look at humans is because there are certain things that humans do much better than deep-learning systems. That doesn't mean humans will ultimately be the right model. We want systems that have some properties of computers and some properties that have been borrowed from people. We don't want our AI systems to have bad memory just because people do. But since people are the only model of a system that can develop a deep understanding of something-literally the only model we've got-we need to take that model seriously."
1 - 20 of 30