Digital Society / Group items tagged "system model"



A debate between AI experts shows a battle over the technology's future - MIT Technolog...

    "The reason to look at humans is because there are certain things that humans do much better than deep-learning systems. That doesn't mean humans will ultimately be the right model. We want systems that have some properties of computers and some properties that have been borrowed from people. We don't want our AI systems to have bad memory just because people do. But since people are the only model of a system that can develop a deep understanding of something-literally the only model we've got-we need to take that model seriously."

The world's biggest AI models aren't very transparent, Stanford study says - The Verge

    "No prominent developer of AI foundation models - a list including companies like OpenAI and Meta - is releasing sufficient information about their potential impact on society, determines a new report from Stanford HAI (Human-Centered Artificial Intelligence). Today, Stanford HAI released its Foundation Model Transparency Index, which tracked whether creators of the 10 most popular AI models disclose information about their work and how people use their systems. Among the models it tested, Meta's Llama 2 scored the highest, followed by BloomZ and then OpenAI's GPT-4. But none of them, it turned out, got particularly high marks."

yes, all models are wrong

    "According to Derek & Laura Cabrera, "wicked problems result from the mismatch between how real-world systems work and how we think they work". With systems thinking, there is constant testing and feedback between the real world, in all its complexity, and our mental model of it. This openness to test and look for feedback led Dr. Fisman to change his mind on the airborne spread of the coronavirus."

Digital twin: How a virtual representation of a system boosts effectivity

    "A digital twin is a virtual representation of a real system - a building, the power grid, a city, even a human being - that mimics the characteristics of the system. A digital twin is more than just a computer model, however. It receives data from sensors in the real system to constantly parallel the system's state."
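The update loop the excerpt describes, a model that predicts on its own but is continually corrected by sensor data from the real system, can be sketched in a few lines of Python. Everything below, from the class name to the blending rule and the numbers, is an invented illustration rather than anything from the article:

```python
from dataclasses import dataclass

@dataclass
class RoomTwin:
    """A toy digital twin of a room's temperature.

    The twin advances its own model state, but pulls that state back
    toward reality whenever a reading arrives from the physical sensor.
    """
    temperature_c: float   # current model estimate
    blend: float = 0.5     # weight given to incoming sensor data

    def step(self, heater_on: bool) -> None:
        # Model prediction: heating warms the room, otherwise it cools.
        self.temperature_c += 0.8 if heater_on else -0.3

    def ingest(self, sensor_reading_c: float) -> None:
        # Correction: move the estimate toward what the real system reports.
        self.temperature_c += self.blend * (sensor_reading_c - self.temperature_c)

twin = RoomTwin(temperature_c=20.0)
twin.step(heater_on=True)   # model alone predicts 20.8 °C
twin.ingest(21.6)           # sensor says 21.6 °C, so the estimate moves to 21.2 °C
```

The constant feedback from sensors is what distinguishes this from an ordinary simulation: without `ingest`, the model would drift away from the real system's state.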

Google says AI systems should be able to mine publishers' work unless companies opt out...

    "The company has called for Australian policymakers to promote "copyright systems that enable appropriate and fair use of copyrighted content to enable the training of AI models in Australia on a broad and diverse range of data, while supporting workable opt-outs for entities that prefer their data not to be trained in using AI systems". The call for a fair use exception for AI systems is a view the company has expressed to the Australian government in the past, but the notion of an opt-out option for publishers is a new argument from Google."

Science relies on computer modelling, but what happens when it goes wrong? -- Science &...

    Much of current science deals with even more complicated systems, and similarly lacks exact solutions. Such models have to be "computational" - describing how a system changes from one instant to the next. But there is no way to determine the exact state at some time in the future other than by "simulating" its evolution in this way. Weather forecasting is a familiar example; until the advent of computers in the 1950s, it was impossible to predict future weather faster than it actually happened.
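The article's notion of a "computational" model, where each state is computed only from the previous one with no closed-form shortcut consulted, is easy to demonstrate. This toy sketch steps Newton's law of cooling forward in small time increments; the equation and constants are illustrative choices, not taken from the article:

```python
def simulate_cooling(temp0, ambient, k, dt, steps):
    """Advance T' = -k * (T - ambient) one small time step at a time.

    Each state is computed only from the previous one: this is the
    "describing how a system changes from one instant to the next"
    style of model the article refers to.
    """
    temp = temp0
    for _ in range(steps):
        temp += -k * (temp - ambient) * dt   # Euler step: next state from current state
    return temp

# A cup at 90 °C in a 20 °C room, simulated over 600 time units:
# the temperature decays toward ambient, ending very close to 20 °C.
final = simulate_cooling(temp0=90.0, ambient=20.0, k=0.05, dt=0.1, steps=6000)
```

Weather models work the same way at vastly larger scale, which is why forecasting speed depends directly on how fast the machine can grind through the steps.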

How digital twins may enable personalised health treatment | Medical research | The Gua...

    "Imagine having a digital twin that gets ill, and can be experimented on to identify the best possible treatment, without you having to go near a pill or a surgeon's knife. Scientists believe that within five to 10 years, "in silico" trials - in which hundreds of virtual organs are used to assess the safety and efficacy of drugs - could become routine, while patient-specific organ models could be used to personalise treatment and avoid medical complications. Digital twins are computational models of physical objects or processes, updated using data from their real-world counterparts. Within medicine, this means combining vast amounts of data about the workings of genes, proteins, cells and whole-body systems with patients' personal data to create virtual models of their organs - and eventually, potentially their entire body"

AI firms must be held responsible for harm they cause, 'godfathers' of technology say |...

    ""If we build highly advanced autonomous AI, we risk creating systems that autonomously pursue undesirable goals", adding that "we may not be able to keep them in check". Other policy recommendations in the document include: mandatory reporting of incidents where models show alarming behaviour; putting in place measures to stop dangerous models from replicating themselves; and giving regulators the power to pause development of AI models showing dangerous behaviours."

Pause Giant AI Experiments: An Open Letter - Future of Life Institute

    "Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now."

A machine-learning system that guesses whether text was produced by machine-learning sy...

    "Automatically produced texts use language models derived from statistical analysis of vast corpuses of human-generated text to produce machine-generated texts that can be very hard for a human to distinguish from text produced by another human. These models could help malicious actors in many ways, including generating convincing spam, reviews, and comments -- so it's really important to develop tools that can help us distinguish between human-generated and machine-generated texts."
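The statistical idea behind such detectors can be caricatured with a toy unigram model: text assembled from high-probability words scores higher under a language model than unusual text does, and that gap is a detection signal. Real tools (GLTR, for instance) apply the same idea using a neural language model's token probabilities; the corpus and scoring below are purely illustrative:

```python
from collections import Counter
import math

def avg_logprob(text, counts, total, vocab):
    """Average per-word log-probability under a unigram model
    with add-one smoothing (unseen words get a small probability)."""
    words = text.lower().split()
    return sum(
        math.log((counts[w] + 1) / (total + vocab)) for w in words
    ) / len(words)

# Build the model from a tiny, made-up reference corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
counts = Counter(corpus)
total, vocab = len(corpus), len(set(corpus)) + 1

# Bland, high-probability phrasing scores higher than rare phrasing;
# machine-generated text tends toward the former, which is the
# regularity detectors look for at scale.
bland = avg_logprob("the cat sat on the mat", counts, total, vocab)
odd = avg_logprob("quantum marmalade defenestrates zeal", counts, total, vocab)
```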

Together we can thwart the big-tech data grab. Here's how | John Harris | Opinion | The...

    "Blockchain technology has also opened the way to new models whereby endless micropayments can be made in return for particular online services or content; and, if people voluntarily allow elements of their data to be used, rewards can flow the other way. Here perhaps lies the key to a system beyond the current, Google-led model, in which services appear to be free but the letting-go of personal data is the actual price."

Why machine learning struggles with causality | VentureBeat

    "In a paper titled "Towards Causal Representation Learning," researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations."

SoundCloud announces overhaul of royalties model to 'fan-powered' system | Soundcloud |...

    "SoundCloud announced on Tuesday it would become the first major streaming service to start directing subscribers' fees only to the artists they listen to, a move welcomed by musicians campaigning for fairer pay. Current practice for streaming services including Spotify, Deezer and Apple is to pool royalty payments and dish them out based on which artists have the most global plays. Many artists and unions have criticised this system, saying it disproportionately favours megastars and leaves very little for musicians further down the pecking order."
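The difference between the pooled ("pro-rata") scheme and the fan-powered scheme comes down to arithmetic, which a small invented example makes concrete. Here one heavy listener plays a megastar 1,000 times while another subscriber plays an indie act only 10 times, with both paying the same flat fee:

```python
from collections import defaultdict

fee = 10.0  # each subscriber's monthly payment
plays = {
    "megafan":   {"megastar": 1000},
    "indie_fan": {"indie_artist": 10},
}

# Pro-rata: pool all fees, then split by share of GLOBAL plays.
pool = fee * len(plays)
totals = defaultdict(int)
for user_plays in plays.values():
    for artist, n in user_plays.items():
        totals[artist] += n
all_plays = sum(totals.values())
pro_rata = {a: pool * n / all_plays for a, n in totals.items()}
# -> megastar ~19.80, indie_artist ~0.20: the indie fan's fee
#    mostly flows to an artist they never played.

# Fan-powered: each subscriber's fee is split only among
# the artists that subscriber actually played.
fan_powered = defaultdict(float)
for user_plays in plays.values():
    user_total = sum(user_plays.values())
    for artist, n in user_plays.items():
        fan_powered[artist] += fee * n / user_total
# -> megastar 10.00, indie_artist 10.00
```

Under pro-rata, payouts track global play share, so heavy rotation of megastars drains everyone's fees toward them; fan-powered keeps each subscriber's money with the artists they listened to.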

ChatGPT, artificial intelligence, and the future of education - Vox

    "The technology certainly has its flaws. While the system is theoretically designed not to cross some moral red lines - it's adamant that Hitler was bad - it's not difficult to trick the AI into sharing advice on how to engage in all sorts of evil and nefarious activities, particularly if you tell the chatbot that it's writing fiction. The system, like other AI models, can also say biased and offensive things. As my colleague Sigal Samuel has explained, an earlier version of GPT generated extremely Islamophobic content, and also produced some pretty concerning talking points about the treatment of Uyghur Muslims in China."

A Brain Scanner Combined with an AI Language Model Can Provide a Glimpse into Your Thou...

    "Now researchers have taken a step forward by combining fMRI's ability to monitor neural activity with the predictive power of artificial intelligence language models. The hybrid technology has resulted in a decoder that can reproduce, with a surprising level of accuracy, the stories that a person listened to or imagined telling in the scanner. The decoder could even guess the story behind a short film that someone watched in the scanner, though with less accuracy."

Could AI save the Amazon rainforest? | Artificial intelligence (AI) | The Guardian

    "The model takes a two-pronged approach. First, it focuses on trends present in the region, looking at geostatistics and historical data from Prodes, the annual government monitoring system for deforestation in the Amazon. Understanding what has happened can help make predictions more precise. When already deforested areas are recent, this indicates gangs are operating in the area, so there's a higher risk that nearby forest will soon be wiped out. Second, it looks at variables that put the brakes on deforestation - land protected by Indigenous and quilombola (descendent of rebel slaves) communities, and areas with bodies of water, or other terrain that doesn't lend itself to agricultural expansion, for instance - and variables that make deforestation more likely, including higher population density, the presence of settlements and rural properties, and higher density of road infrastructure, both legal and illegal."
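A generic way to combine risk-raising variables (nearby recent deforestation, roads, population) with risk-lowering ones (protected land, water bodies) is a logistic score. The sketch below is a stand-in illustration with made-up weights, not the actual model described in the article:

```python
import math

# Hypothetical weights: positive values raise deforestation risk,
# negative values lower it.
WEIGHTS = {
    "recent_deforestation_nearby": 2.0,
    "population_density": 0.8,
    "road_density": 1.2,
    "protected_indigenous_land": -2.5,
    "water_bodies": -1.0,
}
BIAS = -1.0

def deforestation_risk(features):
    """Logistic risk score in (0, 1) from 0/1 indicator features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# An active frontier area scores high (~0.95)...
frontier = deforestation_risk({"recent_deforestation_nearby": 1,
                               "population_density": 1, "road_density": 1,
                               "protected_indigenous_land": 0, "water_bodies": 0})
# ...while a protected, watery area scores low (~0.01).
reserve = deforestation_risk({"recent_deforestation_nearby": 0,
                              "population_density": 0, "road_density": 0,
                              "protected_indigenous_land": 1, "water_bodies": 1})
```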

New Tool Reveals How AI Makes Decisions - Scientific American

    "Most AI programs function like a "black box." "We know exactly what a model does but not why it has now specifically recognized that a picture shows a cat," said computer scientist Kristian Kersting of the Technical University of Darmstadt in Germany to the German-language newspaper Handelsblatt. That dilemma prompted Kersting-along with computer scientists Patrick Schramowski of the Technical University of Darmstadt and Björn Deiseroth, Mayukh Deb and Samuel Weinbach, all at the Heidelberg, Germany-based AI company Aleph Alpha-to introduce an algorithm called AtMan earlier this year. AtMan allows large AI systems such as ChatGPT, Dall-E and Midjourney to finally explain their outputs."

Harvard Study Proves Apple Slows Down old iPhones to Sell Millions of New Models - Anon...

    "People have made the anecdotal observation that their Apple products become much slower right before the release of a new model. Now, a Harvard University study has done what any person with Google Trends could do, and pointed out that Google searches for "iPhone slow" spiked multiple times, just before the release of a new iPhone each time."

Algorithmic cruelty: when Gmail adds your harasser to your speed-dial / Boing Boing

    "It's not that Google wants to do this, it's that they didn't anticipate this outcome, and compounded that omission by likewise omitting a way to overrule the algorithm's judgment. As with other examples of algorithmic cruelty, it's not so much this specific example as what it presages for a future in which more and more of our external reality is determined by models derived from machine learning systems whose workings we're not privy to and have no say in."