
Home / Digital Society / Group items tagged system integrity ai


dr tech

Top 10 AI failures of 2016 - TechRepublic

  • "But with all of the successes of AI, it's also important to pay attention to when, and how, it can go wrong, in order to prevent future errors. A recent paper by Roman Yampolskiy, director of the Cybersecurity Lab at the University of Louisville, outlines a history of AI failures which are "directly related to the mistakes produced by the intelligence such systems are designed to exhibit." According to Yampolskiy, these types of failures can be attributed to mistakes during the learning phase or mistakes in the performance phase of the AI system."
dr tech

The AI Delusion: An Unbiased General Purpose Chatbot

  • "Can AI ever be unbiased? As AI systems become more integrated into our daily lives, it's crucial that we understand the complexities of bias and how it impacts these technologies. From chatbots to hiring algorithms, the potential for AI to perpetuate and even amplify existing biases is a genuine concern."
dr tech

'The Gospel': how Israel uses AI to select bombing targets in Gaza | Israel | The Guardian

  • "Sources familiar with how AI-based systems have been integrated into the IDF's operations said such tools had significantly sped up the target creation process. "We prepare the targets automatically and work according to a checklist," a source who previously worked in the target division told +972/Local Call. "It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate." A separate source told the publication the Gospel had allowed the IDF to run a "mass assassination factory" in which the "emphasis is on quantity and not on quality". A human eye, they said, "will go over the targets before each attack, but it need not spend a lot of time on them". For some experts who research AI and international humanitarian law, an acceleration of this kind raises a number of concerns."
dr tech

New AI fake text generator may be too dangerous to release, say creators | Technology |...

  • "The creators of a revolutionary AI system that can write news stories and works of fiction - dubbed "deepfakes for text" - have taken the unusual step of not releasing their research publicly, for fear of potential misuse."
dr tech

The lessons we all must learn from the A-levels algorithm debacle | WIRED UK

  • "More algorithmic decision making and decision augmenting systems will be used in the coming years. Unlike the approach taken for A-levels, future systems may include opaque AI-led decision making. Despite such risks, there remains no clear picture of how public sector bodies - government, local councils, police forces and more - are using algorithmic systems for decision making."
dr tech

Police across the US are training crime-predicting AIs on falsified data - MIT Technolo...

  • "The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department's discriminatory practices."
dr tech

AI learns to write its own code by stealing from other programs | New Scientist

  • "DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software - just like a programmer might. Given a list of inputs and outputs for each code fragment, DeepCoder learned which pieces of code were needed to achieve the desired result overall. "It could allow non-coders to simply describe an idea for a program and let the system build it""
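The program-synthesis idea the excerpt describes can be sketched as a search over input/output examples. Everything below is illustrative: the primitives, examples, and brute-force search are hypothetical stand-ins, and DeepCoder's actual contribution is a learned model that ranks which primitives to try first rather than enumerating blindly.

```python
from itertools import product

# Hypothetical stand-ins for a DSL of reusable code fragments.
PRIMITIVES = {
    "sort": sorted,
    "reverse": lambda xs: list(reversed(xs)),
    "drop_first": lambda xs: xs[1:],
    "double": lambda xs: [x * 2 for x in xs],
}

def synthesize(examples, max_len=3):
    """Search for a pipeline of primitives matching every input/output pair."""
    for length in range(1, max_len + 1):
        for names in product(PRIMITIVES, repeat=length):
            def run(xs, names=names):
                for name in names:
                    xs = PRIMITIVES[name](xs)
                return list(xs)
            if all(run(inp) == out for inp, out in examples):
                return list(names)  # first (shortest) program found
    return None

examples = [([3, 1, 2], [2, 4, 6]), ([5, 4], [8, 10])]
print(synthesize(examples))  # → ['sort', 'double']
```

The search is exponential in program length, which is why guiding it with a model trained on many (examples, program) pairs matters in practice.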
dr tech

A machine-learning system that guesses whether text was produced by machine-learning sy...

  • "Automatically produced texts use language models derived from statistical analysis of vast corpuses of human-generated text to produce machine-generated texts that can be very hard for a human to distinguish from text produced by another human. These models could help malicious actors in many ways, including generating convincing spam, reviews, and comments -- so it's really important to develop tools that can help us distinguish between human-generated and machine-generated texts."
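The detection idea in the excerpt — using a language model's own statistics to spot machine text, which tends to over-use high-probability words — can be illustrated with a toy unigram model. The reference corpus, ranks, and threshold here are all made up for illustration; real detectors score tokens against a large neural language model.

```python
from collections import Counter

# Toy "language model": word-frequency ranks from a tiny reference corpus
# (assumption: stands in for token probabilities from a neural LM).
reference = "the cat sat on the mat and the dog sat on the log".split()
rank = {w: i for i, (w, _) in enumerate(Counter(reference).most_common())}

def mean_rank(text):
    """Average frequency rank of known words; lower = more predictable text."""
    ranks = [rank[w] for w in text.lower().split() if w in rank]
    return sum(ranks) / len(ranks) if ranks else float("inf")

def likely_machine(text, threshold=2.0):
    # Heuristic: sampled text leans on high-frequency (low-rank) words.
    return mean_rank(text) < threshold

print(likely_machine("the sat on the mat"))  # True  (common words only)
print(likely_machine("dog log mat"))         # False (rarer words)
```

A human writer mixes in low-probability word choices, so a suspiciously low mean rank is evidence of sampling from a model — the same intuition, scaled up, behind visual tools that color each token by how likely the model found it.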
dr tech

YouTube will temporarily increase automated content moderation | Engadget

  • "YouTube will rely more on machine learning and less on human reviewers during the coronavirus outbreak. Normally, algorithms detect potentially harmful content and send it to human reviewers for assessment. But these are not normal times, and in an effort to reduce the need for employees and contractors to come into an office, YouTube will allow its automated system to remove some content without human review."
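The pipeline change the excerpt describes — normally every automated flag goes to a human, but some content is now removed automatically — amounts to threshold routing on a classifier's score. This is a minimal sketch of that idea; the function, score ranges, and thresholds are hypothetical, not YouTube's actual system.

```python
def route(score, auto_remove=0.9, needs_review=0.5):
    """Route content by a harm classifier's score in [0, 1] (thresholds hypothetical)."""
    if score >= auto_remove:
        return "remove"        # removed without human review (the temporary policy)
    if score >= needs_review:
        return "human_review"  # normal path: flagged, then assessed by a person
    return "keep"

for s in (0.95, 0.6, 0.1):
    print(s, route(s))  # 0.95 remove / 0.6 human_review / 0.1 keep
```

Lowering `auto_remove` is exactly the trade-off the article notes: fewer reviewer hours, at the cost of more false-positive removals.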