"The robot was named Ai-Da after the 19th century mathematician Ada Lovelace. According to its creators, it is capable of drawing real people using its camera eye and a pencil in hand."
"Qoves founder Shafee Hassan claimed to MIT Technology Review that beauty scoring is widespread; social media platforms use it to identify attractive faces and give them more attention."
""I consider 'bias' a euphemism," says Brandeis Marshall, PhD, data scientist and CEO of DataedX, an edtech and data science firm. "The words that are used are varied: There's fairness, there's responsibility, there's algorithmic bias, there's a number of terms… but really, it's dancing around the real topic… A dataset is inherently entrenched in systemic racism and sexism.""
"In 2015, a research group fed an AI system called Deep Patient health and medical data from some 700,000 people, and tested whether it could predict diseases. It could, but Deep Patient provides no explanation for the basis of a diagnosis, and the researchers have no idea how it comes to its conclusions. A doctor either can either trust or ignore the computer, but that trust will remain blind."
"But historically, these tools have been put into use only after a rigorous peer review of the raw data and statistical analyses used to develop them. Epic's Deterioration Index, on the other hand, remains proprietary despite its widespread deployment. Although physicians are provided with a list of the variables used to calculate the index and a rough estimate of each variable's impact on the score, we aren't allowed under the hood to evaluate the raw data and calculations. "
"Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories - certainly an improvement - however, the problem persists because these training sets still circulate on torrent sites [where files are shared between peers]."
"We need to look at the nose to tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just "raw" material, reused across thousands of projects."
Sounds a bit extreme just to make sure no one can log on to your laptop or smartphone, but a team of researchers from Stanford and Northwestern universities as well as SRI International is nonetheless experimenting at the intersection of computer science, cognitive science and neuroscience to combat identity theft and shore up cybersecurity - by taking advantage of the human brain's innate abilities to learn and recognize patterns.
"Over the Bridge hopes the project emphasizes exactly how much work goes into creating AI music. "There's an inordinate amount of human hands at the beginning, middle and end to create something like this," explained Michael Scriven, a rep for Lemmon Entertainment whose CEO is on Over the Bridge's board of directors.
Scriven added, "A lot of people may think [AI] is going to replace musicians at some point, but at this point, the number of humans that are required just to get to a point where a song is listenable is actually quite significant.""
"
The children's charity NSPCC has called on Facebook to resume a programme that scanned private messages for indications of child abuse, with new data suggesting that almost half of referrals for child sexual abuse material are now falling below the radar.
Recent changes to the European commission's e-privacy directive, which are being finalised, require messaging services to follow strict new restrictions on the privacy of message data. Facebook blamed that directive for shutting down the child protection operation, but the children's charity says Facebook has gone too far in reading the law as banning it entirely."
"Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time the image was autocompleted with the man wearing a suit. When you fed the same algorithm a similarly cropped photo of a woman, it auto-completed her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it also automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers had the computer-generated image of AOC in a swimsuit removed from the research paper.)"
"We seem to be safe for the moment, however - the MIT team said it has no interest in taking artificially intelligent horror machines to the next level or exploring their darker possibilities. "We wanted to playfully commemorate humanity's fear of AI, which is a growing theme in popular culture, but we currently have no plans to use the immense power of AI to scare people further," Yanardag said. "The world is already pretty scary!""
" A new AI pair programmer that helps you write better code. It helps you quickly discover alternative ways to solve problems, write tests, and explore new APIs without having to tediously tailor a search for answers on the internet. As you type, it adapts to the way you write code - to help you complete your work faster.
In other words, Copilot will sit on your computer and do a chunk of your coding work for you. There's a long-running joke in the coding community that a substantial portion of the actual work of programming is searching online for people who've solved the same problems as you, and copying their code into your program. Well, now there's an AI that will do that part for you."
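In practice the interaction looks something like this: you type a comment and a function signature, and Copilot proposes a body inline, which you accept or reject. A minimal sketch of that flow - the suggested body here illustrates the kind of completion the tool produces and is not captured Copilot output:

```python
import re

# Developer types the comment and signature...
def is_valid_email(address: str) -> bool:
    # ...and the assistant proposes a body like the one below, drawn
    # from patterns in public code. (Illustrative, not real Copilot output.)
    pattern = r"^[\w.+-]+@[\w-]+\.[\w.-]+$"
    return re.match(pattern, address) is not None

print(is_valid_email("ada@example.com"))  # True
print(is_valid_email("not an email"))     # False
```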
"Activists from Fight for the Future prowled the halls of Congress in "jumpsuits with phone strapped to their heads conducting live facial recognition surveillance" to "show why this tech should be banned.""
"In just the past few months, three cities - San Francisco, Oakland, and Somerville, Massachusetts - have passed laws to ban government use of the controversial technology, which analyzes pictures or live video of human faces in order to identify them. Cambridge, Massachusetts, is also moving toward a government ban. Congress recently held two oversight hearings on the topic and there are at least four pieces of current federal legislation to limit the technology in some way. "
"An artificial intelligence program has been developed that is better at spotting breast cancer in mammograms than expert radiologists.
The AI outperformed the specialists by detecting cancers that the radiologists missed in the images, while ignoring features they falsely flagged as possible tumours."
"Assigning female genders to digital assistants such as Apple's Siri and Amazon's Alexa is helping entrench harmful gender biases, according to a UN agency."
"Liverpool are using incredible data science during matches, and effects are extraordinary
Liverpool's sport-leading data science is providing Jürgen Klopp with the tools to change football matches as they're happening."