Why machine learning struggles with causality | VentureBeat
-
"In a paper titled "Towards Causal Representation Learning," researchers at the Max Planck Institute for Intelligent Systems, the Montreal Institute for Learning Algorithms (Mila), and Google Research discuss the challenges arising from the lack of causal representations in machine learning models and provide directions for creating artificial intelligence systems that can learn causal representations."
Why is machine learning so hard to explain? Making it clear can help with stakeholder b...
-
Will Knight wrote: "Last year, a strange self-driving car was released onto the quiet roads of Monmouth County, New Jersey… The car didn't follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it. Getting a car to drive this way was an impressive feat. But it's also a bit unsettling, since it isn't completely clear how the car makes its decisions… What if one day it did something unexpected: crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why."
I helped build ByteDance's censorship machine - Protocol - The people, power and politi...
-
"My job was to use technology to make the low-level content moderators' work more efficient. For example, we created a tool that allowed them to throw a video clip into our database and search for similar content. When I was at ByteDance, we received multiple requests from the bases to develop an algorithm that could automatically detect when a Douyin user spoke Uyghur, and then cut off the livestream session. The moderators had asked for this because they didn't understand the language. Streamers speaking ethnic languages and dialects that Mandarin-speakers don't understand would receive a warning to switch to Mandarin."
Mathematicians Boycott Police Work
-
"That can include statistical or machine learning algorithms that rely on police records detailing the time, location, and nature of past crimes in a bid to predict if, when, where, and who may commit future infractions. In theory, this should help authorities use resources more wisely and spend more time policing certain neighborhoods that they think will yield higher crime rates."
This Word Does Not Exist
Online manipulation expert Renée DiResta: 'Conspiracy theories shape our poli...
-
"I started to feel that propaganda had fundamentally changed. The types of actors who could create it and spread it had shifted, and the impact it was having on our society was quite significant, but we weren't using the word. We were using words like "misinformation" or "disinformation", which seemed to be misdiagnoses of the problem. And so I wanted to write a book that asked, in this media ecosystem, what does propaganda look like?"
Will the future of transportation be robotaxis - or your own self-driving car? | Techn...
-
Tenant-screening systems like SafeRent are often used in place of humans as a way to 'avoid engaging' directly with applicants and to pass the blame for a denial to a computer system, said Todd Kaplan, one of the attorneys representing Louis and the class of plaintiffs who sued the company. The property management company told Louis the software alone decided to reject her, but the SafeRent report indicated it was the management company that set the threshold for how high someone needed to score to have their application accepted. Louis and the other named plaintiff alleged SafeRent's algorithm disproportionately scored Black and Hispanic renters who use housing vouchers lower than white applicants. SafeRent has settled. In addition to making a $2.3m payment, the company has agreed that, for five years, it will not use a scoring system or make any kind of recommendation for prospective tenants who use housing vouchers.
Lip-Reading AI Smashes Humans At Interpreting Silent Sentences | Digital Trends
-
"The performance of LipNet compares incredibly favorably to human lipreading experts on GRID corpus, the largest publicly-available sentence-level lipreading dataset. In fact, where human experts got just 52 percent, LipNet scored 93 percent. Its sentence-based approach to lip-reading also smashed the best previous attempt by a machine, which managed 79.6 percent accuracy on the same dataset."
Alexa and Google Home have capacity to predict if couple are struggling and can interru...
-
"AI can pick up missed cues and suggest nudges to bridge the gap in emotional intelligence and communication styles. It can identify optimal ways to discuss common problems and alleviate common misunderstandings based on these different priorities and ways of viewing the world. We could be looking at different gender dynamics in a decade."
Police across the US are training crime-predicting AIs on falsified data - MIT Technolo...
-
"The system used historical data, including arrest records and electronic police reports, to forecast crime and help shape public safety strategies, according to company and city government materials. At no point did those materials suggest any effort to clean or amend the data to address the violations revealed by the DOJ. In all likelihood, the corrupted data was fed directly into the system, reinforcing the department's discriminatory practices."
Researchers criticize AI software that predicts emotions - CNA
School for teenage codebreakers to open in Bletchley Park | Technology | The Guardian
-
"The school will teach cyber skills to some of the UK's most gifted 16- to 19-year-olds. It will select on talent alone, looking in particular for exceptional problem solvers and logic fiends, regardless of wealth or family background, according to Alastair MacWillson, a driving force behind the initiative. "The cyber threat is the real threat facing the UK, and the problem it's causing the UK government and companies is growing exponentially," said MacWillson, chair of Qufaro, a not-for-profit organisation created by a consortium of cybersecurity experts for the purposes of education."
Statistically, self-driving cars are about to kill someone. What happens next? | Scienc...
-
"As the miles grow, the odds shrink. At some point, a car driving autonomously or semi-autonomously will cause a fatal accident. If their performance is remotely comparable to a human's, that moment could come within the next 18-24 months. If so, by the law of averages it will probably involve a Tesla Model 3. Self-driving cars may be about to have their Driscoll moment."