
Digit_al Society / Group items tagged: people and machines, machine learning bias


dr tech

I Tried Predictim AI That Scans for 'Risky' Babysitters

  •  "The founders of Predictim want to be clear with me: Their product - an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents - is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""

I Know Some Algorithms Are Biased--because I Created One - Scientific American Blog Net...

  •  "Creating an algorithm that discriminates or shows bias isn't as hard as it might seem, however. As a first-year graduate student, my advisor asked me to create a machine-learning algorithm to analyze a survey sent to United States physics instructors about teaching computer programming in their courses."

How Artificial Intelligence Perpetuates Gender Imbalance

  •  "Ege Gürdeniz: There are two components to Artificial Intelligence (AI) bias. The first is an AI application making biased decisions regarding certain groups of people. This could be ethnicity, religion, gender, and so on. To understand that we first need to understand how AI works and how it's trained to complete specific tasks."

A Robot, A Recruiter & A REST API Walk Into A Bar… - Peterson Technology Part...

  •  "One great way to tell the difference is to ask AI recruiting companies what they use artificial intelligence, machine learning and/or deep learning for. Hopefully the hiring firm can [explain] what it's using the new technology for and not just that it is. If not, it's time to dig a bit deeper."

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find

  •  ""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance." Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."

Don't ask if artificial intelligence is good or fair, ask how it shifts power

  •  "When the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people."

Digital assistants like Siri and Alexa entrench gender biases, says UN | Technology | T...

  •  "Assigning female genders to digital assistants such as Apple's Siri and Amazon's Alexa is helping entrench harmful gender biases, according to a UN agency."

Microsoft's Kate Crawford: 'AI is neither artificial nor intelligent' | Artificial inte...

  •  "Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories - certainly an improvement - however, the problem persists because these training sets still circulate on torrent sites [where files are shared between peers]."

What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing fu...

  •  "Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time the image was autocompleted with the man wearing a suit. When they fed the same algorithm a similarly cropped photo of a woman, it auto-completed her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it also automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers had the computer-generated image of AOC in a swimsuit removed from the research paper.)"