
Home / Digital Society / Group items tagged: machine learning bias


dr tech

An A.I. Training Tool Has Been Passing Its Bias to Algorithms for Almost Two Decades | ...

  • ""I consider 'bias' a euphemism," says Brandeis Marshall, PhD, data scientist and CEO of DataedX, an edtech and data science firm. "The words that are used are varied: There's fairness, there's responsibility, there's algorithmic bias, there's a number of terms… but really, it's dancing around the real topic… A dataset is inherently entrenched in systemic racism and sexism.""

Big Data Ethics: racially biased training data versus machine learning / Boing Boing

  • "O'Neill recounts an exercise to improve service to homeless families in New York City, in which data-analysis was used to identify risk-factors for long-term homelessness. The problem, O'Neill describes, was that many of the factors in the existing data on homelessness were entangled with things like race (and its proxies, like ZIP codes, which map extensively to race in heavily segregated cities like New York). Using data that reflects racism in the system to train a machine-learning algorithm whose conclusions can't be readily understood runs the risk of embedding that racism in a new set of policies, these ones scrubbed clean of the appearance of bias with the application of objective-seeming mathematics."

How to Detect Bias in AI - Towards Data Science

  • "Bias in Artificial Intelligence (AI) has been a popular topic over the last few years as AI-solutions have become more ingrained in our daily lives."

I Tried Predictim AI That Scans for 'Risky' Babysitters

  • "The founders of Predictim want to be clear with me: Their product - an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents - is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""

We can reduce gender bias in natural-language AI, but it will take a lot more work | Ve...

  • "However, since machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably end up picking up on human biases that exist in language data itself."

I Know Some Algorithms Are Biased--because I Created One - Scientific American Blog Net...

  • "Creating an algorithm that discriminates or shows bias isn't as hard as it might seem, however. As a first-year graduate student, my advisor asked me to create a machine-learning algorithm to analyze a survey sent to United States physics instructors about teaching computer programming in their courses."

How Artificial Intelligence Perpetuates Gender Imbalance

  • "Ege Gürdeniz: There are two components to Artificial Intelligence (AI) bias. The first is an AI application making biased decisions regarding certain groups of people. This could be ethnicity, religion, gender, and so on. To understand that we first need to understand how AI works and how it's trained to complete specific tasks."

Columbia researchers find white men are the worst at reducing AI bias | VentureBeat

  • "Researchers at Columbia University sought to shed light on the problem by tasking 400 AI engineers with creating algorithms that made over 8.2 million predictions about 20,000 people. In a study accepted by the NeurIPS 2020 machine learning conference, the researchers conclude that biased predictions are mostly caused by imbalanced data but that the demographics of engineers also play a role."

Police trial AI software to help process mobile phone evidence | UK news | The Guardian

  • "Cellebrite, the Israeli-founded and now Japanese-owned company behind some of the software, claims a wider rollout would solve problems over failures to disclose crucial digital evidence that have led to the collapse of a series of rape trials and other prosecutions in the past year. However, the move by police has prompted concerns over privacy and the potential for software to introduce bias into processing of criminal evidence."

Twitter apologises for 'racist' image-cropping algorithm | Twitter | The Guardian

  • "But users began to spot flaws in the feature over the weekend. The first to highlight the issue was PhD student Colin Madland, who discovered the issue while highlighting a different racial bias in the video-conference software Zoom. When Madland, who is white, posted an image of himself and a black colleague who had been erased from a Zoom call after its algorithm failed to recognise his face, Twitter automatically cropped the image to only show Madland."

Artificial intelligence - coming to a government near you soon? | Artificial intelligen...

  • "How that affects systems of governance has yet to be fully explored, but there are cautions. "Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial," says West. The fairness and equity of algorithms are only as good as the data-programming that underlie them. "For the last few decades we've allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values," West says. "We need more oversight.""

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find

  • ""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance." Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."

The coded gaze: biased and understudied facial recognition technology / Boing Boing

  • ""Why isn't my face being detected? We have to look at how we give machines sight," she said in a TED Talk late last year. "Computer vision uses machine-learning techniques to do facial recognition. You create a training set with examples of faces. However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect.""

A Robot, A Recruiter & A REST API Walk Into A Bar… - Peterson Technology Part...

  • "One great way to tell the difference is to ask AI recruiting companies what they use artificial intelligence, machine learning and/or deep learning for. Hopefully the hiring firm can say what it's using the new technology for, and not just that it is. If not, it's time to dig a bit deeper."

The New Age of Hiring: AI Is Changing the Game for Job Seekers - CNET

  • "If you've been job hunting recently, chances are you've interacted with a resume robot, a nickname for an Applicant Tracking System, or ATS. In its most basic form, an ATS acts like an online assistant, helping hiring managers write job descriptions, scan resumes and schedule interviews. As artificial intelligence advances, employers are increasingly relying on a combination of predictive analytics, machine learning and complex algorithms to sort through candidates, evaluate their skills and estimate their performance. Today, it's not uncommon for applicants to be rejected by a robot before they're connected with an actual human in human resources. The job market is ripe for the explosion of AI recruitment tools. Hiring managers are coping with deflated HR budgets while confronting growing pools of applicants, a result of both the economic downturn and the post-pandemic expansion of remote work. As automated software makes pivotal decisions about our employment, usually without any oversight, it's posing fundamental questions about privacy, accountability and transparency."

Don't ask if artificial intelligence is good or fair, ask how it shifts power

  • "When the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people."

What a picture of Alexandria Ocasio-Cortez in a bikini tells us about the disturbing fu...

  • "Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time the image was autocompleted with the man wearing a suit. When you fed the same algorithm a similarly cropped photo of a woman, it auto-completed her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it also automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers had the computer-generated image of AOC in a swimsuit removed from the research paper.)"

Digital assistants like Siri and Alexa entrench gender biases, says UN | Technology | T...

  • "Assigning female genders to digital assistants such as Apple's Siri and Amazon's Alexa is helping entrench harmful gender biases, according to a UN agency."

Technologist Vivienne Ming: 'AI is a human right' | Technology | The Guardian

  • "At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."

Smart lie-detection system to tight... - Information Centre - Research & Innovation - E...

  • "The unique approach to 'deception detection' analyses the micro-gestures of travellers to figure out if the interviewee is lying."
1 - 20 of 27