Digital Society: Group items tagged "bias people"

Student proves Twitter algorithm 'bias' toward lighter, slimmer, younger faces | Twitte...

  • "Twitter's image cropping algorithm prefers younger, slimmer faces with lighter skin, an investigation into algorithmic bias at the company has found. The finding, while embarrassing for the company, which had previously apologised to users after reports of bias, marks the successful conclusion of Twitter's first ever "algorithmic bug bounty"."

The Bias Embedded in Algorithms | Pocket

  • "Algorithms and the data that drive them are designed and created by people, which means those systems can carry biases based on who builds them and how they're ultimately deployed. Safiya Umoja Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism, offers a curated reading list exploring how technology can replicate and reinforce racist and sexist beliefs, how that bias can affect everything from health outcomes to financial credit to criminal justice, and why data discrimination is a major 21st century challenge."

The AI startup erasing call center worker accents: is it fighting bias - or perpetuatin...

  • "But it also raises uncomfortable questions: is AI technology helping marginalized people overcome bias, or just perpetuating the biases that make their lives hard in the first place?"

How Artificial Intelligence Perpetuates Gender Imbalance

  • "Ege Gürdeniz: There are two components to Artificial Intelligence (AI) bias. The first is an AI application making biased decisions regarding certain groups of people. This could be ethnicity, religion, gender, and so on. To understand that we first need to understand how AI works and how it's trained to complete specific tasks."

Is an algorithm any less racist than a human? | Technology | The Guardian

  • "There's an increasingly popular solution to this problem: why not let an intelligent algorithm make hiring decisions for you? Surely, the thinking goes, a computer is more able to be impartial than a person, and can simply look at the relevant data vectors to select the most qualified people from a heap of applications, removing human bias and making the process more efficient to boot."

Columbia researchers find white men are the worst at reducing AI bias | VentureBeat

  • "Researchers at Columbia University sought to shed light on the problem by tasking 400 AI engineers with creating algorithms that made over 8.2 million predictions about 20,000 people. In a study accepted by the NeurIPS 2020 machine learning conference, the researchers conclude that biased predictions are mostly caused by imbalanced data but that the demographics of engineers also play a role."

I Tried Predictim AI That Scans for 'Risky' Babysitters

  • "The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
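
The attribute-stripping step Predictim's CEO describes is often called "fairness through unawareness", and critics note it does not remove bias on its own. A minimal sketch of why, using hypothetical data and field names (nothing here is Predictim's actual schema or model):

```python
# Sketch of "fairness through unawareness": drop protected attributes
# from a training set. All records and field names are hypothetical.

RECORDS = [
    {"group": "A", "zip": "10001", "flagged": 1},
    {"group": "A", "zip": "10001", "flagged": 1},
    {"group": "B", "zip": "20002", "flagged": 0},
    {"group": "B", "zip": "20002", "flagged": 0},
]

PROTECTED = {"group"}

def strip_protected(records, protected=PROTECTED):
    """Remove protected attributes before training, as described."""
    return [{k: v for k, v in r.items() if k not in protected} for r in records]

cleaned = strip_protected(RECORDS)
assert all("group" not in r for r in cleaned)

# The catch: a correlated proxy (here, zip code) can still encode the
# protected attribute, so a model can learn the same biased pattern.
by_zip = {}
for raw, clean in zip(RECORDS, cleaned):
    by_zip.setdefault(clean["zip"], set()).add(raw["group"])

# Each zip maps to exactly one group: the "removed" attribute is recoverable.
print(all(len(groups) == 1 for groups in by_zip.values()))  # True
```

In this toy data the proxy is perfect; in real data the correlation is weaker but the same leakage applies, which is why auditing outcomes (as the quote also mentions) matters more than deleting columns.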

Social media's enduring effect on adolescent life satisfaction | PNAS

  • "Scientists must embrace circumspection, transparency, and robust ways of working that safeguard against bias and analytical flexibility. Doing so will provide parents and policymakers with the reliable insights they need on a topic most often characterized by unfounded media hype."

AI expert calls for end to UK use of 'racially biased' algorithms | Technology | The Gu...

  • "On inbuilt bias in algorithms, Sharkey said: "There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation."

I Know Some Algorithms Are Biased--because I Created One - Scientific American Blog Net...

  • "Creating an algorithm that discriminates or shows bias isn't as hard as it might seem, however. As a first-year graduate student, my advisor asked me to create a machine-learning algorithm to analyze a survey sent to United States physics instructors about teaching computer programming in their courses."

'Conditioning an entire society': the rise of biometric data technology | Biometrics | ...

  • "In each case, biometric data has been harnessed to try to save time and money. But the growing use of our bodies to unlock areas of the public and private sphere has raised questions about everything from privacy to data security and racial bias."

Content Moderation is a Dead End. - by Ravi Iyer

  • "One of the many policy-based projects I worked on at Meta was Engagement Bait, which is defined as "a tactic that urges people to interact with Facebook posts through likes, shares, comments, and other actions in order to artificially boost engagement and get greater reach." Accordingly, "Posts and Pages that use this tactic will be demoted." To do this, "models are built off of certain guidelines" trained using "hundreds of thousands of posts" that "teams at Facebook have reviewed and categorized." The examples provided are obvious (eg. a post saying "comment "Yes" if you love rock as much as I do"), but the problem is that there will always be far subtler ways to get people to engage with something artificially. As an example, psychology researchers have a long history of studying negativity bias, which has been shown to operate across a wide array of domains, and to lead to increased online engagement."
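
The pipeline the quote describes - human-labelled posts training a model whose positive predictions get demoted - can be sketched with a toy bag-of-words classifier. All training examples and function names here are illustrative, not Meta's actual system; the point is the limitation the author raises, that subtler bait evades any fixed set of learned signals:

```python
# Toy version of the described pipeline: humans label posts, a model
# learns from the labels, flagged posts would be demoted. Purely
# illustrative; not Meta's classifier.
from collections import Counter

LABELLED = [  # (post text, 1 = engagement bait, 0 = benign)
    ("comment yes if you love rock as much as i do", 1),
    ("like and share to win a free prize", 1),
    ("tag a friend who needs to see this", 1),
    ("our quarterly results are published below", 0),
    ("photos from the family trip to the coast", 0),
    ("the recipe calls for two cups of flour", 0),
]

def train(examples):
    """Count word frequencies per class (a crude naive-Bayes stand-in)."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def is_engagement_bait(text, counts):
    """Score a post by which class its words appeared in more often."""
    words = text.split()
    bait = sum(counts[1][w] for w in words)
    benign = sum(counts[0][w] for w in words)
    return bait > benign

model = train(LABELLED)

# Obvious bait, built from seen vocabulary, is caught:
print(is_engagement_bait("like and share this with a friend", model))        # True
# Subtler bait using unseen vocabulary slips through - the gap the post
# argues makes content moderation a dead end:
print(is_engagement_bait("unbelievable scenes at the stadium tonight", model))  # False
```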

Google pauses AI-generated images of people after ethnicity criticism | Artificial inte...

  • "Google has put a temporary block on its new artificial intelligence model producing images of people after it portrayed German second world war soldiers and Vikings as people of colour. The tech company said it would stop its Gemini model generating images of people after social media users posted examples of images generated by the tool that depicted some historical figures - including popes and the founding fathers of the US - in a variety of ethnicities and genders."

Recognising (and addressing) bias in facial recognition tech - the Gender Shades Audit ...

  • "What if facial recognition technology isn't as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams' story in "Facing up to the problems of recognising faces")."

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find

  • ""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance." Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."

Rite Aid facial recognition misidentified Black, Latino and Asian people as 'likely' sh...

  • "Surveillance systems incorrectly and without customer consent marked shoppers as 'persons of interest', an FTC settlement says. Rite Aid used facial recognition systems to identify shoppers that were previously deemed "likely to engage" in shoplifting without customer consent and misidentified people - particularly women and Black, Latino or Asian people - on "numerous" occasions, according to a new settlement with the Federal Trade Commission. As part of the settlement, Rite Aid has been forbidden from deploying facial recognition technology in its stores for five years."

How Bias Ruins A.I. - OneZero

  • "To what extent do the decisions of these types of algorithms reflect the conscious or unconscious biases of their creators?"

Microsoft's Kate Crawford: 'AI is neither artificial nor intelligent' | Artificial inte...

  • "Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories - certainly an improvement - however, the problem persists because these training sets still circulate on torrent sites [where files are shared between peers]."

Don't ask if artificial intelligence is good or fair, ask how it shifts power

  • "When the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people."

A beauty contest was judged by AI and the robots didn't like dark skin | Technology | T...

  • "The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results."