Digit_al Society / Group items tagged: human bias


dr tech

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find

  •  
    ""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance." Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."
dr tech

Is an algorithm any less racist than a human? | Technology | The Guardian

  •  
    "There's an increasingly popular solution to this problem: why not let an intelligent algorithm make hiring decisions for you? Surely, the thinking goes, a computer is more able to be impartial than a person, and can simply look at the relevant data vectors to select the most qualified people from a heap of applications, removing human bias and making the process more efficient to boot."
dr tech

In the age of the algorithm, the human gatekeeper is back | Technology | The Guardian

  •  
    "Facebook is mired in a series of controversies about the curation of its news feed, from its broadcasting live killings, to editing out an iconic photo of the Vietnam war, to accusations of political bias. It recently tried to smooth the process out by firing its human editors … only to find the news feed degenerated into a mass of fake and controversial news stories."
dr tech

I Tried Predictim AI That Scans for 'Risky' Babysitters

  •  
    "The founders of Predictim want to be clear with me: Their product, an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents, is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
dr tech

We can reduce gender bias in natural-language AI, but it will take a lot more work | Ve...

  •  
    "However, since machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably end up picking up on human biases that exist in language data itself."
dr tech

Artificial intelligence - coming to a government near you soon? | Artificial intelligen...

  •  
    "How that affects systems of governance has yet to be fully explored, but there are cautions. "Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial," says West. The fairness and equity of algorithms are only as good as the data and programming that underlie them. "For the last few decades we've allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values," West says. "We need more oversight.""
dr tech

Can't read a map or add up? Don't worry, we've always let technology do the boring stuf...

  •  
    "The economist Oren Cass has a compelling answer for these concerns. He says they suffer from bias: the idea that this technological revolution is somehow unique, when we have lived through many epochs of innovation and upheaval. They also overestimate the pace of change (robots are a long way off from competing with humans in many areas) and assume that new kinds of jobs will not be created in the process."
dr tech

The AI Delusion: An Unbiased General Purpose Chatbot

  •  
    "Can AI ever be unbiased? As AI systems become more integrated into our daily lives, it's crucial that we understand the complexities of bias and how it impacts these technologies. From chatbots to hiring algorithms, the potential for AI to perpetuate and even amplify existing biases is a genuine concern."
dr tech

The terrifying, hidden reality of Ridiculously Complicated Algorithms

  •  
    ""Weapons of math destruction" is how the writer Cathy O'Neil describes the nasty and pernicious kinds of algorithms that are not subject to the same challenges that human decision-makers are. Parole algorithms (not Jure's) can bias decisions on the basis of income or (indirectly) ethnicity. Recruitment algorithms can reject candidates on the basis of mistaken identity. In some circumstances, such as policing, they might create feedback loops, sending police into areas with more crime, which causes more crime to be detected."
dr tech

Surveillance Technology: Everything, Everywhere, All at Once

  •  
    "Countries around the world are deploying technologies (like digital IDs, facial recognition systems, GPS devices, and spyware) that are meant to improve governance and reduce crime. But there has been little evidence to back these claims, all while introducing a high risk of exclusion, bias, misidentification, and privacy violations. It's important to note that these impacts are not equal. They fall disproportionately on religious, ethnic, and sexual minorities, migrants and refugees, as well as human rights activists and political dissidents."
dr tech

The New Age of Hiring: AI Is Changing the Game for Job Seekers - CNET

  •  
    "If you've been job hunting recently, chances are you've interacted with a resume robot, a nickname for an Applicant Tracking System, or ATS. In its most basic form, an ATS acts like an online assistant, helping hiring managers write job descriptions, scan resumes and schedule interviews. As artificial intelligence advances, employers are increasingly relying on a combination of predictive analytics, machine learning and complex algorithms to sort through candidates, evaluate their skills and estimate their performance. Today, it's not uncommon for applicants to be rejected by a robot before they're connected with an actual human in human resources. The job market is ripe for the explosion of AI recruitment tools. Hiring managers are coping with deflated HR budgets while confronting growing pools of applicants, a result of both the economic downturn and the post-pandemic expansion of remote work. As automated software makes pivotal decisions about our employment, usually without any oversight, it's posing fundamental questions about privacy, accountability and transparency."
dr tech

Recognising (and addressing) bias in facial recognition tech - the Gender Shades Audit ...

  •  
    "What if facial recognition technology isn't as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams' story in "Facing up to the problems of recognising faces")."
dr tech

More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru | Google | T...

  •  
    "The paper, co-authored by researchers inside and outside Google, contended that technology companies could do more to ensure AI systems aimed at mimicking human writing and speech do not exacerbate historical gender biases and use of offensive language, according to a draft copy seen by Reuters."
dr tech

Technologist Vivienne Ming: 'AI is a human right' | Technology | The Guardian

  •  
    "At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."