Digital Society: Group items tagged "ai bias"


How to Detect Bias in AI - Towards Data Science - 0 views

  •  
    "Bias in Artificial Intelligence (AI) has been a popular topic over the last few years as AI-solutions have become more ingrained in our daily lives."

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific... - 0 views

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."

How Artificial Intelligence Perpetuates Gender Imbalance - 0 views

  •  
    "Ege Gürdeniz: There are two components to Artificial Intelligence (AI) bias. The first is an AI application making biased decisions regarding certain groups of people. This could be ethnicity, religion, gender, and so on. To understand that we first need to understand how AI works and how it's trained to complete specific tasks."

An A.I. Training Tool Has Been Passing Its Bias to Algorithms for Almost Two Decades | ... - 0 views

  •  
    ""I consider 'bias' a euphemism," says Brandeis Marshall, PhD, data scientist and CEO of DataedX, an edtech and data science firm. "The words that are used are varied: There's fairness, there's responsibility, there's algorithmic bias, there's a number of terms… but really, it's dancing around the real topic… A dataset is inherently entrenched in systemic racism and sexism.""

The AI startup erasing call center worker accents: is it fighting bias - or perpetuatin... - 0 views

  •  
    "But it also raises uncomfortable questions: is AI technology helping marginalized people overcome bias, or just perpetuating the biases that make their lives hard in the first place?"

In facial recognition challenge, top-ranking algorithms show bias against Black women |... - 0 views

  •  
    "The results are unfortunately not surprising - countless studies have shown that facial recognition is susceptible to bias. A paper last fall by University of Colorado, Boulder researchers demonstrated that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time."

Columbia researchers find white men are the worst at reducing AI bias | VentureBeat - 0 views

  •  
    "Researchers at Columbia University sought to shed light on the problem by tasking 400 AI engineers with creating algorithms that made over 8.2 million predictions about 20,000 people. In a study accepted by the NeurIPS 2020 machine learning conference, the researchers conclude that biased predictions are mostly caused by imbalanced data but that the demographics of engineers also play a role."

Artificial intelligence - coming to a government near you soon? | Artificial intelligen... - 0 views

  •  
    "How that effects systems of governance has yet to be fully explored, but there are cautions. "Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial," says West. The fairness and equity of algorithms are only as good as the data-programming that underlie them. "For the last few decades we've allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values," West says. "We need more oversight.""

Technologist Vivienne Ming: 'AI is a human right' | Technology | The Guardian - 0 views

  •  
    "At the heart of the problem that troubles Ming is the training that computer engineers receive and their uncritical faith in AI. Too often, she says, their approach to a problem is to train a neural network on a mass of data and expect the result to work fine. She berates companies for failing to engage with the problem first - applying what is already known about good employees and successful students, for example - before applying the AI."

AI and the American Smile. How AI misrepresents culture through a… | by jenka... - 0 views

  •  
    "AI and the American Smile How AI misrepresents culture through a facial expression."

Protocols, Not Platforms: A Technological Approach to Free Speech | Knight First Amendm... - 1 views

  •  
    "Some have argued for much greater policing of content online, and companies like Facebook, YouTube, and Twitter have talked about hiring thousands to staff up their moderation teams.8 8. April Glaser, Want a Terrible Job? Facebook and Google May Be Hiring,Slate (Jan. 18, 2018), https://slate.com/technology/2018/01/facebook-and-google-are-building-an-army-of-content-moderators-for-2018.html (explaining that major platforms have hired or have announced plans to hire thousands, in some cases more than ten thousand, new content moderators).On the other side of the coin, companies are increasingly investing in more and more sophisticated technology help, such as artificial intelligence, to try to spot contentious content earlier in the process.9 9. Tom Simonite, AI Has Started Cleaning Up Facebook, But Can It Finish?,Wired (Dec. 18, 2018), https://www.wired.com/story/ai-has-started-cleaning-facebook-can-it-finish/.Others have argued that we should change Section 230 of the CDA, which gives platforms a free hand in determining how they moderate (or how they don't moderate).10 10. Gohmert Press Release, supra note 7 ("Social media companies enjoy special legal protections under Section 230 of the Communications Act of 1934, protections not shared by other media. Instead of acting like the neutral platforms they claim to be in order obtain their immunity, these companies have turned Section 230 into a license to potentially defraud and defame with impunity… Since there still appears to be no sincere effort to stop this disconcerting behavior, it is time for social media companies to be liable for any biased and unethical impropriety of their employees as any other media company. If these companies want to continue to act like a biased medium and publish their own agendas to the detriment of others, they need to be held accountable."); Eric Johnson, Silicon Valley's Self-Regulating Days "Probably Should Be" Over, Nancy Pelosi Says, Vox (Apr. 11, 2019), https:/
  •  
    "After a decade or so of the general sentiment being in favor of the internet and social media as a way to enable more speech and improve the marketplace of ideas, in the last few years the view has shifted dramatically-now it seems that almost no one is happy. Some feel that these platforms have become cesspools of trolling, bigotry, and hatred.1 1. Zachary Laub, Hate Speech on Social Media: Global Comparisons, Council on Foreign Rel. (Jun. 7, 2019), https://www.cfr.org/backgrounder/hate-speech-social-media-global-comparisons.Meanwhile, others feel that these platforms have become too aggressive in policing language and are systematically silencing or censoring certain viewpoints.2 2. Tony Romm, Republicans Accused Facebook, Google and Twitter of Bias. Democrats Called the Hearing 'Dumb.', Wash. Post (Jul. 17, 2018), https://www.washingtonpost.com/technology/2018/07/17/republicans-accused-facebook-google-twitter-bias-democrats-called-hearing-dumb/?utm_term=.895b34499816.And that's not even touching on the question of privacy and what these platforms are doing (or not doing) with all of the data they collect."

I Tried Predictim AI That Scans for 'Risky' Babysitters - 0 views

  •  
    "The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""

Police trial AI software to help process mobile phone evidence | UK news | The Guardian - 0 views

  •  
    "Cellebrite, the Israeli-founded and now Japanese-owned company behind some of the software, claims a wider rollout would solve problems over failures to disclose crucial digital evidence that have led to the collapse of a series of rape trials and other prosecutions in the past year. However, the move by police has prompted concerns over privacy and the potential for software to introduce bias into processing of criminal evidence."

AI expert calls for end to UK use of 'racially biased' algorithms | Technology | The Gu... - 0 views

  •  
    "On inbuilt bias in algorithms, Sharkey said: "There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation."

We can reduce gender bias in natural-language AI, but it will take a lot more work | Ve... - 0 views

  •  
    "However, since machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably end up picking up on human biases that exist in language data itself."

The New Age of Hiring: AI Is Changing the Game for Job Seekers - CNET - 0 views

  •  
    "If you've been job hunting recently, chances are you've interacted with a resume robot, a nickname for an Applicant Tracking System, or ATS. In its most basic form, an ATS acts like an online assistant, helping hiring managers write job descriptions, scan resumes and schedule interviews. As artificial intelligence advances, employers are increasingly relying on a combination of predictive analytics, machine learning and complex algorithms to sort through candidates, evaluate their skills and estimate their performance. Today, it's not uncommon for applicants to be rejected by a robot before they're connected with an actual human in human resources. The job market is ripe for the explosion of AI recruitment tools. Hiring managers are coping with deflated HR budgets while confronting growing pools of applicants, a result of both the economic downturn and the post-pandemic expansion of remote work. As automated software makes pivotal decisions about our employment, usually without any oversight, it's posing fundamental questions about privacy, accountability and transparency."
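The excerpt describes an ATS in "its most basic form" scanning resumes before a human sees them. As a rough illustration of why keyword-driven screening can reject candidates mechanically, here is a toy keyword-match scorer; the keywords, sample resume, and scoring scheme are all invented for this sketch and are not from any real ATS product.

```python
# Toy keyword-matching resume screen, illustrating the most basic kind of
# ATS scan described in the article. All inputs are made up.
def score_resume(resume_text, keywords):
    """Fraction of required keywords found in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)

keywords = ["python", "sql", "machine learning"]
resume = "Data analyst with Python and SQL experience."
score = score_resume(resume, keywords)
print(score >= 0.5)  # a hypothetical cutoff: 2 of 3 keywords matched, so True
```

Even this crude sketch shows the failure mode critics raise: a strong candidate who phrases a skill differently (say, "ML" instead of "machine learning") simply scores lower, with no human in the loop to notice.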

More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru | Google | T... - 0 views

  •  
    "The paper, co-authored by researchers inside and outside Google, contended that technology companies could do more to ensure AI systems aimed at mimicking human writing and speech do not exacerbate historical gender biases and use of offensive language, according to a draft copy seen by Reuters."

Generative AI like Midjourney creates images full of stereotypes - Rest of World - 0 views

  •  
    ""Essentially what this is doing is flattening descriptions of, say, 'an Indian person' or 'a Nigerian house' into particular stereotypes which could be viewed in a negative light," Amba Kak, executive director of the AI Now Institute, a U.S.-based policy research organization, told Rest of World. Even stereotypes that are not inherently negative, she said, are still stereotypes: They reflect a particular value judgment, and a winnowing of diversity. Midjourney did not respond to multiple requests for an interview or comment for this story."

AI Inventing Its Own Culture, Passing It On to Humans, Sociologists Find - 0 views

  •  
    ""As expected, we found evidence of a performance improvement over generations due to social learning," the researchers wrote. "Adding an algorithm with a different problem-solving bias than humans temporarily improved human performance but improvements were not sustained in following generations. While humans did copy solutions from the algorithm, they appeared to do so at a lower rate than they copied other humans' solutions with comparable performance." Brinkmann told Motherboard that while they were surprised superior solutions weren't more commonly adopted, this was in line with other research suggesting human biases in decision-making persist despite social learning. Still, the team is optimistic that future research can yield insight into how to amend this."

Twitter apologises for 'racist' image-cropping algorithm | Twitter | The Guardian - 0 views

  •  
    "But users began to spot flaws in the feature over the weekend. The first to highlight the issue was PhD student Colin Madland, who discovered the issue while highlighting a different racial bias in the video-conference software Zoom. When Madland, who is white, posted an image of himself and a black colleague who had been erased from a Zoom call after its algorithm failed to recognise his face, Twitter automatically cropped the image to only show Madland."
Showing items 1 - 20 of 47.