
Digital Society / Group items tagged: bias, ITGS, AI, technology

Warning over use in UK of unregulated AI chatbots to create social care plans | Artific...

  •  
    "A pilot study by academics at the University of Oxford found some care providers had been using generative AI chatbots such as ChatGPT and Bard to create care plans for people receiving care. That presents a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study. "If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model," Green said. "That personal data could be generated and revealed to somebody else." She said carers might act on faulty or biased information and inadvertently cause harm, and an AI-generated care plan might be substandard."
The AI startup erasing call center worker accents: is it fighting bias - or perpetuatin...

  •  
    "But it also raises uncomfortable questions: is AI technology helping marginalized people overcome bias, or just perpetuating the biases that make their lives hard in the first place?"
Artificial intelligence - coming to a government near you soon? | Artificial intelligen...

  •  
    "How that effects systems of governance has yet to be fully explored, but there are cautions. "Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial," says West. The fairness and equity of algorithms are only as good as the data-programming that underlie them. "For the last few decades we've allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values," West says. "We need more oversight.""
AI expert calls for end to UK use of 'racially biased' algorithms | Technology | The Gu...

  •  
    "On inbuilt bias in algorithms, Sharkey said: "There are so many biases happening now, from job interviews to welfare to determining who should get bail and who should go to jail. It is quite clear that we really have to stop using decision algorithms, and I am someone who has always been very light on regulation and always believed that it stifles innovation."
More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru | Google | T...

  •  
    "The paper, co-authored by researchers inside and outside Google, contended that technology companies could do more to ensure AI systems aimed at mimicking human writing and speech do not exacerbate historical gender biases and use of offensive language, according to a draft copy seen by Reuters."
'Conditioning an entire society': the rise of biometric data technology | Biometrics | ...

  •  
    "In each case, biometric data has been harnessed to try to save time and money. But the growing use of our bodies to unlock areas of the public and private sphere has raised questions about everything from privacy to data security and racial bias."
This AI-powered app will tell you if you're beautiful - and reinforce biases, too | Art...

  •  
    "Qoves founder Shafee Hassan claimed to MIT Technology Review that beauty scoring is widespread; social media platforms use it to identify attractive faces and give them more attention."
Twitter apologises for 'racist' image-cropping algorithm | Twitter | The Guardian

  •  
    "But users began to spot flaws in the feature over the weekend. The first to highlight the issue was PhD student Colin Madland, who discovered the issue while highlighting a different racial bias in the video-conference software Zoom. When Madland, who is white, posted an image of himself and a black colleague who had been erased from a Zoom call after its algorithm failed to recognise his face, Twitter automatically cropped the image to only show Madland."
Microsoft's Kate Crawford: 'AI is neither artificial nor intelligent' | Artificial inte...

  •  
    "Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories - certainly an improvement - however, the problem persists because these training sets still circulate on torrent sites [where files are shared between peers]."
Discrimination by algorithm: scientists devise test to detect AI bias | Technology | Th...

  •  
    "Concerns have been growing about AI's so-called "white guy problem" and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making."
Surveillance Technology: Everything, Everywhere, All at Once

  •  
    "Countries around the world are deploying technologies-like digital IDs, facial recognition systems, GPS devices, and spyware-that are meant to improve governance and reduce crime. But there has been little evidence to back these claims, all while introducing a high risk of exclusion, bias, misidentification, and privacy violations. It's important to note that these impacts are not equal. They fall disproportionately on religious, ethnic, and sexual minorities, migrants and refugees, as well as human rights activists and political dissidents."
Google pauses AI-generated images of people after ethnicity criticism | Artificial inte...

  •  
    "Google has put a temporary block on its new artificial intelligence model producing images of people after it portrayed German second world war soldiers and Vikings as people of colour. The tech company said it would stop its Gemini model generating images of people after social media users posted examples of images generated by the tool that depicted some historical figures - including popes and the founding fathers of the US - in a variety of ethnicities and genders."
A beauty contest was judged by AI and the robots didn't like dark skin | Technology | T...

  •  
    The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.
Digital assistants like Siri and Alexa entrench gender biases, says UN | Technology | T...

  •  
    "Assigning female genders to digital assistants such as Apple's Siri and Amazon's Alexa is helping entrench harmful gender biases, according to a UN agency."
The coded gaze: biased and understudied facial recognition technology / Boing Boing

  •  
    " "Why isn't my face being detected? We have to look at how we give machines sight," she said in a TED Talk late last year. "Computer vision uses machine-learning techniques to do facial recognition. You create a training set with examples of faces. However, if the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect.""