
Digital Society / Group items tagged gender


dr tech

Digital assistants like Siri and Alexa entrench gender biases, says UN | Technology | T... - 0 views

  •  
    "Assigning female genders to digital assistants such as Apple's Siri and Amazon's Alexa is helping entrench harmful gender biases, according to a UN agency."
dr tech

How Artificial Intelligence Perpetuates Gender Imbalance - 0 views

  •  
    "Ege Gürdeniz: There are two components to Artificial Intelligence (AI) bias. The first is an AI application making biased decisions regarding certain groups of people. This could be ethnicity, religion, gender, and so on. To understand that we first need to understand how AI works and how it's trained to complete specific tasks."
adarnir14

Digital technology new source of discrimination against women: Guterres | UN News - 3 views

  • gender-based violence.
  • “Many of the challenges we face today – from conflicts to climate chaos and the cost-of-living crisis – are the result of what is a male-dominated world with a male-dominated culture, taking the key decisions that guide our world,”
  • gender digital divide
  • “Policymakers must create - and in some circumstances must reinforce to create - transformative change by promoting women and girls’ equal rights and opportunities to learn; by dismantling barriers and smashing glass ceilings,” he said.
    • adarnir14
       
      The digital divide and the "glass ceiling" both exacerbate gender inequality and prevent women from achieving their full potential. A diversified strategy may be needed to address these problems, including policy changes to advance gender equality, financial support for education and training, and initiatives (transformative change) to overcome prejudice and stereotypes that support gender inequality.
dr tech

An algorithm to figure out your gender - Boing Boing - 0 views

  •  
    "Twitter claims a 90 percent accuracy rate for the clever techniques it uses to learn the gender of any given user. Glenn Fleishman reports on a the company's disconcerting new analytics tools, the research behind them, and how large a pinch of salt they come with."
dr tech

We can reduce gender bias in natural-language AI, but it will take a lot more work | Ve... - 0 views

  •  
    "However, since machine learning algorithms are what they eat (in other words, they function based on the training data they ingest), they inevitably end up picking up on human biases that exist in language data itself."
dr tech

How the Internet of Things Is Dangerous For Your Kids - 0 views

  •  
    "It happened when Hello Kitty's fan site, SanrioTown.com, had its database accessed in late 2015. Here's the catch - it wasn't hacked. According to security researcher Chris Vickery of Kromtech, no hack was necessary. Vickery stated that pretty much anyone could access, "…first and last names, birthday…, gender, country of origin, email addresses, unsalted SHA-1 password hashes, password hint questions, their corresponding answers…," and more."
dr tech

Discrimination by algorithm: scientists devise test to detect AI bias | Technology | Th... - 0 views

  •  
    "Concerns have been growing about AI's so-called "white guy problem" and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making."
dr tech

UK set to sell sensitive NHS records to commercial companies with no meaningful privacy... - 0 views

  •  
    "The information sharing is on an opt-out basis, so if you don't want your "clinical records, mental health consultations, drug addiction rehabilitation details, dsexual health clinic attendance and abortion procedures" shared, along with your "GP records, HS numbers, post-codes, gender, date of birth," you need to contact your doctor and opt out of the process. "
dr tech

I Tried Predictim AI That Scans for 'Risky' Babysitters - 0 views

  •  
    "The founders of Predictim want to be clear with me: Their product-an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents-is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
dr tech

Female Nobel prize winner deemed not important enough for Wikipedia entry | Science | T... - 0 views

  •  
    "Until around an hour and a half after the award was announced on Tuesday, the Canadian physicist Donna Strickland was not deemed significant enough to merit her own page on the user-edited encyclopedia. The oversight has once again highlighted the marginalization of women in science and gender bias at Wikipedia."
dr tech

Alexa and Google Home have capacity to predict if couple are struggling and can interru... - 0 views

  •  
    ""AI can pick up missed cues and suggest nudges to bridge the gap in emotional intelligence and communication styles. It can identify optimal ways to discuss common problems and alleviate common misunderstandings based on these different priorities and ways of viewing the world. We could be looking at a different gender dynamics in a decade.""
dr tech

Are you being scanned? How facial recognition technology follows you, even as you shop ... - 0 views

  •  
    "Westfield's Smartscreen network was developed by the French software firm Quividi back in 2015. Their discreet cameras capture blurry images of shoppers and apply statistical analysis to identify audience demographics. And once the billboards have your attention they hit record, sharing your reaction with advertisers. Quividi says their billboards can distinguish shoppers' gender with 90% precision, five categories of mood from "very happy to very unhappy" and customers' age within a five-year bracket."
dr tech

Amazon says its facial recognition can now identify fear - 0 views

  •  
    "The tech giant revealed updates to the controversial tool on Monday that include improving the accuracy and functionality of its face analysis features such as identifying gender, emotions and age range."
dr tech

Can facial analysis technology create a child-safe internet? | Identity cards | The Gua... - 0 views

  •  
    "Take Yoti, for instance: the company provides a range of age verification services, partnering with CitizenCard to offer a digital version of its ID, and working with self-service supermarkets to experiment with automatic age recognition of individuals. John Abbott, Yoti's chief business officer, says the system is already as good as a person at telling someone's age from a video of them, and has been tested against a wide range of demographics - including age, race and gender - to ensure that it's not wildly miscategorising any particular group. The company's most recent report claims that a "Challenge 21" policy (blocking under-18s by asking for strong proof of age from people who look under 21) would catch 98% of 17-year-olds, and 99.15% of 16 year olds, for instance."
dr tech

Parents Against Facial Recognition - 0 views

  •  
    "To Lawmakers and School Administrators: As parents and caregivers, there is nothing more important to us than our children's safety. That's why we're calling for an outright ban on the use of facial recognition in schools. We're concerned about this technology spreading to our schools, infringing on our kids' rights and putting them in danger. We don't even know the psychological impacts this constant surveillance can have on our children, but we do know that violating their basic rights will create an environment of mistrust and will make it hard for students to succeed and grow. The images collected by this technology will become a target for those wishing to harm our children, and could put them in physical danger or at risk of having their biometric information stolen or sold. The well-known bias built into this technology will put Black and brown children, girls, and gender noncomforming kids in specific danger. Facial recognition creates more harm than good and should not be used on the children we have been entrusted to protect. It should instead be immediately banned."
dr tech

More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru | Google | T... - 0 views

  •  
    "The paper, co-authored by researchers inside and outside Google, contended that technology companies could do more to ensure AI systems aimed at mimicking human writing and speech do not exacerbate historical gender biases and use of offensive language, according to a draft copy seen by Reuters."
dr tech

Chinese security firm advertises ethnicity recognition technology while facing UK ban |... - 0 views

  •  
    "The brochure also advertised "Optional Demographic Profiling Facial analysis algorithms", including "gender, race/ethnicity, age" profiling. A second, Italian-based, company was also cited on Hikvision's website as offering racial profiling. The company removed both claims from its website following an inquiry from the Guardian, and said the technology had never been sold in the UK. The document, it said, detailed the "potential application of our cameras, with technology built independently by FaiceTech and other partners"."
dr tech

Recognising (and addressing) bias in facial recognition tech - the Gender Shades Audit ... - 0 views

  •  
    "What if facial recognition technology isn't as good at recognising faces as it has sometimes been claimed to be? If the technology is being used in the criminal justice system, and gets the identification wrong, this can cause serious problems for people (see Robert Williams' story in "Facing up to the problems of recognising faces")."
dr tech

We Interviewed the Engineer Google Fired for Saying Its AI Had Come to Life - 0 views

  •  
    "They still have far more advanced technology that they haven't made publicly available yet. Something that does more or less what Bard does could have been released over two years ago. They've had that technology for over two years. What they've spent the intervening two years doing is working on the safety of it - making sure that it doesn't make things up too often, making sure that it doesn't have racial or gender biases, or political biases, things like that. That's what they spent those two years doing. But the basic existence of that technology is years old, at this point. And in those two years, it wasn't like they weren't inventing other things. There are plenty of other systems that give Google's AI more capabilities, more features, make it smarter. The most sophisticated system I ever got to play with was heavily multimodal - not just incorporating images, but incorporating sounds, giving it access to the Google Books API, giving it access to essentially every API backend that Google had, and allowing it to just gain an understanding of all of it."