
Public Service Internet: group items tagged "concerns"


Ian Forrester

Not OK, Google: Chromium voice extension pulled after spying concerns | Ars Technica - 0 views

  • "Google agreed that a closed source module wasn't a good fit for an open source browser."
Ian Forrester

Samsung rejects concern over 'Orwellian' privacy policy | Technology | The Guardian - 1 views

  • Smart TV voice recognition software could transmit 'personal or other sensitive information' to a third party, Samsung's policy warns
Ian Forrester

Scientists Are Just as Confused About the Ethics of Big-Data Research as You | WIRED - 0 views

  • "When a rogue researcher last week released 70,000 OkCupid profiles, complete with usernames and sexual preferences, people were pissed. When Facebook researchers manipulated stories appearing in Newsfeeds for a mood contagion study in 2014, people were really pissed. OkCupid filed a copyright claim to take down the dataset; the journal that published Facebook's study issued an "expression of concern." Outrage has a way of shaping ethical boundaries. We learn from mistakes."
Ian Forrester

[1607.06520] Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings - 0 views

  • The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent. This raises concerns because their widespread use, as we describe, often tends to amplify these biases. Geometrically, gender bias is first shown to be captured by a direction in the word embedding. Second, gender neutral words are shown to be linearly separable from gender definition words in the word embedding. Using these properties, we provide a methodology for modifying an embedding to remove gender stereotypes, such as the association between the words receptionist and female, while maintaining desired associations such as between the words queen and female. We define metrics to quantify both direct and indirect gender biases in embeddings, and develop algorithms to "debias" the embedding. Using crowd-worker evaluation as well as standard benchmarks, we empirically demonstrate that our algorithms significantly reduce gender bias in embeddings while preserving its useful properties such as the ability to cluster related concepts and to solve analogy tasks. The resulting embeddings can be used in applications without amplifying gender bias.
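The debiasing step the abstract above describes amounts to removing the component of a word vector that lies along a learned gender direction. Below is a minimal NumPy sketch of that idea, not the authors' released code: the gender_direction and neutralize helpers, the single ("she", "he") pair, and the tiny random vectors are all assumptions made for illustration; the paper derives the direction from a PCA over several definitional pairs.

```python
import numpy as np

def gender_direction(pairs, vec):
    # Average the difference vectors of definitional pairs such as ("she", "he").
    # The paper uses PCA over several pairs; a mean difference is a simplified stand-in.
    diffs = [vec[a] - vec[b] for a, b in pairs]
    g = np.mean(diffs, axis=0)
    return g / np.linalg.norm(g)

def neutralize(word, g, vec):
    # Project the word's vector onto the gender direction and subtract that
    # component, keeping only the part orthogonal to g.
    v = vec[word]
    v_bias = np.dot(v, g) * g
    v_debiased = v - v_bias
    return v_debiased / np.linalg.norm(v_debiased)

# Illustrative usage with made-up 4-dimensional vectors (real embeddings are ~300-d).
rng = np.random.default_rng(0)
vec = {w: rng.normal(size=4) for w in ["she", "he", "receptionist"]}
g = gender_direction([("she", "he")], vec)
vec["receptionist"] = neutralize("receptionist", g, vec)
print(np.dot(vec["receptionist"], g))  # ~0: the gender component has been removed
```

After neutralizing, "receptionist" is equidistant from "she" and "he" along the learned direction, which is the effect the paper's direct-bias metric measures.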
Ian Forrester

Troy Hunt: Data from connected CloudPets teddy bears leaked and ransomed, exposing kids' voice messages - 0 views

  • "Only a couple of weeks ago, there were a lot of news headlines about how Germany had banned an internet-connected doll called "Cayla" over fears hackers could target children. One of their primary concerns was the potential risk to the privacy of children:"