"But users began to spot flaws in the feature over the weekend. The first to highlight the issue was PhD student Colin Madland, who discovered it while highlighting a different racial bias in the video-conference software Zoom.
When Madland, who is white, posted an image of himself and a black colleague who had been erased from a Zoom call after its algorithm failed to recognise his face, Twitter automatically cropped the image to only show Madland."
"In each case, biometric data has been harnessed to try to save time and money. But the growing use of our bodies to unlock areas of the public and private sphere has raised questions about everything from privacy to data security and racial bias."
"One great way to tell the difference is to ask AI recruiting companies what they use artificial intelligence, machine learning and/or deep learning for. Hopefully the hiring firm can explain what it's using the new technology for, and not just that it is using it. If not, it's time to dig a bit deeper."
"When the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people."
"Qoves founder Shafee Hassan claimed to MIT Technology Review that beauty scoring is widespread; social media platforms use it to identify attractive faces and give them more attention."
"Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories - certainly an improvement - however, the problem persists because these training sets still circulate on torrent sites [where files are shared between peers]."
"Researchers fed these algorithms (which function like autocomplete, but for images) pictures of a man cropped below his neck: 43% of the time the image was autocompleted with the man wearing a suit. When you fed the same algorithm a similarly cropped photo of a woman, it auto-completed her wearing a low-cut top or bikini a massive 53% of the time. For some reason, the researchers gave the algorithm a picture of the Democratic congresswoman Alexandria Ocasio-Cortez and found that it also automatically generated an image of her in a bikini. (After ethical concerns were raised on Twitter, the researchers had the computer-generated image of AOC in a swimsuit removed from the research paper.)"
"Some Facebook users who recently watched a Daily Mail video depicting Black men reported seeing a label from Facebook asking if they were interested in watching more videos about "primates."
The label appeared in bold text under the video, stating "Keep seeing videos about Primates?" next to "Yes" and "Dismiss" buttons that users could click to answer the prompt. It's part of an AI-powered Facebook process that attempts to gather information on users' personal interests in order to deliver relevant content into their News Feed."
"Applying to some of the most common customer and food service jobs in the country now requires a long and bizarre personality quiz featuring blue humanoid aliens, which tells employers how potential hires rank in terms of "agreeableness" and "emotional stability."
If you've applied to a job at FedEx, McDonald's, or Darden Restaurants (the company that operates multiple chains including Olive Garden) you might have already encountered this quiz, as all these companies and others are clients of Paradox.ai, the company which runs the test and helps them with other recruiting tasks."
"Concerns have been growing about AI's so-called "white guy problem" and now scientists have devised a way to test whether an algorithm is introducing gender or racial biases into decision-making."
"In October, American teachers prevailed in a lawsuit against their school district over a computer program that assessed their performance.
The system rated teachers in Houston by comparing their students' test scores against state averages. Those with high ratings won praise and even bonuses. Those who fared poorly faced the sack.
The program did not please everyone. Some teachers felt that the system marked them down without good reason. But they had no way of checking if the program was fair or faulty: the company that built the software, the SAS Institute, regards its algorithm as a trade secret and would not disclose its workings."
"A Taiwanese American model says a well-known fashion designer uploaded a digitally altered runway photo that made her appear white.
In a TikTok about the incident that has been viewed 1.8m times in the last week, Shereen Wu says Michael Costello, a designer who has worked with Beyoncé, Jennifer Lopez, and Celine Dion, posted a photo to his Instagram from a recent Los Angeles fashion show. The photo depicts Wu in the slinky black ballgown that she walked the runway in - but her face has been changed, made to appear as if she is a white woman."
"Google has put a temporary block on its new artificial intelligence model producing images of people after it portrayed German second world war soldiers and Vikings as people of colour.
The tech company said it would stop its Gemini model generating images of people after social media users posted examples of images generated by the tool that depicted some historical figures - including popes and the founding fathers of the US - in a variety of ethnicities and genders."
"Countries around the world are deploying technologies, like digital IDs, facial recognition systems, GPS devices, and spyware, that are meant to improve governance and reduce crime. But there is little evidence to back these claims, and the technologies introduce a high risk of exclusion, bias, misidentification, and privacy violations.
It's important to note that these impacts are not equal. They fall disproportionately on religious, ethnic, and sexual minorities, migrants and refugees, as well as human rights activists and political dissidents."
"More algorithmic decision making and decision augmenting systems will be used in the coming years. Unlike the approach taken for A-levels, future systems may include opaque AI-led decision making. Despite such risks, there remains no clear picture of how public sector bodies - government, local councils, police forces and more - are using algorithmic systems for decision making."
"In 2019, Genevieve (co-author of this article) and her husband applied for the same credit card. Despite having a slightly better credit score and the same income, expenses, and debt as her husband, the credit card company set her credit limit at almost half her husband's."
The ensuing controversy has sparked renewed debates about the ways in which algorithms can perpetuate biases, yielding unintended and often offensive results.
"Apart from biases in the training databases, it's hard to know how well face-recognition systems actually perform in the real world, in spite of recent gains. Anil Jain, a professor of computer science at Michigan State University who has worked on face recognition for more than thirty years, told me, "Most of the testing on the private vendors' products is done in a laboratory environment under controlled settings. In real practice, you're walking around in the streets of New York. It's a cold winter day, you have a scarf around your face, a cap, maybe your coat is pulled up so your chin is partially hidden, the illumination may not be the most favorable, and the camera isn't capturing a frontal view.""