Digital Society: Group items tagged racist


dr tech

The Folly of DALL-E: How 4chan is Abusing Bing's New Image Model - bellingcat - 0 views

  •  
    "Racists on the notorious troll site 4chan are using a powerful new and free AI-powered image generator service offered by Microsoft to create antisemitic propaganda, according to posts reviewed by Bellingcat. Users of 4chan, which has frequently hosted hate speech and played home to posts by mass shooters, tasked Bing Image Creator to create photo-realistic antisemitic caricatures of Jews and, in recent days, shared images created by the platform depicting Orthodox men preparing to eat a baby, carrying migrants across the US border (the latter a nod to the racist Great Replacement conspiracy theory), and committing the 9/11 attacks."
dr tech

Sadiq Khan received racist abuse after false reports he blocked Queen statue | Sadiq Kh... - 0 views

  •  
    "Sadiq Khan has received a wave of social media abuse, some of it racist, after newspapers incorrectly reported that he might block a new statue of the Queen, days after the London mayor warned that some media outlets were "monetising" hatred."
dr tech

Is an algorithm any less racist than a human? | Technology | The Guardian - 0 views

  •  
    "There's an increasingly popular solution to this problem: why not let an intelligent algorithm make hiring decisions for you? Surely, the thinking goes, a computer is more able to be impartial than a person, and can simply look at the relevant data vectors to select the most qualified people from a heap of applications, removing human bias and making the process more efficient to boot."
dr tech

I Tried Predictim AI That Scans for 'Risky' Babysitters - 0 views

  •  
    "The founders of Predictim want to be clear with me: Their product - an algorithm that scans the online footprint of a prospective babysitter to determine their "risk" levels for parents - is not racist. It is not biased. "We take ethics and bias extremely seriously," Sal Parsa, Predictim's CEO, tells me warily over the phone. "In fact, in the last 18 months we trained our product, our machine, our algorithm to make sure it was ethical and not biased. We took sensitive attributes, protected classes, sex, gender, race, away from our training set. We continuously audit our model. And on top of that we added a human review process.""
dr tech

How white engineers built racist code - and why it's dangerous for black people | Techn... - 0 views

  •  
    "The lack of answers the Jacksonville sheriff's office have provided in Lynch's case is representative of the problems that facial recognition poses across the country. "It's considered an imperfect biometric," said Garvie, who in 2016 created a study on facial recognition software, published by the Center on Privacy and Technology at Georgetown Law, called The Perpetual Line-Up. "There's no consensus in the scientific community that it provides a positive identification of somebody.""
dr tech

Microsoft's Kate Crawford: 'AI is neither artificial nor intelligent' | Artificial inte... - 0 views

  •  
    "Beginning in 2017, I did a project with artist Trevor Paglen to look at how people were being labelled. We found horrifying classificatory terms that were misogynist, racist, ableist, and judgmental in the extreme. Pictures of people were being matched to words like kleptomaniac, alcoholic, bad person, closet queen, call girl, slut, drug addict and far more I cannot say here. ImageNet has now removed many of the obviously problematic people categories - certainly an improvement - however, the problem persists because these training sets still circulate on torrent sites [where files are shared between peers]."
dr tech

There's a new tactic for exposing you to radical content online: the 'slow red-pill' | ... - 0 views

  •  
    "This type of extreme racist post was frequently met with pushback from the community. Common responses included: "people should be treated as individuals not as part of a group" and "the Democrats are the ones who want to divide us up by race". Implicit or explicit gestures of antisemitism were strongly protested by evangelical Christians. Red-pill posts would rarely stay up long. In most cases, they were only intended to appear in one's Instagram feed and to vanish shortly after. The account would then resume posting popular content, wait another week and try it again. This process would continue for months, maybe a year. By posting mainstream conservative content most of the time, these extreme-right groups were able to build up an audience numbering in the range of 30,000 to 40,000, which they could then incrementally expose to radical content."
dr tech

Twitter apologises for 'racist' image-cropping algorithm | Twitter | The Guardian - 0 views

  •  
    "But users began to spot flaws in the feature over the weekend. The first to highlight the issue was PhD student Colin Madland, who discovered the issue while highlighting a different racial bias in the video-conference software Zoom. When Madland, who is white, posted an image of himself and a black colleague who had been erased from a Zoom call after its algorithm failed to recognise his face, Twitter automatically cropped the image to only show Madland."
dr tech

The Bias Embedded in Algorithms | Pocket - 0 views

  •  
    "Algorithms and the data that drive them are designed and created by people, which means those systems can carry biases based on who builds them and how they're ultimately deployed. Safiya Umoja Noble, author of Algorithms of Oppression: How Search Engines Reinforce Racism, offers a curated reading list exploring how technology can replicate and reinforce racist and sexist beliefs, how that bias can affect everything from health outcomes to financial credit to criminal justice, and why data discrimination is a major 21st century challenge."
dr tech

Cory Doctorow: 'Technologists have failed to listen to non-technologists' | Social medi... - 0 views

  •  
    "One of the problems with The Social Dilemma is that it supposes that tech did what it claims it did - that these are actually such incredible geniuses that they figured out how to use machine learning to control minds. And that's the problem - the mind control thing they designed to sell you fidget spinners got hijacked to make your uncle racist. But there's another possibility, which is that their claims are rubbish. They just overpromised in their sales material, and that what actually happened with that growth of monopolies and corruption in the public sphere made people cynical, angry, bitter and violent. In which case the problem isn't that their tools were misused. The problem is that the structures in which those tools were developed are intrinsically corrupt and corrupting."
aren01

Social networks' anti-racism policies belied by users' experience | Race | The Guardian - 1 views

  •  
    ""The abhorrent racist abuse directed at England players last night has absolutely no place on Twitter," the social network said on Monday morning. A Facebook spokesperson said similarly: "No one should have to experience racist abuse anywhere, and we don't want it on Instagram." But the statements bore little relation to the experience of the company's users. On Instagram, where thousands left comments on the pages of Marcus Rashford, Bukayo Saka and Jadon Sancho, supportive users who tried to flag abuse to the platform were surprised by the response."
  •  
    "The world's biggest social networks say racism isn't welcome on their platforms, but a combination of poor enforcement and weak rules have allowed hate to flourish."