Instructional & Media Services at Dickinson College: Group items tagged "bias"

Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still... - 0 views

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
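The annotations above note that models face “the mathematical impossibility of treating all categories equally” and “must somehow trade accuracy for fairness.” Below is a minimal, purely illustrative Python sketch of that tension; it is not taken from the bookmarked article, and every group size, score distribution, and threshold in it is an invented assumption. The point it shows: when one group dominates the data, the single decision threshold that maximizes overall accuracy can leave the smaller group with much worse error rates.

```python
# Illustrative sketch only: invented data showing how a threshold tuned for
# overall accuracy can produce unequal error rates across demographic groups.
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, pos_shift):
    """Simulate ground-truth labels (0/1) and model scores for one group."""
    labels = rng.integers(0, 2, n)                     # ground-truth labels
    scores = rng.normal(0.0, 1.0, n) + pos_shift * labels
    return labels, scores

# Majority group: many samples, well-separated scores (the model fits them well).
maj_y, maj_s = make_group(9000, pos_shift=2.0)
# Minority group: few samples, poorly separated scores.
min_y, min_s = make_group(1000, pos_shift=0.8)

y = np.concatenate([maj_y, min_y])
s = np.concatenate([maj_s, min_s])

# Pick the single threshold that maximizes *overall* accuracy on the pooled data.
thresholds = np.linspace(-2.0, 4.0, 301)
accuracies = [((s > t).astype(int) == y).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accuracies))]

def error_rates(labels, scores, t):
    """Return (false-negative rate, false-positive rate) at threshold t."""
    pred = (scores > t).astype(int)
    fnr = ((pred == 0) & (labels == 1)).sum() / max((labels == 1).sum(), 1)
    fpr = ((pred == 1) & (labels == 0)).sum() / max((labels == 0).sum(), 1)
    return round(float(fnr), 3), round(float(fpr), 3)

print("overall accuracy:", round(float(max(accuracies)), 3))
print("majority (FNR, FPR):", error_rates(maj_y, maj_s, best_t))
print("minority (FNR, FPR):", error_rates(min_y, min_s, best_t))
```

With these made-up numbers, the minority group’s false-negative rate comes out several times higher than the majority’s even though overall accuracy looks strong; pushing the two groups’ error rates toward equality would mean accepting lower overall accuracy, which is one face of the trade-off the author describes.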
Ed Webb

'There is no standard': investigation finds AI algorithms objectify women's bodies | Ar... - 0 views

  • AI algorithms tag photos of women in everyday situations as sexually suggestive and rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men.
  • “You cannot have one single uncontested definition of raciness.”
  • “Objectification of women seems deeply embedded in the system.”
  • Shadowbanning has been documented for years, but the Guardian journalists may have found a missing link for understanding the phenomenon: biased AI algorithms. Social media platforms seem to use these algorithms to rate images and limit the reach of content they consider too racy. The problem appears to be that the algorithms carry built-in gender bias, rating images of women as racier than comparable images of men.
  • “You are looking at decontextualized information where a bra is being seen as inherently racy rather than a thing that many women wear every day as a basic item of clothing,”
  • suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.
  • the data behind these algorithms was probably labeled by straight men, who may associate men working out with fitness but may consider an image of a woman working out as racy. It’s also possible that these ratings seem gender-biased in the US and in Europe because the labelers may have come from a place with a more conservative culture.
  • “There’s no standard of quality here,”
  • “I will censor as artistically as possible any nipples. I find this so offensive to art, but also to women,” she said. “I almost feel like I’m part of perpetuating that ridiculous cycle that I don’t want to have any part of.”
  • many people, including chronically ill and disabled folks, rely on making money through social media and shadowbanning harms their business
Ed Webb

K-12 Media Literacy No Panacea for Fake News, Report Argues - Digital Education - Educa... - 0 views

  • "Media literacy has long focused on personal responsibility, which can not only imbue individuals with a false sense of confidence in their skills, but also put the onus of monitoring media effects on the audience, rather than media creators, social media platforms, or regulators,"
  • the need to better understand the modern media environment, which is heavily driven by algorithm-based personalization on social-media platforms, and the need to be more systematic about evaluating the impact of various media-literacy strategies and interventions
  • In response, bills to promote media literacy in schools have been introduced or passed in more than a dozen states. A range of nonprofit, corporate, and media organizations have stepped up efforts to promote related curricula and programs. Such efforts should be applauded—but not viewed as a "panacea," the Data & Society researchers argue.
  • existing efforts "focus on the interpretive responsibilities of the individual,"
  • "if bad actors intentionally dump disinformation online with an aim to distract and overwhelm, is it possible to safeguard against media manipulation?"
  • A 2012 meta-analysis by academic researchers found that media literacy efforts could help boost students' critical awareness of messaging, bias, and representation in the media they consumed. There have been small studies suggesting that media-literacy efforts can change students' behaviors—for example, by making them less likely to seek out violent media for their own consumption. And more recently, a pair of researchers found that media-literacy training was more important than prior political knowledge when it comes to adopting a critical stance to partisan media content.
  • the roles of institutions, technology companies, and governments
Ed Webb

CRITICAL AI: Adapting College Writing for the Age of Large Language Models such as Chat... - 1 views

  • In the long run, we believe, teachers need to help students develop a critical awareness of generative machine models: how they work; why their content is often biased, false, or simplistic; and what their social, intellectual, and environmental implications might be. But that kind of preparation takes time, not least because journalism on this topic is often clickbait-driven, and “AI” discourse tends to be jargony, hype-laden, and conflated with science fiction.
  • Make explicit that the goal of writing is neither a product nor a grade but, rather, a process that empowers critical thinking
  • Students are more likely to misuse text generators if they trust them too much. The term “Artificial Intelligence” (“AI”) has become a marketing tool for hyping products. For all their impressiveness, these systems are not intelligent in the conventional sense of that term. They are elaborate statistical models that rely on mass troves of data—which has often been scraped indiscriminately from the web and used without knowledge or consent.
  • LLMs usually cannot do a good job of explaining how a particular passage from a longer text illuminates the whole of that longer text. Moreover, ChatGPT’s outputs on comparison and contrast are often superficial. Typically the system breaks down a task of logical comparison into bite-size pieces, conveys shallow information about each of those pieces, and then formulaically “compares” and “contrasts” in a noticeably superficial or repetitive way. 
  • In-class writing, whether digital or handwritten, may have downsides for students with anxiety and disabilities
  • ChatGPT can produce outputs that take the form of  “brainstorms,” outlines, and drafts. It can also provide commentary in the style of peer review or self-analysis. Nonetheless, students would need to coordinate multiple submissions of automated work in order to complete this type of assignment with a text generator.  
  • No one should present auto-generated writing as their own on the expectation that this deception is undiscoverable. 
  • LLMs often mimic the harmful prejudices, misconceptions, and biases found in data scraped from the internet
  • Show students examples of inaccuracy, bias, and logical and stylistic problems in automated outputs. We can build students’ cognitive abilities by modeling and encouraging this kind of critique. Given that social media and the internet are full of bogus accounts using synthetic text, alerting students to the intrinsic problems of such writing could be beneficial. (See the “ChatGPT/LLM Errors Tracker,” maintained by Gary Marcus and Ernest Davis.)
  • Since ChatGPT is good at grammar and syntax but suffers from formulaic, derivative, or inaccurate content, it seems like a poor foundation for building students’ skills and may circumvent their independent thinking.
  • Good journalism on language models is surprisingly hard to find since the technology is so new and the hype is ubiquitous. Here are a few reliable short pieces:
    - “ChatGPT Advice Academics Can Use Now,” edited by Susan Dagostino, Inside Higher Ed, January 12, 2023
    - “University students recruit AI to write essays for them. Now what?” by Katyanna Quach, The Register, December 27, 2022
    - “How to spot AI-generated text” by Melissa Heikkilä, MIT Technology Review, December 19, 2022
    - The Road to AI We Can Trust, a Substack by Gary Marcus, a cognitive scientist and AI researcher who writes frequently and lucidly about the topic. See also Gary Marcus and Ernest Davis, “GPT-3, Bloviator: OpenAI’s Language Generator Has No Idea What It’s Talking About” (2020).
  • “On the Dangers of Stochastic Parrots” by Emily M. Bender, Timnit Gebru, et al., FAccT ’21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, March 2021, Association for Computing Machinery, doi: 10.1145/3442188. A blog post derived from a Critical AI @ Rutgers workshop on the essay summarizes its key arguments, reprises the discussion, and includes links to video-recorded presentations by digital humanist Katherine Bode (ANU) and computer scientist and NLP researcher Matthew Stone (Rutgers).