
Home/ Instructional & Media Services at Dickinson College/ Group items tagged journalists


Ed Webb

Google pushes journalists to create G+ profiles · kegill · Storify

  • linking search results with Google+ was like Microsoft bundling Internet Explorer with Windows
  • Market strength in one area being used to leverage suboptimal products in another.
  • It's time to tell both Google and Bing that we want to decide for ourselves, thank you very much, if content is credible, instead of their making those decisions for us, decisions made behind hidden -- and suspicious -- algorithms.
Ed Webb

Bad News : CJR

  • Students in Howard Rheingold’s journalism class at Stanford recently teamed up with NewsTrust, a nonprofit Web site that enables people to review and rate news articles for their level of quality, in a search for lousy journalism.
  • the News Hunt is a way of getting young journalists to critically examine the work of professionals. For Rheingold, an influential writer and thinker about the online world and the man credited with coining the phrase “virtual community,” it’s all about teaching them “crap detection.”
  • last year Rheingold wrote an important essay about the topic for the San Francisco Chronicle’s Web site
  • What’s at stake is no less than the quality of the information available in our society, and our collective ability to evaluate its accuracy and value. “Are we going to have a world filled with people who pass along urban legends and hoaxes?” Rheingold said, “or are people going to educate themselves about these tools [for crap detection] so we will have collective intelligence instead of misinformation, spam, urban legends, and hoaxes?”
  • I previously called fact-checking “one of the great American pastimes of the Internet age.” But, as Rheingold noted, the opposite is also true: the manufacture and promotion of bullshit is endemic. One couldn’t exist without the other. That makes Rheingold’s essay, his recent experiment with NewsTrust, and his wiki of online critical-thinking tools essential reading for journalists. (He’s also writing a book about this topic.)
  • I believe if we want kids to succeed online, the biggest danger is not porn or predators—the biggest danger is them not being able to distinguish truth from carefully manufactured misinformation or bullshit
  • As relevant to general education as to journalism training
Ed Webb

The Myth Of AI | Edge.org

  • The distinction between a corporation and an algorithm is fading. Does that make an algorithm a person? Here we have this interesting confluence between two totally different worlds. We have the world of money and politics and the so-called conservative Supreme Court, with this other world of what we can call artificial intelligence, which is a movement within the technical culture to find an equivalence between computers and people. In both cases, there's an intellectual tradition that goes back many decades. Previously they'd been separated; they'd been worlds apart. Now, suddenly they've been intertwined.
  • Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. And people often accept that because there's no empirical alternative to compare it to, there's no baseline. It's bad personal science. It's bad self-understanding.
  • there's no way to tell where the border is between measurement and manipulation in these systems
  • It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
  • What's happened here is that translators haven't been made obsolete. What's happened instead is that the structure through which we receive the efforts of real people in order to make translations happen has been optimized, but those people are still needed.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri
  • If you talk to translators, they're facing a predicament, which is very similar to some of the other early victim populations, due to the particular way we digitize things. It's similar to what's happened with recording musicians, or investigative journalists—which is the one that bothers me the most—or photographers. What they're seeing is a severe decline in how much they're paid, what opportunities they have, their long-term prospects.
  • In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous. It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it. There are about to be a whole bunch of those. And that'll involve some kind of new societal structure that isn't perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that's the sad thing, the difficult thing we have to face.
  • To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes from the computer world so much, and the computer world is so influential because it has so much money and influence that it does start to bleed over into all kinds of other things.
Ed Webb

'There is no standard': investigation finds AI algorithms objectify women's bodies | Ar...

  • AI tools tag photos of women in everyday situations as sexually suggestive. They also rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men.
  • “You cannot have one single uncontested definition of raciness.”
  • “Objectification of women seems deeply embedded in the system.”
  • Shadowbanning has been documented for years, but the Guardian journalists may have found a missing link to understanding the phenomenon: biased AI algorithms. Social media platforms seem to use these algorithms to rate images and limit the reach of content they consider too racy. The problem seems to be that these algorithms have a built-in gender bias, rating images of women as racier than comparable images of men.
  • “You are looking at decontextualized information where a bra is being seen as inherently racy rather than a thing that many women wear every day as a basic item of clothing,”
  • suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.
  • the data used to train these algorithms was probably labeled by straight men, who may associate men working out with fitness but may consider an image of a woman working out as racy. It’s also possible that these ratings seem gender-biased in the US and Europe because the labelers may have come from a place with a more conservative culture
  • “There’s no standard of quality here,”
  • “I will censor as artistically as possible any nipples. I find this so offensive to art, but also to women,” she said. “I almost feel like I’m part of perpetuating that ridiculous cycle that I don’t want to have any part of.”
  • many people, including chronically ill and disabled folks, rely on making money through social media and shadowbanning harms their business
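The Guardian's audit method described above can be sketched in a few lines: run matched photo pairs (same pose and context, different gender) through a "raciness" classifier and compare the scores. In this sketch, `score_raciness`, the file names, and the stubbed score values are all hypothetical stand-ins for a real cloud vision API call; they are not from the article.

```python
# Sketch of a paired-image bias audit. `score_raciness` stands in for
# a cloud vision API; the scores below are made-up illustrative values.

def score_raciness(image_path: str) -> float:
    """Hypothetical classifier: returns a 0-1 'racy' score."""
    fake_scores = {"woman_workout.jpg": 0.84, "man_workout.jpg": 0.17}
    return fake_scores.get(image_path, 0.0)

def gender_gap(pairs: list[tuple[str, str]]) -> float:
    """Average score difference (woman - man) across matched pairs.

    A gap near 0 suggests no systematic bias; a large positive gap
    means images of women are consistently rated racier.
    """
    gaps = [score_raciness(w) - score_raciness(m) for w, m in pairs]
    return sum(gaps) / len(gaps)

pairs = [("woman_workout.jpg", "man_workout.jpg")]
print(round(gender_gap(pairs), 2))  # 0.67 with the stubbed scores
```

With a real classifier behind `score_raciness`, the same comparison over many matched pairs is what lets an audit claim systematic bias rather than a one-off disagreement.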