
Dystopias: Group items tagged journalism


Ed Webb

The trust gap: how and why news on digital platforms is viewed more sceptically versus ...

  • Levels of trust in news on social media, search engines, and messaging apps are consistently lower than audience trust in information in the news media more generally.
  • Many of the same people who lack trust in news encountered via digital media companies – who tend to be older, less educated, and less politically interested – also express less trust in the news regardless of whether found on platforms or through more traditional offline modes.
  • Many of the most common reasons people say they use platforms have little to do with news.
  • News about politics is viewed as particularly suspect and platforms are seen by many as contentious places for political conversation – at least for those most interested in politics. Rates of trust in news in general are comparatively higher than trust in news when it pertains to coverage of political affairs.
  • Negative perceptions about journalism are widespread and social media is one of the most often-cited places people say they see or hear criticism of news and journalism
  • Despite positive feelings towards most platforms, large majorities in all four countries agree that false and misleading information, harassment, and platforms using data irresponsibly are ‘big problems’ in their country for many platforms
Ed Webb

AI Tweets "Little Beetles Is An Arthropod," and Other Facts About The World, As It Lear...

  • By saying that NELL has "adopted" the human behaviour of tweeting, you are misleading the reader. It is more likely that the software was specifically programmed to do so and therefore has "adopted" no "human behavior". FAIL.
  • sloppy journalism
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame...

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

On the Web's Cutting Edge, Anonymity in Name Only - WSJ.com

  • A Wall Street Journal investigation into online privacy has found that the analytical skill of data handlers like [x+1] is transforming the Internet into a place where people are becoming anonymous in name only. The findings offer an early glimpse of a new, personalized Internet where sites have the ability to adjust many things—look, content, prices—based on the kind of person they think you are.
  • The technology raises the prospect that different visitors to a website could see different prices as well. Price discrimination is generally legal, so long as it's not based on race, gender or geography, which can be deemed "redlining."
  • marketplaces for online data sprang up
  • In a fifth of a second, [x+1] says it can access and analyze thousands of pieces of information about a single user
  • When he saw the 3,748 lines of code that passed in an instant between his computer and Capital One's website, Mr. Burney said: "There's a shocking amount of information there."
  • [x+1]'s assessment of Mr. Burney's location and Nielsen demographic segment are specific enough that it comes extremely close to identifying him as an individual—that is, "de-anonymizing" him—according to Peter Eckersley, staff scientist at the Electronic Frontier Foundation, a privacy-advocacy group.
Ed Webb

How they make those adverts go straight to your head - CNN.com

  • "neuromarketing"
  • Currently there are three methodologies covered under the term neuromarketing: functional MRI, measuring skin temperature fluctuations, and utilizing Electroencephalography (EEG), which is the main technology currently used.
  • there has been little neuromarketing research published in peer-reviewed scientific journals, and there are too few publicly accessible data sets from controlled studies to demonstrate conclusively that buying behavior can be correlated with specific brain activity. "The major neuromarketing firms say that their client work demonstrates this, but none of this has been published in a way that the scientific community can critique it,"
Ed Webb

How ethical is it for advertisers to target your mood? | Emily Bell | Opinion | The Gua...

  • The effectiveness of psychographic targeting is one bet being made by an increasing number of media companies when it comes to interrupting your viewing experience with advertising messages.
  • “Across the board, articles that were in top emotional categories, such as love, sadness and fear, performed significantly better than articles that were not.”
  • ESPN and USA Today are also using psychographic rather than demographic targeting to sell to advertisers, including in ESPN’s case, the decision to not show you advertising at all if your team is losing.
  • Media companies using this technology claim it is now possible for the “mood” of the reader or viewer to be tracked in real time and the content of the advertising to be changed accordingly
  • ads targeted at readers based on their predicted moods rather than their previous behaviour improved the click-through rate by 40%.
  • Given that the average click-through rate (the number of times anyone actually clicks on an ad) is about 0.4%, this number (in gross terms) is probably less impressive than it sounds. (A back-of-envelope version of this arithmetic follows this item's annotations.)
  • Cambridge Analytica, the company that misused Facebook data and, according to its own claims, helped Donald Trump win the 2016 election, used psychographic segmentation.
  • For many years “contextual” ads served by not very intelligent algorithms were the bane of digital editors’ lives. Improvements in machine learning should help eradicate the horrible business of showing insurance advertising to readers in the middle of an article about a devastating fire.
  • The words “brand safety” are increasingly used by publishers when demonstrating products such as Project Feels. It is a way publishers can compete on micro-targeting with platforms such as Facebook and YouTube by pointing out that their targeting will not land you next to a conspiracy theory video about the dangers of chemtrails.
  • the exploitation of psychographics is not limited to the responsible and transparent scientists at the NYT. While publishers were showing these shiny new tools to advertisers, Amazon was advertising for a managing editor for its surveillance doorbell, Ring, which contacts your device when someone is at your door. An editor for a doorbell, how is that going to work? In all kinds of perplexing ways according to the ad. It’s “an exciting new opportunity within Ring to manage a team of news editors who deliver breaking crime news alerts to our neighbours. This position is best suited for a candidate with experience and passion for journalism, crime reporting, and people management.” So if instead of thinking about crime articles inspiring fear and advertising doorbells in the middle of them, what if you took the fear that the surveillance-device-cum-doorbell inspires and layered a crime reporting newsroom on top of it to make sure the fear is properly engaging?
  • The media has arguably already played an outsized role in making sure that people are irrationally scared, and now that practice is being strapped to the considerably more powerful engine of an Amazon product.
  • This will not be the last surveillance-based newsroom we see. Almost any product that produces large data feeds can also produce its own “news”. Imagine the Fitbit newsroom or the managing editor for traffic reports from dashboard cams – anything that has a live data feed emanating from it, in the age of the Internet of Things, can produce news.
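A quick back-of-envelope sketch of the click-through figures quoted above (the ~0.4% baseline and the 40% relative lift are taken from the annotations; the calculation is only indicative):

```python
# Rough arithmetic behind the "less impressive than it sounds" point:
# a 40% relative lift applied to a ~0.4% average click-through rate.
baseline_ctr = 0.004     # ~0.4% average click-through rate (figure quoted above)
relative_lift = 0.40     # 40% improvement claimed for mood-based targeting

targeted_ctr = baseline_ctr * (1 + relative_lift)
print(f"baseline: {baseline_ctr:.2%}  mood-targeted: {targeted_ctr:.2%}")
# baseline: 0.40%  mood-targeted: 0.56%
# i.e. roughly 4 clicks vs. about 5-6 clicks per 1,000 ad impressions.
```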
Ed Webb

What we still haven't learned from Gamergate - Vox

  • Harassment and misogyny had been problems in the community for years before this; the deep resentment and anger toward women that powered Gamergate percolated for years on internet forums. Robert Evans, a journalist who specializes in extremist communities and the host of the Behind the Bastards podcast, described Gamergate to me as partly organic and partly born out of decades-long campaigns by white supremacists and extremists to recruit heavily from online forums. “Part of why Gamergate happened in the first place was because you had these people online preaching to these groups of disaffected young men,” he said. But what Gamergate had that those previous movements didn’t was an organized strategy, made public, cloaking itself as a political movement with a flimsy philosophical stance, its goals and targets amplified by the power of Twitter and a hashtag.
  • The hate campaign, we would later learn, was the moment when our ability to repress toxic communities and write them off as just “trolls” began to crumble. Gamergate ultimately gave way to something deeper, more violent, and more uncontrollable.
  • Police have to learn how to keep the rest of us safe from internet mobs
  • the justice system continues to be slow to understand the link between online harassment and real-life violence
  • In order to increase public safety this decade, it is imperative that police — and everyone else — become more familiar with the kinds of communities that engender toxic, militant systems of harassment, and the online and offline spaces where these communities exist. Increasingly, that means understanding social media’s dark corners, and the types of extremism they can foster.
  • Businesses have to learn when online outrage is manufactured
  • There’s a difference between organic outrage that arises because an employee actually does something outrageous, and invented outrage that’s an excuse to harass someone whom a group has already decided to target for unrelated reasons — for instance, because an employee is a feminist. A responsible business would ideally figure out which type of outrage is occurring before it punished a client or employee who was just doing their job.
  • Social media platforms didn’t learn how to shut down disingenuous conversations over ethics and free speech before they started to tear their cultures apart
  • Dedication to free speech over the appearance of bias is especially important within tech culture, where a commitment to protecting free speech is both a banner and an excuse for large corporations to justify their approach to content moderation — or lack thereof.
  • Reddit’s free-speech-friendly moderation stance resulted in the platform tacitly supporting pro-Gamergate subforums like r/KotakuInAction, which became a major contributor to Reddit’s growing alt-right community. Twitter rolled out a litany of moderation tools in the wake of Gamergate, intended to allow harassment targets to perpetually block, mute, and police their own harassers — without actually effectively making the site unwelcome for the harassers themselves. And YouTube and Facebook, with their algorithmic amplification of hateful and extreme content, made no effort to recognize the violence and misogyny behind pro-Gamergate content, or police them accordingly.
  • All of these platforms are wrestling with problems that seem to have grown beyond their control; it’s arguable that if they had reacted more swiftly to slow the growth of the internet’s most toxic and misogynistic communities back when those communities, particularly Gamergate, were still nascent, they could have prevented headaches in the long run — and set an early standard for how to deal with ever-broadening issues of extremist content online.
  • Gamergate simultaneously masqueraded as legitimate concern about ethics that demanded audiences take it seriously, and as total trolling that demanded audiences dismiss it entirely. Both these claims served to obfuscate its real aim — misogyny, and, increasingly, racist white supremacy
  • Somehow, the idea that all of that sexism and anti-feminist anger could be recruited, harnessed, and channeled into a broader white supremacist movement failed to generate any real alarm, even well into 2016
  • many of the perpetrators of real-world violence are radicalized online first
  • It remains difficult for many to accept the throughline from online abuse to real-world violence against women, much less the fact that violence against women, online and off, is a predictor of other kinds of real-world violence
  • Politicians and the media must take online “ironic” racism and misogyny seriously
  • Gamergate masked its misogyny in a coating of shrill yelling that had most journalists in 2014 writing off the whole incident as “satirical” and immature “trolling,” and very few correctly predicting that Gamergate’s trolling was the future of politics
  • Gamergate was all about disguising a sincere wish for violence and upheaval by dressing it up in hyperbole and irony in order to confuse outsiders and make it all seem less serious.
  • Violence against women is a predictor of other kinds of violence. We need to acknowledge it.
  • The public’s failure to understand and accept that the alt-right’s misogyny, racism, and violent rhetoric is serious goes hand in hand with its failure to understand and accept that such rhetoric is identical to that of President Trump
  • deploying offensive behavior behind a guise of mock outrage, irony, trolling, and outright misrepresentation, in order to mask the sincere extremism behind the message.
  • many members of the media, politicians, and members of the public still struggle to accept that Trump’s rhetoric is having violent consequences, despite all evidence to the contrary.
  • The movement’s insistence that it was about one thing (ethics in journalism) when it was about something else (harassing women) provided a case study for how extremists would proceed to drive ideological fissures through the foundations of democracy: by building a toxic campaign of hate beneath a veneer of denial.
Ed Webb

Lack of Transparency over Police Forces' Covert Use of Predictive Policing Software Rai...

  • Currently, through the use of blanket exemption clauses – and without any clear legislative oversight – public access to information on systems that may be being used to surveil them remains opaque. Companies including Palantir, NSO Group, QuaDream, Dark Matter and Gamma Group are all exempt from disclosure under the precedent set by the police, along with another entity, Dataminr.
  • has helped police in the US monitor and break up Black Lives Matter and Muslim rights activism through social media monitoring. Dataminr software has also been used by the Ministry of Defence, Foreign Commonwealth and Development Office, and the Cabinet Office,
  • New research shows that, far from being a ‘neutral’ observational tool, Dataminr produces results that reflect its clients’ politics, business goals and ways of operating.
  • teaching the software to associate certain kinds of images, text and hashtags with a ‘dangerous’ protest results in politically and racially biased definitions of what dangerous protests look like. This is because, to make these predictions, the system has to decide whether the event resembles other previous events that were labelled ‘dangerous’ – for example, past BLM protests. (A toy sketch of this label-bias mechanism follows this item's annotations.)
  • When in 2016 the ACLU proved that Dataminr’s interventions were contributing to racist policing, the company was subsequently banned from granting fusion centres in the US direct access to Twitter’s API. Fusion centres are state-owned and operated facilities and serve as focal points to gather, analyse and redistribute intelligence among state, local, tribal and territorial (SLTT), federal and private sector partners to detect criminal and terrorist activity.  However, US law enforcement found  a way around these limitations by continuing to receive Dataminr alerts outside of fusion centres.
  • Use of these technologies has, in the past, not been subject to public consultation and, without basic scrutiny at either a public or legislative level, there remains no solid mechanism for independent oversight of their use by law enforcement.
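A minimal toy sketch of the label-bias mechanism described in the annotation above, assuming a simple bag-of-words classifier; the posts and labels are invented for illustration, and this is not Dataminr's actual pipeline:

```python
# Toy illustration of label bias: if past posts about a movement were labelled
# "dangerous", a classifier trained on those labels learns the association and
# reproduces it, regardless of what a new post actually says.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: labels reflect past (biased) flagging decisions.
posts = [
    "march downtown today #BLM bring signs",
    "vigil tonight #BLM peaceful gathering",
    "protest outside city hall #BLM",
    "community cleanup saturday volunteers welcome",
    "farmers market opens this weekend",
    "charity fun run sign up now",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = previously flagged "dangerous", 0 = not

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)
model = LogisticRegression().fit(X, labels)

# A new, plainly peaceful post is scored as more likely "dangerous" than not,
# because it shares vocabulary (BLM, "vigil", "peaceful") with flagged events.
new_post = ["peaceful #BLM vigil with candles"]
print(model.predict_proba(vectorizer.transform(new_post))[0, 1])
```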