Dystopias: Group items tagged "debate"

Ed Webb

Robot Debates Climate Change Deniers via Twitter - Global Challenges

  •  Who can deny the robot?
Ed Webb

Retargeting Ads Follow Surfers to Other Sites - NYTimes.com

  • it’s a little creepy, especially if you don’t know what’s going on
  • personalized retargeting or remarketing
  • the palpable feeling that they are being watched as they roam the virtual aisles of online stores
  • Others, though, find it disturbing. When a recent Advertising Age column noted the phenomenon, several readers chimed in to voice their displeasure.
  • she felt even worse when she was hounded recently by ads for a dieting service she had used online. “They are still following me around, and it makes me feel fat,” she said.
  • stalked by shoes
  • the technique is raising anew the threat of industry regulation
  • there is a commercial surveillance system in place online that is sweeping in scope and raises privacy and civil liberties issues
  • Mr. Magness, of Zappos, said that consumers may be unnerved because they may feel that they are being tracked from site to site as they browse the Web. To reassure consumers, Zappos, which is using the ads to peddle items like shoes, handbags and women’s underwear, displays a message inside the banner ads that reads, “Why am I seeing these ads?” When users click on it, they are taken to the Web site of Criteo, the advertising technology company behind the Zappos ads, where the ads are explained.
  • “When you begin to give people a sense of how this is happening, they really don’t like it,”
  • Professor Turow, who studies digital media and recently testified at a Senate committee hearing on digital advertising, said he had a visceral negative reaction to the ads, even though he understands the technologies behind them. “It seemed so bold,” Professor Turow said. “I was not pleased, frankly.”
  • For Google, remarketing is a more specific form of behavioral targeting, the practice under which a person who has visited NBA.com, for instance, may be tagged as a basketball fan and later will be shown ads for related merchandise. Behavioral targeting has been hotly debated in Washington, and lawmakers are considering various proposals to regulate it. During the recent Senate hearing, Senator Claire McCaskill, Democrat of Missouri, said she found the technique troubling. “I understand that advertising supports the Internet, but I am a little spooked out,” Ms. McCaskill said of behavioral targeting. “This is creepy.”
  • being stalked by a pair of pants
  • “I don’t think that exposing all this detailed information you have about the customer is necessary,” said Alan Pearlstein, chief executive of Cross Pixel Media, a digital marketing agency. Mr. Pearlstein says he supports retargeting, but with more subtle ads that, for instance, could offer consumers a discount coupon if they return to an online store. “What is the benefit of freaking customers out?”
  •  Minority Report (movie)?
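The mechanism these annotations describe (a tracking tag set when you view a product, recognized later when an ad slot loads on an unrelated site) can be sketched in a few lines of Python. This is a minimal illustration under assumed names; it is not Criteo's or any ad network's actual API.

```python
# Minimal sketch of cookie-based retargeting, as described in the
# annotations above. All names are hypothetical; this is not Criteo's
# or any ad network's real system.
from typing import Dict, List, Optional

# What the store's tracking pixel accumulates, keyed by an anonymous
# browser ID stored in a third-party cookie.
viewed_products: Dict[str, List[str]] = {}

def on_product_page_view(browser_id: str, product_id: str) -> None:
    """Fired by the pixel embedded on a product page at the online store."""
    viewed_products.setdefault(browser_id, []).append(product_id)

def choose_ad(browser_id: str) -> Optional[str]:
    """Fired later, when an ad slot loads on an unrelated site.
    If the exchange recognizes the cookie, it re-shows the viewed item."""
    products = viewed_products.get(browser_id)
    if products:
        return f"Still thinking about {products[-1]}?"
    return None  # fall back to an untargeted ad

# A shopper looks at shoes, then reads the news; the shoes follow along.
on_product_page_view("abc123", "a pair of red pumps")
print(choose_ad("abc123"))  # -> Still thinking about a pair of red pumps?
```

The "Why am I seeing these ads?" link Zappos displays would simply be extra markup in the banner such a function returns.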
Ed Webb

From Helmand to Merseyside: Unmanned drones and the militarisation of UK policing | ope...

  • the intensifying cross-overs between the use of drones to deploy lethal force in the war zones of Asia and the Middle East, and their introduction within western airspace, need to be stressed. The European Defence Agency, for example, a body funded by the UK and other European governments, is lobbying hard to support the widespread diffusion of drones within UK and EU policing and security as a means to bolster the existing strengths of European security corporations like BAE Systems, EADS and Thales within booming global markets for armed and military drones. The global market for drones is by far the most dynamic sector in the global airline industry. The current annual market of $2.7 billion is predicted to reach $8.3 billion by 2020, and $55 billion is likely to be spent on drones in the next decade. A specific concern of the EU is that European defence and security corporations are failing to stake claims within booming global drone markets whilst US and Israeli companies clean up.
  • what scholars of surveillance term ‘function-creep’ is likely to be a key feature of drone deployments
  •  it is startling that the main concern so far in public policy debates about the introduction of military-standard surveillance drones into routine police practice in Western countries has surrounded the (very real) dangers of collision with other aircraft.
  • the widespread introduction of almost silent, pilotless drones with military-standard imaging equipment raises major new questions about the way in which the UK is evolving as a ‘surveillance society’. Is the civilian deployment of such drones a justified and proportionate response to civilian policing needs or a thinly-veiled attempt by security corporations to build new and highly profitable markets? Once deployed, what ethical and regulatory guidelines need to be in place to govern drone deployment and the ‘targeting’ of drone sensors? Above all, are transparent regulatory systems in place to prevent law enforcement agencies from abusing radical extensions in their powers to vertically and covertly spy on all aspects of civilian life 24 hours a day?
Ed Webb

The Digital Maginot Line

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is virtually no cost, in terms of dollars as well as consequences, to attacking a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter; no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
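The 2014-2016 playbook quoted above turns on a cheap asymmetry: disposable amplifier accounts cost almost nothing, while a naive trending rule counts only raw engagement. The toy simulation below (every number and threshold in it is invented for illustration) shows how a modest bot fleet clears a first-hour share threshold that organic sharing alone rarely reaches.

```python
# Toy simulation of the amplification step in the playbook quoted above:
# disposable bot accounts inflate early share counts so that a naive
# engagement-based trending rule promotes the content to real audiences.
# Every number and threshold here is invented for illustration.
import random

random.seed(0)

TRENDING_THRESHOLD = 200  # hypothetical rule: 200 shares in the first hour

def first_hour_shares(organic_reach: int, bot_accounts: int) -> int:
    """Real users share rarely (~2%); each bot reliably shares once."""
    organic = sum(random.random() < 0.02 for _ in range(organic_reach))
    return organic + bot_accounts

for bots in (0, 200):
    shares = first_hour_shares(organic_reach=2_000, bot_accounts=bots)
    print(f"bots={bots:3d}  shares={shares:3d}  "
          f"trending={shares >= TRENDING_THRESHOLD}")
# Organic sharing alone yields ~40 shares and the post stalls; 200 cheap
# bot shares clear the threshold, and the platform amplifies it for free.
```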
Ed Webb

The fight against toxic gamer culture has moved to the classroom - The Verge

  • If there were any lessons to be learned from Gamergate — from how to recognize bad faith actors or steps on how to protect yourself, to failings in law enforcement or therapy focused on the internet — the education system doesn’t seem to have fully grasped these concepts.
  • It’s a problem that goes beyond just topics specific to the gaming industry, extending to subjects like feminism, politics, or philosophy. “Suddenly everyone who watches Jordan Peterson videos thinks they know what postmodernism is,” says Emma Vossen, a postdoctoral fellow with a PhD in gender and games. These problems with students are not about disagreements or debates. They’re not even about kids acting out, but rather harassers in the classroom who have tapped into social media as a powerful weapon. Many educators can’t grasp that, says Vossen. “This is about students who could potentially access this hate movement that’s circling around you and use it against you,” she says. “This is about being afraid to give bad marks to students because they might go to their favorite YouTuber with a little bit of personal information about you that could be used to dox you.” Every word you say can be taken out of context, twisted, and used against you. “Education has no idea how to deal with this problem,” Vossen says. “And I think it’s only going to get worse.”
  • An educator’s job is no longer just about teaching, but helping students unlearn false or even harmful information they’ve picked up from the internet.
  • “If we started teaching students the basics of feminism at a very young age,” Wilcox says, “they would have a far better appreciation for how different perspectives will lead to different outcomes, and how the distribution of power and privilege in society can influence who gets to speak in the first place.”
Ed Webb

At age 13, I joined the alt-right, aided by Reddit and Google

  • Now, I’m 16, and I’ve been able to reflect on how I got sucked into that void—and how others do, too. My brief infatuation with the alt-right has helped me understand the ways big tech companies and their algorithms are contributing to the problem of radicalization—and why it’s so important to be skeptical of what you read online.
  • while a quick burst of radiation probably won’t give you cancer, prolonged exposure is far more dangerous. The same is true for the alt-right. I knew that the messages I was seeing were wrong, but the more I saw them, the more curious I became. I was unfamiliar with most of the popular discussion topics on Reddit. And when you want to know more about something, what do you do? You probably don’t think to go to the library and check out a book on that subject, and then fact check and cross reference what you find. If you just google what you want to know, you can get the information you want within seconds.
  • I started googling things like “Illegal immigration,” “Sandy Hook actors,” and “Black crime rate.” And I found exactly what I was looking for.
  • The articles and videos I first found all backed up what I was seeing on Reddit—posts that asserted a skewed version of actual reality, using carefully selected, out-of-context, and dubiously sourced statistics that propped up a hateful world view. On top of that, my online results were heavily influenced by something called an algorithm. I understand algorithms to be secretive bits of code that a website like YouTube will use to prioritize content that you are more likely to click on first. Because all of the content I was reading or watching was from far-right sources, all of the links that the algorithms dangled on my screen for me to click were from far-right perspectives.
  • I spent months isolated in my room, hunched over my computer, removing and approving memes on Reddit and watching conservative “comedians” that YouTube served up to me.
  • The inflammatory language and radical viewpoints used by the alt-right worked in YouTube and Google’s favor—the more videos and links I clicked on, the more ads I saw, and in turn, the more ad revenue they generated.
  • the biggest step in my recovery came when I attended a pro-Trump rally in Washington, D.C., in September 2017, about a month after the “Unite the Right” rally in Charlottesville, Virginia
  • The difference between the online persona of someone who identifies as alt-right and the real thing is so extreme that you would think they are different people. Online, they have the power of fake and biased news to form their arguments. They sound confident and usually deliver their standard messages strongly. When I met them in person at the rally, they were awkward and struggled to back up their statements. They tripped over their own words, and when they were called out by any counter-protesters in the crowd, they would immediately use a stock response such as “You’re just triggered.”
  • Seeing for myself that the people I was talking to online were weak, confused, and backwards was the turning point for me.
  • we’re too far gone to reverse the damage that the alt-right has done to the internet and to naive adolescents who don’t know any better—children like the 13-year-old boy I was. It’s convenient for a massive internet company like Google to deliberately ignore why people like me get misinformed in the first place, as their profit-oriented algorithms continue to steer ignorant, malleable people into the jaws of the far-right
  • Dylann Roof, the white supremacist who murdered nine people in a Charleston, South Carolina, church in 2015, was radicalized by far-right groups that spread misinformation with the aid of Google’s algorithms.
  • Over the past couple months, I’ve been getting anti-immigration YouTube ads that feature an incident presented as a “news” story, about two immigrants who raped an American girl. The ad offers no context or sources, and uses heated language to denounce immigration and call for our county to allow ICE to seek out illegal immigrants within our area. I wasn’t watching a video about immigration or even politics when those ads came on; I was watching the old Monty Python “Cheese Shop” sketch. How does British satire, circa 1972, relate to America’s current immigration debate? It doesn’t.
  • tech companies need to be held accountable for the radicalization that results from their systems and standards.
  • anyone can be manipulated like I was. It’s so easy to find information online that we collectively forget that so much of the content the internet offers us is biased
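The author's rough description of "an algorithm" matches what is usually called engagement-based ranking. The caricature below is a minimal sketch of that feedback loop, not YouTube's actual, proprietary system: candidates are ordered by predicted click probability, and every click raises the predicted appeal of that kind of content, so a few curiosity clicks reshape the whole feed.

```python
# Minimal caricature of engagement-based ranking, loosely matching the
# description above. Not YouTube's actual (proprietary) algorithm; just
# the feedback loop: rank by predicted clicks, learn from every click.
from collections import defaultdict

click_rate = defaultdict(lambda: 0.1)  # prior click probability per topic

def rank(candidates):
    """Order candidate topics by how likely this user is to click them."""
    return sorted(candidates, key=lambda topic: click_rate[topic], reverse=True)

def record_click(topic):
    """Each click nudges the predicted appeal of that topic upward."""
    click_rate[topic] = min(0.99, click_rate[topic] + 0.1)

topics = ["cooking", "fringe politics", "sports"]
for _ in range(5):       # a few curiosity clicks on one topic...
    record_click("fringe politics")
print(rank(topics))      # -> ['fringe politics', 'cooking', 'sports']
```

The ad-revenue point in the annotations follows directly: more clicks mean more impressions sold, so the loop funds itself.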