
Dystopias: Group items tagged "blog"


Ed Webb

A Snowpiercer Thinkpiece, Not to Be Taken Too Seriously, But For Very Serious Reasons -...

  • Many spoilers!!!
Ed Webb

How white male victimhood got monetised | The Independent

  • I also learned a metric crap-tonne about how online communities of angry young nerd dudes function. Which is, to put it simply, around principles of pure toxicity. And now that toxicity has bled into wider society.
  • In a twist on the "1,000 true fans" principle worthy of Black Mirror, any alt-right demagogue who can gather 1,000 whining, bitter, angry men with zero self-awareness now has a self-sustaining full time job as an online sh*tposter.
  • Social media has been assailed by one toxic "movement" after another, from Gamergate to Incel terrorism. But the "leaders" of these movements, a ragtag band of demagogues, profiteers and charlatans, seem less interested in political change than in racking up Patreon backers.
  • ...5 more annotations...
  • Making a buck from the alt-right is quite simple. Get a blog or a YouTube channel. Then under the guise of political dialogue or pseudo-science, start spouting hate speech. You'll soon find followers flocking to your banner.
  • Publish a crappy ebook explaining why SJWs Always Lie. Or teach your followers how to “think like a silverback gorilla” (surely an arena where the far right already triumph?) via a pricey seminar. Launch a Kickstarter for a badly drawn comic packed with anti-diversity propaganda. They'll sell by the bucketload to followers eager to virtue-signal their membership in the rank and file of the alt-right
  • the seemingly bottomless reservoirs of white male victimhood
  • nowhere is there a better supply of the credulous than among the angry white men who flock to the far right. Embittered by their own life failures, the alt-right follower is eager to believe they have a genetically superior IQ and are simply the victim of a libtard conspiracy to keep them down
  • We're barely in the foothills of the mountains of madness that the internet and social media are unleashing into our political process. If you think petty demagogues like Jordan Peterson are good at milking cash from the crowd, you ain’t seen nothing yet. Because he was just the beginning – and his ideology of the white male victim is rapidly spiralling into something that even he can no longer control
Ed Webb

Sad by design | Eurozine

  • ‘technological sadness’ – the default mental state of the online billions
  • If only my phone could gently weep. McLuhan’s ‘extensions of man’ has imploded right into the exhausted self.
  • Social reality is a corporate hybrid between handheld media and the psychic structure of the user. It’s a distributed form of social ranking that can no longer be reduced to the interests of state and corporate platforms. As online subjects, we too are implicated, far too deeply involved
  • ...20 more annotations...
  • Google and Facebook know how to utilize negative emotions, leading to the new system-wide goal: find personalized ways to make you feel bad
  • in Adam Greenfield’s Radical Technologies, where he notices that ‘it seems strange to assert that anything as broad as a class of technologies might have an emotional tenor, but the internet of things does. That tenor is sadness… a melancholy that rolls off it in waves and sheets. The entire pretext on which it depends is a milieu of continuously shattered attention, of overloaded awareness, and of gaps between people just barely annealed with sensors, APIs and scripts.’ It is a life ‘savaged by bullshit jobs, over-cranked schedules and long commutes, of intimacy stifled by exhaustion and the incapacity or unwillingness to be emotionally present.’
  • Omnipresent social media places a claim on our elapsed time, our fractured lives. We’re all sad in our very own way.4 As there are no lulls or quiet moments anymore, the result is fatigue, depletion and loss of energy. We’re becoming obsessed with waiting. How long have you been forgotten by your loved ones? Time, meticulously measured on every app, tells us right to our face. Chronos hurts. Should I post something to attract attention and show I’m still here? Nobody likes me anymore. As the random messages keep relentlessly piling in, there’s no way to halt them, to take a moment and think it all through.
  • Unlike the blog entries of the Web 2.0 era, social media have surpassed the summary stage of the diary in a desperate attempt to keep up with the real-time regime. Instagram Stories, for example, bring back the nostalgia of an unfolding chain of events – and then disappear at the end of the day, like a revenge act, a satire of ancient sentiments gone by. Storage will make the pain permanent. Better forget about it and move on
  • By browsing through updates, we’re catching up with machine time – at least until we collapse under the weight of participation fatigue. Organic life cycles are short-circuited and accelerated up to a point where the personal life of billions has finally caught up with cybernetics
  • The price of self-control in an age of instant gratification is high. We long to revolt against the restless zombie inside us, but we don’t know how.
  • Sadness arises at the point we’re exhausted by the online world.6 After yet another app session in which we failed to make a date, purchased a ticket and did a quick round of videos, the post-dopamine mood hits us hard. The sheer busyness and self-importance of the world makes you feel joyless. After a dive into the network we’re drained and feel socially awkward. The swiping finger is tired and we have to stop.
  • Much like boredom, sadness is not a medical condition (though never say never because everything can be turned into one). No matter how brief and mild, sadness is the default mental state of the online billions. Its original intensity gets dissipated, it seeps out, becoming a general atmosphere, a chronic background condition. Occasionally – for a brief moment – we feel the loss. A seething rage emerges. After checking for the tenth time what someone said on Instagram, the pain of the social makes us feel miserable, and we put the phone away. Am I suffering from the phantom vibration syndrome? Wouldn’t it be nice if we were offline? Why is life so tragic? He blocked me. At night, you read through the thread again. Do we need to quit again, to go cold turkey again? Others are supposed to move us, to arouse us, and yet we don’t feel anything anymore. The heart is frozen
  • If experience is the ‘habit of creating isolated moments within raw occurrence in order to save and recount them,’11 the desire to anaesthetize experience is a kind of immune response against ‘the stimulations of another modern novelty, the total aesthetic environment’.
  • unlike burn-out, sadness is a continuous state of mind. Sadness pops up the second events start to fade away – and now you’re down the rabbit hole once more. The perpetual now can no longer be captured and leaves us isolated, a scattered set of online subjects. What happens when the soul is caught in the permanent present? Is this what Franco Berardi calls the ‘slow cancellation of the future’? By scrolling, swiping and flipping, we hungry ghosts try to fill the existential emptiness, frantically searching for a determining sign – and failing
  • Millennials, as one recently explained to me, have grown up talking more openly about their state of mind. As work/life distinctions disappear, subjectivity becomes their core content. Confessions and opinions are externalized instantly. Individuation is no longer confined to the diary or small group of friends, but is shared out there, exposed for all to see.
  • Snapstreaks, the ‘best friends’ fire emoji next to a friend’s name indicating that ‘you and that special person in your life have snapped one another within 24 hours for at least two days in a row.’19 Streaks are considered a proof of friendship or commitment to someone. So it’s heartbreaking when you lose a streak you’ve put months of work into. The feature all but destroys the accumulated social capital when users are offline for a few days. The Snap regime forces teenagers, the largest Snapchat user group, to use the app every single day, making an offline break virtually impossible.20 While relationships amongst teens are pretty much always in flux, with friendships being on the edge and always questioned, Snap-induced feelings sync with the rapidly changing teenage body, making puberty even more intense
  • The bare-all nature of social media causes rifts between lovers who would rather not have this information. But in the information age, this sits uneasily with the social pressure to participate in social networks.
  • dating apps like Tinder. These are described as time-killing machines – the reality game that overcomes boredom, or alternatively as social e-commerce – shopping my soul around. After many hours of swiping, suddenly there’s a rush of dopamine when someone likes you back. ‘The goal of the game is to have your egos boosted. If you swipe right and you match with a little celebration on the screen, sometimes that’s all that is needed.’ ‘We want to scoop up all our options immediately and then decide what we actually really want later.’25 On the other hand, ‘crippling social anxiety’ is when you match with somebody you are interested in, but you can’t bring yourself to send a message or respond to theirs ‘because oh god all I could think of was stupid responses or openers and she’ll think I’m an idiot and I am an idiot and…’
  • The metric to measure today’s symptoms would be time – or ‘attention’, as it is called in the industry. While for the archaic melancholic the past never passes, techno-sadness is caught in the perpetual now. Forward focused, we bet on acceleration and never mourn a lost object. The primary identification is there, in our hand. Everything is evident, on the screen, right in your face. Contrasted with the rich historical sources on melancholia, our present condition becomes immediately apparent. Whereas melancholy in the past was defined by separation from others, reduced contacts and reflection on oneself, today’s tristesse plays itself out amidst busy social (media) interactions. In Sherry Turkle’s phrase, we are alone together, as part of the crowd – a form of loneliness that is particularly cruel, frantic and tiring.
  • What we see today are systems that constantly disrupt the timeless aspect of melancholy.31 There’s no time for contemplation, or Weltschmerz. Social reality does not allow us to retreat.32 Even in our deepest state of solitude we’re surrounded by (online) others that babble on and on, demanding our attention
  • distraction does not pull us away, but instead draws us back into the social
  • The purpose of sadness by design is, as Paul B. Preciado calls it, ‘the production of frustrating satisfaction’.39 Should we have an opinion about internet-induced sadness? How can we address this topic without looking down on the online billions, without resorting to fast-food comparisons or patronizingly viewing people as fragile beings that need to be liberated and taken care of?
  • We overcome sadness not through happiness, but rather, as media theorist Andrew Culp has insisted, through a hatred of this world. Sadness occurs in situations where stagnant ‘becoming’ has turned into a blatant lie. We suffer, and there’s no form of absurdism that can offer an escape. Public access to a 21st-century version of dadaism has been blocked. The absence of surrealism hurts. What could our social fantasies look like? Are legal constructs such as creative commons and cooperatives all we can come up with? It seems we’re trapped in smoothness, skimming a surface littered with impressions and notifications. The collective imaginary is on hold. What’s worse, this banality itself is seamless, offering no indicators of its dangers and distortions. As a result, we’ve become subdued. Has the possibility of myth become technologically impossible?
  • We can neither return to mysticism nor to positivism. The naive act of communication is lost – and this is why we cry
Ed Webb

The Digital Maginot Line

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • ...17 more annotations...
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame...

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • ...9 more annotations...
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.