
Dystopias
Ed Webb

Sad by design | Eurozine - 0 views

  • ‘technological sadness’ – the default mental state of the online billions
  • If only my phone could gently weep. McLuhan’s ‘extensions of man’ has imploded right into the exhausted self.
  • Social reality is a corporate hybrid between handheld media and the psychic structure of the user. It’s a distributed form of social ranking that can no longer be reduced to the interests of state and corporate platforms. As online subjects, we too are complicit, far too deeply involved
  • ...20 more annotations...
  • Google and Facebook know how to utilize negative emotions, leading to the new system-wide goal: find personalized ways to make you feel bad
  • in Adam Greenfield’s Radical Technologies, where he notices that ‘it seems strange to assert that anything as broad as a class of technologies might have an emotional tenor, but the internet of things does. That tenor is sadness… a melancholy that rolls off it in waves and sheets. The entire pretext on which it depends is a milieu of continuously shattered attention, of overloaded awareness, and of gaps between people just barely annealed with sensors, APIs and scripts.’ It is a life ‘savaged by bullshit jobs, over-cranked schedules and long commutes, of intimacy stifled by exhaustion and the incapacity or unwillingness to be emotionally present.’
  • Omnipresent social media places a claim on our elapsed time, our fractured lives. We’re all sad in our very own way.4 As there are no lulls or quiet moments anymore, the result is fatigue, depletion and loss of energy. We’re becoming obsessed with waiting. How long have you been forgotten by your loved ones? Time, meticulously measured on every app, tells us right to our face. Chronos hurts. Should I post something to attract attention and show I’m still here? Nobody likes me anymore. As the random messages keep relentlessly piling in, there’s no way to halt them, to take a moment and think it all through.
  • Unlike the blog entries of the Web 2.0 era, social media have surpassed the summary stage of the diary in a desperate attempt to keep up with the real-time regime. Instagram Stories, for example, bring back the nostalgia of an unfolding chain of events – and then disappear at the end of the day, like a revenge act, a satire of ancient sentiments gone by. Storage will make the pain permanent. Better forget about it and move on
  • By browsing through updates, we’re catching up with machine time – at least until we collapse under the weight of participation fatigue. Organic life cycles are short-circuited and accelerated up to a point where the personal life of billions has finally caught up with cybernetics
  • The price of self-control in an age of instant gratification is high. We long to revolt against the restless zombie inside us, but we don’t know how.
  • Sadness arises at the point we’re exhausted by the online world.6 After yet another app session in which we failed to make a date, purchased a ticket and did a quick round of videos, the post-dopamine mood hits us hard. The sheer busyness and self-importance of the world makes you feel joyless. After a dive into the network we’re drained and feel socially awkward. The swiping finger is tired and we have to stop.
  • Much like boredom, sadness is not a medical condition (though never say never because everything can be turned into one). No matter how brief and mild, sadness is the default mental state of the online billions. Its original intensity gets dissipated, it seeps out, becoming a general atmosphere, a chronic background condition. Occasionally – for a brief moment – we feel the loss. A seething rage emerges. After checking for the tenth time what someone said on Instagram, the pain of the social makes us feel miserable, and we put the phone away. Am I suffering from the phantom vibration syndrome? Wouldn’t it be nice if we were offline? Why is life so tragic? He blocked me. At night, you read through the thread again. Do we need to quit again, to go cold turkey again? Others are supposed to move us, to arouse us, and yet we don’t feel anything anymore. The heart is frozen
  • If experience is the ‘habit of creating isolated moments within raw occurrence in order to save and recount them,’11 the desire to anaesthetize experience is a kind of immune response against ‘the stimulations of another modern novelty, the total aesthetic environment’.
  • unlike burn-out, sadness is a continuous state of mind. Sadness pops up the second events start to fade away – and now you’re down the rabbit hole once more. The perpetual now can no longer be captured and leaves us isolated, a scattered set of online subjects. What happens when the soul is caught in the permanent present? Is this what Franco Berardi calls the ‘slow cancellation of the future’? By scrolling, swiping and flipping, we hungry ghosts try to fill the existential emptiness, frantically searching for a determining sign – and failing
  • Millennials, as one recently explained to me, have grown up talking more openly about their state of mind. As work/life distinctions disappear, subjectivity becomes their core content. Confessions and opinions are externalized instantly. Individuation is no longer confined to the diary or small group of friends, but is shared out there, exposed for all to see.
  • Snapstreaks, the ‘best friends’ fire emoji next to a friend’s name indicating that ‘you and that special person in your life have snapped one another within 24 hours for at least two days in a row.’19 Streaks are considered a proof of friendship or commitment to someone. So it’s heartbreaking when you lose a streak you’ve put months of work into. The feature all but destroys the accumulated social capital when users are offline for a few days. The Snap regime forces teenagers, the largest Snapchat user group, to use the app every single day, making an offline break virtually impossible.20 While relationships amongst teens are pretty much always in flux, with friendships being on the edge and always questioned, Snap-induced feelings sync with the rapidly changing teenage body, making puberty even more intense
  • The bare-all nature of social media causes rifts between lovers who would rather not have this information. But in the information age, this sits uneasily with the social pressure to participate in social networks.
  • dating apps like Tinder. These are described as time-killing machines – the reality game that overcomes boredom, or alternatively as social e-commerce – shopping my soul around. After many hours of swiping, suddenly there’s a rush of dopamine when someone likes you back. ‘The goal of the game is to have your egos boosted. If you swipe right and you match with a little celebration on the screen, sometimes that’s all that is needed.’ ‘We want to scoop up all our options immediately and then decide what we actually really want later.’25 On the other hand, ‘crippling social anxiety’ is when you match with somebody you are interested in, but you can’t bring yourself to send a message or respond to theirs ‘because oh god all I could think of was stupid responses or openers and she’ll think I’m an idiot and I am an idiot and…’
  • The metric to measure today’s symptoms would be time – or ‘attention’, as it is called in the industry. While for the archaic melancholic the past never passes, techno-sadness is caught in the perpetual now. Forward focused, we bet on acceleration and never mourn a lost object. The primary identification is there, in our hand. Everything is evident, on the screen, right in your face. Contrasted with the rich historical sources on melancholia, our present condition becomes immediately apparent. Whereas melancholy in the past was defined by separation from others, reduced contacts and reflection on oneself, today’s tristesse plays itself out amidst busy social (media) interactions. In Sherry Turkle’s phrase, we are alone together, as part of the crowd – a form of loneliness that is particularly cruel, frantic and tiring.
  • What we see today are systems that constantly disrupt the timeless aspect of melancholy.31 There’s no time for contemplation, or Weltschmerz. Social reality does not allow us to retreat.32 Even in our deepest state of solitude we’re surrounded by (online) others that babble on and on, demanding our attention
  • distraction does not pull us away, but instead draws us back into the social
  • The purpose of sadness by design is, as Paul B. Preciado calls it, ‘the production of frustrating satisfaction’.39 Should we have an opinion about internet-induced sadness? How can we address this topic without looking down on the online billions, without resorting to fast-food comparisons or patronizingly viewing people as fragile beings that need to be liberated and taken care of?
  • We overcome sadness not through happiness, but rather, as media theorist Andrew Culp has insisted, through a hatred of this world. Sadness occurs in situations where stagnant ‘becoming’ has turned into a blatant lie. We suffer, and there’s no form of absurdism that can offer an escape. Public access to a 21st-century version of dadaism has been blocked. The absence of surrealism hurts. What could our social fantasies look like? Are legal constructs such as creative commons and cooperatives all we can come up with? It seems we’re trapped in smoothness, skimming a surface littered with impressions and notifications. The collective imaginary is on hold. What’s worse, this banality itself is seamless, offering no indicators of its dangers and distortions. As a result, we’ve become subdued. Has the possibility of myth become technologically impossible?
  • We can neither return to mysticism nor to positivism. The naive act of communication is lost – and this is why we cry
Ed Webb

Could fully automated luxury communism ever work? - 0 views

  • Having achieved a seamless, pervasive commodification of online sociality, Big Tech companies have turned their attention to infrastructure. Attempts by Google, Amazon and Facebook to achieve market leadership, in everything from AI to space exploration, risk a future defined by the battle for corporate monopoly.
  • The technologies are coming. They’re already here in certain instances. It’s the politics that surrounds them. We have alternatives: we can have public ownership of data in the citizen’s interest or it could be used as it is in China where you have a synthesis of corporate and state power
  • the two alternatives that big data allows are an all-consuming surveillance state where you have a deep synthesis of capitalism with authoritarian control, or a reinvigorated welfare state where more and more things are available to everyone for free or at very low cost
  • ...4 more annotations...
  • we can’t begin those discussions until we say, as a society, we want to at least try subordinating these potentials to the democratic project, rather than allow capitalism to do what it wants
  • I say in FALC that this isn’t a blueprint for utopia. All I’m saying is that there is a possibility for the end of scarcity, the end of work, a coming together of leisure and labour, physical and mental work. What do we want to do with it? It’s perfectly possible something different could emerge where you have this aggressive form of social value.
  • I think the thing that’s been beaten out of everyone since 2010 is one of the prevailing tenets of neoliberalism: work hard, you can be whatever you want to be, that you’ll get a job, be well paid and enjoy yourself.  In 2010, that disappeared overnight, the rules of the game changed. For the status quo to continue to administer itself,  it had to change common sense. You see this with Jordan Peterson; he’s saying you have to know your place and that’s what will make you happy. To me that’s the only future for conservative thought, how else do you mediate the inequality and unhappiness?
  • I don’t think we can rapidly decarbonise our economies without working people understanding that it’s in their self-interest. A green economy means better quality of life. It means more work. Luxury populism feeds not only into the green transition, but the rollout of Universal Basic Services and even further.
Ed Webb

Scientific blinders: Learning from the moral failings of Nazi physicists - Bulletin of ... - 0 views

  • As the evening progressed, more and more questions concerning justice and ethics occurred to the physicists: Are atomic weapons inherently inhumane, and should they never be used? If the Germans had come to possess such weapons, what would be the world’s fate? What constitutes real patriotism in Nazi Germany—working for the regime’s success, or its defeat? The scientists expressed surprise and bafflement at their colleagues’ opinions, and their own views sometimes evolved from one moment to the next. The scattered, changing opinions captured in the Farm Hall transcripts highlight that, in their five years on the Nazi nuclear program, the German physicists had likely failed to wrestle meaningfully with these critical questions.
  • looking back at the Uranium Club serves to remind us scientists of how easy it is to focus on technical matters and avoid considering moral ones. This is especially true when the moral issues are perplexing, when any negative impacts seem distant, and when the science is exciting.
  • engineers who develop tracking or facial-recognition systems may be creating tools that can be purchased by repressive regimes intent on spying on and suppressing dissent. Accordingly, those researchers have a certain obligation to consider their role and the impact of their work.
  • ...2 more annotations...
  • reflecting seriously on the societal context of a research position may prompt a scientist to accept the job—and to take it upon herself or himself to help restrain unthinking innovation at work, by raising questions about whether every feature that can be added should in fact be implemented. (The same goes for whether certain lines of research should be pursued and particular findings published.)
  • The challenge for each of us, moving forward, is to ask ourselves and one another, hopefully far earlier in the research process than did Germany’s Walther Gerlach: “What are we working for?”
  • If you get the opportunity, see, or at least read, the plays The Physicists (Die Physiker) by Friedrich Dürrenmatt and Copenhagen by Michael Frayn.
Ed Webb

If billionaires take up geoengineering on their own | Bryan Alexander - 0 views

  • I watch the majority of Americans acclimating to what looks ever more clearly like plutocracy, some quite enthusiastically.  I track the global failure to take climate change seriously.
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • ...17 more annotations...
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Review: Google Chrome has become surveillance software. It's time to switch. - Silicon ... - 0 views

  • There are ways to defang Chrome, which is much more complicated than just using “Incognito Mode.” But it’s much easier to switch to a browser not owned by an advertising company.
Ed Webb

America's Forgotten Mass Imprisonment of Women Believed to Be Sexually Immoral - HISTORY - 0 views

  • the “American Plan.” From the 1910s through the 1950s, and in some places into the 1960s and 1970s, tens of thousands—perhaps hundreds of thousands—of American women were detained and forcibly examined for STIs. The program was modeled after similar ones in Europe, under which authorities stalked “suspicious” women, arresting, testing and imprisoning them.
  • If the women tested positive, U.S. officials locked them away in penal institutions with no due process. While many records of the program have since been lost or destroyed, women’s forced internment could range from a few days to many months. Inside these institutions, records show, the women were often injected with mercury and forced to ingest arsenic-based drugs, the most common treatments for syphilis in the early part of the century. If they misbehaved, or if they failed to show “proper” ladylike deference, these women could be beaten, doused with cold water, thrown into solitary confinement—or even sterilized.
  • beginning in 1918, federal officials began pushing every state in the nation to pass a “model law,” which enabled officials to forcibly examine any person “reasonably suspected” of having an STI. Under this statute, those who tested positive for an STI could be held in detention for as long as it took to render him or her noninfectious. (On paper, the law was gender-neutral; in practice, it almost exclusively focused on regulating women and their bodies.)
  • ...4 more annotations...
  • In all, the morals squad arrested 22 women on February 25, all for the crime of suspicion of STIs. But because Margaret Hennessey alone of these women gave a statement to the newspapers, it is her story that exemplifies the rest. The STI examinations showed that neither Hennessey nor Bradich had an STI, and officers released them at about 8:00 pm, with orders to appear for court the next morning. At 9:30 a.m., Hennessey stormed into court—ready, she declared to the Sacramento Bee, to “defend myself,” but “I would have no chance.” She was informed the charges had been dismissed. Nonetheless, the arrest left a mark. “I dare not venture on the streets,” she told the Bee later that day, “for fear I will be arrested again.”
  • Nearly every person examined and locked up under these laws was a woman. And the vague standard of “reasonable suspicion” enabled officials to pretty much detain any woman they wanted. Records exist in archives that document women being detained and examined for sitting at a restaurant alone; for changing jobs; for being with a man; for walking down a street in a way a male official found suspicious; and, often, for no reason at all.
  • Many women were also detained if they refused to have sex with police or health officers, contemporaneous exposés reveal. In the late 1940s, San Francisco police officers sometimes threatened to have women “vagged”—vaginally examined—if they didn’t accede to sexual demands. Women of color and immigrant women, in particular, were targeted—and subjected to a higher degree of abuse once they were locked up.
  • the American Plan laws—the ones passed in the late 1910s, enabling officials to examine people merely “reasonably suspected” of having STIs—are still on the books, in some form, in every state in the nation. Some of these laws have been altered or amended, and some have been absorbed into broader public-health statutes, but each state still has the power to examine “reasonably suspected” people and isolate the infected ones, if health officials deem such isolation necessary.
Ed Webb

On coming out as transgender in Donald Trump's America - Vox - 0 views

  • I used to think I wanted to see June and the other women on the show persevere in the face of suffering, because on some level, I believed that to embrace my own womanhood was to embrace suffering. Now I realize that I do want to see Gilead burn. I don’t want suffering anymore. I want catharsis. And not just for me.
  • As soon as I came out, an entire lifetime of unrealistic expectations for women’s beauty came crashing down on my head
  • What I believed for too long, and what you might believe too, is that your body is not a gift but an obligation. That it is not who you are but a series of tasks assigned to you by the accident of your birth. This is not true. The best obligations — the only real obligations — are chosen. Your life is your life. It is worth fighting for.
  • ...1 more annotation...
  • I recognize this scene from a lifetime spent among men who are angry and women who know precisely how to handle that anger. The men get to feel things, sometimes clumsily, sometimes eloquently. But the women are so often defined not by who they are but by what they have been asked to handle
Ed Webb

How ethical is it for advertisers to target your mood? | Emily Bell | Opinion | The Gua... - 0 views

  • The effectiveness of psychographic targeting is one bet being made by an increasing number of media companies when it comes to interrupting your viewing experience with advertising messages.
  • “Across the board, articles that were in top emotional categories, such as love, sadness and fear, performed significantly better than articles that were not.”
  • ESPN and USA Today are also using psychographic rather than demographic targeting to sell to advertisers, including, in ESPN’s case, the decision not to show you advertising at all if your team is losing.
  • ...9 more annotations...
  • Media companies using this technology claim it is now possible for the “mood” of the reader or viewer to be tracked in real time and the content of the advertising to be changed accordingly
  • ads targeted at readers based on their predicted moods rather than their previous behaviour improved the click-through rate by 40%.
  • Given that the average click-through rate (the number of times anyone actually clicks on an ad) is about 0.4%, this number (in gross terms) is probably less impressive than it sounds. (A quick sketch of the arithmetic follows at the end of this list.)
  • Cambridge Analytica, the company that misused Facebook data and, according to its own claims, helped Donald Trump win the 2016 election, used psychographic segmentation.
  • For many years “contextual” ads served by not very intelligent algorithms were the bane of digital editors’ lives. Improvements in machine learning should help eradicate the horrible business of showing insurance advertising to readers in the middle of an article about a devastating fire.
  • The words “brand safety” are increasingly used by publishers when demonstrating products such as Project Feels. It is a way publishers can compete on micro-targeting with platforms such as Facebook and YouTube by pointing out that their targeting will not land you next to a conspiracy theory video about the dangers of chemtrails.
  • the exploitation of psychographics is not limited to the responsible and transparent scientists at the NYT. While publishers were showing these shiny new tools to advertisers, Amazon was advertising for a managing editor for its surveillance doorbell, Ring, which contacts your device when someone is at your door. An editor for a doorbell, how is that going to work? In all kinds of perplexing ways according to the ad. It’s “an exciting new opportunity within Ring to manage a team of news editors who deliver breaking crime news alerts to our neighbours. This position is best suited for a candidate with experience and passion for journalism, crime reporting, and people management.” So, instead of thinking about crime articles inspiring fear and advertising doorbells in the middle of them, what if you took the fear that the surveillance-device-cum-doorbell inspires and layered a crime reporting newsroom on top of it to make sure the fear is properly engaging?
  • The media has arguably already played an outsized role in making sure that people are irrationally scared, and now that practice is being strapped to the considerably more powerful engine of an Amazon product.
  • This will not be the last surveillance-based newsroom we see. Almost any product that produces large data feeds can also produce its own “news”. Imagine the Fitbit newsroom or the managing editor for traffic reports from dashboard cams – anything that has a live data feed emanating from it, in the age of the Internet of Things, can produce news.
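A minimal sketch of the arithmetic behind the click-through-rate caveat above, using only the figures quoted in these annotations (a roughly 0.4% baseline CTR and a 40% relative lift); the variable names are illustrative:

```python
# Back-of-the-envelope check on the mood-targeting claim quoted above.
# Both figures come from the annotations; treat them as assumptions.
baseline_ctr = 0.004        # average click-through rate, about 0.4%
relative_lift = 0.40        # reported 40% improvement from mood-based targeting

improved_ctr = baseline_ctr * (1 + relative_lift)   # about 0.0056, i.e. ~0.56%
absolute_gain = improved_ctr - baseline_ctr         # about 0.0016, i.e. ~0.16 percentage points

print(f"improved CTR: {improved_ctr:.2%}, absolute gain: {absolute_gain:.2%}")
```

In other words, a 40% relative improvement on a 0.4% baseline still leaves well over 99% of impressions unclicked, which is the sense in which the gross number is less impressive than it sounds.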
Ed Webb

The future of stupid fears | Bryan Alexander - 0 views

  • Culture of Fear argues that media and political fear-mongering teaches consumers and voters to see problems in terms of stories about heroic individuals, rather than about social or political factors. The contexts get set aside, replaced with more relatable tales of villainous criminals and virtuous victims, which Glassner calls “neurologizing social problems” (217). There is also a curious, quietly conservative politics of the family involved. Such fears emphasize stranger danger, which is actually statistically very rare. Instead, they minimize the far more likely source of harm most Americans face: our family members (31).
  • fake fears reveal cultural anxieties, much as horror stories do
  • “news is what happens to your editors.”  By that he means “editors – and their bosses… [and] their families, friends, and business associates”(201)
  • ...5 more annotations...
  • Our politics clearly adore fear, notably from the Trump administration and its emphasis on immigrant-driven carnage.  Our news media continue to worship at the altar of “if it bleeds, it leads.”
  • that CNN is the opposite of a fringe news service.  Between Fox and MSNBC it occupies a neutral, middle ground.  It is, putatively, the sober center.  And it simply adores scaring the hell out of us
  • What does the likelihood of even more stupid fear-mongering mean for education?  It simply means, as I said years ago, we have to teach people to resist this stuff.  In our quest to teach digital literacy we should encourage students – of all ages – to avoid tv news, or to sample it judiciously, with great skepticism.  We should assist them in recognizing when politicians fire up fear campaigns based on poor facts.
  • politicians peddle terror because it often works
  • the negative impacts of such fear – the misdirection of resources, the creation of bad policy, the encouragement of mean world syndrome, the furtherance of racism – the promulgation of real damage
Ed Webb

A woman first wrote the prescient ideas Huxley and Orwell made famous - Quartzy - 1 views

  • In 1919, a British writer named Rose Macaulay published What Not, a novel about a dystopian future—a brave new world if you will—where people are ranked by intelligence, the government mandates mind training for all citizens, and procreation is regulated by the state. You’ve probably never heard of Macaulay or What Not. However, Aldous Huxley, author of the science fiction classic Brave New World, hung out in the same London literary circles as her and his 1932 book contains many concepts that Macaulay first introduced in her work. In 2019, you’ll be able to read Macaulay’s book yourself and compare the texts as the British publisher Handheld Press is planning to re-release the forgotten novel in March. It’s been out of print since the year it was first released.
  • The resurfacing of What Not also makes this a prime time to consider another work that influenced Huxley’s Brave New World, the 1923 novel We by Yevgeny Zamyatin. What Not and We are lost classics about a future that foreshadows our present. Notably, they are also hidden influences on some of the most significant works of 20th century fiction, Brave New World and George Orwell’s 1984.
  • In Macaulay’s book—which is a hoot and well worth reading—a democratically elected British government has been replaced with a “United Council, five minds with but a single thought—if that,” as she put it. Huxley’s Brave New World is run by a similarly small group of elites known as “World Controllers.”
  • ...12 more annotations...
  • citizens of What Not are ranked based on their intelligence from A to C3 and can’t marry or procreate with someone of the same rank to ensure that intelligence is evenly distributed
  • Brave New World is more futuristic and preoccupied with technology than What Not. In Huxley’s world, procreation and education have become completely mechanized and emotions are strictly regulated pharmaceutically. Macaulay’s Britain is just the beginning of this process, and its characters are not yet completely indoctrinated into the new ways of the state—they resist it intellectually and question its endeavors, like the newly-passed Mental Progress Act. She writes: ‘He did not like all this interfering, socialist what-not, which was both upsetting the domestic arrangements of his tenants and trying to put into their heads more learning than was suitable for them to have. For his part he thought every man had a right to be a fool if he chose, yes, and to marry another fool, and to bring up a family of fools too.’
  • Where Huxley pairs dumb but pretty and “pneumatic” ladies with intelligent gentlemen, Macaulay’s work is decidedly less sexist.
  • We was published in French, Dutch, and German. An English version was printed and sold only in the US. When Orwell wrote about We in 1946, it was only because he’d managed to borrow a hard-to-find French translation.
  • While Orwell never indicated that he read Macaulay, he shares her subversive and subtle linguistic skills and satirical sense. His protagonist, Winston—like Kitty—works for the government in its Ministry of Truth, or Minitrue in Newspeak, where he rewrites historical records to support whatever Big Brother currently says is good for the regime. Macaulay would no doubt have approved of Orwell’s wit. And his state ministries bear a striking similarity to those she wrote about in What Not.
  • Orwell was familiar with Huxley’s novel and gave it much thought before writing his own blockbuster. Indeed, in 1946, before the release of 1984, he wrote a review of Zamyatin’s We (pdf), comparing the Russian novel with Huxley’s book. Orwell declared Huxley’s text derivative, writing in his review of We in The Tribune: ‘The first thing anyone would notice about We is the fact—never pointed out, I believe—that Aldous Huxley’s Brave New World must be partly derived from it. Both books deal with the rebellion of the primitive human spirit against a rationalised, mechanized, painless world, and both stories are supposed to take place about six hundred years hence. The atmosphere of the two books is similar, and it is roughly speaking the same kind of society that is being described, though Huxley’s book shows less political awareness and is more influenced by recent biological and psychological theories.’
  • In We, the story is told by D-503, a male engineer, while in Brave New World we follow Bernard Marx, a protagonist with a proper name. Both characters live in artificial worlds, separated from nature, and they recoil when they first encounter people who exist outside of the state’s constructed and controlled cities.
  • Although We is barely known compared to Orwell and Huxley’s later works, I’d argue that it’s among the best literary science fictions of all time, and it’s highly relevant, as it was when first written. Noam Chomsky calls it “more perceptive” than both 1984 and Brave New World. Zamyatin’s futuristic society was so on point, he was exiled from the Soviet Union because it was such an accurate description of life in a totalitarian regime, though he wrote it before Stalin took power.
  • Macaulay’s work is more subtle and funny than Huxley’s. Despite being a century old, What Not is remarkably relevant and readable, a satire that only highlights how little has changed in the years since its publication and how dangerous and absurd state policies can be. In this sense then, What Not reads more like George Orwell’s 1949 novel 1984 
  • Orwell was critical of Zamyatin’s technique. “[We] has a rather weak and episodic plot which is too complex to summarize,” he wrote. Still, he admired the work as a whole. “[Its] intuitive grasp of the irrational side of totalitarianism—human sacrifice, cruelty as an end in itself, the worship of a Leader who is credited with divine attributes—[…] makes Zamyatin’s book superior to Huxley’s,”
  • Like our own tech magnates and nations, the United State of We is obsessed with going to space.
  • Perhaps in 2019 Macaulay’s What Not, a clever and subversive book, will finally get its overdue recognition.
Ed Webb

Sci-Fi Author J.G. Ballard Predicts the Rise of Social Media (1977) | Open Culture - 0 views

  • Ballard was a brilliant futurist and his dystopian novels and short stories anticipated the 80s cyberpunk of William Gibson, exploring with a twisted sense of humor what Jean-François Lyotard famously dubbed in 1979 The Postmodern Condition: a state of ideological, scientific, personal, and social disintegration under the reign of a technocratic, hypercapitalist, “computerized society.” Ballard had his own term for it: “media landscape,” and his dark visions of the future often correspond to the virtual world we inhabit today.
  • Ballard made several disturbingly accurate predictions in interviews he gave over the decades (collected in a book titled Extreme Metaphors)
  • he gave an interview to I-D magazine in which he predicted the internet as “invisible streams of data pulsing down lines to produce an invisible loom of world commerce and information.” This may not seem especially prescient (see, for example, E.M. Forster’s 1909 “The Machine Stops” for a chilling futuristic scenario much further ahead of its time). But Ballard went on to describe in detail the rise of the Youtube celebrity: Every home will be transformed into its own TV studio. We'll all be simultaneously actor, director and screenwriter in our own soap opera. People will start screening themselves. They will become their own TV programmes.
  • ...4 more annotations...
  • ten years earlier, in an essay for Vogue, he described in detail the spread of social media and its totalizing effects on our lives. In the technological future, he wrote, “each of us will be both star and supporting player.” Every one of our actions during the day, across the entire spectrum of domestic life, will be instantly recorded on video-tape. In the evening we will sit back to scan the rushes, selected by a computer trained to pick out only our best profiles, our wittiest dialogue, our most affecting expressions filmed through the kindest filters, and then stitch these together into a heightened re-enactment of the day. Regardless of our place in the family pecking order, each of us within the privacy of our own rooms will be the star in a continually unfolding domestic saga, with parents, husbands, wives and children demoted to an appropriate supporting role.
  • this description almost perfectly captures the behavior of the average user of Facebook, Instagram, etc.
  • Ballard wrote a 1977 short story called “The Intensive Care Unit,” in which—writes the site Ballardian—“ordinances are in place to prevent people from meeting in person. All interaction is mediated through personal cameras and TV screens.”
  • “Now everybody can document themselves in a way that was inconceivable 30, 40, 50 years ago,” Ballard notes, “I think this reflects a tremendous hunger among people for ‘reality’—for ordinary reality. It’s very difficult to find the ‘real,’ because the environment is totally manufactured.” Like Jean Baudrillard, another prescient theorist of postmodernity, Ballard saw this loss of the "real" coming many decades ago. As he told I-D in 1987, “in the media landscape it’s almost impossible to separate fact from fiction.”
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    Dependence lock-in: Reduction of individuals’ cognitive, social and survival skills. Many see AI as augmenting human capacities but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • ...18 more annotations...
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

Where is the boundary between your phone and your mind? | US news | The Guardian - 1 views

  • Here’s a thought experiment: where do you end? Not your body, but you, the nebulous identity you think of as your “self”. Does it end at the limits of your physical form? Or does it include your voice, which can now be heard as far as outer space; your personal and behavioral data, which is spread out across the impossibly broad plane known as digital space; and your active online personas, which probably encompass dozens of different social media networks, text message conversations, and email exchanges? This is a question with no clear answer, and, as the smartphone grows ever more essential to our daily lives, that border’s only getting blurrier.
  • our minds have become even more radically extended than ever before
  • one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: we not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn’t suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is
  • ...10 more annotations...
  • American consumers spent five hours per day on their mobile devices, and showed a dizzying 69% year-over-year increase in time spent in apps like Facebook, Twitter, and YouTube. The prevalence of apps represents a concrete example of the movement away from the old notion of accessing the Internet through a browser and the new reality of the connected world and its myriad elements – news, social media, entertainment – being with us all the time
  • “In the 90s and even through the early 2000s, for many people, there was this way of thinking about cyberspace as a space that was somewhere else: it was in your computer. You went to your desktop to get there,” Weigel says. “One of the biggest shifts that’s happened and that will continue to happen is the undoing of a border that we used to perceive between the virtual and the physical world.”
  • While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: it’s a means for others to access us
  • Weigel sees the unfettered access to our data, through our smartphone and browser use, of what she calls the big five tech companies – Apple, Alphabet (the parent company of Google), Microsoft, Facebook, and Amazon – as a legitimate problem for notions of democracy
  • an unfathomable amount of wealth, power, and direct influence on the consumer in the hands of just a few individuals – individuals who can affect billions of lives with a tweak in the code of their products
  • “This is where the fundamental democracy deficit comes from: you have this incredibly concentrated private power with zero transparency or democratic oversight or accountability, and then they have this unprecedented wealth of data about their users to work with,”
  • the rhetoric around the Internet was that the crowd would prevent the spread of misinformation, filtering it out like a great big hive mind; it would also help to prevent the spread of things like hate speech. Obviously, this has not been the case, and even the relatively successful experiments in this, such as Wikipedia, have a great deal of human governance that allows them to function properly
  • We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others. Essentially, we need to understand the fundamental difference between our behavior IRL and in the digital sphere – a difference that, despite the erosion of boundaries, still stands
  • “Whether we know it or not, the connections that we make on the Internet are being used to cultivate an identity for us – an identity that is then sold to us afterward,” Lynch says. “Google tells you what questions to ask, and then it gives you the answers to those questions.”
  • It isn’t enough that the apps in our phone flatten all of the different categories of relationships we have into one broad group: friends, followers, connections. They go one step further than that. “You’re being told who you are all the time by Facebook and social media because which posts are coming up from your friends are due to an algorithm that is trying to get you to pay more attention to Facebook,” Lynch says. “That’s affecting our identity, because it affects who you think your friends are, because they’re the ones who are popping up higher on your feed.”
Ed Webb

We Are Drowning in a Devolved World: An Open Letter from Devo - Noisey - 0 views

  • When Devo formed more than 40 years ago, we never dreamed that two decades into the 21st century, everything we had theorized would not only be proven, but also become worse than we had imagined
  • May 4 changed my life, and I truly believe Devo would not exist without that horror. It made me realize that all the Quasar color TVs, Swanson TV dinners, Corvettes, and sofa beds in the world didn't mean we were actually making progress. It meant the future could be not only as barbaric as the past, but that it most likely would be. The dystopian novels 1984, Animal Farm, and Brave New World suddenly seemed less like cautionary tales about the encroaching fusion of technological advances with the centralized, authoritarian power of the state, and more like subversive road maps to condition the intelligentsia for what was to come.
  • a philosophy emerged, fueled by the revelations that linear progress in a consumer society was a lie
  • ...8 more annotations...
  • There were no flying cars and domed cities, as promised in Popular Science; rather, there was a dumbing down of the population engineered by right-wing politicians, televangelists, and Madison Avenue. I called what we saw “De-evolution,” based upon the tendency toward entropy across all human endeavors. Borrowing the tactics of the Mad Men-era of our childhood, we shortened the name of the idea to the marketing-friendly “Devo.”
  • we witnessed an America where the capacity for critical thought and reasoning were eroding fast. People mindlessly repeating slogans from political propaganda and ad campaigns: “America, Love It or leave It”; “Don’t Ask Why, Drink Bud Dry”; “You’ve Come A Long Way, Baby”; even risk-free, feel-good slogans like “Give Peace a Chance.” Here was an emerging Corporate Feudal State
  • it seemed like the only real threat to consumer society at our disposal was meaning: turning sloganeering on its head for sarcastic or subversive means, and making people notice that they were being moved and manipulated by marketing, not by well-meaning friends disguised as mom-and-pop. And so creative subversion seemed the only viable course of action
  • Presently, the fabric that holds a society together has shredded in the wind. Everyone has their own facts, their own private Idaho stored in their expensive cellular phones
  • Social media provides the highway straight back to Plato’s Allegory of the Cave. The restless natives react to digital shadows on the wall, reduced to fear, hate, and superstition
  • The rise of authoritarian leadership around the globe, fed by ill-informed populism, is well-documented at this point. And with it, we see the ugly specter of increased racism and anti-Semitism. It’s open season on those who gladly vote against their own self-interests. The exponential increase in suffering for more and more of the population is heartbreaking to see. “Freedom of choice is what you got / Freedom from choice is what you want,” those Devo clowns said in 1980.
  • the hour is getting late. Perhaps the reason Devo was even nominated after 15 years of eligibility is because Western society seems locked in a death wish. Devo doesn’t skew so outside the box anymore. Maybe people are a bit nostalgic for our DIY originality and substance. We were the canaries in the coalmine warning our fans and foes of things to come in the guise of the Court Jester, examples of conformity in extremis in order to warn against conformity
  • Devo is merely the house band on the Titanic
Ed Webb

Do Not Pass Go | On the Media | WNYC Studios - 0 views

  • How the multies win
Ed Webb

Chimene Keitner on Twitter: "People burned alive or asphyxiated while fleeing in their ... - 0 views

  • People burned alive or asphyxiated while fleeing in their cars. Tent cities of displaced families whose homes are destroyed. Schools under "shelter in place" protocols due to unhealthy air. Pervasive use of respirator masks. Climate change dystopia isn't coming--it's here.