
Dystopias: Group items tagged propaganda


Ed Webb

Neurocinematics: When Neuroscience Meets Filmmaking | Material for thought - 0 views

  • it is very easy to predict what Hollywood studios will make of this research, whatever its initial purpose was. They will use it to optimize trailers and films so that they can generate the ‘optimum’ effect on the brains of the audience. They already do so, but through approximate methods, such as asking a test audience to explain what they experienced when watching a movie (a technique used for the Harry Potter films). Hollywood studios have always been obsessed with controlling their audience, for economic reasons or for American propaganda. It is very frightening to think of what they will do with this new technique.
  • The problem lies more in education and the audience’s capacity to remain critical
Ed Webb

Sinclair tells stations to air media-bashing promos - and the criticism goes viral - Ap... - 0 views

  • they're seeing these people they've trusted for decades tell them things they know are essentially propaganda
Ed Webb

Endtime for Hitler: On the Downfall of the Downfall Parodies - Mark Dery - Doom Patrol:... - 1 views

  • Endtime for Hitler: On the Downfall of the Downfall Parodies
  • Hitler left an inexhaustible fund of unforgettable images; Riefenstahl’s Triumph of the Will alone is enough to make him a household deity of the TV age.
  • The Third Reich was the first thoroughly modern totalitarian horror, scripted by Hitler and mass-marketed by Goebbels, a tour de force of media spectacle and opinion management that America’s hidden persuaders—admen, P.R. flacks, political campaign managers—studied assiduously.  A Mad Man in both senses, Hitler sold the German volk on a racially cleansed utopia, a thousand-year empire whose kitschy grandeur was strictly Forest Lawn Parthenon.
  • Hitler, unlike Stalin or Mao, was an intuitive master of media stagecraft. David Bowie’s too-clever quip that Hitler was the first rock star, for which Bowie was widely reviled at the time, was spot-on.
  • the media like Hitler because Hitler liked the media
  • Perhaps that’s why he continues to mesmerize us: because he flickers, irresolvably, between the seemingly inhuman and the all too human.
  • His psychopathology is a queasy funhouse reflection, straight out of Nightmare Alley, of the instrumental rationality of the machine age. The genocidal assembly lines of Hitler’s death camps are a grotesque parody of Fordist mechanization, just as the Nazis’ fastidious recycling of every remnant of their victims but their smoke—their gold fillings melted down for bullion, their hair woven into socks for U-boat crewmen—is a depraved caricature of the Taylorist mania for workplace efficiency.
  • there’s something perversely comforting about Hitler’s unchallenged status as the metaphysical gravitational center of all our attempts at philosophizing evil
  • he prefigured postmodernity: the annexation of politics by Hollywood and Madison Avenue, the rise of the celebrity as a secular icon, the confusion of image and reality in a Matrix world. He regarded existence “as a kind of permanent parade before a gigantic audience” (Fest), calculating the visual impact of every histrionic pose, every propaganda tagline, every monumental building
  • By denying everyone’s capability, at least in theory, for Hitlerian evil, we let ourselves off the hook
  • Yet Hitler, paradoxically, is also a shriveled untermensch, the prototypical nonentity; a face in the crowd in an age of crowds, instantly forgettable despite his calculated efforts to brand himself (the toothbrush mustache of the military man coupled with the flopping forelock of the art-school bohemian)
  • there was always a comic distance between the public image of the world-bestriding, godlike Führer and his Inner Adolf, a nail-biting nebbish tormented by flatulence. Knowingly or not, the Downfall parodies dance in the gap between the two. More immediately, they rely on the tried-and-true gimmick of bathos. What makes the Downfall parodies so consistently hilarious is the incongruity between whatever viral topic is making the Führer go ballistic and the outsized scale of his Götterdämmerung-strength tirade
  • The Downfall meme dramatizes the cultural logic of our remixed, mashed-up times, when digital technology allows us to loot recorded history, prying loose any signifier that catches our magpie eyes and repurposing it to any end. The near-instantaneous speed with which parodists use these viral videos to respond to current events underscores the extent to which the social Web, unlike the media ecologies of Hitler’s day, is a many-to-many phenomenon, more collective cacophony than one-way rant. As well, the furor (forgive pun) over YouTube’s decision to capitulate to the movie studio’s takedown demand, rather than standing fast in defense of Fair Use (a provision in copyright law that protects the re-use of a work for purposes of parody), indicates the extent to which ordinary people feel that commercial culture is somehow theirs, to misread or misuse as the spirit moves them.
  • the closest thing we have to a folk culture, the connective tissue that binds us as a society
  • SPIEGEL: Can you also get your revenge on him by using comedy? Brooks: Yes, absolutely. Of course it is impossible to take revenge for 6 million murdered Jews. But by using the medium of comedy, we can try to rob Hitler of his posthumous power and myths. [...] We take away from him the holy seriousness that always surrounded him and protected him like a cordon.”
  • risking the noose, some Germans laughed off their fears and mocked the Orwellian boot stamping on the human face, giving vent to covert opposition through flüsterwitze (“whispered jokes”). Incredibly, even Jews joked about their plight, drawing on the absurdist humor that is quintessentially Jewish to mock the Nazis even as they lightened the intolerable burden of Jewish life in the shadow of the swastika. Rapaport offers a sample of Jewish humor in Hitler’s Germany: “A Jew is arrested during the war, having been denounced for killing a Nazi at 10 P.M. and even eating the brain of his victim. This is his defense: In the first place, a Nazi hasn’t got any brain. Secondly, a Jew doesn’t eat anything that comes from a pig. And thirdly, he could not have killed the Nazi at 10 P.M. because at that time everybody listens to the BBC broadcast.”
  • Brilliant
Ed Webb

AFP: Beijing officials trained in social media: report - 2 views

  • Chinese web users frequently refer to the "50 cent army", rumoured to be a group of freelance propagandists who post pro-Communist Party entries on blogs and websites, posing as ordinary members of the public.
Ed Webb

The stakes of November: It doesn't matter that much | The Economist - 0 views

  • This is the great unspeakable fact of American politics: it doesn't matter all that much who wins.
  • Military suppliers, big Wall Street interests, and the economic middle-class may do better or worse, but they always do pretty well.
  • I think you'll find that political parties tend to reliably support policies that have nice distributional consequences for the interest groups that support them. And I think you'll find politicians and court intellectuals brilliant at framing pay-offs to party stalwarts as policies absolutely necessary to the common weal.
  • Democratic politics is to a great extent a war of coalitions over what the great political economist James M. Buchanan called "the fiscal commons". Think of government as a huge pool of money. Control of government means control over that pool of money. Parties gain control by putting together winning coalitions of interest groups. When a party has control, its coalition's interest groups get more from the pool and the losing coalition's interest groups get less. So, yeah, it matters who wins. When Democrats are in charge, that's great news for public-employees unions and General Electric's alternative energy division. When the Republicans are in charge, that's great news for rich people and Raytheon.
  • we shouldn't expect government with a moderate, centre-right House to look a lot different from the moderate, centre-left government we've got now.  
  • Nevertheless, people are going out of their minds stomping heads and warning of streets teeming with sexual predators because we are all phenomenal dupes willing to pick up the propaganda partisans put down. Our minds have been warped by relentless marketing designed to engender false consciousness of stark political brand contrasts. It's as if Crest is telling us that Colgate leads to socialism and Colgate is telling us that Crest leads to plutocracy and all of us believe half of it.
  • Spider Jerusalem might recognize this world.
Ed Webb

Anti-piracy tool will harvest and market your emotions - Computerworld Blogs - 0 views

  • After being awarded a grant, Aralia Systems teamed up with Machine Vision Lab in what seems like a massive invasion of your privacy beyond "in the name of security." Building on existing cinema anti-piracy technology, these companies plan to add the ability to harvest your emotions. This is the part where it seems that filmgoers should be eligible to charge movie theater owners. At the very least, shouldn't it result in a significantly discounted movie ticket?  Machine Vision Lab's Dr Abdul Farooq told PhysOrg, "We plan to build on the capabilities of current technology used in cinemas to detect criminals making pirate copies of films with video cameras. We want to devise instruments that will be capable of collecting data that can be used by cinemas to monitor audience reactions to films and adverts and also to gather data about attention and audience movement. ... It is envisaged that once the technology has been fine tuned it could be used by market researchers in all kinds of settings, including monitoring reactions to shop window displays."  
  • The 3D camera data will "capture the audience as a whole as a texture."
  • the technology will enable companies to cash in on your emotions and sell that personal information as marketing data
  • "Within the cinema industry this tool will feed powerful marketing data that will inform film directors, cinema advertisers and cinemas with useful data about what audiences enjoy and what adverts capture the most attention. By measuring emotion and movement film companies and cinema advertising agencies can learn so much from their audiences that will help to inform creativity and strategy.” 
  • They plan to fine-tune it to monitor our reactions to window displays and probably anywhere else the data can be used for surveillance and marketing.
  • Muslim women have got the right idea. Soon we'll all be wearing privacy tents.
  • In George Orwell's novel 1984, each home has a mandatory "telescreen," a large flat panel, something like a TV, but with the ability for the authorities to observe viewers in order to ensure they are watching all the required propaganda broadcasts and reacting with appropriate emotions. Problem viewers would be brought to the attention of the Thought Police. The telescreen, of course, could not be turned off. It is reassuring to know that our technology has finally caught up with Oceania's.
Ed Webb

We Are Drowning in a Devolved World: An Open Letter from Devo - Noisey - 0 views

  • When Devo formed more than 40 years ago, we never dreamed that two decades into the 21st century, everything we had theorized would not only be proven, but also become worse than we had imagined
  • May 4 changed my life, and I truly believe Devo would not exist without that horror. It made me realize that all the Quasar color TVs, Swanson TV dinners, Corvettes, and sofa beds in the world didn't mean we were actually making progress. It meant the future could be not only as barbaric as the past, but that it most likely would be. The dystopian novels 1984, Animal Farm, and Brave New World suddenly seemed less like cautionary tales about the encroaching fusion of technological advances with the centralized, authoritarian power of the state, and more like subversive road maps to condition the intelligentsia for what was to come.
  • a philosophy emerged, fueled by the revelations that linear progress in a consumer society was a lie
  • There were no flying cars and domed cities, as promised in Popular Science; rather, there was a dumbing down of the population engineered by right-wing politicians, televangelists, and Madison Avenue. I called what we saw “De-evolution,” based upon the tendency toward entropy across all human endeavors. Borrowing the tactics of the Mad Men-era of our childhood, we shortened the name of the idea to the marketing-friendly “Devo.”
  • we witnessed an America where the capacity for critical thought and reasoning were eroding fast. People mindlessly repeating slogans from political propaganda and ad campaigns: “America, Love It or leave It”; “Don’t Ask Why, Drink Bud Dry”; “You’ve Come A Long Way, Baby”; even risk-free, feel-good slogans like “Give Peace a Chance.” Here was an emerging Corporate Feudal State
  • it seemed like the only real threat to consumer society at our disposal was meaning: turning sloganeering on its head for sarcastic or subversive means, and making people notice that they were being moved and manipulated by marketing, not by well-meaning friends disguised as mom-and-pop. And so creative subversion seemed the only viable course of action
  • Presently, the fabric that holds a society together has shredded in the wind. Everyone has their own facts, their own private Idaho stored in their expensive cellular phones
  • Social media provides the highway straight back to Plato’s Allegory of the Cave. The restless natives react to digital shadows on the wall, reduced to fear, hate, and superstition
  • The rise of authoritarian leadership around the globe, fed by ill-informed populism, is well-documented at this point. And with it, we see the ugly specter of increased racism and anti-Semitism. It’s open season on those who gladly vote against their own self-interests. The exponential increase in suffering for more and more of the population is heartbreaking to see. “Freedom of choice is what you got / Freedom from choice is what you want,” those Devo clowns said in 1980.
  • the hour is getting late. Perhaps the reason Devo was even nominated after 15 years of eligibility is because Western society seems locked in a death wish. Devo doesn’t skew so outside the box anymore. Maybe people are a bit nostalgic for our DIY originality and substance. We were the canaries in the coalmine warning our fans and foes of things to come in the guise of the Court Jester, examples of conformity in extremis in order to warn against conformity
  • Devo is merely the house band on the Titanic
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals’ cognitive, social and survival skills. Many see AI as augmenting human capacities but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

How white male victimhood got monetised | The Independent - 0 views

  • I also learned a metric crap-tonne about how online communities of angry young nerd dudes function. Which is, to put it simply, around principles of pure toxicity. And now that toxicity has bled into wider society.
  • In a twist on the "1,000 true fans" principle worthy of Black Mirror, any alt-right demagogue who can gather 1,000 whining, bitter, angry men with zero self-awareness now has a self-sustaining full time job as an online sh*tposter.
  • Social media has been assailed by one toxic "movement" after another, from Gamergate to Incel terrorism. But the "leaders" of these movements, a ragtag band of demagogues, profiteers and charlatans, seem less interested in political change than in racking up Patreon backers.
  • Making a buck from the alt-right is quite simple. Get a blog or a YouTube channel. Then under the guise of political dialogue or pseudo-science, start spouting hate speech. You'll soon find followers flocking to your banner.
  • Publish a crappy ebook explaining why SJWs Always Lie. Or teach your followers how to “think like a silverback gorilla” (surely an arena where the far right already triumph?) via a pricey seminar. Launch a Kickstarter for a badly drawn comic packed with anti-diversity propaganda. They'll sell by the bucketload to followers eager to virtue-signal their membership in the rank and file of the alt-right
  • the seemingly bottomless reservoirs of white male victimhood
  • nowhere is there a better supply of the credulous than among the angry white men who flock to the far right. Embittered by their own life failures, the alt-right follower is eager to believe they have a genetically superior IQ and are simply the victim of a libtard conspiracy to keep them down
  • We're barely in the foothills of the mountains of madness that the internet and social media are unleashing into our political process. If you think petty demagogues like Jordan Peterson are good at milking cash from the crowd, you ain’t seen nothing yet. Because he was just the beginning – and his ideology of the white male victim is rapidly spiralling into something that even he can no longer control
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Narrative Napalm | Noah Kulwin - 0 views

  • there are books whose fusion of factual inaccuracy and moral sophistry is so total that they can only be written by Malcolm Gladwell
  • Malcolm Gladwell’s decades-long shtick has been to launder contrarian thought and corporate banalities through his positions as a staff writer at The New Yorker and author at Little, Brown and Company. These institutions’ disciplining effect on Gladwell’s prose, getting his rambling mind to conform to clipped sentences and staccato revelations, has belied his sly maliciousness and explosive vacuity: the two primary qualities of Gladwell’s oeuvre.
  • as is typical with Gladwell’s books and with many historical podcasts, interrogation of the actual historical record and the genuine moral dilemmas it poses—not the low-stakes bait that he trots out as an MBA case study in War—is subordinated to fluffy bullshit and biographical color
  • by taking up military history, Gladwell’s half-witted didacticism threatens to convince millions of people that the only solution to American butchery is to continue shelling out for sharper and larger knives
  • Although the phrase “Bomber Mafia” traditionally refers to the pre-World War II staff and graduates of the Air Corps Tactical School, Gladwell’s book expands the term to include both kooky tinkerers and buttoned-down military men. Wild, far-seeing mavericks, they understood that the possibilities of air power had only just been breached. They were also, as Gladwell insists at various points, typical Gladwellian protagonists: secluded oddballs whose technical zealotry and shared mission gave them a sense of community that propelled them beyond any station they could have achieved on their own.
  • Gladwell’s narrative is transmitted as seamlessly as the Wall Street or Silicon Valley koans that appear atop LinkedIn profiles, Clubhouse accounts, and Substack missives.
  • Gladwell has built a career out of making banality seem fresh
  • Drawing a false distinction between the Bomber Mafia and the British and American military leaders who preceded them allows Gladwell to make the case that a few committed brainiacs developed a humane, “tactical” kind of air power that has built the security of the world we live in today.
  • By now, the press cycle for every Gladwell book release is familiar: experts and critics identify logical flaws and factual errors, they are ignored, Gladwell sells a zillion books, and the world gets indisputably dumber for it.
  • “What actually happened?” Gladwell asks of the Blitz. “Not that much! The panic never came,” he answers, before favorably referring to an unnamed “British government film from 1940,” which is in actuality the Academy Award-nominated propaganda short London Can Take It!, now understood to be emblematic of how the myth of the stoic Brit was manufactured.
  • Gladwell goes to great pains to portray Curtis “Bombs Away” LeMay as merely George Patton-like: a prima donna tactician with some masculinity issues. In reality, LeMay bears a closer resemblance to another iconic George C. Scott performance, one that LeMay directly inspired: Dr. Strangelove’s General Buck Turgidson, who at every turn attempts to force World War III and, at the movie’s close, when global annihilation awaits, soberly warns of a “mineshaft gap” between the United States and the Commies. That, as Gladwell might phrase it, was the “real” Curtis LeMay: a violent reactionary who was never killed or tried because he had the luck to wear the brass of the correct country on his uniform. “I suppose if I had lost the war, I would have been tried as a war criminal,” LeMay once told an Air Force cadet. “Fortunately, we were on the winning side.”
  • Why would Malcolm Gladwell, who seems to admire LeMay so much, talk at such great length about the lethality of LeMay’s Japanese firebombing? The answer lies in what this story leaves out. Mentioned only glancingly in Gladwell’s story are the atomic bombs dropped on Japan. The omission allows for a stupid and classically Gladwell argument: that indiscriminate firebombing brought a swift end to the war, and its attendant philosophical innovations continue to envelop us in a blanket of security that has not been adequately appreciated
  • While LeMay’s 1945 firebombing campaign was certainly excessive—and represented the same base indifference to human life that got Nazis strung up at Nuremberg—it did not end the war. The Japanese were not solely holding out because their military men were fanatical in ways that the Americans weren’t, as Gladwell seems to suggest, citing Conrad Crane, an Army staff historian and hagiographer of LeMay’s[1]; they were holding out because they wanted better terms of surrender—terms they had the prospect of negotiating with the Soviet Union. The United States, having already developed an atomic weapon—and having made the Soviet Union aware of it—decided to drop it as it became clear the Soviet Union was readying to invade Japan. On August 6, the United States dropped a bomb on Hiroshima. Three days later, and mere hours after the Soviet Union formally declared war on the morning of August 9, the Americans dropped the second atomic bomb on Nagasaki. An estimated 210,000 people were killed, the majority of them on the days of the bombings. It was the detonation of these bombs that forced the end of the war. The Japanese unconditional surrender to the Americans was announced on August 15 and formalized on the deck of the USS Missouri on September 2. As historians like Martin Sherwin and Tsuyoshi Hasegawa have pointed out, by dropping the bombs, the Truman administration had kept the Communist threat out of Japan. Imperial Japan was staunchly anticommunist, and under American post-war dominion, the country would remain that way. But Gladwell is unequipped to supply the necessary geopolitical context that could meaningfully explain why the American government would force an unconditional surrender when the possibility of negotiation remained totally live.
  • In 1968, he would join forces with segregationist George Wallace as the vice-presidential candidate on his “American Independent Party” ticket, a fact literally relegated to a footnote in Gladwell’s book. This kind of omission is par for the course in The Bomber Mafia. While Gladwell constantly reminds the reader that the air force leadership was trying to wage more effective wars so as to end all wars, he cannot help but shove under the rug that which is inconvenient
  • This is truly a lesson for the McKinsey set and passive-income crowd for whom The Bomber Mafia is intended: doing bad things is fine, so long as you privately feel bad about it.
  • The British advocacy group Action on Armed Violence just this month estimated that between 2016 and 2020 in Afghanistan, there were more than 2,100 civilians killed and 1,800 injured by air strikes; 37 percent of those killed were children.
  • An appropriately savage review of Gladwell's foray into military history. Contrast with the elegance of KSR's The Lucky Strike which actually wrestles with the moral issues.
Ed Webb

Elise Armani with Piotr Szyhalski - The Brooklyn Rail - 0 views

  • Over the entire history of America, there have been only 17 years in which the US has not been at war. That's incredible, mainly because if you talk to people who maybe aren't that much interested in history, they would say, “That’s crazy. What are you talking about? There’s no war.”
  • Our relationship with war and how our country functions in the world is so warped and twisted. Every time the word “war” is introduced into the cultural discourse, you know that it is already corrupted. That's why it’s paired here with “back to normal,” because it's another combination of phrases that stood out…Everybody keeps talking about things getting back to normal. Then the pronouncements that this is a war and we’re fighting an invisible enemy. It just seems so disturbing really because what that means is that we're about to start doing things that are ethically questionable. To me, what was happening is that the pronouncement was made so that anything goes, and there's no culpability, nobody will be held responsible for making any decisions whatsoever because it was war and things had to be done.
  • if I think about how “war” has been used strategically in this context of COVID-19, it doesn’t feel like rhetoric that was raised to be alarmist, but almost to be comforting. That this is a familiar experience. We have a handle on it. We are attacking it like a war. War is our normal.
  • The funny thing about history is that you always look back from the luxury of time and you can see these massive events taking place, you can understand the dynamics. We don't have that perspective when we are in it, so my idea of studying history was to remap the past onto the present, so that we might gain insight into what’s happening now.
  • This weird concept that we have developed, essential, non-essential work, this arbitrary division of what will matter and what will not matter. It wasn't until I was swept up in the uprising and really asking myself what's happening—there was this amazing video of this silent moment of people with their fists up, thousands of people on their knees, and it was just incredibly moving—I had this realization that this was the essential work, the work that we need to be doing.
  • I think we all lack a historical distance right now to make sense of this moment. But art can provide us with an abstracted or a historical lens, that gives us distance or a sense of a time bigger than the moment we're in.
  • When you were describing this 1990s utopian idea of the internet as public space, I was struck by the contrast with how things turned out. In both our physical and digital reality, we have lost the commons. Is Instagram a public space?
  • there's something virus-like about social media anyway, in the way that it operates, in the way that it taps into our physiology on a chemical level in our brain. It’s designed to function that way. And combining that kind of functionality with our addiction to or the dominance of visual culture, it's just sort of like a deadly combination. We get addicted and we just consume incredible amounts of visual information every day.
  • It really is true that very rarely you will see a photograph of a dead body in the printed newspaper. It reminds me, for example, of the absence of the flag-draped coffins that come when soldiers return from war. Because we're taught to think about COVID-19 as a kind of war, maybe it makes sense to really think about that.
  • If you don't see the picture of the dead body, there's no dead body. It's the reason why if I teach a foundation class, I always show the students this famous Stan Brakhage film, The Act of Seeing with One's Own Eyes (1971). It’s a half hour silent film of multiple autopsies. We have this extended conversation about just how completely absent images of our bodies like that are from our cultural experience.
  • those are the images that people wanted to see. They were widely distributed; you could buy postcards of lynched men and women because people wanted to celebrate that. There is a completely different attitude at work. One could say they're doing the work of the same ideology, but in opposite directions. It would be hard to talk about a history of photography in this country and not talk about lynching photographs.
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.