
Dystopias: Group items tagged climate


Ed Webb

Piper at the Gates of Hell: An Interview with Cyberpunk Legend John Shirley | Motherboard

  • Ed Webb: City Come A-Walkin' is one of the most punk of the cyberpunk novels and short stories I have ever read, and I have read quite a few...
  • I'll press your buttons here by positing that if "we" (humankind) are too dumb to self-regulate our own childbirth output, too dim to recognize that we are polluting ourselves and neighbors out of sustainable existence, we are, in fact, a ridiculous parasite on this Earth and that the planet on which we live will simply slough us off—as it well should—and will bounce back without evidence of us even being here, come two or three thousand years. Your thoughts (in as much detail as you wish)? I would recommend reading my "the next 50 years" piece here. Basically I think that climate change, which in this case genuinely is caused mostly by humanity, is just one part of the environmental problem. Overfishing, toxification of the seas, pesticide use, weedkillers, prescription drugs in water, fracking, continued air pollution, toxicity in food, destruction of animal habitat, attrition on bee colonies—all this is converging. And we'll be facing the consequences for several hundred years.
  • I believe humanity will survive, and it won't be surviving like Road Warrior or the Morlocks from The Time Machine, but I think we'll have some cruelly ugly social consequences. We'll have famines the like of which we've never seen before, along with higher risk of wars—I do predict a third world war in the second half of this century but I don't think it will be a nuclear war—and I think we'll suffer so hugely we'll be forced to have a change in consciousness to adapt.
  • ...1 more annotation...
  • We may end up having to "terraform" the Earth itself, to some extent.
Ed Webb

Why the Islamic State is the minor leagues of terror | Middle East Eye

  • "The sole advantage the Islamic State has when it comes to this country is that it turns out to be so easy to spook us."
Ed Webb

The Digital Maginot Line

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • ...17 more annotations...
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”. [A toy sketch of this amplification loop follows these annotations.]
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
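The playbook annotation above describes a mechanical loop: seed a piece of content, inflate its like and share counts with disposable amplifier accounts and backstopped personas, and rely on a trending algorithm to hand it off to real, sympathetic audiences. Below is a minimal Python sketch of that loop in the abstract; everything in it (the run_campaign function, the trending threshold, the audience sizes and engagement rates) is invented for illustration and does not describe any actual platform or operation.

    import random

    # Toy model of the amplification loop described in the playbook annotation
    # above. All numbers (bot counts, follower ranges, trending threshold,
    # organic share rate) are invented; no real platform works exactly this way.

    TRENDING_THRESHOLD = 500      # hypothetical score that trips "trending"
    ORGANIC_AUDIENCE = 100_000    # hypothetical users shown trending content
    ORGANIC_SHARE_RATE = 0.02     # fraction of organic viewers who reshare

    def run_campaign(disposable_bots: int, backstopped_personas: int, seed: int = 0) -> dict:
        rng = random.Random(seed)

        # Step 1: disposable amplifiers exist only to inflate like/share counts.
        engagement = disposable_bots

        # Step 2: backstopped personas have standing audiences, so each one
        # drags in some sympathetic real followers.
        for _ in range(backstopped_personas):
            engagement += 1 + rng.randint(20, 200)

        # Step 3: if the inflated score trips the (hypothetical) trending
        # algorithm, the content reaches a large organic audience and the
        # campaign no longer needs the bots at all.
        trended = engagement >= TRENDING_THRESHOLD
        organic_shares = int(ORGANIC_AUDIENCE * ORGANIC_SHARE_RATE) if trended else 0

        return {"manufactured_engagement": engagement,
                "trended": trended,
                "organic_shares": organic_shares}

    if __name__ == "__main__":
        print(run_campaign(disposable_bots=300, backstopped_personas=5))
        print(run_campaign(disposable_bots=50, backstopped_personas=1))

The sketch only makes the point the annotation makes: the expensive part of the operation is not creating content but cheaply manufacturing the appearance of consensus until real users take over the amplification.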
Ed Webb

Border Patrol, Israel's Elbit Put Reservation Under Surveillance

  • The vehicle is parked where U.S. Customs and Border Protection will soon construct a 160-foot surveillance tower capable of continuously monitoring every person and vehicle within a radius of up to 7.5 miles. The tower will be outfitted with high-definition cameras with night vision, thermal sensors, and ground-sweeping radar, all of which will feed real-time data to Border Patrol agents at a central operating station in Ajo, Arizona. The system will store an archive with the ability to rewind and track individuals’ movements across time — an ability known as “wide-area persistent surveillance.” CBP plans 10 of these towers across the Tohono O’odham reservation, which spans an area roughly the size of Connecticut. Two will be located near residential areas, including Rivas’s neighborhood, which is home to about 50 people. To build them, CBP has entered a $26 million contract with the U.S. division of Elbit Systems, Israel’s largest military company. [A back-of-the-envelope coverage estimate based on these figures follows these annotations.]
  • U.S. borderlands have become laboratories for new systems of enforcement and control
  • these same systems often end up targeting other marginalized populations as well as political dissidents
  • ...16 more annotations...
  • the spread of persistent surveillance technologies is particularly worrisome because they remove any limit on how much information police can gather on a person’s movements. “The border is the natural place for the government to start using them, since there is much more public support to deploy these sorts of intrusive technologies there,”
  • the company’s ultimate goal is to build a “layer” of electronic surveillance equipment across the entire perimeter of the U.S. “Over time, we’ll expand not only to the northern border, but to the ports and harbors across the country,”
  • In addition to fixed and mobile surveillance towers, other technology that CBP has acquired and deployed includes blimps outfitted with high-powered ground and air radar, sensors buried underground, and facial recognition software at ports of entry. CBP’s drone fleet has been described as the largest of any U.S. agency outside the Department of Defense
  • Nellie Jo David, a Tohono O’odham tribal member who is writing her dissertation on border security issues at the University of Arizona, says many younger people who have been forced by economic circumstances to work in nearby cities are returning home less and less, because they want to avoid the constant surveillance and harassment. “It’s especially taken a toll on our younger generations.”
  • Border militarism has been spreading worldwide owing to neoliberal economic policies, wars, and the onset of the climate crisis, all of which have contributed to the uprooting of increasingly large numbers of people, notes Reece Jones
  • In the U.S., leading companies with border security contracts include long-established contractors such as Lockheed Martin in addition to recent upstarts such as Anduril Industries, founded by tech mogul Palmer Luckey to feed the growing market for artificial intelligence and surveillance sensors — primarily in the borderlands. Elbit Systems has frequently touted a major advantage over these competitors: the fact that its products are “field-proven” on Palestinians
  • Verlon Jose, then-tribal vice chair, said that many nation members calculated that the towers would help dissuade the federal government from building a border wall across their lands. The Tohono O’odham are “only as sovereign as the federal government allows us to be,”
  • Leading Democrats have argued for the development of an ever-more sophisticated border surveillance state as an alternative to Trump’s border wall. “The positive, shall we say, almost technological wall that can be built is what we should be doing,” House Speaker Nancy Pelosi said in January. But for those crossing the border, the development of this surveillance apparatus has already taken a heavy toll. In January, a study published by researchers from the University of Arizona and Earlham College found that border surveillance towers have prompted migrants to cross along more rugged and circuitous pathways, leading to greater numbers of deaths from dehydration, exhaustion, and exposure.
  • “Walls are not only a question of blocking people from moving, but they are also serving as borders or frontiers between where you enter the surveillance state,” she said. “The idea is that at the very moment you step near the border, Elbit will catch you. Something similar happens in Palestine.”
  • CBP is by far the largest law enforcement entity in the U.S., with 61,400 employees and a 2018 budget of $16.3 billion — more than the militaries of Iran, Mexico, Israel, and Pakistan. The Border Patrol has jurisdiction 100 miles inland from U.S. borders, making roughly two-thirds of the U.S. population theoretically subject to its operations, including the entirety of the Tohono O’odham reservation
  • Between 2013 and 2016, for example, roughly 40 percent of Border Patrol seizures at immigration enforcement checkpoints involved 1 ounce or less of marijuana confiscated from U.S. citizens.
  • the agency uses its sprawling surveillance apparatus for purposes other than border enforcement
  • documents obtained via public records requests suggest that CBP drone flights included surveillance of Dakota Access pipeline protests
  • CBP’s repurposing of the surveillance tower and drones to surveil dissidents hints at other possible abuses. “It’s a reminder that technologies that are sold for one purpose, such as protecting the border or stopping terrorists — or whatever the original justification may happen to be — so often get repurposed for other reasons, such as targeting protesters.”
  • The impacts of the U.S. border on Tohono O’odham people date to the mid-19th century. The tribal nation’s traditional land extended 175 miles into Mexico before being severed by the 1853 Gadsden Purchase, a U.S. acquisition of land from the Mexican government. As many as 2,500 of the tribe’s more than 30,000 members still live on the Mexican side. Tohono O’odham people used to travel between the United States and Mexico fairly easily on roads without checkpoints to visit family, perform ceremonies, or obtain health care. But that was before the Border Patrol arrived en masse in the mid-2000s, turning the reservation into something akin to a military occupation zone. Residents say agents have administered beatings, used pepper spray, pulled people out of vehicles, shot two Tohono O’odham men under suspicious circumstances, and entered people’s homes without warrants. “It is apartheid here,” Ofelia Rivas says. “We have to carry our papers everywhere. And everyone here has experienced the Border Patrol’s abuse in some way.”
  • Tohono O’odham people have developed common cause with other communities struggling against colonization and border walls. David is among numerous activists from the U.S. and Mexican borderlands who joined a delegation to the West Bank in 2017, convened by Stop the Wall, to build relationships and learn about the impacts of Elbit’s surveillance systems. “I don’t feel safe with them taking over my community, especially if you look at what’s going on in Palestine — they’re bringing the same thing right over here to this land,” she says. “The U.S. government is going to be able to surveil basically anybody on the nation.”
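The first annotation above supplies enough figures for a rough coverage check: ten planned towers, each monitoring a radius of up to 7.5 miles, on a reservation described as roughly the size of Connecticut (taken below as about 5,500 square miles, an approximation not stated in the article). The Python sketch only does that arithmetic; it ignores terrain, tower placement, and overlapping fields of view, so it is an upper-bound illustration rather than a description of the actual deployment.

    import math

    # Back-of-the-envelope coverage estimate from the figures quoted in the
    # annotations above. Terrain, tower placement, and overlapping fields of
    # view are ignored, so this is a rough upper bound, not a description of
    # the real system.

    TOWER_RADIUS_MILES = 7.5            # "radius of up to 7.5 miles" per tower
    PLANNED_TOWERS = 10                 # CBP plans 10 towers on the reservation
    RESERVATION_AREA_SQ_MILES = 5_500   # "roughly the size of Connecticut" (approximation)

    area_per_tower = math.pi * TOWER_RADIUS_MILES ** 2
    max_total_coverage = PLANNED_TOWERS * area_per_tower
    share_of_reservation = max_total_coverage / RESERVATION_AREA_SQ_MILES

    print(f"Area within range of one tower:    ~{area_per_tower:,.0f} sq mi")
    print(f"Upper bound for ten towers:        ~{max_total_coverage:,.0f} sq mi")
    print(f"Share of reservation (no overlap): ~{share_of_reservation:.0%}")

Even under these generous simplifying assumptions, the stated maximum sensor radius puts on the order of 175 square miles within range of a single tower, which is the scale of "wide-area persistent surveillance" the annotation describes.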
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific American

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • ...9 more annotations...
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business. [A toy illustration of pattern-based text generation follows these annotations.]
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
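One annotation above characterizes the relevant computing techniques as pattern matching over large data sets plus the generation of new media from those patterns, and another notes that the output reflects and amplifies whatever biases the training data contains. The toy word-level Markov chain below is a drastically simplified stand-in for that idea; real language models differ enormously in scale and mechanism, but the limitation it illustrates carries over: the model can only recombine sequences present in its tiny training string, and nothing in the procedure checks the result against the world.

    import random
    from collections import defaultdict

    # A toy word-level Markov chain: an intentionally crude stand-in for
    # "pattern matching based on large data sets and the generation of new
    # media based on those patterns". Real language models are far more
    # sophisticated, but the limitation illustrated here carries over: the
    # model can only recombine what its training text contains, biases and all.

    TRAINING_TEXT = (
        "synthetic text sounds authoritative despite its lack of citations "
        "synthetic text reflects the biases encoded in its training data "
        "authoritative text is harder to distinguish from reliable information"
    )

    def build_model(text: str) -> dict:
        """Map each word to the list of words observed to follow it."""
        words = text.split()
        model = defaultdict(list)
        for current_word, next_word in zip(words, words[1:]):
            model[current_word].append(next_word)
        return model

    def generate(model: dict, start: str, length: int = 12, seed: int = 0) -> str:
        rng = random.Random(seed)
        word = start
        output = [word]
        for _ in range(length - 1):
            followers = model.get(word)
            if not followers:      # dead end: no observed continuation
                break
            word = rng.choice(followers)
            output.append(word)
        return " ".join(output)

    if __name__ == "__main__":
        model = build_model(TRAINING_TEXT)
        print(generate(model, start="synthetic"))

The output reads fluently because every word-to-word transition was seen in the training string, yet nothing grounds it in sources or facts; that is the sense in which the annotations describe such output as plausible-sounding noninformation.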