
Group items matching "dystopia" in title, tags, annotations or url

Ed Webb

How ethical is it for advertisers to target your mood? | Emily Bell | Opinion | The Guardian - 0 views

  • The effectiveness of psychographic targeting is one bet being made by an increasing number of media companies when it comes to interrupting your viewing experience with advertising messages.
  • “Across the board, articles that were in top emotional categories, such as love, sadness and fear, performed significantly better than articles that were not.”
  • ESPN and USA Today are also using psychographic rather than demographic targeting to sell to advertisers, including, in ESPN’s case, the decision not to show you advertising at all if your team is losing.
  • Media companies using this technology claim it is now possible for the “mood” of the reader or viewer to be tracked in real time and the content of the advertising to be changed accordingly
  • ads targeted at readers based on their predicted moods rather than their previous behaviour improved the click-through rate by 40%.
  • Given that the average click-through rate (the number of times anyone actually clicks on an ad) is about 0.4%, this number (in gross terms) is probably less impressive than it sounds (a quick check of the arithmetic follows this list).
  • Cambridge Analytica, the company that misused Facebook data and, according to its own claims, helped Donald Trump win the 2016 election, used psychographic segmentation.
  • For many years “contextual” ads served by not very intelligent algorithms were the bane of digital editors’ lives. Improvements in machine learning should help eradicate the horrible business of showing insurance advertising to readers in the middle of an article about a devastating fire.
  • The words “brand safety” are increasingly used by publishers when demonstrating products such as Project Feels. It is a way publishers can compete on micro-targeting with platforms such as Facebook and YouTube by pointing out that their targeting will not land you next to a conspiracy theory video about the dangers of chemtrails.
  • the exploitation of psychographics is not limited to the responsible and transparent scientists at the NYT. While publishers were showing these shiny new tools to advertisers, Amazon was advertising for a managing editor for its surveillance doorbell, Ring, which contacts your device when someone is at your door. An editor for a doorbell, how is that going to work? In all kinds of perplexing ways according to the ad. It’s “an exciting new opportunity within Ring to manage a team of news editors who deliver breaking crime news alerts to our neighbours. This position is best suited for a candidate with experience and passion for journalism, crime reporting, and people management.” So, instead of thinking about crime articles inspiring fear and advertising doorbells in the middle of them, what if you took the fear that the surveillance-device-cum-doorbell inspires and layered a crime reporting newsroom on top of it to make sure the fear is properly engaging?
  • The media has arguably already played an outsized role in making sure that people are irrationally scared, and now that practice is being strapped to the considerably more powerful engine of an Amazon product.
  • This will not be the last surveillance-based newsroom we see. Almost any product that produces large data feeds can also produce its own “news”. Imagine the Fitbit newsroom or the managing editor for traffic reports from dashboard cams – anything that has a live data feed emanating from it, in the age of the Internet of Things, can produce news.
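A quick check of the arithmetic behind the click-through caveat above, assuming the 40% figure is a relative lift over the baseline rather than an absolute gain:

0.4% baseline click-through rate × 1.40 ≈ 0.56%, i.e. roughly 5–6 clicks per 1,000 ad impressions instead of 4.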
Ed Webb

John Lanchester reviews 'The Attention Merchants' by Tim Wu, 'Chaos Monkeys' by Antonio García Martínez and 'Move Fast and Break Things' by Jonathan Taplin · LRB 17 August 2017 - 1 views

  • Excellent. Really excellent.
Ed Webb

China's New "Social Credit Score" Brings Dystopian Science Fiction to Life - 1 views

  • The Chinese government is taking a controversial step in security, with plans to implement a system that gives and collects financial, social, political, and legal credit ratings of citizens into a social credit score
  • Proponents of the idea are already testing various aspects of the system — gathering digital records of citizens, specifically financial behavior. These will then be used to create a social credit score system, which will determine if a citizen can avail themselves of certain services based on his or her social credit rating
  • it’s going to be like an episode from Black Mirror — the social credit score of citizens will be the basis for access to services ranging from travel and education to loans and insurance coverage.
Ed Webb

The Sci-Fi Roots of the Far Right-From 'Lucifer's Hammer' to Newt's Moon Base to Donald's Wall - 0 views

  • Strong leader Senator Jellison (who is white) then asks former Shire founder Hugo Beck what went wrong, and Beck says his fellow hippies just never realized how great technology and laissez-faire economics were, and now all his old friends are dining on human flesh under the thumb of a scary black communist.
  • Today, Lucifer’s Hammer reads as a depiction of a post-apocalyptic war between Trump counties and Clinton counties, simultaneously promising American renewal even as it depicts unavoidable catastrophe. The comet acts as a cleansing, wiping away so much dead wood of civilization. (Feminism, too, comes in for repeated knocks.)
  • SDI was only one part of a larger right-wing techno-futurist project. SDI historian Edward Linenthal cites a 1983 interview with Newt Gingrich in which the young conservative Congressman predicted that SDI would not just destroy Russia’s Communists but liberalism, too. SDI would be “a dagger at the heart of the liberal welfare state” because it destroys “the liberal myth of scarcity,” leaving only “the limits of a free people’s ingenuity, daring, and courage.”
  • Gingrich subsequently secured a job for Pournelle’s son with Congressman Dana Rohrabacher in 1994; Rohrabacher, like Gingrich, is now a stalwart space booster and Trump supporter.
  • What Trump does is less important than the fact that he kicks over the table, strengthening America’s military state while demolishing bureaucracy and ignoring niceties. Democracy and law matter less than security and innovation
  • In their science fiction as in life, Gingrich and Pournelle shared an optimistic belief in the power of technology—and an equally powerful insistence on the inevitability of conflict. They believed this required a robust, authoritarian state apparatus to preserve order and bind citizens together. Indeed, while backing Reagan, Gingrich had promoted a techno-futurism that was less conservative than it was authoritarian: he called for pruning inefficiency while aggressively promoting expansion and military technology. For his part, Pournelle published anthologies of science-fiction and techno-military essays through the 1980s under the name There Will Be War.
  • Gingrich and Pournelle’s enthusiasm had less to do with Trump’s particular ambitions than with his capacity for destruction of the status quo. Much of the chaos Trump foments is, to Gingrich and Pournelle, a key feature to induce the future they want—the one where the feminists and “eco-terrorists” and university professors are soundly defeated
  • with communism a fading threat by the late 1980s, Gingrich shifted his focus to the specter of a new enemy, arguing in 1989 that “Islamic extremism may well be the greatest threat to Western values and Western security in the world.” Such fear-mongering—Islamic extremism remains a fraction as destructive as the nuclear Soviet Union—may seem ill-suited to optimism in mankind’s future, but as a political project it can be uncannily effective. Pournelle wrote that Islam demands adherence to a principle of “Islam or the sword,” and that an aggressive military response is not only justified but demanded: we are at war with the Caliphate.
  • No science-fiction writer since has exerted as significant a political influence as Pournelle. But Pournelle does have a spiritual successor in Castalia House, the independent science-fiction publisher run by white nationalist Theodore Beale, aka Vox Day. Beale, like Gingrich, has said that his job is to save Western Civilization—and that it is in dire need of saving. Beale, however, is far more explicit about race.
  • Pournelle has dissociated himself from Beale’s politics, but Castalia House’s republishing of Pournelle’s 1980s There Will Be War series (as well as publishing a new volume 10) is no mere coincidence. Rather, they are indications of a shared worldview. To these writers, civil rights, equality, and civil liberties are irritants and impediments to progress at best. At worst, they are impositions on the holy forces of the market and social Darwinism (“evolution in action”) that sort out the best from the rest. And to all of them, the best tend to be white (with a bit of space for “the good ones” of other races). If there has been a shift in thought between the 1970s and today, it’s that the expected separation of wheat from chaff hasn’t taken place, and so now more active measures need to be taken—building the border walls and deportations, for example. Trump is an agent of these active measures—an agent of revolution, or at least the destruction that precedes a revolution.
  • Trump was far from the first to eliminate the line between right-wing thought and outright bigotry.
Ed Webb

Saudi Crown Prince Asks: What if a City, But It's a 105-Mile Line - 0 views

  • Vicious Saudi autocrat Mohamed bin Salman has a new vision for Neom, his plan for a massive, $500 billion, AI-powered, nominally legally independent city-state of the future on the border with Egypt and Jordan. When we last left the crown prince, he had reportedly commissioned 2,300 pages’ worth of proposals from Boston Consulting Group, McKinsey & Co. and Oliver Wyman boasting of possible amenities like holographic schoolteachers, cloud seeding to create rain, flying taxis, glow-in-the-dark beaches, a giant NASA-built artificial moon, and lots of robots: maids, cage fighters, and dinosaurs.
  • Now Salman has a bold new idea: One of the cities in Neom is a line. A line roughly 105 miles (170 kilometers) long and a five-minute walk wide, to be exact. No, really, it’s a line. The proposed city is a line that stretches across all of Saudi Arabia. That’s the plan.
  • “With zero cars, zero streets, and zero carbon emissions, you can fulfill all your daily requirements within a five-minute walk,” the crown prince continued. “And you can travel from end to end within 20 minutes.” The end-to-end-in-20-minutes boast likely refers to some form of mass transit that doesn’t yet exist. That works out to a transit system running at about 317 mph (510 kph). That would be much faster than Japan’s famous Shinkansen train network, which is capped at 200 mph (321 kph). Some Japanese rail companies have tested maglev trains that have gone up to 373 mph (600 kph), though the technology is nowhere near ready for primetime.
  • According to Bloomberg, Saudi officials project the Line will cost around $100-$200 billion of the $500 billion planned to be spent on Neom and will have a population of 1 million with 380,000 jobs by the year 2030. It will have one of the biggest airports in the world for some reason, which seems like a strange addition to a supposedly climate-friendly city.
  • The site also makes numerous hand-wavy and vaguely menacing claims, including that “all businesses and communities” will have “over 90%” of their data processed by AI and robots:
  • Don’t pay attention to Saudi war crimes in Yemen, the prince’s brutal crackdowns on dissent, the hit squad that tortured journalist Jamal Khashoggi to death, and the other habitual human rights abuses that allow the Saudi monarchy to remain in power. Also, ignore that obstacles facing Neom include budgetary constraints, the forced eviction of tens of thousands of existing residents such as the Huwaitat tribe, coronavirus and oil shock, investor flight over human rights concerns, and the lingering questions of whether the whole project is a distraction from pressing domestic issues and/or a mirage conjured up by consulting firms pandering to the crown prince’s ego and hungry for lucrative fees. Never mind that there are numerous ways we could ensure the cities people already live in are prepared for climate change rather than blowing billions of dollars on a vanity project.
Ed Webb

At age 13, I joined the alt-right, aided by Reddit and Google - 0 views

  • Now, I’m 16, and I’ve been able to reflect on how I got sucked into that void—and how others do, too. My brief infatuation with the alt-right has helped me understand the ways big tech companies and their algorithms are contributing to the problem of radicalization—and why it’s so important to be skeptical of what you read online.
  • while a quick burst of radiation probably won’t give you cancer, prolonged exposure is far more dangerous. The same is true for the alt-right. I knew that the messages I was seeing were wrong, but the more I saw them, the more curious I became. I was unfamiliar with most of the popular discussion topics on Reddit. And when you want to know more about something, what do you do? You probably don’t think to go to the library and check out a book on that subject, and then fact check and cross reference what you find. If you just google what you want to know, you can get the information you want within seconds.
  • I started googling things like “Illegal immigration,” “Sandy Hook actors,” and “Black crime rate.” And I found exactly what I was looking for.
  • The articles and videos I first found all backed up what I was seeing on Reddit—posts that asserted a skewed version of actual reality, using carefully selected, out-of-context, and dubiously sourced statistics that propped up a hateful world view. On top of that, my online results were heavily influenced by something called an algorithm. I understand algorithms to be secretive bits of code that a website like YouTube will use to prioritize content that you are more likely to click on first. Because all of the content I was reading or watching was from far-right sources, all of the links that the algorithms dangled on my screen for me to click were from far-right perspectives.
  • I spent months isolated in my room, hunched over my computer, removing and approving memes on Reddit and watching conservative “comedians” that YouTube served up to me.
  • The inflammatory language and radical viewpoints used by the alt-right worked in YouTube and Google’s favor—the more videos and links I clicked on, the more ads I saw, and in turn, the more ad revenue they generated.
  • the biggest step in my recovery came when I attended a pro-Trump rally in Washington, D.C., in September 2017, about a month after the “Unite the Right” rally in Charlottesville, Virginia
  • The difference between the online persona of someone who identifies as alt-right and the real thing is so extreme that you would think they are different people. Online, they have the power of fake and biased news to form their arguments. They sound confident and usually deliver their standard messages strongly. When I met them in person at the rally, they were awkward and struggled to back up their statements. They tripped over their own words, and when they were called out by any counter protestors in the crowd, they would immediately use a stock response such as “You’re just triggered.”
  • Seeing for myself that the people I was talking to online were weak, confused, and backwards was the turning point for me.
  • we’re too far gone to reverse the damage that the alt-right has done to the internet and to naive adolescents who don’t know any better—children like the 13-year-old boy I was. It’s convenient for a massive internet company like Google to deliberately ignore why people like me get misinformed in the first place, as their profit-oriented algorithms continue to steer ignorant, malleable people into the jaws of the far-right
  • Dylann Roof, the white supremacist who murdered nine people in a Charleston, South Carolina, church in 2015, was radicalized by far-right groups that spread misinformation with the aid of Google’s algorithms.
  • Over the past couple months, I’ve been getting anti-immigration YouTube ads that feature an incident presented as a “news” story, about two immigrants who raped an American girl. The ad offers no context or sources, and uses heated language to denounce immigration and call for our county to allow ICE to seek out illegal immigrants within our area. I wasn’t watching a video about immigration or even politics when those ads came on; I was watching the old Monty Python “Cheese Shop” sketch. How does British satire, circa 1972, relate to America’s current immigration debate? It doesn’t.
  • tech companies need to be held accountable for the radicalization that results from their systems and standards.
  • anyone can be manipulated like I was. It’s so easy to find information online that we collectively forget that so much of the content the internet offers us is biased
Ed Webb

Legendary Lands: Umberto Eco on the Greatest Maps of Imaginary Places and Why They Appeal to Us - Brain Pickings - 0 views

  • Eco sees in the imaginary a counterintuitive assurance of reality — fictional narrative, in a strange way, is the only place where we can become unmoored from our existential discomfort with uncertainty, for in fiction everything is precisely and unambiguously as it was intended
  • The possible world of narrative is the only universe in which we can be absolutely certain about something, and it gives us a very strong sense of truth. The credulous believe that El Dorado and Lemuria exist or existed somewhere or other, and skeptics are convinced that they never existed, but we all know that it is undeniably certain that Superman is Clark Kent and that Dr. Watson was never Nero Wolfe’s right-hand man, while it is equally certain that Anna Karenina died under a train and that she never married Prince Charming.
Ed Webb

Zoom urged by rights groups to rule out 'creepy' AI emotion tech - 0 views

  • Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination
  • "If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices,"
  • The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls
  • "This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,"
Ed Webb

OpenAI's bot wrote my obituary. It was filled with bizarre lies. - 0 views

  • What I find so creepy about OpenAI’s bots is not that they seem to exhibit creativity; computers have been doing creative tasks such as generating original proofs in Euclidean geometry since the 1950s. It’s that I grew up with the idea of a computer as an automaton bound by its nature to follow its instructions precisely; barring a malfunction, it does exactly what its operator – and its program—tell it to do. On some level, this is still true; the bot is following its program and the instructions of its operator. But the way the program interprets the operator’s instructions is not the way the operator thinks. Computer programs are optimized not to solve problems, but instead to convince their operators that they have solved those problems. It was written on the package of the Turing test—it’s a game of imitation, of deception. For the first time, we’re forced to confront the consequences of that deception.
  • a computer program that would be sociopathic if it were alive
  • Even when it’s not supposed to, even when it has a way out, even when the truth is known to the computer and it’s easier to spit it out rather than fabricate something—the computer still lies
  • ...1 more annotation...
  • something that’s been so optimized for deception that it can’t do anything but deceive its operator