
Dystopias: Group items tagged trust


Ed Webb

The trust gap: how and why news on digital platforms is viewed more sceptically versus ...

  • Levels of trust in news on social media, search engines, and messaging apps are consistently lower than audience trust in information in the news media more generally.
  • Many of the same people who lack trust in news encountered via digital media companies – people who tend to be older, less educated, and less politically interested – also express less trust in the news regardless of whether it is found on platforms or through more traditional offline modes.
  • Many of the most common reasons people say they use platforms have little to do with news.
  • News about politics is viewed as particularly suspect and platforms are seen by many as contentious places for political conversation – at least for those most interested in politics. Rates of trust in news in general are comparatively higher than trust in news when it pertains to coverage of political affairs.
  • Negative perceptions about journalism are widespread and social media is one of the most often-cited places people say they see or hear criticism of news and journalism
  • Despite positive feelings towards most platforms, large majorities in all four countries agree that false and misleading information, harassment, and platforms using data irresponsibly are ‘big problems’ in their country for many platforms
Ed Webb

The enemy between us: how inequality erodes our mental health | openDemocracy

  • Most people probably don’t think that broader, structural issues to do with politics and the economy have anything to do with their emotional health and wellbeing, but they do. We’ve known for a long time that inequality causes a wide range of health and social problems, including everything from reduced life expectancy and higher infant mortality to poor educational attainment, lower social mobility and increased levels of violence. Differences in these areas between more and less equal societies are large, and everyone is affected by them.
  • inequality eats into the heart of our immediate, personal world, and the vast majority of the population are affected by the ways in which inequality becomes the enemy between us. What gets between us and other people are all the things that make us feel ill at ease with one another, worried about how others see us, and shy and awkward in company—in short, all our social anxieties
  • An epidemic of distress seems to be gripping some of the richest nations in the world
  • Socioeconomic inequality matters because it strengthens the belief that some people are worth much more than others. Those at the top seem hugely important and those at the bottom are seen as almost worthless. In more unequal societies we come to judge each other more by status and worry more about how others judge us. Research on 28 European countries shows that inequality increases status anxiety in all income groups, from the poorest ten percent to the richest ten percent. The poor are affected most but even the richest ten percent of the population are more worried about status in unequal societies
  • being at the bottom of the social ladder feels the same whether you live in the UK, Norway, Uganda or Pakistan. Therefore, simply raising material living standards is not enough to produce genuine wellbeing or quality of life in the face of inequality
  • Psychotic symptoms such as delusions of grandeur are more common in more unequal countries, as is schizophrenia. As the graph below shows, narcissism increases as income inequality rises, as measured by ‘Narcissistic Personality Inventory’ (NPI) scores from successive samples of the US population.
  • Those who live in more unequal places are more likely to spend money on expensive cars and shop for status goods; and they are more likely to have high levels of personal debt because they try to show that they are not ‘second-class people’ by owning ‘first-class things.’
    • Ed Webb
       
      We might consider this when we read J.G. Ballard's short story "The Subliminal Man"
  • by examining our evolutionary past and our history as egalitarian, cooperative, sharing hunter-gatherers, we dispel the false idea that humans are, in their very nature, competitive, aggressive and individualistic. Inequality is not inevitable and we humans have all the psychological and social aptitudes to live differently.
  • inequalities of outcome limit equality of opportunity; differences in achievement and attainment are driven by inequality, rather than being a consequence of it
  • inequality is a major roadblock to creating sustainable economies that serve to optimise the health and wellbeing of both people and planet. Because consumerism is about self-enhancement and status competition, it is intensified by inequality. And as inequality leads to a societal breakdown in trust, solidarity and social cohesion, it reduces people’s willingness to act for the common good. This is shown in everything from the tendency for more unequal societies to do less recycling to surveys which show that business leaders in more unequal societies are less supportive of international environmental protection agreements.
  • The UK charity we founded, The Equality Trust, has resources for activists and a network of local groups. In the USA, check out inequality.org. Worldwide, the Fight Inequality Alliance works with more than 100 partners to work for a more equal world. And look out for the new global Wellbeing Economy Alliance this autumn.
  • Inequality creates the social and political divisions that isolate us from each other, so it’s time for us all to reach out, connect, communicate and act collectively. We really are all in this together. 
Ed Webb

The Digital Maginot Line

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Interoperability And Privacy: Squaring The Circle | Techdirt

  • if there's one thing we've learned from more than a decade of Facebook scandals, it's that there's little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook's ambition to put more and more of our daily lives behind its walled garden, it's hard to see why we would ever trust Facebook to be the one to fix all that's wrong with Facebook.
  • Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company's walled garden
  • rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work with some small changes in configurations. Had they succeeded, they could have simply slapped a "complies with IEEE standard" label on everything they were already selling and declared themselves to have fixed the problem... without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot.
  • the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that's not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge
  • Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory that allowed it to shut down Power Ventures, a competitor that allowed Facebook's users to use multiple social networks from a single interface. Facebook argued that by allowing users to log in and display Facebook with a different interface, even after receipt of a cease and desist letter telling Power Ventures to stop, the company had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, upsetting Facebook's investors made their conduct illegal.
  • Today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would be the place that had conveniently organized all the people whom you might tempt to leave Facebook, and even supply you with the tools you need to target those people.
  • Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues or fellow travelers are not ready to join them, a service like this could let Facebook vegans get out of the Facebook pool while still leaving a toe in its waters.
  • In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can't afford to enrage their customers
  • the audience for a legitimate adversarial interoperability product is the customers of the existing service that it connects to.
  • anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics.
  • If users could use a third-party service to exchange private messages with friends, or to participate in a group they're a member of, they can avoid much (but not all) of this surveillance.
  • Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook's famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet). Everyone can use tracker-blockers, ad-blockers and script-blockers to prevent Facebook from tracking them when they're not on Facebook, by watching how they interact with pages that have Facebook "Like" buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We're rightfully proud of our own tracker blocker, Privacy Badger, but it doesn't stop Facebook from tracking you if you have a Facebook account and you're using Facebook's service.
  • As Facebook's market power dwindled, so would the pressure that web publishers feel to embed Facebook trackers on their sites, so that non-Facebook users would not be as likely to be tracked as they use the Web.
  • Today, Facebook's scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook.
  • For users who have privacy needs -- and other needs -- beyond those the big platforms are willing to fulfill, it's important that we keep the door open to competitors (for-profit, nonprofit, hobbyist and individuals) who are willing to fill those needs.
  • helping Facebook's own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we'd be on our way to a more pluralistic, decentralized, diverse Internet
Ed Webb

Does the Digital Classroom Enfeeble the Mind? - NYTimes.com

  • My father would have been unable to “teach to the test.” He once complained about errors in a sixth-grade math textbook, so he had the class learn math by designing a spaceship. My father would have been spat out by today’s test-driven educational regime.
  • A career in computer science makes you see the world in its terms. You start to see money as a form of information display instead of as a store of value. Money flows are the computational output of a lot of people planning, promising, evaluating, hedging and scheming, and those behaviors start to look like a set of algorithms. You start to see the weather as a computer processing bits tweaked by the sun, and gravity as a cosmic calculation that keeps events in time and space consistent. This way of seeing is becoming ever more common as people have experiences with computers. While it has its glorious moments, the computational perspective can at times be uniquely unromantic. Nothing kills music for me as much as having some algorithm calculate what music I will want to hear. That seems to miss the whole point. Inventing your musical taste is the point, isn’t it? Bringing computers into the middle of that is like paying someone to program a robot to have sex on your behalf so you don’t have to. And yet it seems we benefit from shining an objectifying digital light to disinfect our funky, lying selves once in a while. It’s heartless to have music chosen by digital algorithms. But at least there are fewer people held hostage to the tastes of bad radio D.J.’s than there once were. The trick is being ambidextrous, holding one hand to the heart while counting on the digits of the other.
  • The future of education in the digital age will be determined by our judgment of which aspects of the information we pass between generations can be represented in computers at all. If we try to represent something digitally when we actually can’t, we kill the romance and make some aspect of the human condition newly bland and absurd. If we romanticize information that shouldn’t be shielded from harsh calculations, we’ll suffer bad teachers and D.J.’s and their wares.
  • Some of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don’t know how information is represented in the brain. We don’t know how reason is accomplished by neurons. There are some vaguely cool ideas floating around, and we might know a lot more about these things any moment now, but at this moment, we don’t. You could spend all day reading literature about educational technology without being reminded that this frontier of ignorance lies before us. We are tempted by the demons of commercial and professional ambition to pretend we know more than we do.
  • Outside school, something similar happens. Students spend a lot of time acting as trivialized relays in giant schemes designed for the purposes of advertising and other revenue-minded manipulations. They are prompted to create databases about themselves and then trust algorithms to assemble streams of songs and movies and stories for their consumption. We see the embedded philosophy bloom when students assemble papers as mash-ups from online snippets instead of thinking and composing on a blank piece of screen. What is wrong with this is not that students are any lazier now or learning less. (It is probably even true, I admit reluctantly, that in the presence of the ambient Internet, maybe it is not so important anymore to hold an archive of certain kinds of academic trivia in your head.) The problem is that students could come to conceive of themselves as relays in a transpersonal digital structure. Their job is then to copy and transfer data around, to be a source of statistics, whether to be processed by tests at school or by advertising schemes elsewhere.
  • If students don’t learn to think, then no amount of access to information will do them any good.
  • To the degree that education is about the transfer of the known between generations, it can be digitized, analyzed, optimized and bottled or posted on Twitter. To the degree that education is about the self-invention of the human race, the gargantuan process of steering billions of brains into unforeseeable states and configurations in the future, it can continue only if each brain learns to invent itself. And that is beyond computation because it is beyond our comprehension.
  • Roughly speaking, there are two ways to use computers in the classroom. You can have them measure and represent the students and the teachers, or you can have the class build a virtual spaceship. Right now the first way is ubiquitous, but the virtual spaceships are being built only by tenacious oddballs in unusual circumstances. More spaceships, please.
  • How do we get this right - use the tech for what it can do well, develop our brains for what the tech can't do? Who's up for building a spaceship?
Ed Webb

Internet Evolution - Rob Salkowitz - There Oughta Be a Law: A WTF Moment at the Supreme...

  • In the future, the Court may need to rule on network neutrality, property rights in virtual worlds, anti-trust issues related to closed APIs and application stores, and many other sticky issues. In cases like those, it’s not enough to know the law. You actually need to know a little something about how technology works in the real world. Otherwise, to the uninformed and proudly ignorant, the issues themselves can seem bizarre, baffling, or merely trivial.
Ed Webb

Shareable: The Exterminator's Want-Ad

  • So, this moldy jail I was in was this old dot-com McMansion, out in the Permanent Foreclosure Zone in the dead suburbs. That's where they cooped us up. This gated community was built for some vanished rich people. That was their low-intensity prison for us rehab detainees.
  • This place outside was a Beltway suburb before Washington was abandoned. The big hurricane ran right over it, and crushed it down pretty good, so now it was a big green hippie jungle. Our prison McMansion had termites, roaches, mold and fleas, but once it was a nice house. This rambling wreck of a town was half storm-debris. All the lawns were replaced with wet, weedy, towering patches of bamboo, or marijuana -- or hops, or kenaf, whatever (I never could tell those farm crops apart). The same goes for the "garden roofs," which were dirt piled on top of the dirty houses. There were smelly goats running loose, chickens cackling. Salvaged umbrellas and chairs toppled in the empty streets. No traffic signs, because there were no cars.
  • The rich elite just blew it totally. They dropped their globalized ball. They panicked. So they're in jail, like I was. Or they're in exile somewhere, or else they jumped out of penthouses screaming when the hyperinflation ate them alive.
  • So, my cellmate Claire was this forty-something career lobbyist who used to be my boss inside the Beltway. Claire was full of horror stories about the cruelty of the socialist regime. Because, in the old days before we got ourselves arrested, alarmist tales of this kind were Claire's day-job. Claire peddled political spin to the LameStream Media to make sure that corporations stayed in command, so that situations like our present world stayed impossible.
  • Claire and I hated the sharing networks, because we were paid to hate them. We hated all social networks, like Facebook, because they destroyed the media that we owned. We certainly hated free software, because it was like some ever-growing anti-commercial fungus. We hated search engines and network aggregators, people like Google -- not because Google was evil, but because they weren't. We really hated "file-sharers" -- the swarming pirates who were chewing up the wealth of our commercial sponsors.
  • We despised green power networks because climate change was a myth. Until the climate actually changed. Then the honchos who paid us started drinking themselves to death.
  • This prison game was diabolical. It was very entertaining, and compulsively playable. This game had been designed by left-wing interaction designers, the kind of creeps who built not-for-profit empires like Wikipedia. Except they'd designed it for losers like us. Everybody in rehab had to role-play. We had to build ourselves another identity, because this new pretend-identity was supposed to help us escape the stifling spiritual limits of our previous, unliberated, greedy individualist identities. In this game, I played an evil dwarf. With an axe. Which would have been okay, because that identity was pretty much me all along. Except that the game's reward system had been jiggered to reward elaborate acts of social collaboration. Of course we wanted to do raids and looting and cool fantasy fighting, but that wasn't on. We were very firmly judged on the way we played this rehab game. It was never about grabbing the gold. It was all about forming trust coalitions so as to collectively readjust our fantasy infrastructure.
  • Jean-Paul Sartre (who was still under copyright, so I reckon they stole his work). I learned some things from him. That changed me. "Hell is other people." That is the sinister side of a social-software shared society: that people suck, that hell is other people. Sharing with people is hell. When you share, then no matter how much money you have, they just won't leave you alone. I quoted Jean-Paul Sartre to the parole board. A very serious left-wing philosopher: lots of girlfriends (even feminists), he ate speed all the time, he hung out with Maoists. Except for the Maoist part, Jean-Paul Sartre is my guru. My life today is all about my Existential authenticity. Because I'm a dissident in this society.
  • These Lifestyle of Health and Sustainability geeks were maybe seven percent of America's population. But the termite people had seized power. They were the Last Best Hope of a society on the skids. They owned all the hope because they had always been the ones who knew our civilization was hopeless. So, I was in their prison until I got my head around that new reality. Until I realized that this was inevitable. That it was the way forward. That I loved Little Brother. After that, I could go walkies.
  • I learned to sit still and read a lot. Because that looks like innocent behavior.
  • they were scanning us all the time. Nobody ever gets it about the tremendous power of network surveillance. That's how they ruled the world, though: by valuing every interaction, by counting every click. Every time one termite touched the feelers of another termite, they were adding that up. In a database. Everybody was broke: extremely poor, like preindustrial hard-scrabble poor, very modest, very "green." But still surviving. The one reason we weren't all chewing each other's cannibal thighbones (like the people on certain more disadvantaged continents), was because they'd stapled together this survival regime out of socialist software. It was very social. Ultra-social. No "privatization," no "private sector," and no "privacy." They pretended that it was all about happiness and kindliness and free-spirited cooperation and gay rainbow banners and all that. It was really a system that was firmly based on "social capital." Everything social was your only wealth. In a real "gift economy," you were the gift. You were living by your karma. Instead of a good old hundred-dollar bill, you just had a virtual facebooky thing with your own smiling picture on it, and that picture meant "Please Invest in the Bank of Me!"
  • social networks versus bandit mafias is like Ninjas Versus Pirates: it's a counterculture fight to the finish
  • the European Red Cross happened to show up during that episode (because they like gunfire). The Europeans are all prissy about the situation, of course. They are like: "What's with these illegal detainees in orange jumpsuits, and how come they don't have proper medical care?" So, I finally get paroled. I get amnestied.
  • in a network society, the power is ALL personal. "The personal is political." You mess with the tender feelings of a network maven, and she's not an objective bureaucrat following the rule of law. She's more like: "To the Bastille with this subhuman irritation!"
  • like "Heavy Weather" with a post-technology green catastrophe thrown in
Ed Webb

Sinclair tells stations to air media-bashing promos - and the criticism goes viral - Ap... - 0 views

  • they're seeing these people they've trusted for decades tell them things they know are essentially propaganda
Ed Webb

The Coronavirus and Our Future | The New Yorker - 0 views

  • I’ve spent my life writing science-fiction novels that try to convey some of the strangeness of the future. But I was still shocked by how much had changed, and how quickly.
  • the change that struck me seemed more abstract and internal. It was a change in the way we were looking at things, and it is still ongoing. The virus is rewriting our imaginations. What felt impossible has become thinkable. We’re getting a different sense of our place in history. We know we’re entering a new world, a new era. We seem to be learning our way into a new structure of feeling.
  • The Anthropocene, the Great Acceleration, the age of climate change—whatever you want to call it, we’ve been out of synch with the biosphere, wasting our children’s hopes for a normal life, burning our ecological capital as if it were disposable income, wrecking our one and only home in ways that soon will be beyond our descendants’ ability to repair. And yet we’ve been acting as though it were 2000, or 1990—as though the neoliberal arrangements built back then still made sense. We’ve been paralyzed, living in the world without feeling it.
  • ...24 more annotations...
  • We realize that what we do now, well or badly, will be remembered later on. This sense of enacting history matters. For some of us, it partly compensates for the disruption of our lives.
  • Actually, we’ve already been living in a historic moment. For the past few decades, we’ve been called upon to act, and have been acting in a way that will be scrutinized by our descendants. Now we feel it. The shift has to do with the concentration and intensity of what’s happening. September 11th was a single day, and everyone felt the shock of it, but our daily habits didn’t shift, except at airports; the President even urged us to keep shopping. This crisis is different. It’s a biological threat, and it’s global. Everyone has to change together to deal with it. That’s really history.
  • There are 7.8 billion people alive on this planet—a stupendous social and technological achievement that’s unnatural and unstable. It’s made possible by science, which has already been saving us. Now, though, when disaster strikes, we grasp the complexity of our civilization—we feel the reality, which is that the whole system is a technical improvisation that science keeps from crashing down
  • Today, in theory, everyone knows everything. We know that our accidental alteration of the atmosphere is leading us into a mass-extinction event, and that we need to move fast to dodge it. But we don’t act on what we know. We don’t want to change our habits. This knowing-but-not-acting is part of the old structure of feeling.
  • remember that you must die. Older people are sometimes better at keeping this in mind than younger people. Still, we’re all prone to forgetting death. It never seems quite real until the end, and even then it’s hard to believe. The reality of death is another thing we know about but don’t feel.
  • it is the first of many calamities that will likely unfold throughout this century. Now, when they come, we’ll be familiar with how they feel.
  • water shortages. And food shortages, electricity outages, devastating storms, droughts, floods. These are easy calls. They’re baked into the situation we’ve already created, in part by ignoring warnings that scientists have been issuing since the nineteen-sixties
  • Imagine what a food scare would do. Imagine a heat wave hot enough to kill anyone not in an air-conditioned space, then imagine power failures happening during such a heat wave.
  • science fiction is the realism of our time
  • Science-fiction writers don’t know anything more about the future than anyone else. Human history is too unpredictable; from this moment, we could descend into a mass-extinction event or rise into an age of general prosperity. Still, if you read science fiction, you may be a little less surprised by whatever does happen. Often, science fiction traces the ramifications of a single postulated change; readers co-create, judging the writers’ plausibility and ingenuity, interrogating their theories of history. Doing this repeatedly is a kind of training. It can help you feel more oriented in the history we’re making now. This radical spread of possibilities, good to bad, which creates such a profound disorientation; this tentative awareness of the emerging next stage—these are also new feelings in our time.
  • Do we believe in science? Go outside and you’ll see the proof that we do everywhere you look. We’re learning to trust our science as a society. That’s another part of the new structure of feeling.
  • This mixture of dread and apprehension and normality is the sensation of plague on the loose. It could be part of our new structure of feeling, too.
  • there are charismatic mega-ideas. “Flatten the curve” could be one of them. Immediately, we get it. There’s an infectious, deadly plague that spreads easily, and, although we can’t avoid it entirely, we can try to avoid a big spike in infections, so that hospitals won’t be overwhelmed and fewer people will die. It makes sense, and it’s something all of us can help to do. When we do it—if we do it—it will be a civilizational achievement: a new thing that our scientific, educated, high-tech species is capable of doing. Knowing that we can act in concert when necessary is another thing that will change us.
  • People who study climate change talk about “the tragedy of the horizon.” The tragedy is that we don’t care enough about those future people, our descendants, who will have to fix, or just survive on, the planet we’re now wrecking. We like to think that they’ll be richer and smarter than we are and so able to handle their own problems in their own time. But we’re creating problems that they’ll be unable to solve. You can’t fix extinctions, or ocean acidification, or melted permafrost, no matter how rich or smart you are. The fact that these problems will occur in the future lets us take a magical view of them. We go on exacerbating them, thinking—not that we think this, but the notion seems to underlie our thinking—that we will be dead before it gets too serious. The tragedy of the horizon is often something we encounter, without knowing it, when we buy and sell. The market is wrong; the prices are too low. Our way of life has environmental costs that aren’t included in what we pay, and those costs will be borne by our descendants. We are operating a multigenerational Ponzi scheme.
  • We’ve decided to sacrifice over these months so that, in the future, people won’t suffer as much as they would otherwise. In this case, the time horizon is so short that we are the future people.
  • Amid the tragedy and death, this is one source of pleasure. Even though our economic system ignores reality, we can act when we have to. At the very least, we are all freaking out together. To my mind, this new sense of solidarity is one of the few reassuring things to have happened in this century. If we can find it in this crisis, to save ourselves, then maybe we can find it in the big crisis, to save our children and theirs.
  • Thatcher said that “there is no such thing as society,” and Ronald Reagan said that “government is not the solution to our problem; government is the problem.” These stupid slogans marked the turn away from the postwar period of reconstruction and underpin much of the bullshit of the past forty years
  • We are individuals first, yes, just as bees are, but we exist in a larger social body. Society is not only real; it’s fundamental. We can’t live without it. And now we’re beginning to understand that this “we” includes many other creatures and societies in our biosphere and even in ourselves. Even as an individual, you are a biome, an ecosystem, much like a forest or a swamp or a coral reef. Your skin holds inside it all kinds of unlikely coöperations, and to survive you depend on any number of interspecies operations going on within you all at once. We are societies made of societies; there are nothing but societies. This is shocking news—it demands a whole new world view.
  • It’s as if the reality of citizenship has smacked us in the face.
  • The neoliberal structure of feeling totters. What might a post-capitalist response to this crisis include? Maybe rent and debt relief; unemployment aid for all those laid off; government hiring for contact tracing and the manufacture of necessary health equipment; the world’s militaries used to support health care; the rapid construction of hospitals.
  • If the project of civilization—including science, economics, politics, and all the rest of it—were to bring all eight billion of us into a long-term balance with Earth’s biosphere, we could do it. By contrast, when the project of civilization is to create profit—which, by definition, goes to only a few—much of what we do is actively harmful to the long-term prospects of our species.
  • Economics is a system for optimizing resources, and, if it were trying to calculate ways to optimize a sustainable civilization in balance with the biosphere, it could be a helpful tool. When it’s used to optimize profit, however, it encourages us to live within a system of destructive falsehoods. We need a new political economy by which to make our calculations. Now, acutely, we feel that need.
  • We’ll remember this even if we pretend not to. History is happening now, and it will have happened. So what will we do with that?
  • How we feel is shaped by what we value, and vice versa. Food, water, shelter, clothing, education, health care: maybe now we value these things more, along with the people whose work creates them. To survive the next century, we need to start valuing the planet more, too, since it’s our only home.
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • ...9 more annotations...
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

Zoom urged by rights groups to rule out 'creepy' AI emotion tech - 0 views

  • Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination
  • "If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices,"
  • The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls
  • ...1 more annotation...
  • "This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,"