
Home/ Dystopias/ Group items tagged risk


Ed Webb

We, The Technocrats - blprnt - Medium - 2 views

  • Silicon Valley’s go-to linguistic dodge: the collective we
  • “What kind of a world do we want to live in?”
  • Big tech’s collective we is its ‘all lives matter’, a way to soft-pedal concerns about privacy while refusing to speak directly to dangerous inequalities.
  • One two-letter word cannot possibly hold all of the varied experiences of data, specifically those of the people who are at the most immediate risk: visible minorities, LGBTQ+ people, indigenous communities, the elderly, the disabled, displaced migrants, the incarcerated
  • At least twenty-six states allow the FBI to perform facial recognition searches against their databases of images from driver's licenses and state IDs, despite the fact that the FBI’s own reports have indicated that facial recognition is less accurate for black people. Black people, already at a higher risk of arrest and incarceration than other Americans, feel these data systems in a much different way than I do
  • last week, the Department of Justice passed a brief to the Supreme Court arguing that sex discrimination protections do not extend to transgender people. If this ruling were to be supported, it would immediately put trans women and men at more risk than others from the surveillant data technologies that are becoming more and more common in the workplace. Trans people will be put in distinct danger — a reality that is lost when they are folded neatly into a communal we
  • I looked at the list of speakers for the conference in Brussels to get an idea of the particular we of Cook’s audience, which included Mark Zuckerberg, Google’s CEO Sundar Pichai and the King of Spain. Of the presenters, 57% were men and 83% were white. Only 4 of the 132 people on stage were black.
  • another we that Tim Cook necessarily speaks on the behalf of: privileged men in tech. This we includes Mark and Sundar; it includes 60% of Silicon Valley and 91% of its equity. It is this we who have reaped the most benefit from Big Data and carried the least risk, all while occupying the most time on stage
  • Here’s a more urgent question for us, one that doesn’t ask what we want but instead what they need: How can this new data world be made safer for the people who are facing real risks, right now?
  • “The act of listening has greater ethical potential than speaking” — Julietta Singh
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

Face Recognition Moves From Sci-Fi to Social Media - NYTimes.com - 0 views

  • the democratization of surveillance — may herald the end of anonymity
    • Ed Webb: Democratization means putting this at the command of citizens, not of unaccountable corporations.
  • facial recognition is proliferating so quickly that some regulators in the United States and Europe are playing catch-up. On the one hand, they say, the technology has great business potential. On the other, because facial recognition works by analyzing and storing people’s unique facial measurements, it also entails serious privacy risks
  • researchers also identified the interests and predicted partial Social Security numbers of some students.
  • marketers could someday use more invasive techniques to identify random people on the street along with, say, their credit scores
  • “You might think it’s cool, or you might think it’s creepy, depending on the context,”
  • many users do not understand that Facebook’s tag suggestion feature involves storing people’s biometric data to re-identify them in later photos
  • Mr. Caspar said last week that he was disappointed with the negotiations with Facebook and that his office was now preparing to take legal action over the company’s biometric database. Facebook told a German broadcaster that its tag suggestion feature complied with European data protection laws. “There are many risks,” Mr. Caspar says. “People should be able to choose if they want to accept these risks, or not accept them.” He offered a suggestion for Americans, “Users in the United States have good reason to raise their voices to get the same right.”
Ed Webb

The stories of Ray Bradbury. - By Nathaniel Rich - Slate Magazine - 0 views

  • Thanks to Fahrenheit 451, now required reading for every American middle-schooler, Bradbury is generally thought of as a writer of novels, but his talents—particularly his mastery of the diabolical premise and the brain-exploding revelation—are best suited to the short form.
  • The best stories have a strange familiarity about them. They're like long-forgotten acquaintances—you know you've met them somewhere before. There is, for instance, the tale of the time traveler who goes back into time and accidentally steps on a butterfly, thereby changing irrevocably the course of history ("A Sound of Thunder"). There's the one about the man who buys a robotic husband to live with his wife so that he can be free to travel and pursue adventure—that's "Marionettes, Inc." (Not to be confused with "I Sing the Body Electric!" about the man who buys a robotic grandmother to comfort his children after his wife dies.) Or "The Playground," about the father who changes places with his son so that he can spare his boy the cruelty of childhood—forgetting exactly how cruel childhood can be. The stories are familiar because they've been adapted, and plundered from, by countless other writers—in books, television shows, and films. To the extent that there is a mythology of our age, Bradbury is one of its creators.
  • "But Bradbury's skill is in evoking exactly how soul-annihilating that world is."    Of course, this also displays one of the key facts of Bradbury's work -- and a trend in science fiction that is often ignored. He's a reactionary of the first order, deeply distrustful of technology and even the notion of progress. Many science fiction writers had begun to rewrite the rules of women in space by the time Bradbury had women in long skirts hauling pots and pans over the Martian landscape. And even he wouldn't disagree. In his famous Playboy interview he responded to a question about predicting the future with, "It's 'prevent the future', that's the way I put it. Not predict it, prevent it."
  • And for the record, I've never understood why a writer who recognizes technology is labeled a "sci-fi writer", as if being a "sci-fi writer" were equal to being some sort of substandard, second-rate hack. The great Kurt Vonnegut managed to get stuck in that drawer after he recognized technology in his 1st novel "Player Piano". No matter that he turned out to be (imo) one of the greatest authors of the 20th century, period.
  • it's chilling how prescient he was about modern media culture in Fahrenheit 451. It's not a Luddite screed against TV. It's a speculative piece on what happens when we become divorced from the past and more attuned to images on the screen than we are to each other.
  • A favorite author of mine since I was in elementary school way back when mammoths roamed the earth. To me, he was an ardent enthusiast of technology, but also recognized its potential for separating us from one another while at the same time seemingly making us more "connected" in a superficial and transitory way
  • Bradbury is undeniably skeptical of technology and the risks it brings, particularly the risk that what we'd now call "virtualization" will replace actual emotional, intellectual or physical experience. On the other hand, however, I don't think there's anybody who rhapsodizes about the imaginative possibilities of rocketships and robots the way Bradbury does, and he's built entire setpieces around the idea of technological wonders creating new experiences.    I'm not saying he doesn't have a Luddite streak, more that he has feet in both camps and is harder to pin down than a single label allows. And I'll also add that in his public pronouncements of late, the Luddite streak has come out more strongly--but I tend to put much of that down to the curmudgeonliness of a ninety-year-old man.
  • I don't think he is a luddite so much as he is the little voice that whispers "be careful what you wish for." We have been sold the beautiful myth that technology will buy us free time, but we are busier than ever. TV was supposed to enlighten the masses, instead we have "reality TV" and a news network that does not let facts get in the way of its ideological agenda. We romanticize childhood, ignoring children's aggressive impulses, then feed them on a steady diet of violent video games.  
Ed Webb

Artificial intelligence, immune to fear or favour, is helping to make China's foreign p... - 0 views

  • Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.
  • China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population. The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risk
  • researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision
  • “Human beings can never get rid of the interference of hormones or glucose.”
  • “It would not even consider the moral factors that conflict with strategic goals,”
  • “If one side of the strategic game has artificial intelligence technology, and the other side does not, then this kind of strategic game is almost a one-way, transparent confrontation,” he said. “The actors lacking the assistance of AI will be at an absolute disadvantage in many aspects such as risk judgment, strategy selection, decision making and execution efficiency, and decision-making reliability,” he said.
  • “The entire strategic game structure will be completely out of balance.”
  • “AI can think many steps ahead of a human. It can think deeply in many possible scenarios and come up with the best strategy,”
  • A US Department of State spokesman said the agency had “many technological tools” to help it make decisions. There was, however, no specific information on AI that could be shared with the public,
  • The system, also known as geopolitical environment simulation and prediction platform, was used to vet “nearly all foreign investment projects” in recent years
  • One challenge to the development of AI policymaker is data sharing among Chinese government agencies. The foreign ministry, for instance, had been unable to get some data sets it needed because of administrative barriers
  • China is aggressively pushing AI into many sectors. The government is building a nationwide surveillance system capable of identifying any citizen by face within seconds. Research is also under way to introduce AI in nuclear submarines to help commanders make faster, more accurate decisions in battle.
  • “AI can help us get more prepared for unexpected events. It can help find a scientific, rigorous solution within a short time.
Ed Webb

Nine million logs of Brits' road journeys spill onto the internet from password-less nu... - 0 views

  • In a blunder described as "astonishing and worrying," Sheffield City Council's automatic number-plate recognition (ANPR) system exposed to the internet 8.6 million records of road journeys made by thousands of people
  • The Register learned of the unprotected dashboard from infosec expert and author Chris Kubecka, working with freelance writer Gerard Janssen, who stumbled across it using search engine Censys.io. She said: "Was the public ever told the system would be in place and that the risks were reasonable? Was there an opportunity for public discourse – or, like in Hitchhiker's Guide to the Galaxy, were the plans in a planning office at an impossible or undisclosed location?"
  • The dashboard was taken offline within a few hours of The Register alerting officials. Sheffield City Council and South Yorkshire Police added: "As soon as this was brought to our attention we took action to deal with the immediate risk and ensure the information was no longer viewable externally. Both Sheffield City Council and South Yorkshire Police have also notified the Information Commissioner's Office. We will continue to investigate how this happened and do everything we can to ensure it will not happen again."
Ed Webb

Endtime for Hitler: On the Downfall of the Downfall Parodies - Mark Dery - Doom Patrol:... - 1 views

  • Endtime for Hitler: On the Downfall of the Downfall Parodies
  • Hitler left an inexhaustible fund of unforgettable images; Riefenstahl’s Triumph of the Will alone is enough to make him a household deity of the TV age.
  • The Third Reich was the first thoroughly modern totalitarian horror, scripted by Hitler and mass-marketed by Goebbels, a tour de force of media spectacle and opinion management that America’s hidden persuaders—admen, P.R. flacks, political campaign managers—studied assiduously.  A Mad Man in both senses, Hitler sold the German volk on a racially cleansed utopia, a thousand-year empire whose kitschy grandeur was strictly Forest Lawn Parthenon.
  • Hitler, unlike Stalin or Mao, was an intuitive master of media stagecraft. David Bowie’s too-clever quip that Hitler was the first rock star, for which Bowie was widely reviled at the time, was spot-on.
  • the media like Hitler because Hitler liked the media
  • he prefigured postmodernity: the annexation of politics by Hollywood and Madison Avenue, the rise of the celebrity as a secular icon, the confusion of image and reality in a Matrix world. He regarded existence “as a kind of permanent parade before a gigantic audience” (Fest), calculating the visual impact of every histrionic pose, every propaganda tagline, every monumental building
  • His psychopathology is a queasy funhouse reflection, straight out of Nightmare Alley, of the instrumental rationality of the machine age. The genocidal assembly lines of Hitler’s death camps are a grotesque parody of Fordist mechanization, just as the Nazis’ fastidious recycling of every remnant of their victims but their smoke—their gold fillings melted down for bullion, their hair woven into socks for U-boat crewmen—is a depraved caricature of the Taylorist mania for workplace efficiency.
  • there’s something perversely comforting about Hitler’s unchallenged status as the metaphysical gravitational center of all our attempts at philosophizing evil
  • Perhaps that’s why he continues to mesmerize us: because he flickers, irresolvably, between the seemingly inhuman and the all too human.
  • By denying everyone’s capability, at least in theory, for Hitlerian evil, we let ourselves off the hook
  • Yet Hitler, paradoxically, is also a shriveled untermensch, the prototypical nonentity; a face in the crowd in an age of crowds, instantly forgettable despite his calculated efforts to brand himself (the toothbrush mustache of the military man coupled with the flopping forelock of the art-school bohemian)
  • there was always a comic distance between the public image of the world-bestriding, godlike Fuhrer and his Inner Adolf, a nail-biting nebbish tormented by flatulence. Knowingly or not, the Downfall parodies dance in the gap between the two. More immediately, they rely on the tried-and-true gimmick of bathos. What makes the Downfall parodies so consistently hilarious is the incongruity of whatever viral topic is making the Fuhrer go ballistic and the outsized scale of his gotterdammerung-strength tirade
  • The Downfall meme dramatizes the cultural logic of our remixed, mashed-up times, when digital technology allows us to loot recorded history, prying loose any signifier that catches our magpie eyes and repurposing it to any end. The near-instantaneous speed with which parodists use these viral videos to respond to current events underscores the extent to which the social Web, unlike the media ecologies of Hitler’s day, is a many-to-many phenomenon, more collective cacophony than one-way rant. As well, the furor (forgive pun) over YouTube’s decision to capitulate to the movie studio’s takedown demand, rather than standing fast in defense of Fair Use (a provision in copyright law that protects the re-use of a work for purposes of parody), indicates the extent to which ordinary people feel that commercial culture is somehow theirs, to misread or misuse as the spirit moves them.
  • the closest thing we have to a folk culture, the connective tissue that binds us as a society
  • SPIEGEL: Can you also get your revenge on him by using comedy? Brooks: Yes, absolutely. Of course it is impossible to take revenge for 6 million murdered Jews. But by using the medium of comedy, we can try to rob Hitler of his posthumous power and myths. [...] We take away from him the holy seriousness that always surrounded him and protected him like a cordon.”
  • risking the noose, some Germans laughed off their fears and mocked the Orwellian boot stamping on the human face, giving vent to covert opposition through flüsterwitze (“whispered jokes”). Incredibly, even Jews joked about their plight, drawing on the absurdist humor that is quintessentially Jewish to mock the Nazis even as they lightened the intolerable burden of Jewish life in the shadow of the swastika. Rapaport offers a sample of Jewish humor in Hitler’s Germany: “A Jew is arrested during the war, having been denounced for killing a Nazi at 10 P.M. and even eating the brain of his victim. This is his defense: In the first place, a Nazi hasn’t got any brain. Secondly, a Jew doesn’t eat anything that comes from a pig. And thirdly, he could not have killed the Nazi at 10 P.M. because at that time everybody listens to the BBC broadcast.”
  • Brilliant
Ed Webb

Piper at the Gates of Hell: An Interview with Cyberpunk Legend John Shirley | Motherboard - 0 views

    • Ed Webb: City Come A Walking is one of the most punk of the cyberpunk novels and short stories I have ever read, and I have read quite a few...
  • I'll press your buttons here by positing that if "we" (humankind) are too dumb to self-regulate our own childbirth output, too dim to recognize that we are polluting ourselves and neighbors out of sustainable existence, we are, in fact, a ridiculous parasite on this Earth and that the planet on which we live will simply slough us off—as it well should—and will bounce back without evidence of we even being here, come two or three thousand years. Your thoughts (in as much detail as you wish)? I would recommend reading my "the next 50 years" piece here. Basically I think that climate change, which in this case genuinely is caused mostly by humanity, is just one part of the environmental problem. Overfishing, toxification of the seas, pesticide use, weedkillers, prescription drugs in water, fracking, continued air pollution, toxicity in food, destruction of animal habitat, attrition on bee colonies—all this is converging. And we'll be facing the consequences for several hundred years.
  • I believe humanity will survive, and it won't be surviving like Road Warrior or the Morlocks from The Time Machine, but I think we'll have some cruelly ugly social consequences. We'll have famines the like of which we've never seen before, along with higher risk of wars—I do predict a third world war in the second half of this century but I don't think it will be a nuclear war—and I think we'll suffer so hugely we'll be forced to have a change in consciousness to adapt.
  • We may end up having to "terraform" the Earth itself, to some extent.
Ed Webb

The Web Means the End of Forgetting - NYTimes.com - 1 views

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read.
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
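The Vanish scheme described in the annotations above (encrypt the data, "shatter" the key into pieces, let the pieces erode over time) rests on threshold secret sharing: any k of n key shares rebuild the key, but fewer than k reveal nothing. A minimal sketch of that core idea using Shamir's scheme follows; this is a hypothetical illustration only, not Vanish's actual code, which distributes its shares across a peer-to-peer distributed hash table.

```python
import random

PRIME = 2**127 - 1  # prime field for Shamir secret sharing

def split_key(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover_key(shares):
    """Lagrange interpolation at x=0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = split_key(key, n=10, k=7)

# While at least k shares survive, the key can be rebuilt...
assert recover_key(random.sample(shares, 7)) == key
# ...but once "erosion" drops the count below k, interpolation
# yields garbage and the encrypted document is unreadable.
assert recover_key(random.sample(shares, 6)) != key
```

Erosion here is simply the disappearance of shares; no party has to actively delete the ciphertext, which is what makes the "rusting key" metaphor apt.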
Ed Webb

We Are Drowning in a Devolved World: An Open Letter from Devo - Noisey - 0 views

  • When Devo formed more than 40 years ago, we never dreamed that two decades into the 21st century, everything we had theorized would not only be proven, but also become worse than we had imagined
  • May 4 changed my life, and I truly believe Devo would not exist without that horror. It made me realize that all the Quasar color TVs, Swanson TV dinners, Corvettes, and sofa beds in the world didn't mean we were actually making progress. It meant the future could be not only as barbaric as the past, but that it most likely would be. The dystopian novels 1984, Animal Farm, and Brave New World suddenly seemed less like cautionary tales about the encroaching fusion of technological advances with the centralized, authoritarian power of the state, and more like subversive road maps to condition the intelligentsia for what was to come.
  • a philosophy emerged, fueled by the revelations that linear progress in a consumer society was a lie
  • There were no flying cars and domed cities, as promised in Popular Science; rather, there was a dumbing down of the population engineered by right-wing politicians, televangelists, and Madison Avenue. I called what we saw “De-evolution,” based upon the tendency toward entropy across all human endeavors. Borrowing the tactics of the Mad Men-era of our childhood, we shortened the name of the idea to the marketing-friendly “Devo.”
  • we witnessed an America where the capacity for critical thought and reasoning were eroding fast. People mindlessly repeating slogans from political propaganda and ad campaigns: “America, Love It or leave It”; “Don’t Ask Why, Drink Bud Dry”; “You’ve Come A Long Way, Baby”; even risk-free, feel-good slogans like “Give Peace a Chance.” Here was an emerging Corporate Feudal State
  • it seemed like the only real threat to consumer society at our disposal was meaning: turning sloganeering on its head for sarcastic or subversive means, and making people notice that they were being moved and manipulated by marketing, not by well-meaning friends disguised as mom-and-pop. And so creative subversion seemed the only viable course of action
  • Presently, the fabric that holds a society together has shredded in the wind. Everyone has their own facts, their own private Idaho stored in their expensive cellular phones
  • Social media provides the highway straight back to Plato’s Allegory of the Cave. The restless natives react to digital shadows on the wall, reduced to fear, hate, and superstition
  • The rise of authoritarian leadership around the globe, fed by ill-informed populism, is well-documented at this point. And with it, we see the ugly specter of increased racism and anti-Semitism. It’s open season on those who gladly vote against their own self-interests. The exponential increase in suffering for more and more of the population is heartbreaking to see. “Freedom of choice is what you got / Freedom from choice is what you want,” those Devo clowns said in 1980.
  • the hour is getting late. Perhaps the reason Devo was even nominated after 15 years of eligibility is because Western society seems locked in a death wish. Devo doesn’t skew so outside the box anymore. Maybe people are a bit nostalgic for our DIY originality and substance. We were the canaries in the coalmine warning our fans and foes of things to come in the guise of the Court Jester, examples of conformity in extremis in order to warn against conformity
  • Devo is merely the house band on the Titanic
Ed Webb

Could fully automated luxury communism ever work? - 0 views

  • Having achieved a seamless, pervasive commodification of online sociality, Big Tech companies have turned their attention to infrastructure. Attempts by Google, Amazon and Facebook to achieve market leadership, in everything from AI to space exploration, risk a future defined by the battle for corporate monopoly.
  • The technologies are coming. They’re already here in certain instances. It’s the politics that surrounds them. We have alternatives: we can have public ownership of data in the citizen’s interest or it could be used as it is in China where you have a synthesis of corporate and state power
  • the two alternatives that big data allows are an all-consuming surveillance state where you have a deep synthesis of capitalism with authoritarian control, or a reinvigorated welfare state where more and more things are available to everyone for free or very low cost
  • we can’t begin those discussions until we say, as a society, we want to at least try subordinating these potentials to the democratic project, rather than allow capitalism to do what it wants
  • I say in FALC that this isn’t a blueprint for utopia. All I’m saying is that there is a possibility for the end of scarcity, the end of work, a coming together of leisure and labour, physical and mental work. What do we want to do with it? It’s perfectly possible something different could emerge where you have this aggressive form of social value.
  • I think the thing that’s been beaten out of everyone since 2010 is one of the prevailing tenets of neoliberalism: work hard, you can be whatever you want to be, that you’ll get a job, be well paid and enjoy yourself. In 2010, that disappeared overnight, the rules of the game changed. For the status quo to continue to administer itself, it had to change common sense. You see this with Jordan Peterson; he’s saying you have to know your place and that’s what will make you happy. To me that’s the only future for conservative thought, how else do you mediate the inequality and unhappiness?
  • I don’t think we can rapidly decarbonise our economies without working people understanding that it’s in their self-interest. A green economy means better quality of life. It means more work. Luxury populism feeds not only into the green transition, but the rollout of Universal Basic Services and even further.
Ed Webb

Supreme court cellphone case puts free speech - not just privacy - at risk | Opinion | ... - 0 views

  • scholars are watching Carpenter’s case closely because it may require the supreme court to address the scope and continuing relevance of the “third-party-records doctrine”, a judicially developed rule that has sometimes been understood to mean that a person surrenders her constitutional privacy interest in information that she turns over to a third party. The government contends that Carpenter lacks a constitutionally protected privacy interest in his location data because his cellphone was continually sharing that data with his cellphone provider.
  • Privacy advocates are rightly alarmed by this argument. Much of the digital technology all of us rely on today requires us to share information passively with third parties. Visiting a website, sending an email, buying a book online – all of these things require sharing sensitive data with internet service providers, merchants, banks and others. If this kind of commonplace and unavoidable information-sharing is sufficient to extinguish constitutional privacy rights, the digital-age fourth amendment will soon be a dead letter.
  • “Awareness that the government may be watching chills associational and expressive freedoms,” Chief Justice John Roberts wrote. Left unchecked, he warned, new forms of surveillance could “alter the relationship between citizen and government in a way that is inimical to democratic society”.
Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E... - 0 views

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses.
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?”
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on. “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10 years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
Ed Webb

Interoperability And Privacy: Squaring The Circle | Techdirt - 0 views

  • if there's one thing we've learned from more than a decade of Facebook scandals, it's that there's little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook's ambition to put more and more of our daily lives behind its walled garden, it's hard to see why we would ever trust Facebook to be the one to fix all that's wrong with Facebook.
  • Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company's walled garden
  • rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work with some small changes in configurations. Had they succeeded, they could have simply slapped a "complies with IEEE standard" label on everything they were already selling and declared themselves to have fixed the problem... without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot.
  • the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that's not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge
  • Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory that allowed it to shut down Power Ventures, a competitor that allowed Facebook's users to use multiple social networks from a single interface. Facebook argued that by allowing users to log in and display Facebook with a different interface, even after receipt of a cease and desist letter telling Power Ventures to stop, the company had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, upsetting Facebook's investors made their conduct illegal.
  • Today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would be the place that had conveniently organized all the people whom you might tempt to leave Facebook, and even supply you with the tools you need to target those people.
  • Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues or fellow travelers are not ready to join them, a service like this could let Facebook vegans get out of the Facebook pool while still leaving a toe in its waters.
  • In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can't afford to enrage their customers
  • the audience for a legitimate adversarial interoperability product are the customers of the existing service that it connects to.
  • anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics.
  • If users could use a third-party service to exchange private messages with friends, or to participate in a group they're a member of, they can avoid much (but not all) of this surveillance.
  • Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook's famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet). Everyone can use tracker-blockers, ad-blockers and script-blockers to prevent Facebook from tracking them when they're not on Facebook, by watching how they interact with pages that have Facebook "Like" buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We're rightfully proud of our own tracker blocker, Privacy Badger, but it doesn't stop Facebook from tracking you if you have a Facebook account and you're using Facebook's service.
  • As Facebook's market power dwindled, so would the pressure that web publishers feel to embed Facebook trackers on their sites, so that non-Facebook users would not be as likely to be tracked as they use the Web.
  • Today, Facebook's scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook.
  • For users who have privacy needs -- and other needs -- beyond those the big platforms are willing to fulfill, it's important that we keep the door open to competitors (for-profit, nonprofit, hobbyist and individuals) who are willing to fill those needs.
  • helping Facebook's own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we'd be on our way to a more pluralistic, decentralized, diverse Internet
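The tracker-blockers mentioned in the annotations above work, at their simplest, by intercepting a page's outgoing requests and dropping third-party requests to known tracking domains, such as the hosts that serve "Like" buttons and beacons. The sketch below is a hypothetical illustration of that core check (the domain list is invented for the example); real blockers such as Privacy Badger use much larger lists or learn trackers heuristically.

```python
from urllib.parse import urlparse

# Hypothetical blocklist for illustration; real blockers ship or
# learn far larger, regularly updated lists.
TRACKER_DOMAINS = {"facebook.com", "facebook.net"}

def is_tracker(request_url, page_url):
    """Return True if a request is third-party and aimed at a tracking domain."""
    req_host = urlparse(request_url).hostname or ""
    page_host = urlparse(page_url).hostname or ""
    if req_host == page_host:
        return False  # first-party requests are allowed through
    # Match the tracking domain itself or any of its subdomains.
    return any(req_host == d or req_host.endswith("." + d)
               for d in TRACKER_DOMAINS)

# A "Like" button script embedded on a news site gets blocked...
assert is_tracker("https://connect.facebook.net/en_US/sdk.js",
                  "https://example-news.com/story")
# ...while the site's own assets load normally.
assert not is_tracker("https://example-news.com/style.css",
                      "https://example-news.com/story")
```

Note the limitation the excerpt itself points out: this kind of blocking stops tracking on third-party sites, but cannot stop Facebook from logging what a logged-in user does on Facebook's own service.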
Ed Webb

TSA is adding face recognition at big airports. Here's how to opt out. - The Washington... - 0 views

  • Any time data gets collected somewhere, it could also be stolen — and you only get one face. The TSA says all its databases are encrypted to reduce hacking risk. But in 2019, the Department of Homeland Security disclosed that photos of travelers were taken in a data breach, accessed through the network of one of its subcontractors.
  • “What we often see with these biometric programs is they are only optional in the introductory phases — and over time we see them becoming standardized and nationalized and eventually compulsory,” said Cahn. “There is no place more coercive to ask people for their consent than an airport.”
  • Those who have the privilege of not having to worry that their face will be misread can zip right through — whereas people who don’t consent to it pay a tax with their time. At that point, how voluntary is it, really?