Home/ Dystopias/ Group items tagged digital


Ed Webb

Does the Digital Classroom Enfeeble the Mind? - NYTimes.com

  • My father would have been unable to “teach to the test.” He once complained about errors in a sixth-grade math textbook, so he had the class learn math by designing a spaceship. My father would have been spat out by today’s test-driven educational regime.
  • A career in computer science makes you see the world in its terms. You start to see money as a form of information display instead of as a store of value. Money flows are the computational output of a lot of people planning, promising, evaluating, hedging and scheming, and those behaviors start to look like a set of algorithms. You start to see the weather as a computer processing bits tweaked by the sun, and gravity as a cosmic calculation that keeps events in time and space consistent. This way of seeing is becoming ever more common as people have experiences with computers. While it has its glorious moments, the computational perspective can at times be uniquely unromantic. Nothing kills music for me as much as having some algorithm calculate what music I will want to hear. That seems to miss the whole point. Inventing your musical taste is the point, isn’t it? Bringing computers into the middle of that is like paying someone to program a robot to have sex on your behalf so you don’t have to. And yet it seems we benefit from shining an objectifying digital light to disinfect our funky, lying selves once in a while. It’s heartless to have music chosen by digital algorithms. But at least there are fewer people held hostage to the tastes of bad radio D.J.’s than there once were. The trick is being ambidextrous, holding one hand to the heart while counting on the digits of the other.
  • The future of education in the digital age will be determined by our judgment of which aspects of the information we pass between generations can be represented in computers at all. If we try to represent something digitally when we actually can’t, we kill the romance and make some aspect of the human condition newly bland and absurd. If we romanticize information that shouldn’t be shielded from harsh calculations, we’ll suffer bad teachers and D.J.’s and their wares.
  • ...5 more annotations...
  • Some of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don’t know how information is represented in the brain. We don’t know how reason is accomplished by neurons. There are some vaguely cool ideas floating around, and we might know a lot more about these things any moment now, but at this moment, we don’t. You could spend all day reading literature about educational technology without being reminded that this frontier of ignorance lies before us. We are tempted by the demons of commercial and professional ambition to pretend we know more than we do.
  • Outside school, something similar happens. Students spend a lot of time acting as trivialized relays in giant schemes designed for the purposes of advertising and other revenue-minded manipulations. They are prompted to create databases about themselves and then trust algorithms to assemble streams of songs and movies and stories for their consumption. We see the embedded philosophy bloom when students assemble papers as mash-ups from online snippets instead of thinking and composing on a blank piece of screen. What is wrong with this is not that students are any lazier now or learning less. (It is probably even true, I admit reluctantly, that in the presence of the ambient Internet, maybe it is not so important anymore to hold an archive of certain kinds of academic trivia in your head.) The problem is that students could come to conceive of themselves as relays in a transpersonal digital structure. Their job is then to copy and transfer data around, to be a source of statistics, whether to be processed by tests at school or by advertising schemes elsewhere.
  • If students don’t learn to think, then no amount of access to information will do them any good.
  • To the degree that education is about the transfer of the known between generations, it can be digitized, analyzed, optimized and bottled or posted on Twitter. To the degree that education is about the self-invention of the human race, the gargantuan process of steering billions of brains into unforeseeable states and configurations in the future, it can continue only if each brain learns to invent itself. And that is beyond computation because it is beyond our comprehension.
  • Roughly speaking, there are two ways to use computers in the classroom. You can have them measure and represent the students and the teachers, or you can have the class build a virtual spaceship. Right now the first way is ubiquitous, but the virtual spaceships are being built only by tenacious oddballs in unusual circumstances. More spaceships, please.
  • How do we get this right - use the tech for what it can do well, develop our brains for what the tech can't do? Who's up for building a spaceship?
Ed Webb

BBC News - Cult of less: Living out of a hard drive

  • The DJ has now replaced his bed with friends' couches, paper bills with online banking, and a record collection containing nearly 2,000 albums with an external hard drive with DJ software and nearly 13,000 MP3s
    • Ed Webb
       
      MP3s are convenient, of course, but they don't sound even half as good as vinyl. Seriously.
  • Mr Klein says the lifestyle can become loathsome because "you never know where you will sleep". And Mr Yurista says he frequently worries he may lose his new digital life to a hard drive crash or downed server. "You have to really make sure you have back-ups of your digital goods everywhere," he said.
  • like a house fire that rips through a family's prized possessions, when someone loses their digital goods to a computer crash, they can be devastated. Kelly Chessen, a 36-year-old former suicide hotline counsellor with a soothing voice and reassuring personality, is Drive Savers' official "data crisis counsellor". Part-psychiatrist and part-tech enthusiast, Ms Chessen's role is to try to calm people down when they lose their digital possessions to failed drives. Ms Chessen says some people have gone as far as to threaten suicide over their lost digital possessions and data. "It's usually indirect threats like, 'I'm not sure what I'm going to do if I can't get the data back,' but sometimes it will be a direct threat such as, 'I may just have to end it if I can't get to the information',"
  • ...4 more annotations...
  • Dr Sandberg believes we could be living on hard drives along with our digital possessions in the not too distant future, which would allow us to shed the trouble of owning a body. The concept is called "mind uploading", and it suggests that when our bodies age and begin to fail like a worn or snapped record, we may be able to continue living consciously inside a computer as our own virtual substitutes. "It's the idea that we can copy or transfer the information inside the brain into a form that can be run on the computer," said Dr Sandberg. He added: "That would mean that your consciousness or a combination of that would continue in the computer." Dr Sandberg says although it's just a theory now, researchers and engineers are working on super computers that could one day handle a map of all the networks of neurons and synapses in our brains - and that map could produce human consciousness outside of the body.
  • Mr Sutton is the founder of CultofLess.com, a website which has helped him sell or give away his possessions - apart from his laptop, an iPad, an Amazon Kindle, two external hard drives, a "few" articles of clothing and bed sheets for a mattress that was left in his newly rented apartment. This 21st-Century minimalist says he got rid of much of his clutter because he felt the ever-increasing number of available digital goods have provided adequate replacements for his former physical possessions
  • The tech-savvy Los Angeles "transplant" credits his external hard drives and online services like iTunes, Hulu, Flickr, Facebook, Skype and Google Maps for allowing him to lead a minimalist life.
  • - the internet has replaced my need for an address
Ed Webb

Goodbye petabytes, hello zettabytes | Technology | The Guardian

  • Every man, woman and child on the planet using micro-blogging site Twitter for a century. For many people that may sound like a vision of hell, but for watchers of the tremendous growth of digital communications it is a neat way of presenting the sheer scale of the so-called digital universe.
  • the growing desire of corporations and governments to know and store ever more data about everyone
  • experts estimate that all human language used since the dawn of time would take up about 5,000 petabytes if stored in digital form, which is less than 1% of the digital content created since someone first switched on a computer.
  • ...6 more annotations...
  • A zettabyte, incidentally, is roughly half a million times the entire collections of all the academic libraries in the United States.
  • Mobile phones have dramatically widened the range of people who can create, store and share digital information. "China now has more visible devices out on the streets being used by individuals than the US does," said McDonald. "We are seeing the democratisation and commoditisation of the use and creation of information."
  • About 70% of the digital universe is generated by individuals, but its storage is then predominantly the job of corporations. From emails and blogs to mobile phone calls, it is corporations that are storing information on behalf of consumers.
  • actions in the offline world that individuals carry out which result in digital content being created by organisations – from cashpoint transactions which a bank must record to walking along the pavement, which is likely to result in CCTV footage
  • "unstructured"
  • "You talk to a kid these days and they have no idea what a kilobyte is. The speed things progress, we are going to need many words beyond zettabyte."
Ed Webb

The Web Means the End of Forgetting - NYTimes.com

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • ...20 more annotations...
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read.
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
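The Vanish system highlighted above rests on a simple cryptographic idea: encrypt the data, split the key into pieces, and let the pieces decay so that the ciphertext eventually becomes unreadable by everyone, including its owner. The sketch below is a toy illustration of that idea only, not the real Vanish protocol (which scatters Shamir secret shares across a peer-to-peer DHT and relies on natural node churn for erosion); here an XOR key split, where every share is required, and a SHA-256 keystream stand in for the real machinery.

```python
# Toy sketch of the Vanish idea: encrypt data, split the key into shares,
# and let shares "erode" so the ciphertext becomes permanently unreadable.
# Stand-ins only: an XOR split (all shares required, unlike Vanish's
# threshold scheme) and a SHA-256-derived keystream instead of a real cipher.
import hashlib
import secrets


def keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudorandom bytes from the key (stand-in for a stream cipher).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]


def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))


def split_key(key: bytes, n_shares: int) -> list[bytes]:
    # XOR split: the key is recoverable only by XOR-ing ALL shares together.
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]


def rebuild_key(shares: list[bytes]) -> bytes:
    key = bytes(len(shares[0]))
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key


key = secrets.token_bytes(32)
ciphertext = encrypt(key, b"self-destructing message")
shares = split_key(key, 5)

# While every share survives, the message is readable.
assert encrypt(rebuild_key(shares), ciphertext) == b"self-destructing message"

# Once even one share "erodes", the rebuilt key is wrong and the data is gone.
shares.pop()
assert encrypt(rebuild_key(shares), ciphertext) != b"self-destructing message"
```

The design point the article gestures at is that no one has to actively delete anything: the data "self-destructs" simply because the key material stops existing.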
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    • Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    • Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    • Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    • Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities, but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    • Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • ...18 more annotations...
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    • Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    • Values-based system: Develop policies to assure AI will be directed at 'humanness' and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    • Prioritize people: Alter economic and political systems to better help humans 'race with the robots'. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

The Digital Maginot Line

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • ...17 more annotations...
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Our Digitally Undying Memories - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • as Viktor Mayer-Schönberger argues convincingly in his book Delete: The Virtue of Forgetting in the Digital Age (Princeton University Press, 2009), the costs of such powerful collective memory are often higher than we assume.
  • "Total recall" renders context, time, and distance irrelevant. Something that happened 40 years ago—whether youthful or scholarly indiscretion—still matters and can come back to harm us as if it had happened yesterday.
  • an important "third wave" of work about the digital environment. In the late 1990s and early 2000s, we saw books like Nicholas Negroponte's Being Digital (Knopf, 1995) and Howard Rheingold's The Virtual Community: Homesteading on the Electronic Frontier (Addison-Wesley, 1993) and Smart Mobs: The Next Social Revolution (Perseus, 2002), which idealistically described the transformative powers of digital networks. Then we saw shallow blowback, exemplified by Susan Jacoby's The Age of American Unreason (Pantheon, 2008).
  • For most of human history, forgetting was the default and remembering the challenge.
  • Chants, songs, monasteries, books, libraries, and even universities were established primarily to overcome our propensity to forget over time. The physical and economic limitations of all of those technologies and institutions served us well. Each acted not just as memory aids but also as filters or editors. They helped us remember much by helping us discard even more.
    • Ed Webb
       
      Excellent point, well made.
  • Our use of the proliferating data and rudimentary filters in our lives renders us incapable of judging, discriminating, or engaging in deductive reasoning. And inductive reasoning, which one could argue is entering a golden age with the rise of huge databases and the processing power needed to detect patterns and anomalies, is beyond the reach of lay users of the grand collective database called the Internet.
  • Even 10 years ago, we did not consider that words written for a tiny audience could reach beyond, perhaps to someone unforgiving, uninitiated in a community, or just plain unkind.
  • Remembering to forget, as Elvis argued, is also essential to getting over heartbreak. And, as Jorge Luis Borges wrote in his 1942 (yep, I Googled it to find the date) story "Funes el memorioso," it is just as important to the act of thinking. Funes, the young man in the story afflicted with an inability to forget anything, can't make sense of it. He can't think abstractly. He can't judge facts by relative weight or seriousness. He is lost in the details. Painfully, Funes cannot rest.
  • Just because we have the vessels, we fill them.
  • the default habits of our species: to record, retain, and release as much information as possible
  • Perhaps we just have to learn to manage wisely how we digest, discuss, and publicly assess the huge archive we are building. We must engender cultural habits that ensure perspective, calm deliberation, and wisdom. That's hard work.
  • we choose the nature of technologies. They don't choose us. We just happen to choose unwisely with some frequency
  • surveillance as the chief function of electronic government
  • critical information studies
  • Siva Vaidhyanathan is an associate professor of media studies and law at the University of Virginia. His next book, The Googlization of Everything, is forthcoming from the University of California Press.
  • Nietzsche's _On the Use and Disadvantage of History for Life_
  • Google compresses, if not eliminates, temporal context. This is likely only to exacerbate the existing problem in politics of taking one's statements out of context. A politician whose views on a subject have evolved quite logically over decades in light of changing knowledge and/or circumstances is held up in attack ads as a flip-flopper because consecutive Google entries have him/her saying two opposite things about the same subject -- and never mind that between the two statements, the Berlin Wall may have fallen or the economy crashed harder than at any other time since 1929.
Ed Webb

Schools Urged To Teach Youth Digital Citizenship : NPR - 0 views

  • not being trained in digital citizenship never caused a problem for me. I knew what was right and wrong, and I did the right thing. Why is this being treated so differently??? "Nobody has come out and said, 'This is how it's supposed to be.'" This is part of my issue with "education". Students are learning that they only need to do what they are told. If there isn't a rule, it must be OK. There's no thought, no critical evaluation, no drawing of parallels that if something is wrong in this circumstance, then it must also be in this other situation. People need to be allowed (forced? certainly encouraged) to think for themselves -- and to be responsible for their own actions!
    • Ed Webb
       
      Do you agree with this comment? Are issues such as ethics, courtesy etc different in the digital domain, or can/should values cross over? Is there a need for training or education specific to the online rather than common to the offline and online?
  • "For the most part, kids who are in college today never received any form of digital citizenship or media training when they were in high school or middle school."
Ed Webb

Smartphones are making us stupid - and may be a 'gateway drug' | The Lighthouse - 0 views

  • rather than making us smarter, mobile devices reduce our cognitive ability in measurable ways
  • “There’s lots of evidence showing that the information you learn on a digital device doesn’t get retained very well and isn’t transferred across to the real world,”
  • “You’re also quickly conditioned to attend to lots of attention-grabbing signals, beeps and buzzes, so you jump from one task to the other and you don’t concentrate.”
  • Not only do smartphones affect our memory and our concentration, research shows they are addictive – to the point where they could be a ‘gateway drug’ making users more vulnerable to other addictions.
  • Smartphones are also linked to reduced social interaction, inadequate sleep, poor real-world navigation, and depression.
  • “The more time that kids spend on digital devices, the less empathetic they are, and the less they are able to process and recognise facial expressions, so their ability to actually communicate with each other is decreased.”
  • “Casino-funded research is designed to keep people gambling, and app software developers use exactly the same techniques. They have lots of buzzes and icons so you attend to them, they have things that move and flash so you notice them and keep your attention on the device.”
  • Around 90 per cent of US university students are thought to experience ‘phantom vibrations', so the researcher took a group to a desert location with no cell reception – and found that even after four days, around half of the students still thought their pocket was buzzing with Facebook or text notifications.
  • “Collaboration is a buzzword with software companies who are targeting schools to get kids to use these collaboration tools on their iPads – but collaboration decreases when you're using these devices,”
  • “All addiction is based on the same craving for a dopamine response, whether it's drug, gambling, alcohol or phone addiction,” he says. “As the dopamine response drops off, you need to increase the amount you need to get the same result, you want a little bit more next time. Neurologically, they all look the same. “We know – there are lots of studies on this – that once we form an addiction to something, we become more vulnerable to other addictions. That’s why there’s concerns around heavy users of more benign, easily-accessed drugs like alcohol and marijuana as there’s some correlation with usage of more physically addictive drugs like heroin, and neurological responses are the same.”
  • parents can also fall victim to screens which distract from their child’s activities or conversations, and most adults will experience this with friends and family members too.
  • “We also know that if you learn something on an iPad you are less likely to be able to transfer that to another device or to the real world,”
  • a series of studies have tested this with children who learn to construct a project with ‘digital’ blocks and then try the project with real blocks. “They can’t do it - they start from zero again,”
  • “Our brains can’t actually multitask, we have to switch our attention from one thing to another, and each time you switch, there's a cost to your attentional resources. After a few hours of this, we become very stressed.” That also causes us to forget things
  • A study from Norway recently tested how well kids remembered what they learned on screens. One group of students received information on a screen and were asked to memorise it; the second group received the same information on paper. Both groups were tested on their recall. Unsurprisingly, the children who received the paper version remembered more of the material. But the children with the electronic version were also found to be more stressed,
  • The famous ‘London taxi driver experiments’ found that memorising large maps caused the hippocampus to expand in size. Williams says that the reverse is going to happen if we don’t use our brain and memory to navigate. “Our brains are just like our muscles. We ‘use it or lose it’ – in other words, if we use navigation devices for directions rather than our brains, we will lose that ability.”
  • numerous studies also link smartphone use with sleeplessness and anxiety. “Some other interesting research has shown that the more friends you have on social media, the less friends you are likely to have in real life, the less actual contacts you have and the greater likelihood you have of depression,”
  • 12-month-old children whose carers regularly use smartphones have poorer facial expression perception
  • turning off software alarms and notifications, putting strict time limits around screen use, keeping screens out of bedrooms, minimising social media and replacing screens with paper books, paper maps and other non-screen activities can all help minimise harm from digital devices including smartphones
Ed Webb

Retargeting Ads Follow Surfers to Other Sites - NYTimes.com - 0 views

  • it’s a little creepy, especially if you don’t know what’s going on
  • personalized retargeting or remarketing
  • the palpable feeling that they are being watched as they roam the virtual aisles of online stores
  • Others, though, find it disturbing. When a recent Advertising Age column noted the phenomenon, several readers chimed in to voice their displeasure.
  • she felt even worse when she was hounded recently by ads for a dieting service she had used online. “They are still following me around, and it makes me feel fat,” she said.
  • stalked by shoes
  • the technique is raising anew the threat of industry regulation
  • that there is a commercial surveillance system in place online that is sweeping in scope and raises privacy and civil liberties issues
  • Mr. Magness, of Zappos, said that consumers may be unnerved because they may feel that they are being tracked from site to site as they browse the Web. To reassure consumers, Zappos, which is using the ads to peddle items like shoes, handbags and women’s underwear, displays a message inside the banner ads that reads, “Why am I seeing these ads?” When users click on it, they are taken to the Web site of Criteo, the advertising technology company behind the Zappos ads, where the ads are explained.
  • “When you begin to give people a sense of how this is happening, they really don’t like it,”
  • Professor Turow, who studies digital media and recently testified at a Senate committee hearing on digital advertising, said he had a visceral negative reaction to the ads, even though he understands the technologies behind them. “It seemed so bold,” Professor Turow said. “I was not pleased, frankly.”
  • For Google, remarketing is a more specific form of behavioral targeting, the practice under which a person who has visited NBA.com, for instance, may be tagged as a basketball fan and later will be shown ads for related merchandise. Behavioral targeting has been hotly debated in Washington, and lawmakers are considering various proposals to regulate it. During the recent Senate hearing, Senator Claire McCaskill, Democrat of Missouri, said she found the technique troubling. “I understand that advertising supports the Internet, but I am a little spooked out,” Ms. McCaskill said of behavioral targeting. “This is creepy.”
  • being stalked by a pair of pants
  • “I don’t think that exposing all this detailed information you have about the customer is necessary,” said Alan Pearlstein, chief executive of Cross Pixel Media, a digital marketing agency. Mr. Pearlstein says he supports retargeting, but with more subtle ads that, for instance, could offer consumers a discount coupon if they return to an online store. “What is the benefit of freaking customers out?”
  •  
    Minority Report (movie)?
Ed Webb

The trust gap: how and why news on digital platforms is viewed more sceptically versus ... - 0 views

  • Levels of trust in news on social media, search engines, and messaging apps are consistently lower than audience trust in information in the news media more generally.
  • Many of the same people who lack trust in news encountered via digital media companies – who tend to be older, less educated, and less politically interested – also express less trust in the news regardless of whether found on platforms or through more traditional offline modes.
  • Many of the most common reasons people say they use platforms have little to do with news.
  • News about politics is viewed as particularly suspect and platforms are seen by many as contentious places for political conversation – at least for those most interested in politics. Rates of trust in news in general are comparatively higher than trust in news when it pertains to coverage of political affairs.
  • Negative perceptions about journalism are widespread and social media is one of the most often-cited places people say they see or hear criticism of news and journalism
  • Despite positive feelings towards most platforms, large majorities in all four countries agree that false and misleading information, harassment, and platforms using data irresponsibly are ‘big problems’ in their country for many platforms
Ed Webb

Google and Apple Digital Mapping | Data Collection - 0 views

  • There is a sense, in fact, in which mapping is the essence of what Google does. The company likes to talk about services such as Maps and Earth as if they were providing them for fun - a neat, free extra as a reward for using their primary offering, the search box. But a search engine, in some sense, is an attempt to map the world of information - and when you can combine that conceptual world with the geographical one, the commercial opportunities suddenly explode.
  • In a world of GPS-enabled smartphones, you're not just consulting Google's or Apple's data stores when you consult a map: you're adding to them.
  • There's no technical reason why, perhaps in return for a cheaper phone bill, you mightn't consent to be shown not the quickest route between two points, but the quickest route that passes at least one Starbucks. If you're looking at the world through Google glasses, who determines which aspects of "augmented reality" data you see - and did they pay for the privilege?
  • "The map is mapping us," says Martin Dodge, a senior lecturer in human geography at Manchester University. "I'm not paranoid, but I am quite suspicious and cynical about products that appear to be innocent and neutral, but that are actually vacuuming up all kinds of behavioural and attitudinal data."
  • it's hard to interpret the occasional aerial snapshot of your garden as a big issue when the phone in your pocket is assembling a real-time picture of your movements, preferences and behaviour
  • "There's kind of a fine line that you run," said Ed Parsons, Google's chief geospatial technologist, in a session at the Aspen Ideas Festival in Colorado, "between this being really useful, and it being creepy."
  • "Google and Apple are saying that they want control over people's real and imagined space."
  • It can be easy to assume that maps are objective: that the world is out there, and that a good map is one that represents it accurately. But that's not true. Any square kilometre of the planet can be described in an infinite number of ways: in terms of its natural features, its weather, its socio-economic profile, or what you can buy in the shops there. Traditionally, the interests reflected in maps have been those of states and their armies, because they were the ones who did the map-making, and the primary use of many such maps was military. (If you had the better maps, you stood a good chance of winning the battle. The logo of Britain's Ordnance Survey still includes a visual reference to the 18th-century War Department.) Now, the power is shifting. "Every map," the cartography curator Lucy Fellowes once said, "is someone's way of getting you to look at the world his or her way."
  • The question cartographers are always being asked at cocktail parties, says Heyman, is whether there's really any map-making still left to do: we've mapped the whole planet already, haven't we? The question could hardly be more misconceived. We are just beginning to grasp what it means to live in a world in which maps are everywhere - and in which, by using maps, we are mapped ourselves.
Ed Webb

Where is the boundary between your phone and your mind? | US news | The Guardian - 1 views

  • Here’s a thought experiment: where do you end? Not your body, but you, the nebulous identity you think of as your “self”. Does it end at the limits of your physical form? Or does it include your voice, which can now be heard as far as outer space; your personal and behavioral data, which is spread out across the impossibly broad plane known as digital space; and your active online personas, which probably encompass dozens of different social media networks, text message conversations, and email exchanges? This is a question with no clear answer, and, as the smartphone grows ever more essential to our daily lives, that border’s only getting blurrier.
  • our minds have become even more radically extended than ever before
  • one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: we not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn’t suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is
  • American consumers spent five hours per day on their mobile devices, and showed a dizzying 69% year-over-year increase in time spent in apps like Facebook, Twitter, and YouTube. The prevalence of apps represents a concrete example of the movement away from the old notion of accessing the Internet through a browser and the new reality of the connected world and its myriad elements – news, social media, entertainment – being with us all the time
  • “In the 90s and even through the early 2000s, for many people, there was this way of thinking about cyberspace as a space that was somewhere else: it was in your computer. You went to your desktop to get there,” Weigel says. “One of the biggest shifts that’s happened and that will continue to happen is the undoing of a border that we used to perceive between the virtual and the physical world.”
  • While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: it’s a means for others to access us
  • Weigel sees the unfettered access to our data, through our smartphone and browser use, of what she calls the big five tech companies – Apple, Alphabet (the parent company of Google), Microsoft, Facebook, and Amazon – as a legitimate problem for notions of democracy
  • an unfathomable amount of wealth, power, and direct influence on the consumer in the hands of just a few individuals – individuals who can affect billions of lives with a tweak in the code of their products
  • “This is where the fundamental democracy deficit comes from: you have this incredibly concentrated private power with zero transparency or democratic oversight or accountability, and then they have this unprecedented wealth of data about their users to work with,”
  • the rhetoric around the Internet was that the crowd would prevent the spread of misinformation, filtering it out like a great big hive mind; it would also help to prevent the spread of things like hate speech. Obviously, this has not been the case, and even the relatively successful experiments in this, such as Wikipedia, have a great deal of human governance that allows them to function properly
  • We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others. Essentially, we need to understand the fundamental difference between our behavior IRL and in the digital sphere – a difference that, despite the erosion of boundaries, still stands
  • “Whether we know it or not, the connections that we make on the Internet are being used to cultivate an identity for us – an identity that is then sold to us afterward,” Lynch says. “Google tells you what questions to ask, and then it gives you the answers to those questions.”
  • It isn’t enough that the apps in our phone flatten all of the different categories of relationships we have into one broad group: friends, followers, connections. They go one step further than that. “You’re being told who you are all the time by Facebook and social media because which posts are coming up from your friends are due to an algorithm that is trying to get you to pay more attention to Facebook,” Lynch says. “That’s affecting our identity, because it affects who you think your friends are, because they’re the ones who are popping up higher on your feed.”
Ed Webb

The future of stupid fears | Bryan Alexander - 0 views

  • Culture of Fear argues that media and political fear-mongering teaches consumers and voters to see problems in terms of stories about heroic individuals, rather than about social or political factors.  The contexts get set aside, replaced with more relatable tales of villainous criminals and virtuous victims, which Glassner calls “neurologizing social problems” (217). There is also a curious, quietly conservative politics of the family involved.  Such fears emphasize stranger danger, which is actually statistically very rare.  Instead, they minimize the far more likely source of harm most American face: our family members (31).
  • fake fears reveal cultural anxieties, much as horror stories do
  • “news is what happens to your editors.”  By that he means “editors – and their bosses… [and] their families, friends, and business associates”(201)
  • Our politics clearly adore fear, notably from the Trump administration and its emphasis on immigrant-driven carnage.  Our news media continue to worship at the altar of “if it bleeds, it leads.”
  • that CNN is the opposite of a fringe news service.  Between Fox and MSNBC it occupies a neutral, middle ground.  It is, putatively, the sober center.  And it simply adores scaring the hell out of us
  • What does the likelihood of even more stupid fear-mongering mean for education?  It simply means, as I said years ago, we have to teach people to resist this stuff.  In our quest to teach digital literacy we should encourage students – of all ages – to avoid TV news, or to sample it judiciously, with great skepticism.  We should assist them in recognizing when politicians fire up fear campaigns based on poor facts.
  • politicians peddle terror because it often works
  • the negative impacts of such fear – the misdirection of resources, the creation of bad policy, the encouragement of mean world syndrome, the furtherance of racism – the promulgation of real damage
Ed Webb

Supreme court cellphone case puts free speech - not just privacy - at risk | Opinion | ... - 0 views

  • scholars are watching Carpenter’s case closely because it may require the supreme court to address the scope and continuing relevance of the “third-party-records doctrine”, a judicially developed rule that has sometimes been understood to mean that a person surrenders her constitutional privacy interest in information that she turns over to a third party. The government contends that Carpenter lacks a constitutionally protected privacy interest in his location data because his cellphone was continually sharing that data with his cellphone provider.
  • Privacy advocates are rightly alarmed by this argument. Much of the digital technology all of us rely on today requires us to share information passively with third parties. Visiting a website, sending an email, buying a book online – all of these things require sharing sensitive data with internet service providers, merchants, banks and others. If this kind of commonplace and unavoidable information-sharing is sufficient to extinguish constitutional privacy rights, the digital-age fourth amendment will soon be a dead letter.
  • “Awareness that the government may be watching chills associational and expressive freedoms,” Chief Justice John Roberts wrote. Left unchecked, he warned, new forms of surveillance could “alter the relationship between citizen and government in a way that is inimical to democratic society”.
Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED - 0 views

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • The US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and the provision of technology originating in the US to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.