Dystopias: Group items tagged politics

Ed Webb

The Web Means the End of Forgetting - NYTimes.com - 1 views

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read. (A short illustrative sketch of this key-shattering idea follows these annotations.)
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
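
The key-shattering mechanism described in the Vanish annotation above can be made concrete with a small, hypothetical sketch. This is not the Vanish implementation: in place of the threshold secret sharing and distributed storage the researchers describe, it uses a one-time pad and an all-shares-required XOR split, but it shows why the loss of key fragments makes the underlying data permanently unreadable.

```python
# Toy sketch of the "shattered key" idea behind Vanish (not the real system).
# Assumptions: a one-time pad and an n-of-n XOR key split stand in for the
# threshold secret sharing and distributed storage the researchers describe.
import secrets


def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))


def encrypt(message: bytes) -> tuple[bytes, bytes]:
    """One-time pad: ciphertext = message XOR key, with a random key of equal length."""
    key = secrets.token_bytes(len(message))
    return xor_bytes(message, key), key


def split_key(key: bytes, n: int) -> list[bytes]:
    """Split the key into n shares; every share is needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:
        last = xor_bytes(last, share)
    return shares + [last]


def recombine(shares: list[bytes]) -> bytes:
    combined = shares[0]
    for share in shares[1:]:
        combined = xor_bytes(combined, share)
    return combined


message = b"meet me at noon"
ciphertext, key = encrypt(message)
shares = split_key(key, n=5)

# While all the shares survive, the key can be rebuilt and the data read.
assert xor_bytes(ciphertext, recombine(shares)) == message

# Once any share "erodes" (here we simply drop one), the key cannot be rebuilt
# and the ciphertext can no longer be read -- the document has "vanished".
remaining = shares[1:]
assert xor_bytes(ciphertext, recombine(remaining)) != message
print("data readable only while every key share survives")
```

In the system the article describes, the key fragments sit on many distributed machines and decay with time, so the erosion happens on its own schedule rather than through deliberate deletion.
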
Ed Webb

Why the Islamic State is the minor leagues of terror | Middle East Eye - 0 views

  • "The sole advantage the Islamic State has when it comes to this country is that it turns out to be so easy to spook us."
Ed Webb

Can Economists and Humanists Ever Be Friends? | The New Yorker - 0 views

  • There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.
  • education, which they believe is a form of domestication
  • there is no moral dimension to this economic analysis: utility is a fundamentally amoral concept
  • intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). Morson and Schapiro—one a literary scholar and the other an economist—draw on the distinction between hedgehogs and foxes made by Isaiah Berlin in a famous essay from the nineteen-fifties, invoking an ancient Greek fragment: “The fox knows many things, but the hedgehog one big thing.” Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve, which links unemployment and inflation; or mb=mc, which links a marginal benefit to a marginal cost—meaning that the fourth slice of pizza is worth less to you than the first. These are powerful tools, which can be taken too far. Morson and Schapiro cite the example of Gary Becker, the Nobel laureate in economics in 1992. Becker is a hero to many in the field, but, for all the originality of his thinking, to outsiders he can stand for intellectual overconfidence. He thought that “the economic approach is a comprehensive one that is applicable to all human behavior.” Not some, not most—all
  • Becker analyzed, in his own words, “fertility, education, the uses of time, crime, marriage, social interactions, and other ‘sociological,’ ‘legal,’ and ‘political problems,’ ” before concluding that economics explained everything
  • The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. After all, it’s not as if the idea that we send signals about ourselves were news; you could argue that there is an entire social science, sociology, dedicated to the subject. Classic practitioners of that discipline study the signals we send and show how they are interpreted by those around us, as in Erving Goffman’s “The Presentation of Self in Everyday Life,” or how we construct an entire identity, both internally and externally, from the things we choose to be seen liking—the argument of Pierre Bourdieu’s masterpiece “Distinction.” These are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.
  • “A traditional cost-benefit analysis could easily have led to the discontinuation of a project widely viewed as being among the most successful health interventions in African history.”
  • Another part of me, though, is done with it, with the imperialist ambitions of economics and its tendency to explain away differences, to ignore culture, to exalt reductionism. I want to believe Morson and Schapiro and Desai when they posit that the gap between economics and the humanities can be bridged, but my experience in both writing fiction and studying economics leads me to think that they’re wrong. The hedgehog doesn’t want to learn from the fox. The realist novel is a solemn enemy of equations. The project of reducing behavior to laws and the project of attending to human beings in all their complexity and specifics are diametrically opposed. Perhaps I’m only talking about myself, and this is merely an autobiographical reflection, rather than a general truth, but I think that if I committed any further to economics I would have to give up writing fiction. I told an economist I know about this, and he laughed. He said, “Sounds like you’re maximizing your utility.” 
  • finance is full of “attribution errors,” in which people view their successes as deserved and their failures as bad luck. Desai notes that in business, law, or pedagogy we can gauge success only after months or years; in finance, you can be graded hour by hour, day by day, and by plainly quantifiable measures. What’s more, he says, “the ‘discipline of the market’ shrouds all of finance in a meritocratic haze.” And so people who succeed in finance “are susceptible to developing massively outsized egos and appetites.”
  • one of the things I liked about economics, finance, and the language of money was their lack of hypocrisy. Modern life is full of cant, of people saying things they don’t quite believe. The money guys, in private, don’t go in for cant. They’re more like Mafia bosses. I have to admit that part of me resonates to that coldness.
  • Economics, Morson and Schapiro say, has three systematic biases: it ignores the role of culture, it ignores the fact that “to understand people one must tell stories about them,” and it constantly touches on ethical questions beyond its ken. Culture, stories, and ethics are things that can’t be reduced to equations, and economics accordingly has difficulty with them
  • According to Hanson and Simler, these unschooled workers “won’t show up for work reliably on time, or they have problematic superstitions, or they prefer to get job instructions via indirect hints instead of direct orders, or they won’t accept tasks and roles that conflict with their culturally assigned relative status with co-workers, or they won’t accept being told to do tasks differently than they had done them before.”
  • The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.
Ed Webb

Artificial intelligence, immune to fear or favour, is helping to make China's foreign p... - 0 views

  • Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.
  • China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population. The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risk
  • researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision
  • “Human beings can never get rid of the interference of hormones or glucose.”
  • “It would not even consider the moral factors that conflict with strategic goals,”
  • “The entire strategic game structure will be completely out of balance.”
  • “If one side of the strategic game has artificial intelligence technology, and the other side does not, then this kind of strategic game is almost a one-way, transparent confrontation,” he said. “The actors lacking the assistance of AI will be at an absolute disadvantage in many aspects such as risk judgment, strategy selection, decision making and execution efficiency, and decision-making reliability,” he said.
  • “AI can think many steps ahead of a human. It can think deeply in many possible scenarios and come up with the best strategy,”
  • A US Department of State spokesman said the agency had “many technological tools” to help it make decisions. There was, however, no specific information on AI that could be shared with the public,
  • The system, also known as geopolitical environment simulation and prediction platform, was used to vet “nearly all foreign investment projects” in recent years
  • One challenge to the development of an AI policymaker is data sharing among Chinese government agencies. The foreign ministry, for instance, had been unable to get some data sets it needed because of administrative barriers
  • China is aggressively pushing AI into many sectors. The government is building a nationwide surveillance system capable of identifying any citizen by face within seconds. Research is also under way to introduce AI in nuclear submarines to help commanders make faster, more accurate decisions in battle.
  • “AI can help us get more prepared for unexpected events. It can help find a scientific, rigorous solution within a short time.”
Ed Webb

A woman first wrote the prescient ideas Huxley and Orwell made famous - Quartzy - 1 views

  • In 1919, a British writer named Rose Macaulay published What Not, a novel about a dystopian future—a brave new world if you will—where people are ranked by intelligence, the government mandates mind training for all citizens, and procreation is regulated by the state. You’ve probably never heard of Macaulay or What Not. However, Aldous Huxley, author of the science fiction classic Brave New World, hung out in the same London literary circles as her and his 1932 book contains many concepts that Macaulay first introduced in her work. In 2019, you’ll be able to read Macaulay’s book yourself and compare the texts as the British publisher Handheld Press is planning to re-release the forgotten novel in March. It’s been out of print since the year it was first released.
  • The resurfacing of What Not also makes this a prime time to consider another work that influenced Huxley’s Brave New World, the 1923 novel We by Yevgeny Zamyatin. What Not and We are lost classics about a future that foreshadows our present. Notably, they are also hidden influences on some of the most significant works of 20th century fiction, Brave New World and George Orwell’s 1984.
  • In Macaulay’s book—which is a hoot and well worth reading—a democratically elected British government has been replaced with a “United Council, five minds with but a single thought—if that,” as she put it. Huxley’s Brave New World is run by a similarly small group of elites known as “World Controllers.”
  • citizens of What Not are ranked based on their intelligence from A to C3 and can’t marry or procreate with someone of the same rank to ensure that intelligence is evenly distributed
  • Brave New World is more futuristic and preoccupied with technology than What Not. In Huxley’s world, procreation and education have become completely mechanized and emotions are strictly regulated pharmaceutically. Macaulay’s Britain is just the beginning of this process, and its characters are not yet completely indoctrinated into the new ways of the state—they resist it intellectually and question its endeavors, like the newly-passed Mental Progress Act. She writes: He did not like all this interfering, socialist what-not, which was both upsetting the domestic arrangements of his tenants and trying to put into their heads more learning than was suitable for them to have. For his part he thought every man had a right to be a fool if he chose, yes, and to marry another fool, and to bring up a family of fools too.
  • Where Huxley pairs dumb but pretty and “pneumatic” ladies with intelligent gentlemen, Macaulay’s work is decidedly less sexist.
  • We was published in French, Dutch, and German. An English version was printed and sold only in the US. When Orwell wrote about We in 1946, it was only because he’d managed to borrow a hard-to-find French translation.
  • While Orwell never indicated that he read Macaulay, he shares her subversive and subtle linguistic skills and satirical sense. His protagonist, Winston—like Kitty—works for the government in its Ministry of Truth, or Minitrue in Newspeak, where he rewrites historical records to support whatever Big Brother currently says is good for the regime. Macaulay would no doubt have approved of Orwell’s wit. And his state ministries bear a striking similarity to those she wrote about in What Not.
  • Orwell was familiar with Huxley’s novel and gave it much thought before writing his own blockbuster. Indeed, in 1946, before the release of 1984, he wrote a review of Zamyatin’s We (pdf), comparing the Russian novel with Huxley’s book. Orwell declared Huxley’s text derivative, writing in his review of We in The Tribune: The first thing anyone would notice about We is the fact—never pointed out, I believe—that Aldous Huxley’s Brave New World must be partly derived from it. Both books deal with the rebellion of the primitive human spirit against a rationalised, mechanized, painless world, and both stories are supposed to take place about six hundred years hence. The atmosphere of the two books is similar, and it is roughly speaking the same kind of society that is being described, though Huxley’s book shows less political awareness and is more influenced by recent biological and psychological theories.
  • In We, the story is told by D-503, a male engineer, while in Brave New World we follow Bernard Marx, a protagonist with a proper name. Both characters live in artificial worlds, separated from nature, and they recoil when they first encounter people who exist outside of the state’s constructed and controlled cities.
  • Although We is barely known compared to Orwell and Huxley’s later works, I’d argue that it’s among the best literary science fictions of all time, and it’s highly relevant, as it was when first written. Noam Chomsky calls it “more perceptive” than both 1984 and Brave New World. Zamyatin’s futuristic society was so on point, he was exiled from the Soviet Union because it was such an accurate description of life in a totalitarian regime, though he wrote it before Stalin took power.
  • Macaulay’s work is more subtle and funny than Huxley’s. Despite being a century old, What Not is remarkably relevant and readable, a satire that only highlights how little has changed in the years since its publication and how dangerous and absurd state policies can be. In this sense then, What Not reads more like George Orwell’s 1949 novel 1984 
  • Orwell was critical of Zamyatin’s technique. “[We] has a rather weak and episodic plot which is too complex to summarize,” he wrote. Still, he admired the work as a whole. “[Its] intuitive grasp of the irrational side of totalitarianism—human sacrifice, cruelty as an end in itself, the worship of a Leader who is credited with divine attributes—[…] makes Zamyatin’s book superior to Huxley’s,”
  • Like our own tech magnates and nations, the United State of We is obsessed with going to space.
  • Perhaps in 2019 Macaulay’s What Not, a clever and subversive book, will finally get its overdue recognition.
Ed Webb

Beware thought leaders and the wealthy purveying answers to our social ills - 0 views

  • “Just as the worst slave-owners were those who were kind to their slaves, and so prevented the horror of the system being realized by those who suffered from it, and understood by those who contemplated it,” Wilde wrote, “so, in the present state of things in England, the people who do most harm are the people who try to do most good.”
  • “For when elites assume leadership of social change, they are able to reshape what social change is — above all, to present it as something that should never threaten winners,”
  • to question the system that allows people to make money in predatory ways and compensate for that through philanthropy. “Instead of asking them to make their firms less monopolistic, greedy or harmful to children, it urged them to create side hustles to ‘change the world,’ ”
  • Andrew Carnegie, the famed American industrialist, who advocated that people be as aggressive as possible in their pursuit of wealth and then give it back through private philanthropy
  • “the poor might not need so much help had they been better paid.”
  • “MarketWorld.” In essence, this is the cultlike belief that intractable social problems can be solved in market-friendly ways that result in “win-wins” for everyone involved, and that those who have succeeded under the status quo are also those best equipped to fix the world’s problems.
  • Among the denizens of MarketWorld are so-called “thought leaders,” the speakers who populate the conference circuit, like TED, PopTech and, of course, the Clinton Global Initiative. (When you pause to think about it, “thought leader” is appallingly Orwellian.)
  • Giridharadas argues that the rise of thought leaders, whose views are sanctioned and sanitized by their patrons — the big corporations that support conferences — has come at the expense of public intellectuals, who are willing to voice controversial arguments that shake up the system and don’t have easy solutions. Thought leaders, on the other hand, always offer a small but actionable “tweak,” one that makes conference-goers feel like they’ve learned something but that doesn’t actually threaten anyone.
  • giving MarketWorld what it craved in a thinker: a way of framing a problem that made it about giving bits of power to those who lack it without taking power away from those who hold it
  • In a nod to Wilde, he argues that the person who “seeks to ‘change the world’ by doing what can be done within a bad system, but who is relatively silent about that system” is “putting himself in the difficult moral position of the kindhearted slave master.”
  • He’s come to big conclusions: that MarketWorld, along with its philosophical antecedents, like Carnegie-ism and neoliberalism (which anthropologist David Harvey defines as the idea that “human well being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong property rights, free markets and free trade”), has been an abject failure
  • His key idea is to reinvigorate governments, which he believes could fix the world’s problems if they just had enough power and money. For readers who are cynical about the private sector but also versed enough in history to be cynical about governments, the book would have been more powerful if Giridharadas had stayed within his definition of an old-school public intellectual: someone who is willing to throw bombs at the current state of affairs, but lacks the arrogance and self-righteousness that comes with believing you have the solution
Ed Webb

On coming out as transgender in Donald Trump's America - Vox - 0 views

  • I used to think I wanted to see June and the other women on the show persevere in the face of suffering, because on some level, I believed that to embrace my own womanhood was to embrace suffering. Now I realize that I do want to see Gilead burn. I don’t want suffering anymore. I want catharsis. And not just for me.
  • As soon as I came out, an entire lifetime of unrealistic expectations for women’s beauty came crashing down on my head
  • What I believed for too long, and what you might believe too, is that your body is not a gift but an obligation. That it is not who you are but a series of tasks assigned to you by the accident of your birth. This is not true. The best obligations — the only real obligations — are chosen. Your life is your life. It is worth fighting for.
  • I recognize this scene from a lifetime spent among men who are angry and women who know precisely how to handle that anger. The men get to feel things, sometimes clumsily, sometimes eloquently. But the women are so often defined not by who they are but by what they have been asked to handle
Ed Webb

Could fully automated luxury communism ever work? - 0 views

  • Having achieved a seamless, pervasive commodification of online sociality, Big Tech companies have turned their attention to infrastructure. Attempts by Google, Amazon and Facebook to achieve market leadership, in everything from AI to space exploration, risk a future defined by the battle for corporate monopoly.
  • The technologies are coming. They’re already here in certain instances. It’s the politics that surrounds them. We have alternatives: we can have public ownership of data in the citizen’s interest or it could be used as it is in China where you have a synthesis of corporate and state power
  • the two alternatives that big data allows is an all-consuming surveillance state where you have a deep synthesis of capitalism with authoritarian control, or a reinvigorated welfare state where more and more things are available to everyone for free or very low cost
  • we can’t begin those discussions until we say, as a society, we want to at least try subordinating these potentials to the democratic project, rather than allow capitalism to do what it wants
  • I say in FALC that this isn’t a blueprint for utopia. All I’m saying is that there is a possibility for the end of scarcity, the end of work, a coming together of leisure and labour, physical and mental work. What do we want to do with it? It’s perfectly possible something different could emerge where you have this aggressive form of social value.
  • I think the thing that’s been beaten out of everyone since 2010 is one of the prevailing tenets of neoliberalism: work hard, you can be whatever you want to be, that you’ll get a job, be well paid and enjoy yourself.  In 2010, that disappeared overnight, the rules of the game changed. For the status quo to continue to administer itself,  it had to change common sense. You see this with Jordan Peterson; he’s saying you have to know your place and that’s what will make you happy. To me that’s the only future for conservative thought, how else do you mediate the inequality and unhappiness?
  • I don’t think we can rapidly decarbonise our economies without working people understanding that it’s in their self-interest. A green economy means better quality of life. It means more work. Luxury populism feeds not only into the green transition, but the rollout of Universal Basic Services and even further.
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Border Patrol, Israel's Elbit Put Reservation Under Surveillance - 0 views

  • The vehicle is parked where U.S. Customs and Border Protection will soon construct a 160-foot surveillance tower capable of continuously monitoring every person and vehicle within a radius of up to 7.5 miles. The tower will be outfitted with high-definition cameras with night vision, thermal sensors, and ground-sweeping radar, all of which will feed real-time data to Border Patrol agents at a central operating station in Ajo, Arizona. The system will store an archive with the ability to rewind and track individuals’ movements across time — an ability known as “wide-area persistent surveillance.” CBP plans 10 of these towers across the Tohono O’odham reservation, which spans an area roughly the size of Connecticut. Two will be located near residential areas, including Rivas’s neighborhood, which is home to about 50 people. To build them, CBP has entered a $26 million contract with the U.S. division of Elbit Systems, Israel’s largest military company.
  • U.S. borderlands have become laboratories for new systems of enforcement and control
  • these same systems often end up targeting other marginalized populations as well as political dissidents
  • the spread of persistent surveillance technologies is particularly worrisome because they remove any limit on how much information police can gather on a person’s movements. “The border is the natural place for the government to start using them, since there is much more public support to deploy these sorts of intrusive technologies there,”
  • the company’s ultimate goal is to build a “layer” of electronic surveillance equipment across the entire perimeter of the U.S. “Over time, we’ll expand not only to the northern border, but to the ports and harbors across the country,”
  • In addition to fixed and mobile surveillance towers, other technology that CBP has acquired and deployed includes blimps outfitted with high-powered ground and air radar, sensors buried underground, and facial recognition software at ports of entry. CBP’s drone fleet has been described as the largest of any U.S. agency outside the Department of Defense
  • Nellie Jo David, a Tohono O’odham tribal member who is writing her dissertation on border security issues at the University of Arizona, says many younger people who have been forced by economic circumstances to work in nearby cities are returning home less and less, because they want to avoid the constant surveillance and harassment. “It’s especially taken a toll on our younger generations.”
  • Border militarism has been spreading worldwide owing to neoliberal economic policies, wars, and the onset of the climate crisis, all of which have contributed to the uprooting of increasingly large numbers of people, notes Reece Jones
  • In the U.S., leading companies with border security contracts include long-established contractors such as Lockheed Martin in addition to recent upstarts such as Anduril Industries, founded by tech mogul Palmer Luckey to feed the growing market for artificial intelligence and surveillance sensors — primarily in the borderlands. Elbit Systems has frequently touted a major advantage over these competitors: the fact that its products are “field-proven” on Palestinians
  • Verlon Jose, then-tribal vice chair, said that many nation members calculated that the towers would help dissuade the federal government from building a border wall across their lands. The Tohono O’odham are “only as sovereign as the federal government allows us to be,”
  • Leading Democrats have argued for the development of an ever-more sophisticated border surveillance state as an alternative to Trump’s border wall. “The positive, shall we say, almost technological wall that can be built is what we should be doing,” House Speaker Nancy Pelosi said in January. But for those crossing the border, the development of this surveillance apparatus has already taken a heavy toll. In January, a study published by researchers from the University of Arizona and Earlham College found that border surveillance towers have prompted migrants to cross along more rugged and circuitous pathways, leading to greater numbers of deaths from dehydration, exhaustion, and exposure.
  • “Walls are not only a question of blocking people from moving, but they are also serving as borders or frontiers between where you enter the surveillance state,” she said. “The idea is that at the very moment you step near the border, Elbit will catch you. Something similar happens in Palestine.”
  • CBP is by far the largest law enforcement entity in the U.S., with 61,400 employees and a 2018 budget of $16.3 billion — more than the militaries of Iran, Mexico, Israel, and Pakistan. The Border Patrol has jurisdiction 100 miles inland from U.S. borders, making roughly two-thirds of the U.S. population theoretically subject to its operations, including the entirety of the Tohono O’odham reservation
  • Between 2013 and 2016, for example, roughly 40 percent of Border Patrol seizures at immigration enforcement checkpoints involved 1 ounce or less of marijuana confiscated from U.S. citizens.
  • the agency uses its sprawling surveillance apparatus for purposes other than border enforcement
  • documents obtained via public records requests suggest that CBP drone flights included surveillance of Dakota Access pipeline protests
  • CBP’s repurposing of the surveillance tower and drones to surveil dissidents hints at other possible abuses. “It’s a reminder that technologies that are sold for one purpose, such as protecting the border or stopping terrorists — or whatever the original justification may happen to be — so often get repurposed for other reasons, such as targeting protesters.”
  • The impacts of the U.S. border on Tohono O’odham people date to the mid-19th century. The tribal nation’s traditional land extended 175 miles into Mexico before being severed by the 1853 Gadsden Purchase, a U.S. acquisition of land from the Mexican government. As many as 2,500 of the tribe’s more than 30,000 members still live on the Mexican side. Tohono O’odham people used to travel between the United States and Mexico fairly easily on roads without checkpoints to visit family, perform ceremonies, or obtain health care. But that was before the Border Patrol arrived en masse in the mid-2000s, turning the reservation into something akin to a military occupation zone. Residents say agents have administered beatings, used pepper spray, pulled people out of vehicles, shot two Tohono O’odham men under suspicious circumstances, and entered people’s homes without warrants. “It is apartheid here,” Ofelia Rivas says. “We have to carry our papers everywhere. And everyone here has experienced the Border Patrol’s abuse in some way.”
  • Tohono O’odham people have developed common cause with other communities struggling against colonization and border walls. David is among numerous activists from the U.S. and Mexican borderlands who joined a delegation to the West Bank in 2017, convened by Stop the Wall, to build relationships and learn about the impacts of Elbit’s surveillance systems. “I don’t feel safe with them taking over my community, especially if you look at what’s going on in Palestine — they’re bringing the same thing right over here to this land,” she says. “The U.S. government is going to be able to surveil basically anybody on the nation.”
Ed Webb

The fight against toxic gamer culture has moved to the classroom - The Verge - 0 views

  • If there were any lessons to be learned from Gamergate — from how to recognize bad faith actors or steps on how to protect yourself, to failings in law enforcement or therapy focused on the internet — the education system doesn’t seem to have fully grasped these concepts.
  • It’s a problem that goes beyond just topics specific to the gaming industry, extending to topics like feminism, politics, or philosophy. “Suddenly everyone who watches Jordan Peterson videos thinks they know what postmodernism is,” says Emma Vossen, a postdoctoral fellow with a PhD in gender and games. These problems with students are not about disagreements or debates. It’s not even about kids acting out, but rather harassers in the classroom who have tapped into social media as a powerful weapon. Many educators can’t grasp that, says Vossen. “This is about students who could potentially access this hate movement that’s circling around you and use it against you,” she says. “This is about being afraid to give bad marks to students because they might go to their favorite YouTuber with a little bit of personal information about you that could be used to dox you.” Every word you say can be taken out of context, twisted, and used against you. “Education has no idea how to deal with this problem,” Vossen says. “And I think it’s only going to get worse.”
  • An educator’s job is no longer just about teaching, but helping students unlearn false or even harmful information they’ve picked up from the internet.
  • “If we started teaching students the basics of feminism at a very young age,” Wilcox says, “they would have a far better appreciation for how different perspectives will lead to different outcomes, and how the distribution of power and privilege in society can influence who gets to speak in the first place.”
Ed Webb

China's New "Social Credit Score" Brings Dystopian Science Fiction to Life - 1 views

  • The Chinese government is taking a controversial step in security, with plans to implement a system that gives and collects financial, social, political, and legal credit ratings of citizens into a social credit score
  • Proponents of the idea are already testing various aspects of the system — gathering digital records of citizens, specifically financial behavior. These will then be used to create a social credit score system, which will determine if a citizen can avail themselves of certain services based on his or her social credit rating
  • it’s going to be like an episode from Black Mirror — the social credit score of citizens will be the basis for access to services ranging from travel and education to loans and insurance coverage.
Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E... - 0 views

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses. (A short illustrative sketch of this kind of phrase matching follows these annotations.)
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?”
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on.” “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10 years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
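A minimal sketch of the kind of phrase flagging the monitoring tools above appear to rely on; the phrase list, categories, and matching rule are invented here for illustration and are not taken from any vendor's product:

```python
import re

# Illustrative phrase lexicon -- invented for this sketch, not a vendor's actual list.
FLAGGED_PHRASES = {
    "self_harm": ["kill myself", "want to die", "end my life"],
    "violence": ["shoot up the school", "bring a gun", "kill them all"],
}


def scan_document(text: str) -> list[tuple[str, str]]:
    """Return (category, phrase) pairs for every flagged phrase found in the text.

    Matching is purely lexical: the scanner cannot distinguish a genuine cry for
    help from a curious child testing boundaries, or from song lyrics -- the
    false-positive problem the parents quoted above describe.
    """
    hits = []
    lowered = text.lower()
    for category, phrases in FLAGGED_PHRASES.items():
        for phrase in phrases:
            if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
                hits.append((category, phrase))
    return hits


print(scan_document("I want to kill myself"))  # flagged, regardless of intent
print(scan_document("these lyrics kill them all on this track"))  # also flagged
```

Everything beyond this step, such as weighing context, routing alerts to a counselor rather than to police, or measuring false-positive rates, is policy; that is the part the article notes has never been independently evaluated.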
Ed Webb

At age 13, I joined the alt-right, aided by Reddit and Google - 0 views

  • Now, I’m 16, and I’ve been able to reflect on how I got sucked into that void—and how others do, too. My brief infatuation with the alt-right has helped me understand the ways big tech companies and their algorithms are contributing to the problem of radicalization—and why it’s so important to be skeptical of what you read online.
  • while a quick burst of radiation probably won’t give you cancer, prolonged exposure is far more dangerous. The same is true for the alt-right. I knew that the messages I was seeing were wrong, but the more I saw them, the more curious I became. I was unfamiliar with most of the popular discussion topics on Reddit. And when you want to know more about something, what do you do? You probably don’t think to go to the library and check out a book on that subject, and then fact-check and cross-reference what you find. If you just google what you want to know, you can get the information you want within seconds.
  • I started googling things like “Illegal immigration,” “Sandy Hook actors,” and “Black crime rate.” And I found exactly what I was looking for.
  • The articles and videos I first found all backed up what I was seeing on Reddit—posts that asserted a skewed version of actual reality, using carefully selected, out-of-context, and dubiously sourced statistics that propped up a hateful world view. On top of that, my online results were heavily influenced by something called an algorithm. I understand algorithms to be secretive bits of code that a website like YouTube will use to prioritize content that you are more likely to click on first. Because all of the content I was reading or watching was from far-right sources, all of the links that the algorithms dangled on my screen for me to click were from far-right perspectives. (A toy version of this kind of engagement-driven ranking is sketched after this list.)
  • I spent months isolated in my room, hunched over my computer, removing and approving memes on Reddit and watching conservative “comedians” that YouTube served up to me.
  • The inflammatory language and radical viewpoints used by the alt-right worked to YouTube and Google’s favor—the more videos and links I clicked on, the more ads I saw, and in turn, the more ad revenue they generated.
  • the biggest step in my recovery came when I attended a pro-Trump rally in Washington, D.C., in September 2017, about a month after the “Unite the Right” rally in Charlottesville, Virginia
  • The difference between the online persona of someone who identifies as alt-right and the real thing is so extreme that you would think they are different people. Online, they have the power of fake and biased news to form their arguments. They sound confident and usually deliver their standard messages strongly. When I met them in person at the rally, they were awkward and struggled to back up their statements. They tripped over their own words, and when they were called out by any counter protestors in the crowd, they would immediately use a stock response such as “You’re just triggered.”
  • Seeing for myself that the people I was talking to online were weak, confused, and backwards was the turning point for me.
  • we’re too far gone to reverse the damage that the alt-right has done to the internet and to naive adolescents who don’t know any better—children like the 13-year-old boy I was. It’s convenient for a massive internet company like Google to deliberately ignore why people like me get misinformed in the first place, as their profit-oriented algorithms continue to steer ignorant, malleable people into the jaws of the far-right
  • Dylann Roof, the white supremacist who murdered nine people in a Charleston, South Carolina, church in 2015, was radicalized by far-right groups that spread misinformation with the aid of Google’s algorithms.
  • Over the past couple months, I’ve been getting anti-immigration YouTube ads that feature an incident presented as a “news” story, about two immigrants who raped an American girl. The ad offers no context or sources, and uses heated language to denounce immigration and call for our county to allow ICE to seek out illegal immigrants within our area. I wasn’t watching a video about immigration or even politics when those ads came on; I was watching the old Monty Python “Cheese Shop” sketch. How does British satire, circa 1972, relate to America’s current immigration debate? It doesn’t.
  • tech companies need to be held accountable for the radicalization that results from their systems and standards.
  • anyone can be manipulated like I was. It’s so easy to find information online that we collectively forget that so much of the content the internet offers us is biased
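A toy version of the engagement-driven ranking the author describes, assuming a made-up click-probability estimate based only on how often a topic appears in the viewer's watch history; real recommenders use learned models over thousands of signals, so this sketch only illustrates the feedback loop, not any actual YouTube or Google system:

```python
from dataclasses import dataclass


@dataclass
class Video:
    title: str
    topic: str


def predicted_click_probability(video: Video, watch_history: list[str]) -> float:
    """Toy estimate: the more often a topic appears in the watch history,
    the more likely the next click on it is assumed to be."""
    if not watch_history:
        return 0.1  # arbitrary prior for a cold start
    return watch_history.count(video.topic) / len(watch_history)


def rank_feed(candidates: list[Video], watch_history: list[str]) -> list[Video]:
    """Order candidate videos by predicted click probability, highest first."""
    return sorted(
        candidates,
        key=lambda v: predicted_click_probability(v, watch_history),
        reverse=True,
    )


# Eight of the last ten views were far-right commentary, so the feed keeps
# surfacing more of it -- the self-reinforcing loop described above.
history = ["far_right_commentary"] * 8 + ["comedy"] * 2
feed = rank_feed(
    [Video("More commentary", "far_right_commentary"), Video("Monty Python sketch", "comedy")],
    history,
)
print([video.title for video in feed])
```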
Ed Webb

Elise Armani with Piotr Szyhalski - The Brooklyn Rail - 0 views

  • During the entire history of America, the US has not been at war for only 17 years in total. That's incredible, mainly because if you talk to people who maybe aren't that much interested in history, they would say, “That’s crazy. What are you talking about? There’s no war.”
  • Our relationship with war and how our country functions in the world is so warped and twisted. Every time the word “war” is introduced into the cultural discourse, you know that it is already corrupted. That's why it’s paired here with “back to normal,” because it's another combination of phrases that stood out…Everybody keeps talking about things getting back to normal. Then the pronouncements that this is a war and we’re fighting an invisible enemy. It just seems so disturbing really because what that means is that we're about to start doing things that are ethically questionable. To me, what was happening is that the pronouncement was made so that anything goes, and there's no culpability, nobody will be held responsible for making any decisions whatsoever because it was war and things had to be done.
  • if I think about how “war” has been used strategically in this context of COVID-19, it doesn’t feel like rhetoric that was raised to be alarmist, but almost to be comforting. That this is a familiar experience. We have a handle on it. We are attacking it like a war. War is our normal.
  • The funny thing about history is that you always look back from the luxury of time and you can see these massive events taking place, you can understand the dynamics. We don't have that perspective when we are in it, so my idea of studying history was to remap the past onto the present, so that we might gain insight into what’s happening now.
  • This weird concept that we have developed, essential versus non-essential work, this arbitrary division of what will matter and what will not matter. It wasn't until I was swept up in the uprising and really asking myself what's happening—there was this amazing video of this silent moment of people with their fists up, thousands of people on their knees, and it was just incredibly moving—I had this realization that this was the essential work, the work that we need to be doing.
  • I think we all lack a historical distance right now to make sense of this moment. But art can provide us with an abstracted or a historical lens, that gives us distance or a sense of a time bigger than the moment we're in.
  • When you were describing this 1990s utopian idea of the internet as public space, I was struck by the contrast with how things turned out. In both our physical and digital reality, we have lost the commons. Is Instagram a public space?
  • there's something virus-like about social media anyway, in the way that it operates, in the way that it taps into our physiology on a chemical level in our brain. It’s designed to function that way. And combining that kind of functionality with our addiction to or the dominance of visual culture, it's just sort of like a deadly combination. We get addicted and we just consume incredible amounts of visual information every day.
  • It really is true that very rarely you will see a photograph of a dead body in the printed newspaper. It reminds me, for example, of the absence of the flag-draped coffins that come when soldiers return from war. Because we're taught to think about COVID-19 as a kind of war, maybe it makes sense to really think about that.
  • If you don't see the picture of the dead body, there's no dead body. It's the reason why, if I teach a foundation class, I always show the students this famous Stan Brakhage film, The Act of Seeing with One's Own Eyes (1971). It’s a half-hour silent film of multiple autopsies. We have this extended conversation about just how completely absent images of our bodies like that are from our cultural experience.
  • those are the images that people wanted to see. They were widely distributed; you could buy postcards of lynched men and women because people wanted to celebrate that. There is a completely different attitude at work. One could say they're doing the work of the same ideology, but in opposite directions. It would be hard to talk about a history of photography in this country and not talk about lynching photographs.
Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED - 0 views

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities. (A schematic sketch of matching a camera image against such a database follows this list.)
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and the provision of technology originating in the US to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.
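A schematic sketch of how a face captured by a camera could be matched against a national identity database of face embeddings to trigger a mailed citation. The embeddings, ID numbers, threshold, and cosine-similarity rule here are invented for illustration; this is an assumption-laden toy, not a description of Iran's actual system:

```python
from typing import Optional

import numpy as np

# Hypothetical store of national IDs and face embeddings (in the scenario the
# article describes, these would come from ID-card face scans).
ID_DATABASE = {
    "0012345678": np.array([0.11, 0.52, 0.33, 0.87]),
    "0098765432": np.array([0.71, 0.15, 0.64, 0.22]),
}

MATCH_THRESHOLD = 0.95  # illustrative cosine-similarity cutoff


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(probe_embedding: np.ndarray) -> Optional[str]:
    """Return the national ID of the closest stored face if it clears the threshold."""
    best_id, best_score = None, 0.0
    for national_id, stored in ID_DATABASE.items():
        score = cosine_similarity(probe_embedding, stored)
        if score > best_score:
            best_id, best_score = national_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None


# An embedding computed from one camera frame; a hit above the threshold is all
# it would take to mail a citation without any street-level interaction.
match = identify(np.array([0.10, 0.50, 0.35, 0.85]))
print(match or "no match above threshold")
```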