Dystopias / Group items tagged: tools


Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    • Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context of how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    • Data abuse: Data use and surveillance in complex systems are designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    • Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    • Dependence lock-in: Reduction of individuals’ cognitive, social and survival skills. Many see AI as augmenting human capacities, but some predict the opposite: that people's deepening dependence on machine-driven networks will erode their ability to think for themselves, take action independent of automated systems and interact effectively with others.
    • Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of life due to the accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    • Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements, and to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    • Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and the common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    • Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

WIRED - 0 views

  • Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems
  • how to balance privacy and security in a world that is starting to feel like a scene out of Minority Report
  • facial recognition technology often misidentifies black people and women at higher rates than white men
  • "The use of facial recognition in schools creates an unprecedented level of surveillance and scrutiny," says John Cusick, a fellow at the Legal Defense Fund. "It can exacerbate racial disparities in terms of how schools are enforcing disciplinary codes and monitoring their students."
  • The school would ask adults, not kids, to register their faces with the SAFR system. After they registered, they’d be able to enter the school by smiling at a camera at the front gate. (Smiling tells the software that it’s looking at a live person and not, for instance, a photograph). If the system recognizes the person, the gates automatically unlock
  • The software can predict a person's age and gender, enabling schools to turn off access for people below a certain age. But Glaser notes that if other schools want to register students going forward, they can
  • There are no guidelines about how long the facial data gets stored, how it’s used, or whether people need to opt in to be tracked.
  • Schools could, for instance, use facial recognition technology to monitor who's associating with whom and discipline students differently as a result. "It could criminalize friendships," says Cusick of the Legal Defense Fund.
  • SAFR boasts a 99.8 percent overall accuracy rating, based on a test, created by the University of Massachusetts, that vets facial recognition systems. But Glaser says the company hasn’t tested whether the tool is as good at recognizing black and brown faces as it is at recognizing white ones. RealNetworks deliberately opted not to have the software proactively predict ethnicity, the way it predicts age and gender, for fear of it being used for racial profiling. Still, testing the tool's accuracy among different demographics is key. Research has shown that many top facial recognition tools are particularly bad at recognizing black women
  • "It's tempting to say there's a technological solution, that we're going to find the dangerous people, and we're going to stop them," she says. "But I do think a large part of that is grasping at straws."
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture Is Our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right- and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Rescu.me - 0 views

shared by Ed Webb on 03 Nov 10
  • When we think we need tools like this, what kind of world are we living in?
Ed Webb

Surveillant Society - 0 views

  • The assumption that one is not being recorded in any real way, a standard in civilization for more or less all of history, is being overturned.
  • CEOs have become slaves to the PR department in a bizarre inversion of internal corporate checks and balances
  • “Your word against mine” can be a serious and drawn-out dispute, subject to all kinds of subjective judgments, loyalties, rights, and arguments; “Your word against my high-definition video” gives citizens and the vulnerable a bit more leverage.
  • Hoaxes, fakes, set-ups, staged scenarios, creative editing, post-production, photoshopping, and every other tool of the trade, all show something other than the raw, original product. I’m not familiar with forensic digital media evaluation tools in use today, but I get the feeling that if they’re not inadequate now, they will be so in a few years.
  • if you show everything, you’re likely to show something you should have hidden, and if you hide everything, everyone will assume you did so for a reason
  • The issue of ownership is being muddied by the same process that has upended media industries – the transition of recordable data from physical to virtual property, infinitely copyable but still subject to many of the necessities of more traditionally-held items. Who owns what, who is legally bound to act in which way, which licenses supersede others? A team of lawyers and scholars might spend months putting together a cohesive argument for any number of possibilities. What chance does an end user have to figure out whether or not they have the right to print, distribute, delete, and so on?
  • our responsibilities as a society to use these new tools judiciously and responsibly
  • increasingly, the answers to these questions are tending towards the “record first” mentality
  • The logical next step, after assuming one is being recorded at all times when in public (potentially true) is ensuring one is being recorded at all times when in public. Theoretically, you won’t act any differently, since you’re already operating under that assumption.
  • how long before it’s considered negligent to have not recorded an accident or criminal act?
  • You have no privacy in public, haven’t had any for a long time, and what little you have you tend to give away. But the sword is double-edged; shouldn’t we benefit from that as well as suffer? A surveillance society is watched. A surveillant society is watching.
Ed Webb

Can Economists and Humanists Ever Be Friends? | The New Yorker - 0 views

  • There is something thrilling about the intellectual audacity of thinking that you can explain ninety per cent of behavior in a society with one mental tool.
  • education, which they believe is a form of domestication
  • The issue here is one of overreach: taking an argument that has worthwhile applications and extending it further than it usefully goes. Our motives are often not what they seem: true. This explains everything: not true. After all, it’s not as if the idea that we send signals about ourselves were news; you could argue that there is an entire social science, sociology, dedicated to the subject. Classic practitioners of that discipline study the signals we send and show how they are interpreted by those around us, as in Erving Goffman’s “The Presentation of Self in Everyday Life,” or how we construct an entire identity, both internally and externally, from the things we choose to be seen liking—the argument of Pierre Bourdieu’s masterpiece “Distinction.” These are rich and complicated texts, which show how rich and complicated human difference can be. The focus on signalling and unconscious motives in “The Elephant in the Brain,” however, goes the other way: it reduces complex, diverse behavior to simple rules.
  • intellectual overextension is often found in economics, as Gary Saul Morson and Morton Schapiro explain in their wonderful book “Cents and Sensibility: What Economics Can Learn from the Humanities” (Princeton). Morson and Schapiro—one a literary scholar and the other an economist—draw on the distinction between hedgehogs and foxes made by Isaiah Berlin in a famous essay from the nineteen-fifties, invoking an ancient Greek fragment: “The fox knows many things, but the hedgehog one big thing.” Economists tend to be hedgehogs, forever on the search for a single, unifying explanation of complex phenomena. They love to look at a huge, complicated mass of human behavior and reduce it to an equation: the supply-and-demand curves; the Phillips curve, which links unemployment and inflation; or mb=mc, which links a marginal benefit to a marginal cost—meaning that the fourth slice of pizza is worth less to you than the first. These are powerful tools, which can be taken too far. Morson and Schapiro cite the example of Gary Becker, the Nobel laureate in economics in 1992. Becker is a hero to many in the field, but, for all the originality of his thinking, to outsiders he can stand for intellectual overconfidence. He thought that “the economic approach is a comprehensive one that is applicable to all human behavior.” Not some, not most—all
  • Becker analyzed, in his own words, “fertility, education, the uses of time, crime, marriage, social interactions, and other ‘sociological,’ ‘legal,’ and ‘political problems,’ ” before concluding that economics explained everything
  • there is no moral dimension to this economic analysis: utility is a fundamentally amoral concept
  • “A traditional cost-benefit analysis could easily have led to the discontinuation of a project widely viewed as being among the most successful health interventions in African history.”
  • Economics, Morson and Schapiro say, has three systematic biases: it ignores the role of culture, it ignores the fact that “to understand people one must tell stories about them,” and it constantly touches on ethical questions beyond its ken. Culture, stories, and ethics are things that can’t be reduced to equations, and economics accordingly has difficulty with them
  • finance is full of “attribution errors,” in which people view their successes as deserved and their failures as bad luck. Desai notes that in business, law, or pedagogy we can gauge success only after months or years; in finance, you can be graded hour by hour, day by day, and by plainly quantifiable measures. What’s more, he says, “the ‘discipline of the market’ shrouds all of finance in a meritocratic haze.” And so people who succeed in finance “are susceptible to developing massively outsized egos and appetites.”
  • one of the things I liked about economics, finance, and the language of money was their lack of hypocrisy. Modern life is full of cant, of people saying things they don’t quite believe. The money guys, in private, don’t go in for cant. They’re more like Mafia bosses. I have to admit that part of me resonates to that coldness.
  • Another part of me, though, is done with it, with the imperialist ambitions of economics and its tendency to explain away differences, to ignore culture, to exalt reductionism. I want to believe Morson and Schapiro and Desai when they posit that the gap between economics and the humanities can be bridged, but my experience in both writing fiction and studying economics leads me to think that they’re wrong. The hedgehog doesn’t want to learn from the fox. The realist novel is a solemn enemy of equations. The project of reducing behavior to laws and the project of attending to human beings in all their complexity and specifics are diametrically opposed. Perhaps I’m only talking about myself, and this is merely an autobiographical reflection, rather than a general truth, but I think that if I committed any further to economics I would have to give up writing fiction. I told an economist I know about this, and he laughed. He said, “Sounds like you’re maximizing your utility.” 
  • According to Hanson and Simler, these unschooled workers “won’t show up for work reliably on time, or they have problematic superstitions, or they prefer to get job instructions via indirect hints instead of direct orders, or they won’t accept tasks and roles that conflict with their culturally assigned relative status with co-workers, or they won’t accept being told to do tasks differently than they had done them before.”
  • The idea that Maya Angelou’s career amounts to nothing more than a writer shaking her tail feathers to attract the attention of a dominant male is not just misleading; it’s actively embarrassing.
Ed Webb

The Imaginative Reality of Ursula K. Le Guin | VQR Online - 1 views

  • The founders of this anarchist society made up a new language because they realized you couldn’t have a new society and an old language. They based the new language on the old one but changed it enormously. It’s simply an illustration of what Orwell was saying in his great essay about how writing English clearly is a political matter.
    • Ed Webb
      Le Guin, of course, admires "Politics and the English Language." Real-world examples of people changing languages to change society include the invention of modern Turkish and modern Hebrew.
  • There are advantages and disadvantages to living a very long time, as I have. One of the advantages is that you can’t help having a long view. You’ve seen it come and seen it go. Something that’s being announced as the absolute only way to write, you recognize as a fashion, a fad, trendy—the way to write right now if you want to sell right now to a right now editor. But there’s also the long run to consider. Nothing’s deader than last year’s trend. 
  • Obviously, the present tense has certain uses that it’s wonderfully suited for. But recently it has been adopted blindly, as the only way to tell a story—often by young writers who haven’t read very much. Well, it’s a good way to tell some stories, not a good way to tell others. It’s inherently limiting. I call it “flashlight focus.” You see a spot ahead of you and it is dark all around it. That’s great for high suspense, high drama, cut-to-the-chase writing. But if you want to tell a big, long story, like the books of Elena Ferrante, or Jane Smiley’s The Last Hundred Years trilogy, which moves year by year from 1920 to 2020—the present tense would cripple those books. To assume that the present tense is literally “now” and the past tense literally remote in time is extremely naïve. 
  • Henry James did the limited third person really well, showing us the way to do it. He milked that cow successfully. And it’s a great cow, it still gives lots of milk. But if you read only contemporary stuff, always third-person limited, you don’t realize that point of view in a story is very important and can be very movable. It’s here where I suggest that people read books like Woolf’s To the Lighthouse to see what she does by moving from mind to mind. Or Tolstoy’s War and Peace for goodness’ sake. Wow. The way he slides from one point of view to another without you knowing that you’ve changed point of view—he does it so gracefully. You know where you are, whose eyes you are seeing through, but you don’t have the sense of being jerked from place to place. That’s mastery of a craft.
  • Any of us who grew up reading eighteenth- or nineteenth-century fiction are perfectly at home with what is called “omniscience.” I myself call it “authorial” point of view because the term “omniscience,” the idea of an author being omniscient, is so often used in a judgmental way, as if it were a bad thing. But the author, after all, is the author of all these characters, the maker, the inventor of them. In fact all the characters are the author if you come right down to the honest truth of it. So the author has the perfect right to know what they’re thinking. If the author doesn’t tell you what they are thinking … why? This is worth thinking about. Often it’s simply to spin out suspense by not telling you what the author knows. Well, that’s legitimate. This is art. But I’m trying to get people to think about their choices here, because there are so many beautiful choices that are going unused. In a way, first person and limited third are the easiest ones, the least interesting. 
  • to preach that story is conflict, always to ask, “Where’s the conflict in your story?”—this needs some thinking about. If you say that story is about conflict, that plot must be based on conflict, you’re limiting your view of the world severely. And in a sense making a political statement: that life is conflict, so in stories conflict is all that really matters. This is simply untrue. To see life as a battle is a narrow, social-Darwinist view, and a very masculine one. Conflict, of course, is part of life, I’m not saying you should try to keep it out of your stories, just that it’s not their only lifeblood. Stories are about a lot of different things
  • The first decade of her career, beginning in the sixties, included some of her most well-known works of fiction: A Wizard of Earthsea, The Left Hand of Darkness, The Dispossessed, and The Lathe of Heaven. Each of these works imagined not just worlds, but homes, homes that became real for her readers, homes where protagonists were women, people of color, gender fluid, anticapitalist—imaginary homes that did not simply spin out our worst dystopic fears for the future like so many of the apocalyptic novels of today, but also modeled other ways of being, other ways to create home.
  • “Children know perfectly well that unicorns aren’t real,” Le Guin once said. “But they also know that books about unicorns, if they are good books, are true books.”
  • “Fake rules” and “alternative facts” are used in our time not to increase moral understanding and social possibility but to increase power for those who already have it. A war on language has unhinged words from their meaning, language from its capacity as truth-teller. But perhaps, counterintuitively, it is in the realm of the imagination, the fictive, where we can best re-ground ourselves in the real and the true.
  • you can’t find your own voice if you aren’t listening for it. The sound of your writing is an essential part of what it’s doing. Our teaching of writing tends to ignore it, except maybe in poetry. And so we get prose that goes clunk, clunk, clunk. And we don’t know what’s wrong with it
  • You emphasize the importance of understanding grammar and grammar terminology but also the importance of interrogating its rules. You point out that it is a strange phenomenon that grammar is the tool of our trade and yet so many writers steer away from an engagement with it. In my generation and for a while after—I was born in 1929—we were taught grammar right from the start. It was quietly drilled into us. We knew the names of the parts of speech, we had a working acquaintance with how English works, which they don’t get in most schools anymore. There is so much less reading in schools, and very little teaching of grammar. For a writer this is kind of like being thrown into a carpenter’s shop without ever having learned the names of the tools or handled them consciously. What do you do with a Phillips screwdriver? What is a Phillips screwdriver? We’re not equipping people to write; we’re just saying, “You too can write!” or “Anybody can write, just sit down and do it!” But to make anything, you’ve got to have the tools to make it.
  • In your book on writing, Steering the Craft, you say that morality and language are linked, but that morality and correctness are not the same thing. Yet we often confuse them in the realm of grammar. The “grammar bullies”—you read them in places like the New York Times—and they tell you what is correct: You must never use “hopefully.” “Hopefully, we will be going there on Tuesday.” That is incorrect and wrong and you are basically an ignorant pig if you say it. This is judgmentalism. The game that is being played there is a game of social class. It has nothing to do with the morality of writing and speaking and thinking clearly, which Orwell, for instance, talked about so well. It’s just affirming that I am from a higher class than you are. The trouble is that people who aren’t taught grammar very well in school fall for these statements from these pundits, delivered with vast authority from above. I’m fighting that. A very interesting case in point is using “they” as a singular. This offends the grammar bullies endlessly; it is wrong, wrong, wrong! Well, it was right until the eighteenth century, when they invented the rule that “he” includes “she.” It didn’t exist in English before then; Shakespeare used “they” instead of “he or she”—we all do, we always have done, in speaking, in colloquial English. It took the women’s movement to bring it back to English literature. And it is important. Because it’s a crossroads between correctness bullying and the moral use of language. If “he” includes “she” but “she” doesn’t include “he,” a big statement is being made, with huge social and moral implications. But we don’t have to use “he” that way—we’ve got “they.” Why not use it?
Ed Webb

Anti-piracy tool will harvest and market your emotions - Computerworld Blogs - 0 views

  • After being awarded a grant, Aralia Systems teamed up with Machine Vision Lab in what seems like a massive invasion of your privacy beyond "in the name of security." Building on existing cinema anti-piracy technology, these companies plan to add the ability to harvest your emotions. This is the part where it seems that filmgoers should be eligible to charge movie theater owners. At the very least, shouldn't it result in a significantly discounted movie ticket?  Machine Vision Lab's Dr Abdul Farooq told PhysOrg, "We plan to build on the capabilities of current technology used in cinemas to detect criminals making pirate copies of films with video cameras. We want to devise instruments that will be capable of collecting data that can be used by cinemas to monitor audience reactions to films and adverts and also to gather data about attention and audience movement. ... It is envisaged that once the technology has been fine tuned it could be used by market researchers in all kinds of settings, including monitoring reactions to shop window displays."  
  • The 3D camera data will "capture the audience as a whole as a texture."
  • the technology will enable companies to cash in on your emotions and sell that personal information as marketing data
  • "Within the cinema industry this tool will feed powerful marketing data that will inform film directors, cinema advertisers and cinemas with useful data about what audiences enjoy and what adverts capture the most attention. By measuring emotion and movement film companies and cinema advertising agencies can learn so much from their audiences that will help to inform creativity and strategy.” 
  • They plan to fine-tune it to monitor our reactions to window displays and probably anywhere else the data can be used for surveillance and marketing.
  • Muslim women have got the right idea. Soon we'll all be wearing privacy tents.
  • In George Orwell's novel 1984, each home has a mandatory "telescreen," a large flat panel, something like a TV, but with the ability for the authorities to observe viewers in order to ensure they are watching all the required propaganda broadcasts and reacting with appropriate emotions. Problem viewers would be brought to the attention of the Thought Police. The telescreen, of course, could not be turned off. It is reassuring to know that our technology has finally caught up with Oceania's.
Ed Webb

Student protests and the storming of Tory HQ - a story in social media | openDemocracy - 0 views

  •  
    For Dystopia class: note the use of a new storytelling tool, worth considering for your own presentations, perhaps. Also, since several of you have blogged about education, what do you think about this demo?
Ed Webb

Meet Eric Goldstein - CEO of Amplify : LyndiT.com - 0 views

  •  
    Prime mover behind a great social media tool
Ed Webb

Interoperability And Privacy: Squaring The Circle | Techdirt - 0 views

  • if there's one thing we've learned from more than a decade of Facebook scandals, it's that there's little reason to believe that Facebook possesses the requisite will and capabilities. Indeed, it may be that there is no automated system or system of human judgments that could serve as a moderator and arbiter of the daily lives of billions of people. Given Facebook's ambition to put more and more of our daily lives behind its walled garden, it's hard to see why we would ever trust Facebook to be the one to fix all that's wrong with Facebook.
  • Facebook users are eager for alternatives to the service, but are held back by the fact that the people they want to talk with are all locked within the company's walled garden
  • rather than using standards to describe how a good voting machine should work, the industry pushed a standard that described how their existing, flawed machines did work with some small changes in configurations. Had they succeeded, they could have simply slapped a "complies with IEEE standard" label on everything they were already selling and declared themselves to have fixed the problem... without making the serious changes needed to fix their systems, including requiring a voter-verified paper ballot.
  • the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that's not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge
  • Facebook is a notorious opponent of adversarial interoperability. In 2008, Facebook successfully wielded a radical legal theory that allowed it to shut down Power Ventures, a competitor that allowed Facebook's users to use multiple social networks from a single interface. Facebook argued that by allowing users to log in and display Facebook with a different interface, even after receipt of a cease and desist letter telling Power Ventures to stop, the company had broken a Reagan-era anti-hacking law called the Computer Fraud and Abuse Act (CFAA). In other words, upsetting Facebook's investors made their conduct illegal.
  • Today, Facebook is viewed as holding all the cards because it has corralled everyone who might join a new service within its walled garden. But legal reforms to safeguard the right to adversarial interoperability would turn this on its head: Facebook would be the place that had conveniently organized all the people whom you might tempt to leave Facebook, and even supply you with the tools you need to target those people.
  • Such a tool would allow someone to use Facebook while minimizing how they are used by Facebook. For people who want to leave Facebook but whose friends, colleagues or fellow travelers are not ready to join them, a service like this could let Facebook vegans get out of the Facebook pool while still leaving a toe in its waters.
  • In a competitive market (which adversarial interoperability can help to bring into existence), even very large companies can't afford to enrage their customers
  • the audience for a legitimate adversarial interoperability product are the customers of the existing service that it connects to.
  • anyone using a Facebook mobile app might be exposing themselves to incredibly intrusive data-gathering, including some surprisingly creepy and underhanded tactics.
  • If users could use a third-party service to exchange private messages with friends, or to participate in a group they're a member of, they can avoid much (but not all) of this surveillance.
  • Facebook users (and even non-Facebook users) who want more privacy have a variety of options, none of them very good. Users can tweak Facebook's famously hard-to-understand privacy dashboard to lock down their accounts and bet that Facebook will honor their settings (this has not always been a good bet). Everyone can use tracker-blockers, ad-blockers and script-blockers to prevent Facebook from tracking them when they're not on Facebook, by watching how they interact with pages that have Facebook "Like" buttons and other beacons that let Facebook monitor activity elsewhere on the Internet. We're rightfully proud of our own tracker blocker, Privacy Badger, but it doesn't stop Facebook from tracking you if you have a Facebook account and you're using Facebook's service.
  • As Facebook's market power dwindled, so would the pressure that web publishers feel to embed Facebook trackers on their sites, so that non-Facebook users would not be as likely to be tracked as they use the Web.
  • Today, Facebook's scandals do not trigger mass departures from the service, and when users do leave, they tend to end up on Instagram, which is also owned by Facebook.
  • For users who have privacy needs -- and other needs -- beyond those the big platforms are willing to fulfill, it's important that we keep the door open to competitors (for-profit, nonprofit, hobbyist and individuals) who are willing to fill those needs.
  • helping Facebook's own users, or the users of any big service, to configure their experience to make their lives better should be legal and encouraged even (and especially) if it provides a path for users to either diversify their social media experience or move away entirely from the big, concentrated services. Either way, we'd be on our way to a more pluralistic, decentralized, diverse Internet
Ed Webb

Zoom urged by rights groups to rule out 'creepy' AI emotion tech - 0 views

  • Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination
  • "If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices,"
  • The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls
  • "This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,"
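  •  
    The transcript-based "sentiment of meetings" tools mentioned above often reduce to something as crude as lexicon scoring: count "positive" and "negative" words and report a number. A minimal illustrative sketch of that idea — not Zoom's actual method, and the word lists here are invented for the example:

```python
# Illustrative lexicon-based sentiment scoring of a meeting transcript.
# The word lists are invented for this sketch; real products use larger
# lexicons or trained models, but the underlying idea is similar.
POSITIVE = {"great", "agree", "good", "thanks", "excited"}
NEGATIVE = {"problem", "concern", "disagree", "bad", "worried"}

def transcript_sentiment(lines):
    """Return a naive sentiment score in [-1, 1] for transcript lines."""
    pos = neg = 0
    for line in lines:
        for word in line.lower().split():
            word = word.strip(".,!?")
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

demo = ["Thanks everyone, great progress this week.",
        "I have one concern about the timeline."]
print(transcript_sentiment(demo))
```

    Even this toy version shows why rights groups object: the score depends entirely on whose vocabulary and speech patterns the word lists were built around, which is exactly how stereotypes get hardcoded.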
Ed Webb

Project Vigilant and the government/corporate destruction of privacy - Glenn Greenwald ... - 0 views

  • it's the re-packaging and transfer of this data to the U.S. Government -- combined with the ability to link it not only to your online identity (IP address), but also your offline identity (name) -- that has made this industry particularly pernicious.  There are serious obstacles that impede the Government's ability to create these electronic dossiers themselves.  It requires both huge resources and expertise.  Various statutes enacted in the mid-1970s -- such as the Privacy Act of 1974 -- impose transparency requirements and other forms of accountability on programs whereby the Government collects data on citizens.  And the fact that much of the data about you ends up in the hands of private corporations can create further obstacles, because the tools which the Government has to compel private companies to turn over this information is limited (the fact that the FBI is sometimes unable to obtain your "transactional" Internet data without a court order -- i.e., whom you email, who emails you, what Google searches you enter, and what websites you visit --is what has caused the Obama administration to demand that Congress amend the Patriot Act to vest them with the power to obtain all of that with no judicial supervision). But the emergence of a private market that sells this data to the Government (or, in the case of Project Vigilance, is funded in order to hand it over voluntarily) has eliminated those obstacles.
  • a wide array of government agencies have created countless programs to encourage and formally train various private workers (such as cable installers, utilities workers and others who enter people's homes) to act as government informants and report any "suspicious" activity; see one example here.  Meanwhile, TIA has been replicated, and even surpassed, as a result of private industries' willingness to do the snooping work on American citizens which the Government cannot do.
  • this arrangement provides the best of all worlds for the Government and the worst for citizens: The use of private-sector data aggregators allows the government to insulate surveillance and information-handling practices from privacy laws or public scrutiny. That is sometimes an important motivation in outsourced surveillance.  Private companies are free not only from complying with the Privacy Act, but from other checks and balances, such as the Freedom of Information Act.  They are also insulated from oversight by Congress and are not subject to civil-service laws designed to ensure that government policymakers are not influenced by partisan politics. . . .
  • There is a long and unfortunate history of cooperation between government security agencies and powerful corporations to deprive individuals of their privacy and other civil liberties, and any program that institutionalizes close, secretive ties between such organizations raises serious questions about the scope of its activities, now and in the future.
  • Many people are indifferent to the disappearance of privacy -- even with regard to government officials -- because they don't perceive any real value to it.  The ways in which the loss of privacy destroys a society are somewhat abstract and difficult to articulate, though very real.  A society in which people know they are constantly being monitored is one that breeds conformism and submission, and which squashes innovation, deviation, and real dissent. 
  • that's what a Surveillance State does:  it breeds fear of doing anything out of the ordinary by creating a class of meek citizens who know they are being constantly watched.
  • The loss of privacy is entirely one-way.  Government and corporate authorities have destroyed most vestiges of privacy for you, while ensuring that they have more and more for themselves.  The extent to which you're monitored grows in direct proportion to the secrecy with which they operate.  Sir Francis Bacon's now platitudinous observation that "knowledge itself is power" is as true as ever.  That's why this severe and always-growing imbalance is so dangerous, even to those who are otherwise content to have themselves subjected to constant monitoring.
Ed Webb

CCTV vigilantes: Snoopers paid to catch shoplifters from home | Mail Online - 0 views

  • This is the privatisation of the surveillance society – a private company asking private individuals to spy on each other using private cameras connected to the internet.
  • The cameras are already there – we just link to them so people can watch them. It is not entertainment, it’s a tool for crime-fighting
Ed Webb

6 Free Sites for Creating Your Own Comics - 0 views

  •  
    For group projects, multimedia, presentations...
Ed Webb

Obama Administration Seeks Internet Privacy Protections, New Policy Office - WSJ.com - 0 views

  • The central issue in writing federal privacy legislation is whether the Internet industry's efforts to police its own behavior have been effective enough. Proponents of legislation argue the industry is a Wild West where consumer data are gathered and sold without restrictions. Opponents of legislation say the industry is committed to providing tools to give consumers better insight into and control over data about themselves.
Ed Webb

Break the law and your new 'friend' may be the FBI - Yahoo! News - 0 views

  • U.S. law enforcement agents are following the rest of the Internet world into popular social-networking services, going undercover with false online profiles to communicate with suspects and gather private information, according to an internal Justice Department document that offers a tantalizing glimpse of issues related to privacy and crime-fighting.
  • The Electronic Frontier Foundation, a San Francisco-based civil liberties group, obtained the Justice Department document when it sued the agency and five others in federal court. The 33-page document underscores the importance of social networking sites to U.S. authorities. The foundation said it would publish the document on its Web site on Tuesday.
  • mountains of personal data
  • "It doesn't really discuss any mechanisms for accountability or ensuring that government agents use those tools responsibly," said Marcia Hoffman, a senior attorney with the civil liberties foundation.
  • In the face-to-face world, agents can't impersonate a suspect's spouse, child, parent or best friend. But online, behind the guise of a social-networking account, they can.
  • Twitter's lawyers tell prosecutors they need a warrant or subpoena before the company turns over customer information
  • For government attorneys taking cases to trial, social networks are a "valuable source of info on defense witnesses," they said. "Knowledge is power. ... Research all witnesses on social networking sites."
  • "Social networking and the courtroom can be a dangerous combination," the government said.