
Home/ Dystopias/ Group items tagged intelligence


Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context of how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities, but some predict the opposite: that people's deepening dependence on machine-driven networks will erode their ability to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of life due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements, joining forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at 'humanness' and the common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans 'race with the robots'. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities, in order to heighten human/AI collaboration and stanch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

Cambridge University to open 'Terminator centre' to study threat to humans from artific... - 0 views

  • the four greatest threats to the human species - artificial intelligence, climate change, nuclear war and rogue biotechnology.
  • Huw Price, Bertrand Russell Professor of Philosophy and another of the centre's three founders, said such an 'ultra-intelligent machine, or artificial general intelligence (AGI)' could have very serious consequences. He said: 'Nature didn't anticipate us, and we in our turn shouldn't take AGI for granted. We need to take seriously the possibility that there might be a "Pandora's box" moment with AGI that, if missed, could be disastrous.'
Ed Webb

DK Matai: The Rise of The Bio-Info-Nano Singularity - 0 views

  • The human capability for information processing is limited, yet there is an accelerating change in the development and deployment of new technology. This relentless wave upon wave of new information and technology causes an overload on the human mind by eventually flooding it. The resulting acopia -- inability to cope -- has to be solved by the use of ever more sophisticated information intelligence. Extrapolating these capabilities suggests the near-term emergence and visibility of self-improving neural networks, "artificial" intelligence, quantum algorithms, quantum computing and super-intelligence. This metamorphosis is so much beyond present human capabilities that it becomes impossible to understand it with the pre-conceptions and conditioning of the present mindset, societal make-up and existing technology
  • The Bio-Info-Nano Singularity is a transcendence to a wholly new regime of mind, society and technology, in which we have to learn to think in a new way in order to survive as a species.
  • What is globalized human society going to do with the mass of unemployed human beings that are rendered obsolete by the approaching super-intelligence of the Bio-Info-Nano Singularity?
  • Nothing futurists predict ever comes true, but, by the time the time comes, everybody has forgotten they said it--and then they are free to say something else that never will come true but that everybody will have forgotten they said by the time the time comes.
  • Most of us will become poisoned troglodytes in a techno dystopia
  • Any engineer can make 'stuff' go faster, kill deader, sort quicker, fly higher, record sharper, destroy more completely, etc. We have a surfeit of that kind of creativity. What we need is some kind of genius to create a society that treats each other with equality, justice, caring and cooperativeness. The concept of 'singularity' doesn't excite me nearly as much as the idea that sometime we might be able to move beyond the civilization level of a troop of chimpanzees. I'm hoping that genius comes before we manage to destroy what little civilization we have with all our neat "stuff"
  • There's a lot of abstraction in this article, which is a trend of what I have read of a number of various movements taking up the Singularity cause. This nebulous but optimistic prediction of an incomprehensibly advanced future, wherein through technology and science we achieve quasi-immortality, or absolute control of thought, omniscience, or transcendence from the human entirely
  • Welcome to the Frankenstein plot. This is a very common Hollywood plot, the idea of a manmade creation running amok. The trend the author describes can also be pictured as an asymptotic curve on a graph, where scientific achievement parallels time at first and then gradually goes vertical, until infinite scientific knowledge and invention occur in an incredibly short time.
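The "goes vertical" curve in the comment above corresponds to hyperbolic, not exponential, growth. A sketch of the distinction, using generic symbols (K for accumulated knowledge, T for the blow-up time; both are illustrative, not from the article):

```latex
% Exponential growth accelerates but stays finite for every t:
%   K(t) = K_0 e^{rt}
% Hyperbolic growth, the usual formalization in singularity arguments,
% has a vertical asymptote: K(t) -> infinity as t -> T from below.
\[
  \frac{dK}{dt} = c\,K^{2}
  \quad\Longrightarrow\quad
  K(t) = \frac{1}{c\,(T - t)}, \qquad t < T,
\]
% so the curve "parallels time at first" (while T - t is large) and
% "goes vertical" near t = T, diverging in finite time.
```

The key difference the comment gestures at: an exponential never reaches infinity at any finite date, whereas the hyperbolic solution does, which is what makes "singularity" language literal rather than metaphorical.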
Ed Webb

Could self-aware cities be the first forms of artificial intelligence? - 1 views

  • People have speculated before about the idea that the Internet might become self-aware and turn into the first "real" A.I., but could it be more likely to happen to cities, in which humans actually live and work and navigate, generating an even more chaotic system?
  • "By connecting and providing visibility into disparate systems, cities and buildings can operate like living organisms, sensing and responding quickly to potential problems before they occur to protect citizens, save resources and reduce energy consumption and carbon emissions," reads the invitation to IBM's PULSE 2010 event.
  • And Cisco is already building the first of these smart cities: Songdo, a Korean "instant city," which will be completely controlled by computer networks — including ubiquitous Telepresence applications, video screens which could be used for surveillance. Cisco's chief globalization officer, Wim Elfrink, told the San Jose Mercury News: Everything will be connected - buildings, cars, energy - everything. This is the tipping point. When we start building cities with technology in the infrastructure, it's beyond my imagination what that will enable.
  • Urbanscale founder Adam Greenfield has written a lot about ubiquitous computing in urban environments, most notably in 2006's Everyware, which posits that computers will "effectively disappear" as objects around us become "smart" in ways that are nearly invisible to lay-people.
  • tailored advertising just about anywhere
  • Some futurists are still predicting that cities will become closer to arcologies — huge slabs of integrated urban life, like a whole city in a single block — as they grapple with the need to house so many people in an efficient fashion. The implications for heating and cooling an arcology, let alone dealing with waste disposal, are mind-boggling. Could a future arcology become our first machine mind?
  • Science fiction gives us the occasional virtual world that looks rural — like Doctor Who's visions of life inside the Matrix, which mostly looks (not surprisingly) like a gravel quarry — but for the most part, virtual worlds are urban
  • So here's why cities might have an edge over, say, the Internet as a whole, when it comes to developing self awareness. Because every city is different, and every city has its own identity and sense of self — and this informs everything from urban planning to the ways in which parking and electricity use are mapped out. The more sophisticated the integrated systems associated with a city become, the more they'll reflect the city's unique personality, and the more programmers will try to imbue their computers with a sense of this unique urban identity. And a sense of the city's history, and the ways in which the city has evolved and grown, will be important for a more sophisticated urban planning system to grasp the future — so it's very possible to imagine this leading to a sense of personal history, on the part of a computer that identifies with the city it helps to manage.
  • next time you're wandering around your city, looking up at the outcroppings of huge buildings, the wild tides of traffic and the frenzy of construction and demolition, don't just think of it as a place haunted by history. Try, instead, to imagine it coming to life in a new way, opening its millions of electronic eyes, and greeting you with the first gleaming of independent thought
  • I can't wait for the day when city AIs decide to go to war with other city AIs over allocation of federal funds.
  • John Shirley has San Francisco as a sentient being in City Come A-Walkin'
  • I doubt cities will ever be networked so smoothly... they are all about factions, sections, niches, subcultures, ethnicities, neighborhoods, markets, underground markets. It's literally like herding cats... I don't see it as feasible. It would be a schizophrenic intelligence at best. Which, Wintermute was, I suppose...
  •  
    This is beginning to sound just like the cities we have read about. To me it is reminiscent of the Burning Chrome stories, as an element in all those stories was machines and technology at every turn. With the recent advances in technology it is alarming to see that an element in many science fiction tales is finally coming true: a city that acts as a machine in itself. Who is to say that this city won't become a city with a highly active hacker underbelly?
Ed Webb

Artificial intelligence, immune to fear or favour, is helping to make China's foreign p... - 0 views

  • Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.
  • China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population. The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risk
  • researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision
  • “Human beings can never get rid of the interference of hormones or glucose.”
  • “It would not even consider the moral factors that conflict with strategic goals,”
  • “If one side of the strategic game has artificial intelligence technology, and the other side does not, then this kind of strategic game is almost a one-way, transparent confrontation,” he said. “The actors lacking the assistance of AI will be at an absolute disadvantage in many aspects such as risk judgment, strategy selection, decision making and execution efficiency, and decision-making reliability,” he said.
  • “The entire strategic game structure will be completely out of balance.”
  • “AI can think many steps ahead of a human. It can think deeply in many possible scenarios and come up with the best strategy,”
  • A US Department of State spokesman said the agency had "many technological tools" to help it make decisions. There was, however, no specific information on AI that could be shared with the public.
  • The system, also known as geopolitical environment simulation and prediction platform, was used to vet “nearly all foreign investment projects” in recent years
  • One challenge to the development of an AI policymaker is data sharing among Chinese government agencies. The foreign ministry, for instance, had been unable to get some data sets it needed because of administrative barriers
  • China is aggressively pushing AI into many sectors. The government is building a nationwide surveillance system capable of identifying any citizen by face within seconds. Research is also under way to introduce AI in nuclear submarines to help commanders make faster, more accurate decisions in battle.
  • “AI can help us get more prepared for unexpected events. It can help find a scientific, rigorous solution within a short time.
Ed Webb

A woman first wrote the prescient ideas Huxley and Orwell made famous - Quartzy - 1 views

  • In 1919, a British writer named Rose Macaulay published What Not, a novel about a dystopian future—a brave new world if you will—where people are ranked by intelligence, the government mandates mind training for all citizens, and procreation is regulated by the state. You've probably never heard of Macaulay or What Not. However, Aldous Huxley, author of the science fiction classic Brave New World, moved in the same London literary circles as she did, and his 1932 book contains many concepts that Macaulay first introduced in her work. In 2019, you'll be able to read Macaulay's book yourself and compare the texts, as the British publisher Handheld Press is planning to re-release the forgotten novel in March. It's been out of print since the year it was first released.
  • The resurfacing of What Not also makes this a prime time to consider another work that influenced Huxley's Brave New World: the 1923 novel We by Yevgeny Zamyatin. What Not and We are lost classics about a future that foreshadows our present. Notably, they are also hidden influences on some of the most significant works of 20th-century fiction, Brave New World and George Orwell's 1984.
  • In Macaulay’s book—which is a hoot and well worth reading—a democratically elected British government has been replaced with a “United Council, five minds with but a single thought—if that,” as she put it. Huxley’s Brave New World is run by a similarly small group of elites known as “World Controllers.”
  • citizens of What Not are ranked based on their intelligence from A to C3 and can’t marry or procreate with someone of the same rank to ensure that intelligence is evenly distributed
  • Brave New World is more futuristic and preoccupied with technology than What Not. In Huxley's world, procreation and education have become completely mechanized and emotions are strictly regulated pharmaceutically. Macaulay's Britain is just the beginning of this process, and its characters are not yet completely indoctrinated into the new ways of the state—they resist it intellectually and question its endeavors, like the newly-passed Mental Progress Act. She writes: He did not like all this interfering, socialist what-not, which was both upsetting the domestic arrangements of his tenants and trying to put into their heads more learning than was suitable for them to have. For his part he thought every man had a right to be a fool if he chose, yes, and to marry another fool, and to bring up a family of fools too.
  • Where Huxley pairs dumb but pretty and “pneumatic” ladies with intelligent gentlemen, Macaulay’s work is decidedly less sexist.
  • We was published in French, Dutch, and German. An English version was printed and sold only in the US. When Orwell wrote about We in 1946, it was only because he’d managed to borrow a hard-to-find French translation.
  • While Orwell never indicated that he read Macaulay, he shares her subversive and subtle linguistic skills and satirical sense. His protagonist, Winston—like Kitty—works for the government in its Ministry of Truth, or Minitrue in Newspeak, where he rewrites historical records to support whatever Big Brother currently says is good for the regime. Macaulay would no doubt have approved of Orwell’s wit. And his state ministries bear a striking similarity to those she wrote about in What Not.
  • Orwell was familiar with Huxley's novel and gave it much thought before writing his own blockbuster. Indeed, in 1946, before the release of 1984, he wrote a review of Zamyatin's We (pdf), comparing the Russian novel with Huxley's book. Orwell declared Huxley's text derivative, writing in his review of We in The Tribune: The first thing anyone would notice about We is the fact—never pointed out, I believe—that Aldous Huxley's Brave New World must be partly derived from it. Both books deal with the rebellion of the primitive human spirit against a rationalised, mechanized, painless world, and both stories are supposed to take place about six hundred years hence. The atmosphere of the two books is similar, and it is roughly speaking the same kind of society that is being described, though Huxley's book shows less political awareness and is more influenced by recent biological and psychological theories.
  • In We, the story is told by D-503, a male engineer, while in Brave New World we follow Bernard Marx, a protagonist with a proper name. Both characters live in artificial worlds, separated from nature, and they recoil when they first encounter people who exist outside of the state’s constructed and controlled cities.
  • Although We is barely known compared to Orwell's and Huxley's later works, I'd argue that it's among the best literary science fiction of all time, and it remains as relevant as when it was first written. Noam Chomsky calls it "more perceptive" than both 1984 and Brave New World. Zamyatin's futuristic society was so on point that he was exiled from the Soviet Union because the book was such an accurate description of life in a totalitarian regime, even though he wrote it before Stalin took power.
  • Macaulay’s work is more subtle and funny than Huxley’s. Despite being a century old, What Not is remarkably relevant and readable, a satire that only highlights how little has changed in the years since its publication and how dangerous and absurd state policies can be. In this sense then, What Not reads more like George Orwell’s 1949 novel 1984 
  • Orwell was critical of Zamyatin’s technique. “[We] has a rather weak and episodic plot which is too complex to summarize,” he wrote. Still, he admired the work as a whole. “[Its] intuitive grasp of the irrational side of totalitarianism—human sacrifice, cruelty as an end in itself, the worship of a Leader who is credited with divine attributes—[…] makes Zamyatin’s book superior to Huxley’s,”
  • Like our own tech magnates and nations, the United State of We is obsessed with going to space.
  • Perhaps in 2019 Macaulay’s What Not, a clever and subversive book, will finally get its overdue recognition.
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

An Algorithm Summarizes Lengthy Text Surprisingly Well - MIT Technology Review - 0 views

  • As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents. And it may become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you.
  • Parsing language remains one of the grand challenges of artificial intelligence (see “AI’s Language Problem”). But it’s a challenge with enormous commercial potential. Even limited linguistic intelligence—the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways—could transform personal computing. In many specialist fields—like medicine, scientific research, and law—condensing information and extracting insights could have huge commercial benefits.
  • The system experiments in order to generate summaries of its own using a process called reinforcement learning. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). Those working on conversational interfaces are increasingly now looking at reinforcement learning as a way to improve their systems.
  • “At some point, we have to admit that we need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,”
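The reinforcement-learning setup described in this entry can be illustrated with a toy sketch. This is not the article's model: it is a minimal REINFORCE-style loop for extractive summarization, where the policy keeps one inclusion logit per sentence and the reward is unigram recall against a reference summary minus a brevity penalty. The corpus, reward shape, and hyperparameters are all invented for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reward(picks, words, ref_words):
    """Unigram recall of the reference, with a penalty per picked sentence."""
    recall = len(set(words) & ref_words) / len(ref_words)
    return recall - 0.2 * sum(picks)  # brevity penalty keeps summaries short

def train(sentences, reference, steps=2000, lr=0.5, seed=0):
    rng = random.Random(seed)
    logits = [0.0] * len(sentences)  # one inclusion logit per sentence
    ref_words = set(reference.split())
    for _ in range(steps):
        # Sample an extractive summary from the current policy.
        probs = [sigmoid(l) for l in logits]
        picks = [rng.random() < p for p in probs]
        words = [w for s, keep in zip(sentences, picks) if keep
                 for w in s.split()]
        r = reward(picks, words, ref_words)
        # REINFORCE update: d log pi / d logit is (1 - p) for a picked
        # sentence and -p for a skipped one; scale by the reward.
        for i, (p, keep) in enumerate(zip(probs, picks)):
            logits[i] += lr * r * ((1.0 - p) if keep else -p)
    return [sigmoid(l) for l in logits]

sentences = [
    "the cat sat on the mat",           # overlaps the reference summary
    "stock prices fell sharply today",  # irrelevant filler
]
probs = train(sentences, reference="the cat sat")
print(probs)
```

After training, the inclusion probability of the overlapping sentence is driven toward 1 and the filler toward 0. Real systems replace the per-sentence logits with a sequence model and the recall proxy with a metric like ROUGE, but the feedback loop — sample, score, reinforce — is the same idea the excerpt describes.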
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • ...17 more annotations...
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively little cost, in terms of dollars as well as consequences, to attacking a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure.
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

GCHQ revelations: mastery of the internet will mean mastery of everyone | Henry Porter ... - 0 views

  • We are fond of saying that the younger generation doesn't know the meaning of the word privacy, but what you give away voluntarily and what the state takes are as different as charity and tax. Privacy is the defining quality of a free people. Snowden's compelling leaks show us that mastery of the internet will ineluctably mean mastery over the individual.
Ed Webb

Glenn Greenwald: How America's Surveillance State Breeds Conformity and Fear | Civil Li... - 0 views

  • The Surveillance State hovers over any attacks that meaningfully challenge state-appropriated power. It doesn’t just hover over it. It impedes it, it deters it and kills it.  That’s its intent. It does that by design.
  • the realization quickly emerged that allowing government officials to eavesdrop on other people, on citizens, without constraints or oversight, to do so in the dark, is a power that gives so much authority and leverage to those in power that it is virtually impossible for human beings to resist abusing that power.  That’s how potent of a power it is.
  • If a dictator takes over the United States, the NSA could enable it to impose total tyranny, and there would be no way to fight back.
  • ...23 more annotations...
  • Now it’s virtually a religious obligation to talk about the National Security State and its close cousin, the Surveillance State, with nothing short of veneration.
  • The NSA, beginning 2001, was secretly ordered to spy domestically on the communications of American citizens. It has escalated in all sorts of lawless, and now lawful ways, such that it is now an enormous part of what that agency does. Even more significantly, the technology that it has developed is now shared by a whole variety of agencies, including the FBI
  • Now, the Patriot Act is completely uncontroversial. It gets renewed without any notice every three years with zero reforms, no matter which party is in control.
  • They are two, as I said, established Democrats warning that the Democratic control of the Executive branch is massively abusing this already incredibly broad Patriot Act. And one of the things they are trying to do is extract some basic information from the NSA about what it is they’re doing in terms of the surveillance on the American people.  Because even though they are on the Intelligence Committee, they say they don’t even know the most basic information about what the NSA does including even how many Americans have had their e-mails read or had their telephone calls intercepted by the NSA.
  • "We can’t tell you how many millions of Americans are having their e-mails read by us, and their telephone calls listened in by us, because for us to tell you that would violate the privacy of American citizens."
  • An article in Popular Mechanics in 2004 reported on a study of American surveillance and this is what it said: “There are an estimated 30 million surveillance cameras now deployed in the United States shooting 4 billion hours of footage a week. Americans are being watched. All of us, almost everywhere.” There is a study in 2006 that estimated that that number would quadruple to 100 million cameras -- surveillance cameras -- in the United States within five years largely because of the bonanza of post-9/11 surveilling. 
  • it’s not just the government that is engaged in surveillance, but just as menacingly, corporations, private corporations, engage in huge amounts of surveillance on us. They give us cell phones that track every moment of where we are physically, and then provide that to law enforcement agencies without so much as a search warrant.  Obviously, credit card and banking transactions are reported, and tell anyone who wants to know everything we do. We talk about the scandal of the Bush eavesdropping program that was not really a government eavesdropping program, so much as it was a private industry eavesdropping program. It was done with the direct and full cooperation of AT&T, Sprint, Verizon and the other telecom giants. In fact, when you talk about the American Surveillance State, what you’re really talking about is no longer public government agencies. What you’re talking about is a full-scale merger between the federal government and industry. That is what the Surveillance State is
  • The principle being that there can be no human interaction, especially no human communication — not just internationally between foreign nations but among American citizens on American soil — that is beyond the reach of the U.S. government.
  • at exactly the same time that the government has been massively expanding its ability to know everything that we’re doing it has simultaneously erected a wall of secrecy around it that prevents us from knowing anything that they’re doing
  • government now operates with complete secrecy, and we have none
  • it makes people believe that they’re free even though they’ve been subtly convinced that there are things that they shouldn’t do that they might want to do
  • what has happened in the last three to four years is a radical change in the war on terror. The war on terror has now been imported into United States policy. It is now directed at American citizens on American soil. So rather than simply sending drones to foreign soil to assassinate foreign nationals, we are now sending drones to target and kill American citizens without charges or trial. Rather than indefinitely detaining foreign nationals like Guantanamo, Congress last year enacted, and President Obama signed, the National Defense Authorization Act that permits the detention -- without trial, indefinitely -- of American citizens on U.S. soil.
  • this is what the Surveillance State is designed to do.  It’s justified in the name of terrorism, of course; that’s the packaging in which it’s wrapped, and that’s been used extensively, in all sorts of ways, since 9/11 for domestic application. And that’s happening even more. It’s happening in terms of the Occupy movement and the infiltration that federal officials were able to accomplish using Patriot Act authorities. It’s happened with pro-Palestinian activists in the United States and all other dissident groups that have themselves dealt with surveillance and law enforcement under what were originally war-on-terror powers.
  • if the government is able to know what we speak about and know who we’re talking to, know what it is that we’re planning, it makes any kind of activism extremely difficult. Because secrecy and privacy are prerequisites to effective actions.
  • we are training our young citizens to live in a culture where they expect they are always being watched. And we want them to be chilled, we want them to be deterred, we want them not to ever challenge orthodoxy or to explore limits by engaging in creativity of any kind. This type of surveillance, by design, breeds conformism.  That’s its purpose. That’s what makes surveillance so pernicious.
  • If you go and speak to communities of American Muslims, you find an incredibly pervasive climate of fear.
  • This climate of fear creates limits around the behavior in which they’re willing to engage in very damaging ways
  • governments, when they want to give themselves abusive and radical powers, typically first target people who they think their citizens won’t care very much about, because they’ll think they’re not affected by it
  • the psychological effects on East German people endure until today. The way in which they have been trained for decades to understand that there are limits to their life, even once you remove the limits, they’ve been trained that those are not things they need to transgress.
  • Rosa Luxemburg said, “He who does not move does not notice his chains.”
  • You can acculturate people to believing that tyranny is freedom, that their limits are actually emancipations and freedom, that is what this Surveillance State does, by training people  to accept their own conformity that they are actually free, that they no longer even realize the ways in which they’re being limited.
  • important means of subverting this one-way mirror that I’ve described is forcible, radical transparency. It’s one of the reasons I support, so enthusiastically and unqualifiedly, groups like Anonymous and WikiLeaks. I want holes to be blown in the wall of secrecy.
  • There are things like the Tor project and other groups that enable people to use the internet without any detection from government authorities. That has the effect of preventing regimes that actually bar their citizens from using the Internet from doing so since you can no longer trace the origins of the Internet user. But it also protects people who live in countries like ours where the government is trying to constantly monitor what we do, by sending our communications through multiple proxies around the world that can’t be invaded. There’s really a war taking place: an arms race where the government and these groups are attempting to stay one tactical step ahead of the other in the ability to shield internet communications from the government and the government’s ability to invade them. Participating in this war in ways that are supportive of the “good side” is really critical, as is veiling yourself with the technology that exists, to make what you do as tight as possible.
Ed Webb

Project Vigilant and the government/corporate destruction of privacy - Glenn Greenwald ... - 0 views

  • it's the re-packaging and transfer of this data to the U.S. Government -- combined with the ability to link it not only to your online identity (IP address), but also your offline identity (name) -- that has made this industry particularly pernicious.  There are serious obstacles that impede the Government's ability to create these electronic dossiers themselves.  It requires both huge resources and expertise.  Various statutes enacted in the mid-1970s -- such as the Privacy Act of 1974 -- impose transparency requirements and other forms of accountability on programs whereby the Government collects data on citizens.  And the fact that much of the data about you ends up in the hands of private corporations can create further obstacles, because the tools which the Government has to compel private companies to turn over this information are limited (the fact that the FBI is sometimes unable to obtain your "transactional" Internet data without a court order -- i.e., whom you email, who emails you, what Google searches you enter, and what websites you visit -- is what has caused the Obama administration to demand that Congress amend the Patriot Act to vest them with the power to obtain all of that with no judicial supervision). But the emergence of a private market that sells this data to the Government (or, in the case of Project Vigilant, is funded in order to hand it over voluntarily) has eliminated those obstacles.
  • a wide array of government agencies have created countless programs to encourage and formally train various private workers (such as cable installers, utilities workers and others who enter people's homes) to act as government informants and report any "suspicious" activity; see one example here.  Meanwhile, TIA has been replicated, and even surpassed, as a result of private industries' willingness to do the snooping work on American citizens which the Government cannot do.
  • this arrangement provides the best of all worlds for the Government and the worst for citizens: The use of private-sector data aggregators allows the government to insulate surveillance and information-handling practices from privacy laws or public scrutiny. That is sometimes an important motivation in outsourced surveillance.  Private companies are free not only from complying with the Privacy Act, but from other checks and balances, such as the Freedom of Information Act.  They are also insulated from oversight by Congress and are not subject to civil-service laws designed to ensure that government policymakers are not influenced by partisan politics. . . .
  • ...4 more annotations...
  • There is a long and unfortunate history of cooperation between government security agencies and powerful corporations to deprive individuals of their privacy and other civil liberties, and any program that institutionalizes close, secretive ties between such organizations raises serious questions about the scope of its activities, now and in the future.
  • Many people are indifferent to the disappearance of privacy -- even with regard to government officials -- because they don't perceive any real value to it.  The ways in which the loss of privacy destroys a society are somewhat abstract and difficult to articulate, though very real.  A society in which people know they are constantly being monitored is one that breeds conformism and submission, and which squashes innovation, deviation, and real dissent. 
  • that's what a Surveillance State does:  it breeds fear of doing anything out of the ordinary by creating a class of meek citizens who know they are being constantly watched.
  • The loss of privacy is entirely one-way.  Government and corporate authorities have destroyed most vestiges of privacy for you, while ensuring that they have more and more for themselves.  The extent to which you're monitored grows in direct proportion to the secrecy with which they operate.  Sir Francis Bacon's now platitudinous observation that "knowledge itself is power" is as true as ever.  That's why this severe and always-growing imbalance is so dangerous, even to those who are otherwise content to have themselves subjected to constant monitoring.
Ed Webb

AI Tweets "Little Beetles Is An Arthropod," and Other Facts About The World, As It Lear... - 0 views

  • By saying that NELL has "adopted" the human behaviour of tweeting, you are misleading the reader. It is more likely that the software was specifically programmed to do so and therefore has "adopted" no "human behavior". FAIL.
  • sloppy journalism