
Dystopias: Group items tagged algorithms


Ed Webb

elearnspace › The algorithms that rule our lives - 1 views

  • A significant difficulty that learning analytics needs to address is the possible return to behaviourism where we make decisions about learning only on observable behaviours of learners. Nonetheless, algorithms define our lives and how organizations interact with us. It’s a data-driven world, and the algorithm reigns supreme.
  • Should we be worried about the growing dominance of algorithms in steering our fates?
Ed Webb

At age 13, I joined the alt-right, aided by Reddit and Google - 0 views

  • Now, I’m 16, and I’ve been able to reflect on how I got sucked into that void—and how others do, too. My brief infatuation with the alt-right has helped me understand the ways big tech companies and their algorithms are contributing to the problem of radicalization—and why it’s so important to be skeptical of what you read online.
  • while a quick burst of radiation probably won’t give you cancer, prolonged exposure is far more dangerous. The same is true for the alt-right. I knew that the messages I was seeing were wrong, but the more I saw them, the more curious I became. I was unfamiliar with most of the popular discussion topics on Reddit. And when you want to know more about something, what do you do? You probably don’t think to go to the library and check out a book on that subject, and then fact check and cross reference what you find. If you just google what you want to know, you can get the information you want within seconds.
  • I started googling things like “Illegal immigration,” “Sandy Hook actors,” and “Black crime rate.” And I found exactly what I was looking for.
  • The articles and videos I first found all backed up what I was seeing on Reddit—posts that asserted a skewed version of actual reality, using carefully selected, out-of-context, and dubiously sourced statistics that propped up a hateful world view. On top of that, my online results were heavily influenced by something called an algorithm. I understand algorithms to be secretive bits of code that a website like YouTube will use to prioritize content that you are more likely to click on first. Because all of the content I was reading or watching was from far-right sources, all of the links that the algorithms dangled on my screen for me to click were from far-right perspectives. (A sketch of this kind of engagement-driven ranking follows this list.)
  • I spent months isolated in my room, hunched over my computer, removing and approving memes on Reddit and watching conservative “comedians” that YouTube served up to me.
  • The inflammatory language and radical viewpoints used by the alt-right worked in YouTube and Google’s favor—the more videos and links I clicked on, the more ads I saw, and in turn, the more ad revenue they generated.
  • the biggest step in my recovery came when I attended a pro-Trump rally in Washington, D.C., in September 2017, about a month after the “Unite the Right” rally in Charlottesville, Virginia
  • The difference between the online persona of someone who identifies as alt-right and the real thing is so extreme that you would think they are different people. Online, they have the power of fake and biased news to form their arguments. They sound confident and usually deliver their standard messages strongly. When I met them in person at the rally, they were awkward and struggled to back up their statements. They tripped over their own words, and when they were called out by any counter protestors in the crowd, they would immediately use a stock response such as “You’re just triggered.”
  • Seeing for myself that the people I was talking to online were weak, confused, and backwards was the turning point for me.
  • we’re too far gone to reverse the damage that the alt-right has done to the internet and to naive adolescents who don’t know any better—children like the 13-year-old boy I was. It’s convenient for a massive internet company like Google to deliberately ignore why people like me get misinformed in the first place, as their profit-oriented algorithms continue to steer ignorant, malleable people into the jaws of the far-right
  • Dylann Roof, the white supremacist who murdered nine people in a Charleston, South Carolina, church in 2015, was radicalized by far-right groups that spread misinformation with the aid of Google’s algorithms.
  • Over the past couple months, I’ve been getting anti-immigration YouTube ads that feature an incident presented as a “news” story, about two immigrants who raped an American girl. The ad offers no context or sources, and uses heated language to denounce immigration and call for our county to allow ICE to seek out illegal immigrants within our area. I wasn’t watching a video about immigration or even politics when those ads came on; I was watching the old Monty Python “Cheese Shop” sketch. How does British satire, circa 1972, relate to America’s current immigration debate? It doesn’t.
  • tech companies need to be held accountable for the radicalization that results from their systems and standards.
  • anyone can be manipulated like I was. It’s so easy to find information online that we collectively forget that so much of the content the internet offers us is biased
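
A minimal sketch of the engagement-driven ranking described in the annotations above, assuming a hypothetical recommender that scores candidates by predicted clicks and watch time (the field names and scoring formula are illustrative assumptions, not YouTube's actual system):

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_click_prob: float   # model's estimate this user will click
    predicted_watch_mins: float   # expected minutes watched if clicked

def rank_feed(candidates: list[Video], top_n: int = 10) -> list[Video]:
    """Order candidates by expected engagement, not accuracy or balance.

    If a user's history is dominated by one viewpoint, the click model
    scores similar content highest, so the feed narrows on its own:
    the dynamic the author describes.
    """
    return sorted(
        candidates,
        key=lambda v: v.predicted_click_prob * v.predicted_watch_mins,
        reverse=True,
    )[:top_n]
```

Nothing in the objective being maximized refers to accuracy or ideological balance, which is the author's point: the narrowing is a side effect of optimizing for clicks.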
Ed Webb

Does the Digital Classroom Enfeeble the Mind? - NYTimes.com - 0 views

  • My father would have been unable to “teach to the test.” He once complained about errors in a sixth-grade math textbook, so he had the class learn math by designing a spaceship. My father would have been spat out by today’s test-driven educational regime.
  • A career in computer science makes you see the world in its terms. You start to see money as a form of information display instead of as a store of value. Money flows are the computational output of a lot of people planning, promising, evaluating, hedging and scheming, and those behaviors start to look like a set of algorithms. You start to see the weather as a computer processing bits tweaked by the sun, and gravity as a cosmic calculation that keeps events in time and space consistent. This way of seeing is becoming ever more common as people have experiences with computers. While it has its glorious moments, the computational perspective can at times be uniquely unromantic. Nothing kills music for me as much as having some algorithm calculate what music I will want to hear. That seems to miss the whole point. Inventing your musical taste is the point, isn’t it? Bringing computers into the middle of that is like paying someone to program a robot to have sex on your behalf so you don’t have to. And yet it seems we benefit from shining an objectifying digital light to disinfect our funky, lying selves once in a while. It’s heartless to have music chosen by digital algorithms. But at least there are fewer people held hostage to the tastes of bad radio D.J.’s than there once were. The trick is being ambidextrous, holding one hand to the heart while counting on the digits of the other.
  • The future of education in the digital age will be determined by our judgment of which aspects of the information we pass between generations can be represented in computers at all. If we try to represent something digitally when we actually can’t, we kill the romance and make some aspect of the human condition newly bland and absurd. If we romanticize information that shouldn’t be shielded from harsh calculations, we’ll suffer bad teachers and D.J.’s and their wares.
  • Some of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don’t know how information is represented in the brain. We don’t know how reason is accomplished by neurons. There are some vaguely cool ideas floating around, and we might know a lot more about these things any moment now, but at this moment, we don’t. You could spend all day reading literature about educational technology without being reminded that this frontier of ignorance lies before us. We are tempted by the demons of commercial and professional ambition to pretend we know more than we do.
  • Outside school, something similar happens. Students spend a lot of time acting as trivialized relays in giant schemes designed for the purposes of advertising and other revenue-minded manipulations. They are prompted to create databases about themselves and then trust algorithms to assemble streams of songs and movies and stories for their consumption. We see the embedded philosophy bloom when students assemble papers as mash-ups from online snippets instead of thinking and composing on a blank piece of screen. What is wrong with this is not that students are any lazier now or learning less. (It is probably even true, I admit reluctantly, that in the presence of the ambient Internet, maybe it is not so important anymore to hold an archive of certain kinds of academic trivia in your head.) The problem is that students could come to conceive of themselves as relays in a transpersonal digital structure. Their job is then to copy and transfer data around, to be a source of statistics, whether to be processed by tests at school or by advertising schemes elsewhere.
  • If students don’t learn to think, then no amount of access to information will do them any good.
  • To the degree that education is about the transfer of the known between generations, it can be digitized, analyzed, optimized and bottled or posted on Twitter. To the degree that education is about the self-invention of the human race, the gargantuan process of steering billions of brains into unforeseeable states and configurations in the future, it can continue only if each brain learns to invent itself. And that is beyond computation because it is beyond our comprehension.
  • Roughly speaking, there are two ways to use computers in the classroom. You can have them measure and represent the students and the teachers, or you can have the class build a virtual spaceship. Right now the first way is ubiquitous, but the virtual spaceships are being built only by tenacious oddballs in unusual circumstances. More spaceships, please.
  • How do we get this right - use the tech for what it can do well, develop our brains for what the tech can't do? Who's up for building a spaceship?
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is virtually no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best-resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right- and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

An Ode To RSS, A Vessel Of Freedom In Elearning | LMSPulse - 0 views

  • There is probably no technology more beaten down, more discarded by “innovators,” and yet more irreplaceable and urgent today than RSS.
  • In an age of user feeds, everyone insists on becoming the one channel to rule your life, a tyranny of algorithmic centralization. In a crafty and much-needed mesh of open-source standards and open-mindedness, RSS understood that the human spirit will seek freedom and choice at every turn.
  • democratized syndication
  • In a world of algorithms thirsty to co-opt not just your data but your experiences, where the mischievous interests seeking to divide us, and to push us to eschew the old simply for being old, seem to have won, being a user of RSS feels like wearing a badge of honor. A tiny, rectangular, orange one. At its height, RSS never reached the mass adoption levels of today’s comfy social networks. For many, the final blow was dealt by Google, which shut down the popular Google Reader because it was not popular enough. That probably had nothing to do with the higher monetization capabilities of Google+, reaped one by one by Facebook, in probably no kind of poetic justice.
  • plenty of us willing to go the extra geeky mile to get content directly, without an algorithm deciding what is good for me and what isn’t. Without foolish commenters letting me know how I should feel.
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still... - 0 views

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally (a toy demonstration follows this list). They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
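
The "mathematical impossibility" flagged in the annotations above can be made concrete with a toy demonstration: when a model's scores are noisier for one group than another (a stand-in for under-representation in training data), a single shared decision threshold produces unequal error rates. The distributions and noise levels below are pure assumptions chosen for illustration:

```python
import random

random.seed(42)

def make_group(n: int, noise: float) -> list[tuple[int, float]]:
    """Return (true_label, model_score) pairs; higher noise models a group
    that is under-represented in the training data."""
    pairs = []
    for _ in range(n):
        y = random.randint(0, 1)
        s = min(1.0, max(0.0, 0.3 + 0.4 * y + random.gauss(0, noise)))
        pairs.append((y, s))
    return pairs

def false_positive_rate(pairs, threshold=0.5):
    negatives = [s for y, s in pairs if y == 0]
    return sum(s >= threshold for s in negatives) / len(negatives)

group_a = make_group(10_000, noise=0.10)  # well-represented group
group_b = make_group(10_000, noise=0.25)  # under-represented group

print(f"FPR A: {false_positive_rate(group_a):.2%}")  # low
print(f"FPR B: {false_positive_rate(group_b):.2%}")  # noticeably higher
# Equalizing the two rates requires per-group thresholds, which costs
# overall accuracy: the accuracy-versus-fairness trade described above.
```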
Ed Webb

An Algorithm Summarizes Lengthy Text Surprisingly Well - MIT Technology Review - 0 views

  • As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents. And it may become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you.
  • Parsing language remains one of the grand challenges of artificial intelligence (see “AI’s Language Problem”). But it’s a challenge with enormous commercial potential. Even limited linguistic intelligence—the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways—could transform personal computing. In many specialist fields—like medicine, scientific research, and law—condensing information and extracting insights could have huge commercial benefits.
  • The system experiments in order to generate summaries of its own using a process called reinforcement learning. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). Those working on conversational interfaces are now increasingly looking at reinforcement learning as a way to improve their systems. (A toy sketch of this training loop follows this list.)
  • “At some point, we have to admit that we need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,”
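
A toy sketch of the reinforcement-learning loop the annotation describes, assuming the simplest possible setup: a "policy" of per-sentence inclusion probabilities, a word-overlap reward standing in for ROUGE, and REINFORCE-style updates. The actual system in the article is a neural sequence model; this only illustrates the positive-feedback mechanism:

```python
import random

random.seed(0)

def reward(summary_words: set[str], reference_words: set[str]) -> float:
    """Toy stand-in for ROUGE: fraction of reference words recovered."""
    return len(summary_words & reference_words) / max(len(reference_words), 1)

def train(sentences: list[str], reference: set[str],
          steps: int = 3000, lr: float = 0.02) -> list[float]:
    probs = [0.5] * len(sentences)   # policy: include sentence i w.p. probs[i]
    baseline = 0.0                   # running average reward (variance reduction)
    for _ in range(steps):
        picks = [random.random() < p for p in probs]
        words = {w for s, keep in zip(sentences, picks) if keep
                 for w in s.split()}
        r = reward(words, reference)
        baseline = 0.99 * baseline + 0.01 * r
        for i, keep in enumerate(picks):
            # Reinforce choices made when the reward beat the baseline.
            direction = 1.0 if keep else -1.0
            probs[i] = min(0.99, max(0.01,
                           probs[i] + lr * direction * (r - baseline)))
    return probs

doc = ["the cat sat on the mat",
       "markets fell sharply on tuesday",
       "analysts blamed rising interest rates"]
ref = set("markets fell as rates rose".split())
print([round(p, 2) for p in train(doc, ref)])
# Inclusion probabilities rise for sentences whose words earn reward.
```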
Ed Webb

Where is the boundary between your phone and your mind? | US news | The Guardian - 1 views

  • Here’s a thought experiment: where do you end? Not your body, but you, the nebulous identity you think of as your “self”. Does it end at the limits of your physical form? Or does it include your voice, which can now be heard as far as outer space; your personal and behavioral data, which is spread out across the impossibly broad plane known as digital space; and your active online personas, which probably encompass dozens of different social media networks, text message conversations, and email exchanges? This is a question with no clear answer, and, as the smartphone grows ever more essential to our daily lives, that border’s only getting blurrier.
  • our minds have become even more radically extended than ever before
  • one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: we not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn’t suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is
  • American consumers spent five hours per day on their mobile devices, and showed a dizzying 69% year-over-year increase in time spent in apps like Facebook, Twitter, and YouTube. The prevalence of apps represents a concrete example of the movement away from the old notion of accessing the Internet through a browser and the new reality of the connected world and its myriad elements – news, social media, entertainment – being with us all the time
  • “In the 90s and even through the early 2000s, for many people, there was this way of thinking about cyberspace as a space that was somewhere else: it was in your computer. You went to your desktop to get there,” Weigel says. “One of the biggest shifts that’s happened and that will continue to happen is the undoing of a border that we used to perceive between the virtual and the physical world.”
  • While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: it’s a means for others to access us
  • Weigel sees the unfettered access to our data, through our smartphone and browser use, of what she calls the big five tech companies – Apple, Alphabet (the parent company of Google), Microsoft, Facebook, and Amazon – as a legitimate problem for notions of democracy
  • an unfathomable amount of wealth, power, and direct influence on the consumer in the hands of just a few individuals – individuals who can affect billions of lives with a tweak in the code of their products
  • “This is where the fundamental democracy deficit comes from: you have this incredibly concentrated private power with zero transparency or democratic oversight or accountability, and then they have this unprecedented wealth of data about their users to work with,”
  • the rhetoric around the Internet was that the crowd would prevent the spread of misinformation, filtering it out like a great big hive mind; it would also help to prevent the spread of things like hate speech. Obviously, this has not been the case, and even the relatively successful experiments in this, such as Wikipedia, have a great deal of human governance that allows them to function properly
  • We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others. Essentially, we need to understand the fundamental difference between our behavior IRL and in the digital sphere – a difference that, despite the erosion of boundaries, still stands
  • “Whether we know it or not, the connections that we make on the Internet are being used to cultivate an identity for us – an identity that is then sold to us afterward,” Lynch says. “Google tells you what questions to ask, and then it gives you the answers to those questions.”
  • It isn’t enough that the apps in our phone flatten all of the different categories of relationships we have into one broad group: friends, followers, connections. They go one step further than that. “You’re being told who you are all the time by Facebook and social media because which posts are coming up from your friends are due to an algorithm that is trying to get you to pay more attention to Facebook,” Lynch says. “That’s affecting our identity, because it affects who you think your friends are, because they’re the ones who are popping up higher on your feed.”
Ed Webb

What we still haven't learned from Gamergate - Vox - 0 views

  • Harassment and misogyny had been problems in the community for years before this; the deep resentment and anger toward women that powered Gamergate percolated for years on internet forums. Robert Evans, a journalist who specializes in extremist communities and the host of the Behind the Bastards podcast, described Gamergate to me as partly organic and partly born out of decades-long campaigns by white supremacists and extremists to recruit heavily from online forums. “Part of why Gamergate happened in the first place was because you had these people online preaching to these groups of disaffected young men,” he said. But what Gamergate had that those previous movements didn’t was an organized strategy, made public, cloaking itself as a political movement with a flimsy philosophical stance, its goals and targets amplified by the power of Twitter and a hashtag.
  • The hate campaign, we would later learn, was the moment when our ability to repress toxic communities and write them off as just “trolls” began to crumble. Gamergate ultimately gave way to something deeper, more violent, and more uncontrollable.
  • Police have to learn how to keep the rest of us safe from internet mobs
  • the justice system continues to be slow to understand the link between online harassment and real-life violence
  • In order to increase public safety this decade, it is imperative that police — and everyone else — become more familiar with the kinds of communities that engender toxic, militant systems of harassment, and the online and offline spaces where these communities exist. Increasingly, that means understanding social media’s dark corners, and the types of extremism they can foster.
  • Businesses have to learn when online outrage is manufactured
  • There’s a difference between organic outrage that arises because an employee actually does something outrageous, and invented outrage that’s an excuse to harass someone whom a group has already decided to target for unrelated reasons — for instance, because an employee is a feminist. A responsible business would ideally figure out which type of outrage is occurring before it punished a client or employee who was just doing their job.
  • Social media platforms didn’t learn how to shut down disingenuous conversations over ethics and free speech before they started to tear their cultures apart
  • Dedication to free speech over the appearance of bias is especially important within tech culture, where a commitment to protecting free speech is both a banner and an excuse for large corporations to justify their approach to content moderation — or lack thereof.
  • Reddit’s free-speech-friendly moderation stance resulted in the platform tacitly supporting pro-Gamergate subforums like r/KotakuInAction, which became a major contributor to Reddit’s growing alt-right community. Twitter rolled out a litany of moderation tools in the wake of Gamergate, intended to allow harassment targets to perpetually block, mute, and police their own harassers — without actually making the site unwelcoming to the harassers themselves. And YouTube and Facebook, with their algorithmic amplification of hateful and extreme content, made no effort to recognize the violence and misogyny behind pro-Gamergate content, or police them accordingly.
  • All of these platforms are wrestling with problems that seem to have grown beyond their control; it’s arguable that if they had reacted more swiftly to slow the growth of the internet’s most toxic and misogynistic communities back when those communities, particularly Gamergate, were still nascent, they could have prevented headaches in the long run — and set an early standard for how to deal with ever-broadening issues of extremist content online.
  • Violence against women is a predictor of other kinds of violence. We need to acknowledge it.
  • Somehow, the idea that all of that sexism and anti-feminist anger could be recruited, harnessed, and channeled into a broader white supremacist movement failed to generate any real alarm, even well into 2016
  • many of the perpetrators of real-world violence are radicalized online first
  • It remains difficult for many to accept the throughline from online abuse to real-world violence against women, much less the fact that violence against women, online and off, is a predictor of other kinds of real-world violence
  • Politicians and the media must take online “ironic” racism and misogyny seriously
  • Gamergate masked its misogyny in a coating of shrill yelling that had most journalists in 2014 writing off the whole incident as “satirical” and immature “trolling,” and very few correctly predicting that Gamergate’s trolling was the future of politics
  • Gamergate was all about disguising a sincere wish for violence and upheaval by dressing it up in hyperbole and irony in order to confuse outsiders and make it all seem less serious.
  • Gamergate simultaneously masqueraded as legitimate concern about ethics that demanded audiences take it seriously, and as total trolling that demanded audiences dismiss it entirely. Both these claims served to obfuscate its real aim — misogyny, and, increasingly, racist white supremacy
  • The public’s failure to understand and accept that the alt-right’s misogyny, racism, and violent rhetoric is serious goes hand in hand with its failure to understand and accept that such rhetoric is identical to that of President Trump
  • deploying offensive behavior behind a guise of mock outrage, irony, trolling, and outright misrepresentation, in order to mask the sincere extremism behind the message.
  • many members of the media, politicians, and members of the public still struggle to accept that Trump’s rhetoric is having violent consequences, despite all evidence to the contrary.
  • The movement’s insistence that it was about one thing (ethics in journalism) when it was about something else (harassing women) provided a case study for how extremists would proceed to drive ideological fissures through the foundations of democracy: by building a toxic campaign of hate beneath a veneer of denial.
Ed Webb

Wearing a mask won't stop facial recognition anymore - The coronavirus is prompting fac... - 0 views

  • expanding this system to a wider group of people would be hard. When a population reaches a certain scale, the system is likely to encounter people with similar eyes. This might be why most commercial facial recognition systems that can identify masked faces seem limited to small-scale applications
  • Many residential communities, especially in areas hit hardest by the virus, have been limiting entry to residents only. Minivision introduced the new algorithm to its facial recognition gate lock systems in communities in Nanjing to quickly recognize residents without the need to take off masks.
  • SenseTime, which announced the rollout of its face mask-busting tech last week, explained that its algorithm is designed to read 240 facial feature key points around the eyes, mouth and nose. It can make a match using just the parts of the face that are visible. (A simplified sketch of this visible-region matching follows this list.)
  • New forms of facial recognition can now recognize not just people wearing masks over their mouths, but also people in scarves and even with fake beards. And the technology is already rolling out in China because of one unexpected event: The coronavirus outbreak.
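
SenseTime's description above amounts to matching on whichever keypoints remain uncovered. A hypothetical sketch of that idea, assuming normalized keypoints and an upstream occlusion detector; the minimum-visibility rule and threshold are illustrative assumptions, not SenseTime's actual pipeline:

```python
import math

Point = tuple[float, float]   # normalized (x, y) facial keypoint

def masked_match(probe: list[Point], enrolled: list[Point],
                 visible: list[bool], threshold: float = 0.05,
                 min_visible: int = 60) -> bool:
    """Match two faces using only the keypoints a mask leaves uncovered.

    probe/enrolled share the same keypoint indexing (e.g., 240 points
    around the eyes, nose and mouth, per the article); `visible` comes
    from an upstream occlusion detector.
    """
    pairs = [(p, e) for p, e, v in zip(probe, enrolled, visible) if v]
    if len(pairs) < min_visible:     # too little of the face is visible
        return False
    mean_sq = sum((px - ex) ** 2 + (py - ey) ** 2
                  for (px, py), (ex, ey) in pairs) / len(pairs)
    return math.sqrt(mean_sq) < threshold
```

This also makes the first annotation's scaling worry concrete: with fewer usable points, distinct people collide more often, so accuracy degrades as the enrolled population grows.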
Ed Webb

DK Matai: The Rise of The Bio-Info-Nano Singularity - 0 views

  • The human capability for information processing is limited, yet there is an accelerating change in the development and deployment of new technology. This relentless wave upon wave of new information and technology causes an overload on the human mind by eventually flooding it. The resulting acopia -- inability to cope -- has to be solved by the use of ever more sophisticated information intelligence. Extrapolating these capabilities suggests the near-term emergence and visibility of self-improving neural networks, "artificial" intelligence, quantum algorithms, quantum computing and super-intelligence. This metamorphosis is so much beyond present human capabilities that it becomes impossible to understand it with the pre-conceptions and conditioning of the present mindset, societal make-up and existing technology
  • The Bio-Info-Nano Singularity is a transcendence to a wholly new regime of mind, society and technology, in which we have to learn to think in a new way in order to survive as a species.
  • What is globalized human society going to do with the mass of unemployed human beings that are rendered obsolete by the approaching super-intelligence of the Bio-Info-Nano Singularity?
  • Nothing futurists predict ever comes true, but, by the time the time comes, everybody has forgotten they said it--and then they are free to say something else that never will come true but that everybody will have forgotten they said by the time the time comes
  • Most of us will become poisoned troglodytes in a techno dystopia
  • Any engineer can make 'stuff' go faster, kill deader, sort quicker, fly higher, record sharper, destroy more completely, etc.. We have a surfeit of that kind of creativity. What we need is some kind of genius to create a society that treats each other with equality, justice, caring and cooperativeness. The concept of 'singularity' doesn't excite me nearly as much as the idea that sometime we might be able to move beyond the civilization level of a troop of chimpanzees. I'm hoping that genius comes before we manage to destroy what little civilization we have with all our neat "stuff"
  • There's a lot of abstraction in this article, which is a trend of what I have read of a number of various movements taking up the Singularity cause. This nebulous but optimistic prediction of an incomprehensibly advanced future, wherein through technology and science we achieve quasi-immortality, or absolute control of thought, omniscience, or transcendence from the human entirely
  • Welcome to the Frankenstein plot. This is a very common Hollywood plot, the idea of a manmade creation running amok. The concept that the author describes can also be described as an asymptotic curve on a graph where scientific achievement parallels time at first then gradually begins to go vertical until infinite scientific knowledge and invention occurs in an incredibly short time.
Ed Webb

The Web Means the End of Forgetting - NYTimes.com - 1 views

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read. (A simplified sketch of this key-shattering idea follows this list.)
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
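
Vanish's "shattered key" is a secret-sharing idea: encrypt the data, split the key, and scatter the pieces somewhere they naturally expire. The real system used threshold (k-of-n) Shamir sharing over a peer-to-peer DHT; the sketch below is a deliberately simplified all-or-nothing XOR variant, just to show the self-destruct mechanic:

```python
import os
from functools import reduce

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; ALL n are needed to rebuild it."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, chunk)
                 for chunk in zip(key, *shares))
    return shares + [last]

def recombine(shares: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*shares))

key = os.urandom(16)               # the data-encryption key
shares = split_key(key, 5)         # scatter these into a DHT
assert recombine(shares) == key    # readable while every share survives
# Once the storage network ages out even one share, the key, and with it
# the encrypted document, is unrecoverable: the data "self-destructs".
```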
Ed Webb

WIRED - 0 views

  • Over the past two years, RealNetworks has developed a facial recognition tool that it hopes will help schools more accurately monitor who gets past their front doors. Today, the company launched a website where school administrators can download the tool, called SAFR, for free and integrate it with their own camera systems
  • how to balance privacy and security in a world that is starting to feel like a scene out of Minority Report
  • facial recognition technology often misidentifies black people and women at higher rates than white men
  • "The use of facial recognition in schools creates an unprecedented level of surveillance and scrutiny," says John Cusick, a fellow at the Legal Defense Fund. "It can exacerbate racial disparities in terms of how schools are enforcing disciplinary codes and monitoring their students."
  • The school would ask adults, not kids, to register their faces with the SAFR system. After they registered, they’d be able to enter the school by smiling at a camera at the front gate. (Smiling tells the software that it’s looking at a live person and not, for instance, a photograph). If the system recognizes the person, the gates automatically unlock
  • The software can predict a person's age and gender, enabling schools to turn off access for people below a certain age. But Glaser notes that if other schools want to register students going forward, they can
  • There are no guidelines about how long the facial data gets stored, how it’s used, or whether people need to opt in to be tracked.
  • Schools could, for instance, use facial recognition technology to monitor who's associating with whom and discipline students differently as a result. "It could criminalize friendships," says Cusick of the Legal Defense Fund.
  • SAFR boasts a 99.8 percent overall accuracy rating, based on a test, created by the University of Massachusetts, that vets facial recognition systems. But Glaser says the company hasn’t tested whether the tool is as good at recognizing black and brown faces as it is at recognizing white ones. RealNetworks deliberately opted not to have the software proactively predict ethnicity, the way it predicts age and gender, for fear of it being used for racial profiling. Still, testing the tool's accuracy among different demographics is key. Research has shown that many top facial recognition tools are particularly bad at recognizing black women
  • "It's tempting to say there's a technological solution, that we're going to find the dangerous people, and we're going to stop them," she says. "But I do think a large part of that is grasping at straws."
Ed Webb

How ethical is it for advertisers to target your mood? | Emily Bell | Opinion | The Gua... - 0 views

  • The effectiveness of psychographic targeting is one bet being made by an increasing number of media companies when it comes to interrupting your viewing experience with advertising messages.
  • “Across the board, articles that were in top emotional categories, such as love, sadness and fear, performed significantly better than articles that were not.”
  • ESPN and USA Today are also using psychographic rather than demographic targeting to sell to advertisers, including in ESPN’s case, the decision to not show you advertising at all if your team is losing.
  • Media companies using this technology claim it is now possible for the “mood” of the reader or viewer to be tracked in real time and the content of the advertising to be changed accordingly
  • ads targeted at readers based on their predicted moods rather than their previous behaviour improved the click-through rate by 40%.
  • Given that the average click-through rate (the number of times anyone actually clicks on an ad) is about 0.4%, this number (in gross terms) is probably less impressive than it sounds. (The arithmetic sketch after this list makes the point concrete.)
  • Cambridge Analytica, the company that misused Facebook data and, according to its own claims, helped Donald Trump win the 2016 election, used psychographic segmentation.
  • For many years “contextual” ads served by not very intelligent algorithms were the bane of digital editors’ lives. Improvements in machine learning should help eradicate the horrible business of showing insurance advertising to readers in the middle of an article about a devastating fire.
  • The words “brand safety” are increasingly used by publishers when demonstrating products such as Project Feels. It is a way publishers can compete on micro-targeting with platforms such as Facebook and YouTube by pointing out that their targeting will not land you next to a conspiracy theory video about the dangers of chemtrails.
  • the exploitation of psychographics is not limited to the responsible and transparent scientists at the NYT. While publishers were showing these shiny new tools to advertisers, Amazon was advertising for a managing editor for its surveillance doorbell, Ring, which contacts your device when someone is at your door. An editor for a doorbell, how is that going to work? In all kinds of perplexing ways according to the ad. It’s “an exciting new opportunity within Ring to manage a team of news editors who deliver breaking crime news alerts to our neighbours. This position is best suited for a candidate with experience and passion for journalism, crime reporting, and people management.” So if instead of thinking about crime articles inspiring fear and advertising doorbells in the middle of them, what if you took the fear that the surveillance-device-cum-doorbell inspires and layered a crime reporting newsroom on top of it to make sure the fear is properly engaging?
  • The media has arguably already played an outsized role in making sure that people are irrationally scared, and now that practice is being strapped to the considerably more powerful engine of an Amazon product.
  • This will not be the last surveillance-based newsroom we see. Almost any product that produces large data feeds can also produce its own “news”. Imagine the Fitbit newsroom or the managing editor for traffic reports from dashboard cams – anything that has a live data feed emanating from it, in the age of the Internet of Things, can produce news.
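
The caveat about the 40% lift is easy to make concrete with a back-of-the-envelope check (the 0.4% baseline is the figure quoted in the annotation above):

```python
baseline_ctr = 0.004                  # ~0.4% average click-through rate
mood_targeted_ctr = baseline_ctr * 1.40
print(f"{mood_targeted_ctr:.2%}")     # 0.56%: a 40% relative lift, but still
                                      # barely half a percent of viewers clicking
```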