Dystopias: Group items tagged "data"

Ed Webb

Sad by design | Eurozine

  • ‘technological sadness’ – the default mental state of the online billions
  • If only my phone could gently weep. McLuhan’s ‘extensions of man’ has imploded right into the exhausted self.
  • Social reality is a corporate hybrid between handheld media and the psychic structure of the user. It’s a distributed form of social ranking that can no longer be reduced to the interests of state and corporate platforms. As online subjects, we too are complicit, far too deeply involved
  • ...20 more annotations...
  • Google and Facebook know how to utilize negative emotions, leading to the new system-wide goal: find personalized ways to make you feel bad
  • in Adam Greenfield’s Radical Technologies, where he notices that ‘it seems strange to assert that anything as broad as a class of technologies might have an emotional tenor, but the internet of things does. That tenor is sadness… a melancholy that rolls off it in waves and sheets. The entire pretext on which it depends is a milieu of continuously shattered attention, of overloaded awareness, and of gaps between people just barely annealed with sensors, APIs and scripts.’ It is a life ‘savaged by bullshit jobs, over-cranked schedules and long commutes, of intimacy stifled by exhaustion and the incapacity or unwillingness to be emotionally present.’
  • Omnipresent social media places a claim on our elapsed time, our fractured lives. We’re all sad in our very own way.4 As there are no lulls or quiet moments anymore, the result is fatigue, depletion and loss of energy. We’re becoming obsessed with waiting. How long have you been forgotten by your loved ones? Time, meticulously measured on every app, tells us right to our face. Chronos hurts. Should I post something to attract attention and show I’m still here? Nobody likes me anymore. As the random messages keep relentlessly piling in, there’s no way to halt them, to take a moment and think it all through.
  • Unlike the blog entries of the Web 2.0 era, social media have surpassed the summary stage of the diary in a desperate attempt to keep up with the real-time regime. Instagram Stories, for example, bring back the nostalgia of an unfolding chain of events – and then disappear at the end of the day, like a revenge act, a satire of ancient sentiments gone by. Storage will make the pain permanent. Better forget about it and move on
  • By browsing through updates, we’re catching up with machine time – at least until we collapse under the weight of participation fatigue. Organic life cycles are short-circuited and accelerated up to a point where the personal life of billions has finally caught up with cybernetics
  • The price of self-control in an age of instant gratification is high. We long to revolt against the restless zombie inside us, but we don’t know how.
  • Sadness arises at the point we’re exhausted by the online world.6 After yet another app session in which we failed to make a date, purchased a ticket and did a quick round of videos, the post-dopamine mood hits us hard. The sheer busyness and self-importance of the world makes you feel joyless. After a dive into the network we’re drained and feel socially awkward. The swiping finger is tired and we have to stop.
  • Much like boredom, sadness is not a medical condition (though never say never because everything can be turned into one). No matter how brief and mild, sadness is the default mental state of the online billions. Its original intensity gets dissipated, it seeps out, becoming a general atmosphere, a chronic background condition. Occasionally – for a brief moment – we feel the loss. A seething rage emerges. After checking for the tenth time what someone said on Instagram, the pain of the social makes us feel miserable, and we put the phone away. Am I suffering from the phantom vibration syndrome? Wouldn’t it be nice if we were offline? Why is life so tragic? He blocked me. At night, you read through the thread again. Do we need to quit again, to go cold turkey again? Others are supposed to move us, to arouse us, and yet we don’t feel anything anymore. The heart is frozen
  • If experience is the ‘habit of creating isolated moments within raw occurrence in order to save and recount them,’11 the desire to anaesthetize experience is a kind of immune response against ‘the stimulations of another modern novelty, the total aesthetic environment’.
  • unlike burn-out, sadness is a continuous state of mind. Sadness pops up the second events start to fade away – and now you’re down the rabbit hole once more. The perpetual now can no longer be captured and leaves us isolated, a scattered set of online subjects. What happens when the soul is caught in the permanent present? Is this what Franco Berardi calls the ‘slow cancellation of the future’? By scrolling, swiping and flipping, we hungry ghosts try to fill the existential emptiness, frantically searching for a determining sign – and failing
  • Millennials, as one recently explained to me, have grown up talking more openly about their state of mind. As work/life distinctions disappear, subjectivity becomes their core content. Confessions and opinions are externalized instantly. Individuation is no longer confined to the diary or small group of friends, but is shared out there, exposed for all to see.
  • Snapstreaks, the ‘best friends’ fire emoji next to a friend’s name indicating that ‘you and that special person in your life have snapped one another within 24 hours for at least two days in a row.’19 Streaks are considered a proof of friendship or commitment to someone. So it’s heartbreaking when you lose a streak you’ve put months of work into. The feature all but destroys the accumulated social capital when users are offline for a few days. The Snap regime forces teenagers, the largest Snapchat user group, to use the app every single day, making an offline break virtually impossible.20 While relationships amongst teens are pretty much always in flux, with friendships being on the edge and always questioned, Snap-induced feelings sync with the rapidly changing teenage body, making puberty even more intense
  • The bare-all nature of social media causes rifts between lovers who would rather not have this information. But in the information age, this sits uneasily with the social pressure to participate in social networks.
  • dating apps like Tinder. These are described as time-killing machines – the reality game that overcomes boredom, or alternatively as social e-commerce – shopping my soul around. After many hours of swiping, suddenly there’s a rush of dopamine when someone likes you back. ‘The goal of the game is to have your ego boosted. If you swipe right and you match with a little celebration on the screen, sometimes that’s all that is needed.’ ‘We want to scoop up all our options immediately and then decide what we actually really want later.’25 On the other hand, ‘crippling social anxiety’ is when you match with somebody you are interested in, but you can’t bring yourself to send a message or respond to theirs ‘because oh god all I could think of was stupid responses or openers and she’ll think I’m an idiot and I am an idiot and…’
  • The metric to measure today’s symptoms would be time – or ‘attention’, as it is called in the industry. While for the archaic melancholic the past never passes, techno-sadness is caught in the perpetual now. Forward focused, we bet on acceleration and never mourn a lost object. The primary identification is there, in our hand. Everything is evident, on the screen, right in your face. Contrasted with the rich historical sources on melancholia, our present condition becomes immediately apparent. Whereas melancholy in the past was defined by separation from others, reduced contacts and reflection on oneself, today’s tristesse plays itself out amidst busy social (media) interactions. In Sherry Turkle’s phrase, we are alone together, as part of the crowd – a form of loneliness that is particularly cruel, frantic and tiring.
  • What we see today are systems that constantly disrupt the timeless aspect of melancholy.31 There’s no time for contemplation, or Weltschmerz. Social reality does not allow us to retreat.32 Even in our deepest state of solitude we’re surrounded by (online) others that babble on and on, demanding our attention
  • distraction does not pull us away, but instead draws us back into the social
  • The purpose of sadness by design is, as Paul B. Preciado calls it, ‘the production of frustrating satisfaction’.39 Should we have an opinion about internet-induced sadness? How can we address this topic without looking down on the online billions, without resorting to fast-food comparisons or patronizingly viewing people as fragile beings that need to be liberated and taken care of?
  • We overcome sadness not through happiness, but rather, as media theorist Andrew Culp has insisted, through a hatred of this world. Sadness occurs in situations where stagnant ‘becoming’ has turned into a blatant lie. We suffer, and there’s no form of absurdism that can offer an escape. Public access to a 21st-century version of dadaism has been blocked. The absence of surrealism hurts. What could our social fantasies look like? Are legal constructs such as creative commons and cooperatives all we can come up with? It seems we’re trapped in smoothness, skimming a surface littered with impressions and notifications. The collective imaginary is on hold. What’s worse, this banality itself is seamless, offering no indicators of its dangers and distortions. As a result, we’ve become subdued. Has the possibility of myth become technologically impossible?
  • We can neither return to mysticism nor to positivism. The naive act of communication is lost – and this is why we cry
Ed Webb

How the U.S. Military Buys Location Data from Ordinary Apps

  • The U.S. military is buying the granular movement data of people around the world, harvested from innocuous-seeming apps, Motherboard has learned. The most popular app among a group Motherboard analyzed connected to this sort of data sale is a Muslim prayer and Quran app that has more than 98 million downloads worldwide. Others include a Muslim dating app, a popular Craigslist app, an app for following storms, and a "level" app that can be used to help, for example, install shelves in a bedroom.
  • The Locate X data itself is anonymized, but the source said "we could absolutely deanonymize a person." Babel Street employees would "play with it, to be honest,"
  • "Our access to the software is used to support Special Operations Forces mission requirements overseas. We strictly adhere to established procedures and policies for protecting the privacy, civil liberties, constitutional and legal rights of American citizens."
  • ...7 more annotations...
  • In March, tech publication Protocol first reported that U.S. law enforcement agencies such as Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE) were using Locate X. Motherboard then obtained an internal Secret Service document confirming the agency's use of the technology. Some government agencies, including CBP and the Internal Revenue Service (IRS), have also purchased access to location data from another vendor called Venntel.
  • the company tracks 25 million devices inside the United States every month, and 40 million elsewhere, including in the European Union, Latin America, and the Asia-Pacific region
  • Motherboard found another network of dating apps that look and operate nearly identically to Mingle, including sending location data to X-Mode. Motherboard installed another dating app, called Iran Social, on a test device and observed GPS coordinates being sent to the company. The network of apps also includes Turkey Social, Egypt Social, Colombia Social, and others focused on particular countries.
  • Senator Ron Wyden told Motherboard in a statement that X-Mode said it is selling location data harvested from U.S. phones to U.S. military customers. "In a September call with my office, lawyers for the data broker X-Mode Social confirmed that the company is selling data collected from phones in the United States to U.S. military customers, via defense contractors. Citing non-disclosure agreements, the company refused to identify the specific defense contractors or the specific government agencies buying the data,"
  • some apps that are harvesting location data on behalf of X-Mode are essentially hiding the data transfer. Muslim Pro does not mention X-Mode in its privacy policy, and did not provide any sort of pop-up when installing or opening the app that explained the transfer of location data in detail. The privacy policy does say Muslim Pro works with Tutela and Quadrant, two other location data companies, however. Motherboard did observe data transfer to Tutela.
  • The Muslim Mingle app provided no pop-up disclosure in Motherboard's tests, nor does the app's privacy policy mention X-Mode at all. Iran Social, one of the apps in the second network of dating apps that used much of the same code, also had the same lack of disclosures around the sale of location data.
  • "The question to ask is whether a reasonable consumer of these services would foresee of these uses and agree to them if explicitly asked. It is safe to say from this context that the reasonable consumer—who is not a tech person—would not have military uses of their data in mind, even if they read the disclosures."
Ed Webb

Saudi Crown Prince Asks: What if a City, But It's a 105-Mile Line

  • Vicious Saudi autocrat Mohamed bin Salman has a new vision for Neom, his plan for a massive, $500 billion, AI-powered, nominally legally independent city-state of the future on the border with Egypt and Jordan. When we last left the crown prince, he had reportedly commissioned 2,300 pages’ worth of proposals from Boston Consulting Group, McKinsey & Co. and Oliver Wyman boasting of possible amenities like holographic schoolteachers, cloud seeding to create rain, flying taxis, glow-in-the-dark beaches, a giant NASA-built artificial moon, and lots of robots: maids, cage fighters, and dinosaurs.
  • Now bin Salman has a bold new idea: One of the cities in Neom is a line. A line roughly 105 miles (170 kilometers) long and a five-minute walk wide, to be exact. No, really, it’s a line. The proposed city is a line that stretches across all of Saudi Arabia. That’s the plan.
  • “With zero cars, zero streets, and zero carbon emissions, you can fulfill all your daily requirements within a five minute walk,” the crown prince continued. “And you can travel from end to end within 20 minutes.” The end-to-end in 20 minutes boast likely refers to some form of mass transit that doesn’t yet exist. That works out to a transit system running at about 317 mph (510 kph). That would be much faster than Japan’s famous Shinkansen train network, which is capped at 200 mph (321 kph). Some Japanese rail companies have tested maglev trains that have gone up to 373 mph (600 kph), though it’s nowhere near ready for primetime.
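  • That 317 mph figure is easy to verify; a quick back-of-the-envelope sketch in Python (hypothetical, using only the numbers quoted above):

        # Implied speed of the Line's claimed 20-minute end-to-end transit
        length_km = 170        # quoted length of the Line
        trip_minutes = 20      # claimed end-to-end travel time

        speed_kph = length_km / (trip_minutes / 60)   # 510.0 km/h
        speed_mph = speed_kph / 1.609344              # ~317 mph

        print(f"{speed_kph:.0f} km/h is about {speed_mph:.0f} mph")
        # Quoted comparisons: Shinkansen capped at 321 km/h (200 mph);
        # Japanese maglev tests have reached ~600 km/h (373 mph).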
  • ...3 more annotations...
  • According to Bloomberg, Saudi officials project the Line will cost around $100-$200 billion of the $500 billion planned to be spent on Neom and will have a population of 1 million with 380,000 jobs by the year 2030. It will have one of the biggest airports in the world for some reason, which seems like a strange addition to a supposedly climate-friendly city.
  • The site also makes numerous hand wavy and vaguely menacing claims, including that “all businesses and communities” will have “over 90%” of their data processed by AI and robots:
  • Don’t pay attention to Saudi war crimes in Yemen, the prince’s brutal crackdowns on dissent, the hit squad that tortured journalist Jamal Khashoggi to death, and the other habitual human rights abuses that allow the Saudi monarchy to remain in power. Also, ignore that obstacles facing Neom include budgetary constraints, the forced eviction of tens of thousands of existing residents such as the Huwaitat tribe, coronavirus and oil shock, investor flight over human rights concerns, and the lingering questions of whether the whole project is a distraction from pressing domestic issues and/or a mirage conjured up by consulting firms pandering to the crown prince’s ego and hungry for lucrative fees. Never mind that there are numerous ways we could ensure the cities people already live in are prepared for climate change rather than blowing billions of dollars on a vanity project.
Ed Webb

The Social Split Between TV and Movie Dystopias - NYTimes.com

  • Dystopian parables like “The Walking Dead,” where zombies rule the earth, are an increasingly fashionable genre of entertainment, but the degree of apocalyptic pessimism is very different depending on the size of the screen. The dividing line between television and movies seems to be class conflict. Television shows posit a hideous future with a silver lining; survivors, good or bad, are more or less equals. Movies like “Divergent,” “Snowpiercer” and “Elysium” foresee societal divisions that last into Armageddon and beyond and that define a new, inevitably Orwellian world order that emerges from the ruins of civilization.
  • Movies project a morose, scary future where man is his own worst enemy, whereas television can’t entirely suppress a smile. There is something positive about the end of the world on shows like “The Walking Dead,” and “Z Nation” on Syfy and “The Last Ship,” on TNT. True, civilization as we know it is gone, but so is social stratification. Survivors don’t group into castes according to birth, race, income or religion. People of all kinds bond with whoever seems friendly, or at least unthreatening.
  • Dystopian movies based on young-adult novels understandably focus on the oppression of young adults, but in “Divergent” and “The Hunger Games,” a despotic elite divides the little people into cliques, only there is no prom in sight. Engels wrote about “contests between exploiting and exploited, ruling and oppressed classes.” He meant in the movies. On TV, all men are equal and equally at peril in the apocalypse.
Ed Webb

DHS built huge database from cellphones, computers seized at border - The Washington Post

  • U.S. government officials are adding data from as many as 10,000 electronic devices each year to a massive database they’ve compiled from cellphones, iPads and computers seized from travelers at the country’s airports, seaports and border crossings, leaders of Customs and Border Protection told congressional staff in a briefing this summer. The rapid expansion of the database and the ability of 2,700 CBP officers to access it without a warrant — two details not previously known about the database — have raised alarms in Congress
  • captured from people not suspected of any crime
  • many Americans may not understand or consent to
  • ...6 more annotations...
  • the revelation that thousands of agents have access to a searchable database without public oversight is a new development in what privacy advocates and some lawmakers warn could be an infringement of Americans’ Fourth Amendment rights against unreasonable searches and seizures.
  • CBP officials declined, however, to answer questions about how many Americans’ phone records are in the database, how many searches have been run or how long the practice has gone on, saying it has made no additional statistics available “due to law enforcement sensitivities and national security implications.”
  • Law enforcement agencies must show probable cause and persuade a judge to approve a search warrant before searching Americans’ phones. But courts have long granted an exception to border authorities, allowing them to search people’s devices without a warrant or suspicion of a crime.
  • The CBP directive gives officers the authority to look and scroll through any traveler’s device using what’s known as a “basic search,” and any traveler who refuses to unlock their phone for this process can have it confiscated for up to five days.
  • CBP officials give travelers a printed document saying that the searches are “mandatory,” but the document does not mention that data can be retained for 15 years and that thousands of officials will have access to it.
  • Officers are also not required to give the document to travelers before the search, meaning that some travelers may not fully understand their rights to refuse the search until after they’ve handed over their phones
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS:
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals’ cognitive, social and survival skills. Many see AI as augmenting human capacities but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • ...18 more annotations...
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS:
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
  • ...5 more annotations...
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and the provision of technology originating in the US to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.
Ed Webb

How ethical is it for advertisers to target your mood? | Emily Bell | Opinion | The Gua...

  • The effectiveness of psychographic targeting is one bet being made by an increasing number of media companies when it comes to interrupting your viewing experience with advertising messages.
  • “Across the board, articles that were in top emotional categories, such as love, sadness and fear, performed significantly better than articles that were not.”
  • ESPN and USA Today are also using psychographic rather than demographic targeting to sell to advertisers, including in ESPN’s case, the decision to not show you advertising at all if your team is losing.
  • ...9 more annotations...
  • Media companies using this technology claim it is now possible for the “mood” of the reader or viewer to be tracked in real time and the content of the advertising to be changed accordingly
  • ads targeted at readers based on their predicted moods rather than their previous behaviour improved the click-through rate by 40%.
  • Given that the average click-through rate (the number of times anyone actually clicks on an ad) is about 0.4%, this number (in gross terms) is probably less impressive than it sounds.
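  • The arithmetic behind that caveat, as a minimal sketch (hypothetical, using only the figures quoted above):

        # A 40% relative lift on a ~0.4% baseline click-through rate
        baseline_ctr = 0.004                # ~0.4% average CTR
        mood_ctr = baseline_ctr * 1.40      # 0.0056, i.e. 0.56%

        extra_clicks = (mood_ctr - baseline_ctr) * 1_000_000
        print(f"CTR: {baseline_ctr:.2%} -> {mood_ctr:.2%}")                 # 0.40% -> 0.56%
        print(f"Extra clicks per million impressions: {extra_clicks:.0f}")  # ~1,600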
  • Cambridge Analytica, the company that misused Facebook data and, according to its own claims, helped Donald Trump win the 2016 election, used psychographic segmentation.
  • For many years “contextual” ads served by not very intelligent algorithms were the bane of digital editors’ lives. Improvements in machine learning should help eradicate the horrible business of showing insurance advertising to readers in the middle of an article about a devastating fire.
  • The words “brand safety” are increasingly used by publishers when demonstrating products such as Project Feels. It is a way publishers can compete on micro-targeting with platforms such as Facebook and YouTube by pointing out that their targeting will not land you next to a conspiracy theory video about the dangers of chemtrails.
  • the exploitation of psychographics is not limited to the responsible and transparent scientists at the NYT. While publishers were showing these shiny new tools to advertisers, Amazon was advertising for a managing editor for its surveillance doorbell, Ring, which contacts your device when someone is at your door. An editor for a doorbell, how is that going to work? In all kinds of perplexing ways according to the ad. It’s “an exciting new opportunity within Ring to manage a team of news editors who deliver breaking crime news alerts to our neighbours. This position is best suited for a candidate with experience and passion for journalism, crime reporting, and people management.” So, instead of thinking about crime articles inspiring fear and advertising doorbells in the middle of them, what if you took the fear that the surveillance-device-cum-doorbell inspires and layered a crime reporting newsroom on top of it to make sure the fear is properly engaging?
  • The media has arguably already played an outsized role in making sure that people are irrationally scared, and now that practice is being strapped to the considerably more powerful engine of an Amazon product.
  • This will not be the last surveillance-based newsroom we see. Almost any product that produces large data feeds can also produce its own “news”. Imagine the Fitbit newsroom or the managing editor for traffic reports from dashboard cams – anything that has a live data feed emanating from it, in the age of the Internet of Things, can produce news.
Ed Webb

What we still haven't learned from Gamergate - Vox

  • Harassment and misogyny had been problems in the community for years before this; the deep resentment and anger toward women that powered Gamergate percolated for years on internet forums. Robert Evans, a journalist who specializes in extremist communities and the host of the Behind the Bastards podcast, described Gamergate to me as partly organic and partly born out of decades-long campaigns by white supremacists and extremists to recruit heavily from online forums. “Part of why Gamergate happened in the first place was because you had these people online preaching to these groups of disaffected young men,” he said. But what Gamergate had that those previous movements didn’t was an organized strategy, made public, cloaking itself as a political movement with a flimsy philosophical stance, its goals and targets amplified by the power of Twitter and a hashtag.
  • The hate campaign, we would later learn, was the moment when our ability to repress toxic communities and write them off as just “trolls” began to crumble. Gamergate ultimately gave way to something deeper, more violent, and more uncontrollable.
  • Police have to learn how to keep the rest of us safe from internet mobs
  • ...20 more annotations...
  • the justice system continues to be slow to understand the link between online harassment and real-life violence
  • In order to increase public safety this decade, it is imperative that police — and everyone else — become more familiar with the kinds of communities that engender toxic, militant systems of harassment, and the online and offline spaces where these communities exist. Increasingly, that means understanding social media’s dark corners, and the types of extremism they can foster.
  • Businesses have to learn when online outrage is manufactured
  • There’s a difference between organic outrage that arises because an employee actually does something outrageous, and invented outrage that’s an excuse to harass someone whom a group has already decided to target for unrelated reasons — for instance, because an employee is a feminist. A responsible business would ideally figure out which type of outrage is occurring before it punished a client or employee who was just doing their job.
  • Social media platforms didn’t learn how to shut down disingenuous conversations over ethics and free speech before they started to tear their cultures apart
  • Dedication to free speech over the appearance of bias is especially important within tech culture, where a commitment to protecting free speech is both a banner and an excuse for large corporations to justify their approach to content moderation — or lack thereof.
  • Reddit’s free-speech-friendly moderation stance resulted in the platform tacitly supporting pro-Gamergate subforums like r/KotakuInAction, which became a major contributor to Reddit’s growing alt-right community. Twitter rolled out a litany of moderation tools in the wake of Gamergate, intended to allow harassment targets to perpetually block, mute, and police their own harassers — without actually effectively making the site unwelcome for the harassers themselves. And YouTube and Facebook, with their algorithmic amplification of hateful and extreme content, made no effort to recognize the violence and misogyny behind pro-Gamergate content, or police them accordingly.
  • All of these platforms are wrestling with problems that seem to have grown beyond their control; it’s arguable that if they had reacted more swiftly to slow the growth of the internet’s most toxic and misogynistic communities back when those communities, particularly Gamergate, were still nascent, they could have prevented headaches in the long run — and set an early standard for how to deal with ever-broadening issues of extremist content online.
  • Violence against women is a predictor of other kinds of violence. We need to acknowledge it.
  • Somehow, the idea that all of that sexism and anti-feminist anger could be recruited, harnessed, and channeled into a broader white supremacist movement failed to generate any real alarm, even well into 2016
  • many of the perpetrators of real-world violence are radicalized online first
  • It remains difficult for many to accept the throughline from online abuse to real-world violence against women, much less the fact that violence against women, online and off, is a predictor of other kinds of real-world violence
  • Politicians and the media must take online “ironic” racism and misogyny seriously
  • Gamergate masked its misogyny in a coating of shrill yelling that had most journalists in 2014 writing off the whole incident as “satirical” and immature “trolling,” and very few correctly predicting that Gamergate’s trolling was the future of politics
  • Gamergate was all about disguising a sincere wish for violence and upheaval by dressing it up in hyperbole and irony in order to confuse outsiders and make it all seem less serious.
  • Gamergate simultaneously masqueraded as legitimate concern about ethics that demanded audiences take it seriously, and as total trolling that demanded audiences dismiss it entirely. Both these claims served to obfuscate its real aim — misogyny, and, increasingly, racist white supremacy
  • The public’s failure to understand and accept that the alt-right’s misogyny, racism, and violent rhetoric is serious goes hand in hand with its failure to understand and accept that such rhetoric is identical to that of President Trump
  • deploying offensive behavior behind a guise of mock outrage, irony, trolling, and outright misrepresentation, in order to mask the sincere extremism behind the message.
  • many members of the media, politicians, and members of the public still struggle to accept that Trump’s rhetoric is having violent consequences, despite all evidence to the contrary.
  • The movement’s insistence that it was about one thing (ethics in journalism) when it was about something else (harassing women) provided a case study for how extremists would proceed to drive ideological fissures through the foundations of democracy: by building a toxic campaign of hate beneath a veneer of denial.
Ed Webb

Anti-piracy tool will harvest and market your emotions - Computerworld Blogs

  • After being awarded a grant, Aralia Systems teamed up with Machine Vision Lab in what seems like a massive invasion of your privacy beyond "in the name of security." Building on existing cinema anti-piracy technology, these companies plan to add the ability to harvest your emotions. This is the part where it seems that filmgoers should be eligible to charge movie theater owners. At the very least, shouldn't it result in a significantly discounted movie ticket?  Machine Vision Lab's Dr Abdul Farooq told PhysOrg, "We plan to build on the capabilities of current technology used in cinemas to detect criminals making pirate copies of films with video cameras. We want to devise instruments that will be capable of collecting data that can be used by cinemas to monitor audience reactions to films and adverts and also to gather data about attention and audience movement. ... It is envisaged that once the technology has been fine tuned it could be used by market researchers in all kinds of settings, including monitoring reactions to shop window displays."  
  • The 3D camera data will "capture the audience as a whole as a texture."
  • the technology will enable companies to cash in on your emotions and sell that personal information as marketing data
  • ...4 more annotations...
  • "Within the cinema industry this tool will feed powerful marketing data that will inform film directors, cinema advertisers and cinemas with useful data about what audiences enjoy and what adverts capture the most attention. By measuring emotion and movement film companies and cinema advertising agencies can learn so much from their audiences that will help to inform creativity and strategy.” 
  • They plan to fine-tune it to monitor our reactions to window displays and probably anywhere else the data can be used for surveillance and marketing.
  • Muslim women have got the right idea. Soon we'll all be wearing privacy tents.
  • In George Orwell's novel 1984, each home has a mandatory "telescreen," a large flat panel, something like a TV, but with the ability for the authorities to observe viewers in order to ensure they are watching all the required propaganda broadcasts and reacting with appropriate emotions. Problem viewers would be brought to the attention of the Thought Police. The telescreen, of course, could not be turned off. It is reassuring to know that our technology has finally caught up with Oceania's.
Ed Webb

Project Vigilant and the government/corporate destruction of privacy - Glenn Greenwald ...

  • it's the re-packaging and transfer of this data to the U.S. Government -- combined with the ability to link it not only to your online identity (IP address), but also your offline identity (name) -- that has made this industry particularly pernicious.  There are serious obstacles that impede the Government's ability to create these electronic dossiers themselves.  It requires both huge resources and expertise.  Various statutes enacted in the mid-1970s -- such as the Privacy Act of 1974 -- impose transparency requirements and other forms of accountability on programs whereby the Government collects data on citizens.  And the fact that much of the data about you ends up in the hands of private corporations can create further obstacles, because the tools which the Government has to compel private companies to turn over this information is limited (the fact that the FBI is sometimes unable to obtain your "transactional" Internet data without a court order -- i.e., whom you email, who emails you, what Google searches you enter, and what websites you visit -- is what has caused the Obama administration to demand that Congress amend the Patriot Act to vest them with the power to obtain all of that with no judicial supervision). But the emergence of a private market that sells this data to the Government (or, in the case of Project Vigilant, is funded in order to hand it over voluntarily) has eliminated those obstacles.
  • a wide array of government agencies have created countless programs to encourage and formally train various private workers (such as cable installers, utilities workers and others who enter people's homes) to act as government informants and report any "suspicious" activity; see one example here.  Meanwhile, TIA has been replicated, and even surpassed, as a result of private industries' willingness to do the snooping work on American citizens which the Government cannot do.
  • this arrangement provides the best of all worlds for the Government and the worst for citizens: The use of private-sector data aggregators allows the government to insulate surveillance and information-handling practices from privacy laws or public scrutiny. That is sometimes an important motivation in outsourced surveillance.  Private companies are free not only from complying with the Privacy Act, but from other checks and balances, such as the Freedom of Information Act.  They are also insulated from oversight by Congress and are not subject to civil-service laws designed to ensure that government policymakers are not influenced by partisan politics. . . .
  • ...4 more annotations...
  • There is a long and unfortunate history of cooperation between government security agencies and powerful corporations to deprive individuals of their privacy and other civil liberties, and any program that institutionalizes close, secretive ties between such organizations raises serious questions about the scope of its activities, now and in the future.
  • Many people are indifferent to the disappearance of privacy -- even with regard to government officials -- because they don't perceive any real value to it.  The ways in which the loss of privacy destroys a society are somewhat abstract and difficult to articulate, though very real.  A society in which people know they are constantly being monitored is one that breeds conformism and submission, and which squashes innovation, deviation, and real dissent. 
  • that's what a Surveillance State does:  it breeds fear of doing anything out of the ordinary by creating a class of meek citizens who know they are being constantly watched.
  • The loss of privacy is entirely one-way.  Government and corporate authorities have destroyed most vestiges of privacy for you, while ensuring that they have more and more for themselves.  The extent to which you're monitored grows in direct proportion to the secrecy with which they operate.  Sir Francis Bacon's now platitudinous observation that "knowledge itself is power" is as true as ever.  That's why this severe and always-growing imbalance is so dangerous, even to those who are otherwise content to have themselves subjected to constant monitoring.
Ed Webb

Google admits cars collected email, passwords - ABC News (Australian Broadcasting Corpo...

  • Google said collecting the additional data was a mistake resulting from a piece of computer code from an experimental project that was accidentally included.
    • Ed Webb
       
      One has to wonder what that project was...
  • Google has acknowledged a fleet of cars, equipped with wireless equipment, inadvertently collected emails and passwords of computer users in various countries, and said it was changing its privacy practices. The company said it wants to delete the data as soon as possible. Google announced the data collection in May, but said at the time the information it collected was typically limited to "fragments" of data because the cars were always moving. Since then, regulators in several of the more than 30 countries where the cars operated have inspected the data. "It's clear from those inspections that while most of the data is fragmentary, in some instances entire emails and URLs were captured, as well as passwords," said Google's vice-president of engineering and research, Alan Eustace, in a post on Google's blog.
Ed Webb

New Mexico's Sad Bet on Space Exploration - The Atlantic

  • New Mexico spaceport is only the latest entry in a triumphant time line of military and aerospace innovation in this southwestern state. Our video narrator speeds through Spanish colonialism and westward expansion to highlight the Manhattan Project’s work in Los Alamos, to the north, and Operation Paperclip, a secret program that recruited German scientists to the United States after World War II. Among them was Wernher von Braun, who brought his V-2 rockets to the state.White Sands Missile Range, a 3,200-square-mile military-testing site in South Central New Mexico’s Tularosa Basin, hosted much of this work. It’s home to the Trinity Site, where the first atomic bomb was detonated, and von Braun’s rocket testing site, too. Spaceport America is positioned adjacent to the Army property, in a tightly protected airspace. That makes rocket-ship testing a lot easier.
  • “It feels exciting, it’s like the future is now,”
  • For now, the spaceport is a futurist tourist attraction, not an operational harbor to the cosmos. The tour buses depart from a former T or C community center twice a day every Saturday. They pass thrift stores, RV parks, and bland but durable-looking structures, defiant underdogs against the mountains. We pass the Elephant Butte Dam, a stunning example of early-20th-century Bureau of Reclamation engineering that made it possible for agriculture to thrive in southern New Mexico; even so, a fellow spaceport tourist notes that the water levels seem far lower than what he recalls from childhood
  • ...11 more annotations...
  • The complex and its buildings vaguely recall a Southwest landmark frequently mistaken for the city of the future: Arcosanti, the architect Paolo Soleri’s 1970 “urban laboratory” nestled in the mountains north of Phoenix. It’s oddly fitting: Soleri imagined a sustainable desert utopia, as well as speculative space “arcologies”—self-sustaining architectural ecologies, delicately rendered as hypothetical asteroid-belt cities or prototype ships
  • The only spacecraft we see on the tour is a model of Virgin Galactic’s SpaceShipTwo, glimpsed from a distance in an otherwise empty hangar. Even the spacecraft isn’t real.
  • the name Spaceport America suggests theatrics. There are several commercial spaceports throughout the United States, some of which sport more activity and tenants. Most of Virgin Galactic’s testing has happened at the Mojave Air and Space Port; Virginia’s Mid-Atlantic Regional Spaceport recently signed on the SpaceX competitor Vector as a customer.Others, like Oklahoma’s Air and Space Port, seem to be even more like ghost towns than this one. But New Mexico’s gambit suggests we are at the spaceport of the nation. It doesn’t feel like the frontier of private space travel so much as a movie set.
  • Many promises for technologies of future urbanism start as desert prototypes
  • New Mexico examples tend to include slightly more dystopian rehearsals: Much of the state’s existing science and defense industries emerged from bringing Manhattan Project scientists to what, at the time, was the middle of nowhere to test nuclear weapons—essentially, to practice ending the world.
  • most of my fellow tourists take the premise of ubiquitous space travel to colonies on Mars as a fait accompli. I’m not sure why people in a desert would fantasize about going somewhere even harder to inhabit
  • Humanity dreams of going to space for many of the same reasons some people went to the desert: because it is there, because they hope to get rich extracting natural resources they find there, and because they suspect mysterious, new terrains can’t be any worse than the irredeemable wreckage of the landscape they’re leaving behind
  • believing in the inevitability of Mars colonies is maybe no less idealistic than believing in the Southwest itself
  • The romance and promise of the American West was built, in part, on federal land grants to private corporations that promised to bring boomtowns to places previously otherwise deemed uninhabitable wastelands. Cities rose and fell with the rerouting of railroads
  • To manifest destiny’s proponents, to doubt the inevitability of technological and social progress via the railroad was tantamount to doubting the will of God. Today, questioning the value of (mostly) privately funded space development likewise feels like doubting human progress
  • I wonder if the future always feels like rehearsal until it arrives, or if it is always rehearsal, only seeming like it has arrived when the run-through loses its novelty. Maybe all of the impatient skeptics will be proven wrong this year, and the future will finally arrive at Spaceport America. Here in the desert, a better future always seems to be right around the corner
Ed Webb

Supreme court cellphone case puts free speech - not just privacy - at risk | Opinion | ...

  • scholars are watching Carpenter’s case closely because it may require the supreme court to address the scope and continuing relevance of the “third-party-records doctrine”, a judicially developed rule that has sometimes been understood to mean that a person surrenders her constitutional privacy interest in information that she turns over to a third party. The government contends that Carpenter lacks a constitutionally protected privacy interest in his location data because his cellphone was continually sharing that data with his cellphone provider.
  • Privacy advocates are rightly alarmed by this argument. Much of the digital technology all of us rely on today requires us to share information passively with third parties. Visiting a website, sending an email, buying a book online – all of these things require sharing sensitive data with internet service providers, merchants, banks and others. If this kind of commonplace and unavoidable information-sharing is sufficient to extinguish constitutional privacy rights, the digital-age fourth amendment will soon be a dead letter.
  • “Awareness that the government may be watching chills associational and expressive freedoms,” Chief Justice John Roberts wrote. Left unchecked, he warned, new forms of surveillance could “alter the relationship between citizen and government in a way that is inimical to democratic society”.
Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E...

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses.
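  • At their simplest, such tools are phrase matchers run over student text; a minimal illustrative sketch (hypothetical - actual vendors' watchlists and logic are proprietary and more elaborate):

        # Sketch of phrase-based flagging as described above: scan text
        # against a watchlist and record any matches for human review.
        WATCHLIST = {"kill myself", "school shooting", "end it all"}  # hypothetical list

        def flag_text(text: str) -> list[str]:
            lowered = text.lower()
            return [phrase for phrase in WATCHLIST if phrase in lowered]

        hits = flag_text("I want to kill myself")
        if hits:
            print("Flagged for review:", hits)  # ['kill myself']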
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?”
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on.” “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10 years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
Ed Webb

We, The Technocrats - blprnt - Medium - 2 views

  • Silicon Valley’s go-to linguistic dodge: the collective we
  • “What kind of a world do we want to live in?”
  • Big tech’s collective we is its ‘all lives matter’, a way to soft-pedal concerns about privacy while refusing to speak directly to dangerous inequalities.
  • One two-letter word cannot possibly hold all of the varied experiences of data, specifically those of the people who are at the most immediate risk: visible minorities, LGBTQ+ people, indigenous communities, the elderly, the disabled, displaced migrants, the incarcerated
  • At least twenty-six states allow the FBI to perform facial recognition searches against their databases of images from driver’s licenses and state IDs, despite the fact that the FBI’s own reports have indicated that facial recognition is less accurate for black people. Black people, already at a higher risk of arrest and incarceration than other Americans, feel these data systems in a much different way than I do
  • last week, the Department of Justice filed a brief with the Supreme Court arguing that sex discrimination protections do not extend to transgender people. If this argument were to prevail, it would immediately put trans women and men at more risk than others from the surveillant data technologies that are becoming more and more common in the workplace. Trans people will be put in distinct danger — a reality that is lost when they are folded neatly into a communal we
  • I looked at the list of speakers for the conference in Brussels to get an idea of the particular we of Cook’s audience, which included Mark Zuckerberg, Google’s CEO Sundar Pichai and the King of Spain. Of the presenters, 57% were men and 83% were white. Only 4 of the 132 people on stage were black.
  • another we that Tim Cook necessarily speaks on the behalf of: privileged men in tech. This we includes Mark and Sundar; it includes 60% of Silicon Valley and 91% of its equity. It is this we who have reaped the most benefit from Big Data and carried the least risk, all while occupying the most time on stage
  • Here’s a more urgent question for us, one that doesn’t ask what we want but instead what they need: How can this new data world be made safer for the people who are facing real risks, right now?
  • “The act of listening has greater ethical potential than speaking” — Julietta Singh
Ed Webb

Hate spreads in Trump's America: "We need to root out white supremacy just like the can... - 0 views

  • the media won’t give the same time of day or coverage to communities who are being targeted by hate violence. They are spending so much time humanizing white nationalists and humanizing white supremacy that in many ways the news media routinely ignore the ubiquitous, everyday hate that communities of color and other diverse communities experience in this country
  • a majority of white Americans feel they are victims of discrimination. What these white Americans have to do is unpack their own anxiety, discuss this rage, and understand that the project of civil rights, human rights, equality under the law are not an assault on their racial identity. I think for some white voters it is probably about what they perceive as waning demographic and economic power
  • The good news is I learned from my travels around the country that the survivors of hate, people who have lost so much, are not only rebuilding but they are coming forward and they are reclaiming their lives. These survivors are working with allies to stop the hatred, building community defense programs and are willing to engage in difficult conversations with people who see the world differently from them. And I think that is something to really admire. Given what’s transpired, survivors of hate have every reason to turn their backs on this country.
  • There are very real consequences from a white supremacist holding the highest office in the land. There are very real consequences to Trump using the bully pulpit to foster hate on the basis of almost every human characteristic, be it race, faith, disability, sexual orientation, national origin, immigration status, gender, or class
  • The policies of the Trump administration cannot be divorced from the rhetoric of the Trump administration. The rhetoric and the policies are both driven by xenophobia, Islamophobia, misogyny and white supremacy. And if the government is going to treat diverse and marginalized communities as subhuman so will everyone else
  • We have to be prepared for much worse from the Trump administration because this is what he has said that he would do all along. It is naïve and foolish of us to think that he is not going to follow through on his promises.
  • It is dystopic. I have met with survivors who have been diagnosed with post-traumatic stress disorder. One survivor, Tanya Gersh, described to me how she rarely says hello to strangers and is not as gregarious and outgoing as she used to be after she was viciously trolled by white supremacists in Whitefish, Montana: there were something like 700 communications, including emails, social media messages and voicemails. She told me she had to have a conversation with her ten-year-old about the Holocaust, and how every Jewish parent struggles with when to have that conversation with their children about anti-Semitism. Hate crimes that target individuals send a community-wide message that its members are not welcome. This undermines feelings of safety and security. It is called "vicarious trauma." The vandalism and arson of houses of worship, the targeting of organizations, student groups and campus communities, and even state-sponsored forms of hate are likewise designed to terrorize whole communities and groups of people. We also know that hate literally kills people by making communities physically and emotionally sick.
  • There are many victims of hate crimes who out of fear remain silent. Hate crimes are very underreported in America. The stats do not capture the scale of the problem.
  • The War on Terror must stop as well because I don’t think you can separate what the United States does abroad from what it does to its own people — especially nonwhites, Muslims, and other marginalized and discriminated-against communities. Justice also involves archiving this moment, documenting what survivors and their communities have experienced
  • there is no one-size-fits-all answer. It should ultimately be determined by the survivors. In my book there are survivors who forgave the aggressors and culprits in open court and elsewhere because they don’t believe that prison is the answer. There are others who felt otherwise. But overwhelmingly the survivors that I met are open to reconciliation so long as there is accountability
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover, the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

Google and Apple Digital Mapping | Data Collection - 0 views

  • There is a sense, in fact, in which mapping is the essence of what Google does. The company likes to talk about services such as Maps and Earth as if they were providing them for fun - a neat, free extra as a reward for using their primary offering, the search box. But a search engine, in some sense, is an attempt to map the world of information - and when you can combine that conceptual world with the geographical one, the commercial opportunities suddenly explode.
  • In a world of GPS-enabled smartphones, you're not just consulting Google's or Apple's data stores when you consult a map: you're adding to them.
  • There’s no technical reason why, perhaps in return for a cheaper phone bill, you mightn’t consent to be shown not the quickest route between two points, but the quickest route that passes at least one Starbucks. If you’re looking at the world through Google glasses, who determines which aspects of "augmented reality" data you see - and did they pay for the privilege? (A toy sketch of this constrained-routing idea appears at the end of this list.)
  • "The map is mapping us," says Martin Dodge, a senior lecturer in human geography at Manchester University. "I'm not paranoid, but I am quite suspicious and cynical about products that appear to be innocent and neutral, but that are actually vacuuming up all kinds of behavioural and attitudinal data."
  • it's hard to interpret the occasional aerial snapshot of your garden as a big issue when the phone in your pocket is assembling a real-time picture of your movements, preferences and behaviour
  • "There's kind of a fine line that you run," said Ed Parsons, Google's chief geospatial technologist, in a session at the Aspen Ideas Festival in Colorado, "between this being really useful, and it being creepy."
  • "Google and Apple are saying that they want control over people's real and imagined space."
  • It can be easy to assume that maps are objective: that the world is out there, and that a good map is one that represents it accurately. But that's not true. Any square kilometre of the planet can be described in an infinite number of ways: in terms of its natural features, its weather, its socio-economic profile, or what you can buy in the shops there. Traditionally, the interests reflected in maps have been those of states and their armies, because they were the ones who did the map-making, and the primary use of many such maps was military. (If you had the better maps, you stood a good chance of winning the battle. The logo of Britain's Ordnance Survey still includes a visual reference to the 18th-century War Department.) Now, the power is shifting. "Every map," the cartography curator Lucy Fellowes once said, "is someone's way of getting you to look at the world his or her way."
  • The question cartographers are always being asked at cocktail parties, says Heyman, is whether there's really any map-making still left to do: we've mapped the whole planet already, haven't we? The question could hardly be more misconceived. We are just beginning to grasp what it means to live in a world in which maps are everywhere - and in which, by using maps, we are mapped ourselves.
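
The constrained-routing scenario in this item ("the quickest route that passes at least one Starbucks") is worth seeing in code, because it shows how little machinery such a commercial nudge would need: the quickest route from a to b through at least one point of interest is just the minimum, over candidate stops s, of dist(a, s) + dist(s, b). The sketch below is a toy under invented assumptions; the road graph, node names and minute-weights are all made up, and real mapping services obviously operate at a very different scale.

```python
# Toy sketch: shortest route that must pass through at least one point of
# interest (e.g. a sponsored cafe). Graph, node names and minute-weights
# are invented for illustration.
import heapq

def dijkstra(graph, source):
    """Standard Dijkstra: shortest distance from `source` to every node."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def quickest_via(graph, a, b, stops):
    """Shortest a->b distance constrained to pass through at least one stop."""
    from_a = dijkstra(graph, a)
    from_b = dijkstra(graph, b)  # undirected graph, so dist(b, s) == dist(s, b)
    return min(from_a[s] + from_b[s] for s in stops)

# A tiny undirected road graph; weights are minutes.
roads = {
    "home":   {"office": 7, "cafe": 4},
    "cafe":   {"home": 4, "office": 5},
    "office": {"home": 7, "cafe": 5},
}

print(dijkstra(roads, "home")["office"])                # 7.0 -- direct, no cafe
print(quickest_via(roads, "home", "office", {"cafe"}))  # 9.0 -- detour via cafe
```

The point is not the algorithm but the asymmetry it enables: the user sees only "the route", while whoever supplies the list of stops decides what counts as quickest.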
Ed Webb

The Web Means the End of Forgetting - NYTimes.com - 1 views

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read. (A minimal sketch of this split-key idea appears at the end of this list.)
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
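
The Vanish item in the list above describes a concrete mechanism: encrypt the data, shatter the encryption key, and let the pieces erode so the ciphertext becomes permanently unreadable. Below is a minimal sketch of that split-key idea only. It is not the actual Vanish implementation, which used Shamir threshold secret sharing with shares scattered across a churning peer-to-peer network; this toy uses all-or-nothing XOR sharing, simulates erosion by deleting a share by hand, and every name and parameter is illustrative.

```python
# Toy sketch of the "shattered key" idea described in the Vanish item above.
# NOT the real Vanish design (which used Shamir threshold shares in a DHT):
# here the key is split into XOR shares, all of which are needed, and
# "erosion" is simulated by deleting one share.
import secrets

def split_key(key, n_shares):
    """Split `key` into n XOR shares; every share is needed to reconstruct."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n_shares - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))  # last = key XOR all others
    return shares + [last]

def reconstruct_key(shares):
    """XOR all shares back together; returns None if any share has eroded."""
    if any(s is None for s in shares):
        return None  # the key, and hence the data, is gone for good
    key = bytes(len(shares[0]))  # all-zero bytes, the XOR identity
    for s in shares:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = secrets.token_bytes(32)            # e.g. a symmetric key protecting the data
shares = split_key(key, n_shares=5)
assert reconstruct_key(shares) == key    # all shares present: still readable

shares[2] = None                         # time passes; one share erodes
assert reconstruct_key(shares) is None   # the document can no longer be read
```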