
Dystopias: Group items tagged "list"


Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED - 0 views

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • The US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and the provision of technology originating in the US to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.
Ed Webb

Where is the boundary between your phone and your mind? | US news | The Guardian - 1 views

  • Here’s a thought experiment: where do you end? Not your body, but you, the nebulous identity you think of as your “self”. Does it end at the limits of your physical form? Or does it include your voice, which can now be heard as far as outer space; your personal and behavioral data, which is spread out across the impossibly broad plane known as digital space; and your active online personas, which probably encompass dozens of different social media networks, text message conversations, and email exchanges? This is a question with no clear answer, and, as the smartphone grows ever more essential to our daily lives, that border’s only getting blurrier.
  • our minds have become even more radically extended than ever before
  • one of the essential differences between a smartphone and a piece of paper, which is that our relationship with our phones is reciprocal: we not only put information into the device, we also receive information from it, and, in that sense, it shapes our lives far more actively than would, say, a shopping list. The shopping list isn’t suggesting to us, based on algorithmic responses to our past and current shopping behavior, what we should buy; the phone is
  • American consumers spent five hours per day on their mobile devices, and showed a dizzying 69% year-over-year increase in time spent in apps like Facebook, Twitter, and YouTube. The prevalence of apps represents a concrete example of the movement away from the old notion of accessing the Internet through a browser and the new reality of the connected world and its myriad elements – news, social media, entertainment – being with us all the time
  • “In the 90s and even through the early 2000s, for many people, there was this way of thinking about cyberspace as a space that was somewhere else: it was in your computer. You went to your desktop to get there,” Weigel says. “One of the biggest shifts that’s happened and that will continue to happen is the undoing of a border that we used to perceive between the virtual and the physical world.”
  • While many of us think of the smartphone as a portal for accessing the outside world, the reciprocity of the device, as well as the larger pattern of our behavior online, means the portal goes the other way as well: it’s a means for others to access us
  • Weigel sees the unfettered access to our data, through our smartphone and browser use, enjoyed by what she calls the big five tech companies – Apple, Alphabet (the parent company of Google), Microsoft, Facebook, and Amazon – as a legitimate problem for notions of democracy
  • an unfathomable amount of wealth, power, and direct influence on the consumer in the hands of just a few individuals – individuals who can affect billions of lives with a tweak in the code of their products
  • “This is where the fundamental democracy deficit comes from: you have this incredibly concentrated private power with zero transparency or democratic oversight or accountability, and then they have this unprecedented wealth of data about their users to work with,”
  • the rhetoric around the Internet was that the crowd would prevent the spread of misinformation, filtering it out like a great big hive mind; it would also help to prevent the spread of things like hate speech. Obviously, this has not been the case, and even the relatively successful experiments in this, such as Wikipedia, have a great deal of human governance that allows them to function properly
  • We should know and be aware of how these companies work, how they track our behavior, and how they make recommendations to us based on our behavior and that of others. Essentially, we need to understand the fundamental difference between our behavior IRL and in the digital sphere – a difference that, despite the erosion of boundaries, still stands
  • “Whether we know it or not, the connections that we make on the Internet are being used to cultivate an identity for us – an identity that is then sold to us afterward,” Lynch says. “Google tells you what questions to ask, and then it gives you the answers to those questions.”
  • It isn’t enough that the apps in our phone flatten all of the different categories of relationships we have into one broad group: friends, followers, connections. They go one step further than that. “You’re being told who you are all the time by Facebook and social media because which posts are coming up from your friends are due to an algorithm that is trying to get you to pay more attention to Facebook,” Lynch says. “That’s affecting our identity, because it affects who you think your friends are, because they’re the ones who are popping up higher on your feed.”
Ed Webb

The Biggest Social Media Operation You've Never Heard Of Is Run Out of Cyprus by Russia... - 0 views

  • The vast majority of the company’s content is apolitical—and that is certainly the way the company portrays itself.
  • But here’s the thing: TheSoul Publishing also posts history videos with a strong political tinge. Many of these videos are overtly pro-Russian. One video posted on Feb. 17, 2019, on the channel Smart Banana, which typically posts listicles and history videos, claims that Ukraine is part of Russia
  • the video gives a heavily sanitized version of Josef Stalin’s time in power and, bizarrely, suggests that Alaska was given to the United States by Soviet leader Nikita Khrushchev
  • The video ends by displaying a future vision of Russian expansion that includes most of Europe (notably not Turkey), the Middle East and Asia
  • According to Nox Influencer’s YouTube partner earnings estimates, Bright Side alone earns between $314,010 and $971,950 monthly, and 5-Minute Crafts between $576,640 and $1,780,000. As a privately held company, TheSoul Publishing doesn’t have to disclose its earnings. But all the Cypriot-managed company has to do to earn money from YouTube is meet viewing thresholds and have an AdSense account. AdSense, a Google product, just requires that a company have a bank account, an email address and a phone number. To monetize at this magnitude of revenue, YouTube may have also collected tax information, if TheSoul Publishing is conducting what YouTube defines as “U.S. activities.” It’s also possible that YouTube verified a physical address by sending a PIN mailer.
  • According to publicly available information from the YouTube channels themselves—information provided to YouTube by the people who set up and operate the channels at TheSoul Publishing—as of August 2019, 21 of the 35 channels connected to TheSoul Publishing claim to be based in the U.S. Ten of the channels had no country listed. Zodiac Maniac was registered in the U.K., though TheSoul Publishing emphasizes that all of its operations are run out of Cyprus.
  • Now I’ve Seen Everything was the only channel registered in the Russian Federation. That channel has more than 400 million views, which, according to the analytics tool Nox Influencer, come from a range of countries, including Russia and Eastern European and Central Asian countries—despite being an English-language channel
  • In another video on Smart Banana, which has more than 1 million views, the titular banana speculates on “12 Countries That May Not Survive the Next 20 Years”—including the United States, which the video argues may collapse because of political infighting and diverse political viewpoints
  • Facebook pages are not a direct way to increase profit unless a company is actively marketing merchandise or sales, which TheSoul Publishing does not appear to do. The pages coordinate posting, so one post will often appear on a number of different pages. To a digital advertiser, this makes perfect sense as a way to increase relevance and visibility, but it’s far from obvious what TheSoul Publishing might be advertising. Likewise, there’s no obvious financial benefit to posting original videos within Facebook. The company did not meaningfully clarify its Facebook strategy in response to questions on the subject.
  • Facebook forbids what it describes as “coordinated inauthentic behavior,” as its head of cybersecurity describes in this video. While TheSoul Publishing’s behavior is clearly coordinated, it is unclear that any of its behavior is inauthentic based on information I have reviewed.
  • One thing that TheSoul is definitely doing on Facebook, however, is buying ads—and, at least sometimes, it’s doing so in rubles on issues of national importance, targeting audiences in the United States. The page Bright Side has 44 million followers and currently lists no account administrators located in the United States, but as of Aug. 8, 2019, it had them in Cyprus, Russia, the United Kingdom, El Salvador, India, Ukraine and in locations “Not Available.” It used Facebook to post six political advertisements paid for in the Russian currency.
  • the point here is not that the ad buy is significant in and of itself. The point, rather, is that the company has developed a massive social media following and has a history of at least experimenting with distributing both pro-Russian and paid political content to that following
  • TheSoul’s political ads included the one below. The advertisement pushes viewers to an article about how “wonderful [it is] that Donald Trump earns less in a year than you do in a month.” The advertisement reached men, women, and people of unknown gender over the age of 18, and began running on May 15, 2018. TheSoul Publishing spent less than a dollar on this advertisement, raising the question: why bother advertising at all?
Ed Webb

Doomsday shelters making a comeback - USATODAY.com - 0 views

  • underground shelters, almost-forgotten relics of the Cold War era, are making a comeback
  • a $41 million facility Radius built and installed underground that is suitable for 750 people, McCarthy says. He declined to disclose the client or location of the shelter.
  • Vicino, whose terravivos.com website lists 11 global catastrophes ranging from nuclear war to solar flares to comets, bristles at the notion he's profiting from people's fears. "You don't think of the person who sells you a fire extinguisher as taking advantage of your fear," he says. "The fact that you may never use that fire extinguisher doesn't make it a waste or bad. We're not creating the fear; the fear is already out there. We're creating a solution."
Ed Webb

Tracking The Companies That Track You Online : NPR - 1 views

  • A visit to Dictionary.com resulted in 234 trackers being installed on our test computer, and only 11 of those were installed by Dictionary.com.
  • "Every time I have a thought, I take an action online and Google it. So [online tracking] does build up these incredibly rich dossiers. One question is: Is knowing your name the right definition of anonymity? Right now, that is considered anonymous. If they don't know your name, they're not covered by laws that regulate personally identifiable information. And that's what the Federal Trade Commission is considering — that the definition of personal information should be expanded beyond name and Social Security number. Another thing that [online tracking] raises is sensitive information. So if you're looking at gay websites, then you're labeled as gay in some database somewhere and then you're followed around and sold on some exchange as gay, and you just may not want that to happen. So I feel like there are some categories that we as a society may not want collected: our political affiliation, our diseases, our income levels and many other things."
  • you can go to the websites of all of these tracking companies and ask them not to track you — which is absurd, because you'd have to know who they are. There is a list of all of them on the ad industry's webpage, and you can opt out of all of them at the same time. But one thing to know about tracking is that the opt-out works by putting a tracker on your computer saying "don't track me." So you're opting in to being tracked in order not to be tracked (the sketch below shows the mechanism)
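To make that opt-out mechanism concrete, here is a minimal sketch in Python of how an industry-style opt-out can work. The endpoint path and cookie name are hypothetical, not any real ad network's API; the point is only that the "don't track me" preference is itself a cookie stored on your machine.

    # Hypothetical sketch of an ad network's opt-out endpoint.
    # The opt-out is itself a cookie: clear your cookies and it is gone.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class OptOutHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/optout":
                # "Don't track me" is recorded client-side, like any tracker.
                self.send_response(200)
                self.send_header("Set-Cookie",
                                 "opt_out=1; Max-Age=31536000; Path=/")
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"Opted out, by accepting an opt-out cookie.")
            else:
                # Tracking is skipped only while the browser still
                # presents the opt-out cookie with each request.
                opted_out = "opt_out=1" in (self.headers.get("Cookie") or "")
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.end_headers()
                self.wfile.write(b"not tracked" if opted_out else b"tracked")

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), OptOutHandler).serve_forever()

Visiting /optout once suppresses tracking on later requests, but only for as long as that cookie survives: clearing cookies, switching browsers, or using a new device silently re-enables tracking, which is why the opt-out regime described above is so fragile.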
Ed Webb

S. Korea's dilemma? North's unschooled masses, not nukes | McClatchy - 0 views

  • should the North collapse, millions of undereducated, traumatized and malnourished North Koreans might come flooding across the border
  • a nation of people who would be startled, if not stunned, by the bright lights and hustle of Seoul, and might in turn overwhelm South Korea's ability to absorb them
  • Officials in Seoul acknowledge the myriad issues that a reunification of the Korean Peninsula would present, though the list of problems is more obvious than their solutions
  • After spending much of an hour trying to describe the Italian Renaissance to a classroom of North Korean defectors — "Can you imagine where Italy is?" — and having to stop to explain what she meant by "far," one teacher at the Yeomyung School said reunification would be tough.
  • "The reason that she was tortured was that her children had escaped,"
Ed Webb

We, The Technocrats - blprnt - Medium - 2 views

  • Silicon Valley’s go-to linguistic dodge: the collective we
  • “What kind of a world do we want to live in?”
  • Big tech’s collective we is its ‘all lives matter’, a way to soft-pedal concerns about privacy while refusing to speak directly to dangerous inequalities.
  • One two-letter word cannot possibly hold all of the varied experiences of data, specifically those of the people who are at the most immediate risk: visible minorities, LGBTQ+ people, indigenous communities, the elderly, the disabled, displaced migrants, the incarcerated
  • At least twenty-six states allow the FBI to perform facial recognition searches against their databases of images from drivers licenses and state IDs, despite the fact that the FBI’s own reports have indicated that facial recognition is less accurate for black people. Black people, already at a higher risk of arrest and incarceration than other Americans, feel these data systems in a much different way than I do
  • last week, the Department of Justice filed a brief with the Supreme Court arguing that sex discrimination protections do not extend to transgender people. If this position were to be upheld, it would immediately put trans women and men at more risk than others from the surveillant data technologies that are becoming more and more common in the workplace. Trans people will be put in distinct danger — a reality that is lost when they are folded neatly into a communal we
  • I looked at the list of speakers for the conference in Brussels to get an idea of the particular we of Cook’s audience, which included Mark Zuckerberg, Google’s CEO Sundar Pichai and the King of Spain. Of the presenters, 57% were men and 83% were white. Only 4 of the 132 people on stage were black.
  • another we that Tim Cook necessarily speaks on the behalf of: privileged men in tech. This we includes Mark and Sundar; it includes 60% of Silicon Valley and 91% of its equity. It is this we who have reaped the most benefit from Big Data and carried the least risk, all while occupying the most time on stage
  • Here’s a more urgent question for us, one that doesn’t ask what we want but instead what they need: How can this new data world be made safer for the people who are facing real risks, right now?
  • “The act of listening has greater ethical potential than speaking” — Julietta Singh
Ed Webb

How white male victimhood got monetised | The Independent - 0 views

  • I also learned a metric crap-tonne about how online communities of angry young nerd dudes function. Which is, to put it simply, around principles of pure toxicity. And now that toxicity has bled into wider society.
  • In a twist on the "1,000 true fans" principle worthy of Black Mirror, any alt-right demagogue who can gather 1,000 whining, bitter, angry men with zero self-awareness now has a self-sustaining full time job as an online sh*tposter.
  • Social media has been assailed by one toxic "movement" after another, from Gamergate to Incel terrorism. But the "leaders" of these movements, a ragtag band of demagogues, profiteers and charlatans, seem less interested in political change than in racking up Patreon backers.
  • Making a buck from the alt-right is quite simple. Get a blog or a YouTube channel. Then under the guise of political dialogue or pseudo-science, start spouting hate speech. You'll soon find followers flocking to your banner.
  • Publish a crappy ebook explaining why SJWs Always Lie. Or teach your followers how to “think like a silverback gorilla” (surely an arena where the far right already triumph?) via a pricey seminar. Launch a Kickstarter for a badly drawn comic packed with anti-diversity propaganda. They'll sell by the bucketload to followers eager to virtue-signal their membership in the rank and file of the alt-right
  • the seemingly bottomless reservoirs of white male victimhood
  • nowhere is there a better supply of the credulous than among the angry white men who flock to the far right. Embittered by their own life failures, the alt-right follower is eager to believe they have a genetically superior IQ and are simply the victim of a libtard conspiracy to keep them down
  • We're barely in the foothills of the mountains of madness that the internet and social media are unleashing into our political process. If you think petty demagogues like Jordan Peterson are good at milking cash from the crowd, you ain’t seen nothing yet. Because he was just the beginning – and his ideology of the white male victim is rapidly spiralling into something that even he can no longer control
Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E... - 0 views

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses.
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on. “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10 years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.