
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific American - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still making the same mistake - 0 views

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models. [see the illustrative sketch after this bookmark's annotations]
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
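Below is a toy numerical sketch of the accuracy-fairness trade-off flagged in the annotations above. It is not from the article; the data, group names and thresholds are invented for illustration. The idea: a single decision threshold tuned for overall accuracy, on data where one group is smaller and carries a weaker signal, ends up with a much higher false-negative rate for that group, and moving the threshold to equalize error rates lowers overall accuracy.

```python
# Toy illustration (not from the article): a single score threshold tuned for
# overall accuracy yields unequal error rates when one group is under-represented.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "qualification" scores: group B is smaller and its positives score
# lower, a stand-in for less (and less representative) training data.
n_a, n_b = 9000, 1000

def make_group(n, pos_shift):
    labels = rng.integers(0, 2, n)                    # 1 = qualified, 0 = not
    scores = rng.normal(loc=labels * pos_shift, scale=1.0)
    return labels, scores

y_a, s_a = make_group(n_a, pos_shift=2.0)
y_b, s_b = make_group(n_b, pos_shift=1.0)             # weaker signal for group B

y = np.concatenate([y_a, y_b])
s = np.concatenate([s_a, s_b])
group = np.array(["A"] * n_a + ["B"] * n_b)

def accuracy(threshold, ys, ss):
    return np.mean((ss > threshold).astype(int) == ys)

def false_negative_rate(threshold, ys, ss):
    pos = ys == 1
    return np.mean(ss[pos] <= threshold)

# Pick the threshold that maximizes overall accuracy (dominated by group A).
thresholds = np.linspace(-2, 3, 501)
best = thresholds[np.argmax([accuracy(t, y, s) for t in thresholds])]

for g in ("A", "B"):
    mask = group == g
    print(f"group {g}: accuracy={accuracy(best, y[mask], s[mask]):.3f} "
          f"FNR={false_negative_rate(best, y[mask], s[mask]):.3f}")
# The overall-accuracy-optimal threshold leaves group B with a much higher
# false-negative rate; shifting it to equalize FNRs reduces overall accuracy.
```

Shifting the threshold until the two false-negative rates match visibly lowers the overall accuracy figure; that tension, scaled up to millions of opaque parameters, is the dilemma the author describes.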
Ed Webb

Lack of Transparency over Police Forces' Covert Use of Predictive Policing Software Rai... - 0 views

  • Currently, through the use of blanket exemption clauses – and without any clear legislative oversight – the public has little access to information about the systems that may be being used to surveil them. Companies including Palantir, NSO Group, QuaDream, Dark Matter and Gamma Group are all exempt from disclosure under the precedent set by the police, along with another entity, Dataminr.
  • has helped police in the US monitor and break up Black Lives Matter and Muslim rights activism through social media monitoring. Dataminr software has also been used by the Ministry of Defence, the Foreign, Commonwealth and Development Office, and the Cabinet Office.
  • New research shows that, far from being a ‘neutral’ observational tool, Dataminr produces results that reflect its clients’ politics, business goals and ways of operating.
  • teaching the software to associate certain kinds of images, text and hashtags with a ‘dangerous’ protest results in politically and racially-biased definitions of what dangerous protests look like. This is because, to make these predictions, the system has to decide whether the event resembles other previous events that were labelled ‘dangerous’ – for example, past BLM protests. [see the illustrative sketch after this bookmark's annotations]
  • When the ACLU proved in 2016 that Dataminr’s interventions were contributing to racist policing, the company was banned from granting fusion centres in the US direct access to Twitter’s API. Fusion centres are state-owned and operated facilities that serve as focal points to gather, analyse and redistribute intelligence among state, local, tribal and territorial (SLTT), federal and private sector partners to detect criminal and terrorist activity. However, US law enforcement found a way around these limitations by continuing to receive Dataminr alerts outside of fusion centres.
  • Use of these technologies has, in the past, not been subject to public consultation and, without basic scrutiny at either a public or legislative level, there remains no solid mechanism for independent oversight of their use by law enforcement.
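Below is a minimal sketch of the labelling dynamic described in the annotations above. It is not Dataminr's actual pipeline; the posts, labels and model choice are invented for illustration. A generic text classifier trained on past events in which racial-justice protests were disproportionately labelled 'dangerous' scores a new, peaceful vigil as dangerous simply because it resembles the events labelled that way before.

```python
# Toy sketch (not Dataminr's actual system): label bias in training data
# propagates directly into predictions about new events.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical past events. The labels encode a biased judgement: posts about
# racial-justice protests were marked "dangerous" (1), other crowds were not (0).
past_posts = [
    "blacklivesmatter march downtown tonight",
    "justice rally for police accountability blacklivesmatter",
    "blacklivesmatter vigil at city hall",
    "county fair parade and fireworks downtown",
    "football fans gathering downtown after the match",
    "music festival crowd at the park",
]
past_labels = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer().fit(past_posts)
model = LogisticRegression()
model.fit(vec.transform(past_posts), past_labels)

# A new, entirely peaceful vigil scores as "dangerous" because it shares
# hashtags with the past events that were labelled dangerous.
new_posts = [
    "peaceful blacklivesmatter vigil with candles",
    "rowdy football fans blocking traffic downtown",
]
for post, p in zip(new_posts, model.predict_proba(vec.transform(new_posts))[:, 1]):
    print(f"danger score {p:.2f}  {post}")
```

Any off-the-shelf classifier would behave the same way here; the bias comes from the historical labels, not from the choice of model.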
Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED - 0 views

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and the provision of technology originating in the US to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.
Ed Webb

OpenAI's bot wrote my obituary. It was filled with bizarre lies. - 0 views

  • What I find so creepy about OpenAI’s bots is not that they seem to exhibit creativity; computers have been doing creative tasks such as generating original proofs in Euclidean geometry since the 1950s. It’s that I grew up with the idea of a computer as an automaton bound by its nature to follow its instructions precisely; barring a malfunction, it does exactly what its operator – and its program – tell it to do. On some level, this is still true; the bot is following its program and the instructions of its operator. But the way the program interprets the operator’s instructions is not the way the operator thinks. Computer programs are optimized not to solve problems, but instead to convince their operators that they have solved those problems. It was written on the package of the Turing test—it’s a game of imitation, of deception. For the first time, we’re forced to confront the consequences of that deception.
  • a computer program that would be sociopathic if it were alive
  • Even when it’s not supposed to, even when it has a way out, even when the truth is known to the computer and it’s easier to spit it out rather than fabricate something—the computer still lies
  • something that’s been so optimized for deception that it can’t do anything but deceive its operator
Ed Webb

TSA is adding face recognition at big airports. Here's how to opt out. - The Washington Post - 0 views

  • Any time data gets collected somewhere, it could also be stolen — and you only get one face. The TSA says all its databases are encrypted to reduce hacking risk. But in 2019, the Department of Homeland Security disclosed that photos of travelers were taken in a data breach, accessed through the network of one of its subcontractors.
  • “What we often see with these biometric programs is they are only optional in the introductory phases — and over time we see them becoming standardized and nationalized and eventually compulsory,” said Cahn. “There is no place more coercive to ask people for their consent than an airport.”
  • Those who have the privilege of not having to worry that their face will be misread can zip right through — whereas people who don’t consent to it pay a tax with their time. At that point, how voluntary is it, really?
Ed Webb

The trust gap: how and why news on digital platforms is viewed more sceptically versus news in general - 0 views

  • Levels of trust in news on social media, search engines, and messaging apps are consistently lower than audience trust in information in the news media more generally.
  • Many of the same people who lack trust in news encountered via digital media companies – who tend to be older, less educated, and less politically interested – also express less trust in the news regardless of whether found on platforms or through more traditional offline modes.
  • Many of the most common reasons people say they use platforms have little to do with news.
  • News about politics is viewed as particularly suspect and platforms are seen by many as contentious places for political conversation – at least for those most interested in politics. Rates of trust in news in general are comparatively higher than trust in news when it pertains to coverage of political affairs.
  • Negative perceptions about journalism are widespread and social media is one of the most often-cited places people say they see or hear criticism of news and journalism
  • Despite positive feelings towards most platforms, large majorities in all four countries agree that false and misleading information, harassment, and platforms using data irresponsibly are ‘big problems’ in their country for many platforms
Ed Webb

DHS built huge database from cellphones, computers seized at border - The Washington Post - 0 views

  • U.S. government officials are adding data from as many as 10,000 electronic devices each year to a massive database they’ve compiled from cellphones, iPads and computers seized from travelers at the country’s airports, seaports and border crossings, leaders of Customs and Border Protection told congressional staff in a briefing this summer. The rapid expansion of the database and the ability of 2,700 CBP officers to access it without a warrant — two details not previously known about the database — have raised alarms in Congress.
  • captured from people not suspected of any crime
  • many Americans may not understand or consent to
  • the revelation that thousands of agents have access to a searchable database without public oversight is a new development in what privacy advocates and some lawmakers warn could be an infringement of Americans’ Fourth Amendment rights against unreasonable searches and seizures.
  • CBP officials declined, however, to answer questions about how many Americans’ phone records are in the database, how many searches have been run or how long the practice has gone on, saying it has made no additional statistics available “due to law enforcement sensitivities and national security implications.”
  • Law enforcement agencies must show probable cause and persuade a judge to approve a search warrant before searching Americans’ phones. But courts have long granted an exception to border authorities, allowing them to search people’s devices without a warrant or suspicion of a crime.
  • The CBP directive gives officers the authority to look and scroll through any traveler’s device using what’s known as a “basic search,” and any traveler who refuses to unlock their phone for this process can have it confiscated for up to five days.
  • CBP officials give travelers a printed document saying that the searches are “mandatory,” but the document does not mention that data can be retained for 15 years and that thousands of officials will have access to it.
  • Officers are also not required to give the document to travelers before the search, meaning that some travelers may not fully understand their rights to refuse the search until after they’ve handed over their phones