
Dystopias: Group items tagged China


Ed Webb

Hayabusa2 and the unfolding future of space exploration | Bryan Alexander - 0 views

  • What might this tell us about the future?  Let’s consider Ryugu as a datapoint or story for where space exploration might head next.
  • There isn’t a lot of press coverage beyond Japan (ah, once again I wish I read Japanese), if I go by Google News headlines.  There’s nothing on the CNN.com homepage now, other than typical spatters of dread and celebrity; the closest I can find is a link to a story about Musk’s space tourism project, which a Japanese billionaire will ride.  Nothing on Fox News or MSNBC’s main pages.  BBC News at least has a link halfway down its main page.
  • Hayabusa is a Japanese project, not an American one, and national interest counts for a lot.  No humans were involved, so human interest and story are absent.  Perhaps the whole project looks too science-y for a culture that spins into post-truthiness, contains some serious anti-science and anti-technology strands, or just finds science stories too dry.  Or maybe the American media outlets think Americans just aren’t that into space in particular in 2018.
  • ...13 more annotations...
  • Hayabusa2 reminds us that space exploration is more multinational and more disaggregated than ever.  Besides JAXA there are space programs being built up by China and India, including robot craft, astronauts (taikonauts for China, vyomanauts for India), and space stations.  The Indian Mars Orbiter still circles the fourth planet. The European Space Agency continues to develop satellites and launch rockets, like the JUICE (JUpiter ICy moons Explorer).  Russia is doing some mixture of commercial spaceflight, ISS maintenance, exploration, and geopoliticking.  For these nations space exploration holds out a mixture of prestige, scientific and engineering development, and possible commercial return.
  • Bezos, Musk, and others live out a Robert Heinlein story by building up their own personal space efforts.  This is, among other things, a sign of how far American wealth has grown, and how much of the elite are connected to technical skills (as opposed to inherited wealth).  It’s an effect of plutocracy, as I’ve said before.  Yuri Milner might lead the first interstellar mission with his Breakthrough Starshot plan.
  • Privatization of space seems likely to continue.
  • Uneven development is also likely, as different programs struggle to master different stages of the space path.  China may assemble a space station while Japan bypasses orbital platforms for the moon, private cubesats head into the deep solar system, and private companies keep honing their Earth orbital launch skills.
  • Surely the challenges of getting humans and robots further into space will elicit interesting projects that can be used Earthside.  Think about health breakthroughs needed to keep humans alive in environments scoured by radiation, or AI to guide robots through complex situations.
  • robots continue to be cheap, far easier to operate, capable of enduring awful stresses, and happy to send gorgeous data back our way
  • Japan seems committed to creating a lunar colony.  Musk and Bezos burn with the old science fiction and NASA hunger for shipping humans into the great dark.  The lure of Mars seems to be a powerful one, and a multinational, private versus public race could seize the popular imagination.  Older people may experience a rush of nostalgia for the glorious space race of their youth.
  • This competition could turn malign, of course.  Recall that the 20th century’s space races grew out of warfare, and included many plans for combat and destruction. Nayef Al-Rodhan hints at possible strains in international cooperation: The possible fragmentation of outer space research activities in the post-ISS period would constitute a break-up of an international alliance that has fostered unprecedented cooperation between engineers and scientists from rival geopolitical powers – aside from China. The ISS represents perhaps the pinnacle of post-Cold War cooperation and has allowed for the sharing and streamlining of work methods and differing norms. In a current period of tense relations, it is worrying that the US and Russia may be ending an important phase of cooperation.
  • Space could easily become the ground for geopolitical struggles once more, and possibly a flashpoint as well.  Nationalism, neonationalism, nativism could power such stresses
  • Enough of an off-Earth settlement could lead to further forays, once we bypass the terrible problem of getting off the planet’s surface, and if we can develop new ways to fuel and sustain craft in space.  The desire to connect with that domain might help spur the kind of space elevator which will ease Earth-to-orbit challenges.
  • The 1960s space race saw the emergence of a kind of astronaut cult.  The Soviet space program’s Russian roots included a mystical tradition.  We could see a combination of nostalgia from older folks and can-do optimism from younger people, along with further growth in STEM careers and interest.  Dialectically we should expect the opposite.  A look back at the US-USSR space race shows criticism and opposition ranging from the arts (Gil Scott-Heron’s “Whitey on the Moon”, Jello Biafra’s “Why I’m Glad the Space Shuttle Blew Up”) to opinion polls (in the US NASA only won real support for the year around Apollo 11, apparently).  We can imagine all kinds of political opposition to a 21st century space race, from people repeating the old Earth versus space spending canard to nationalistic statements (“Let Japan land on Deimos.  We have enough to worry about here in Chicago”) to environmental concerns to religious ones.  Concerns about vast wealth and inequality could well target space.
  • How will we respond when, say, twenty space tourists crash into a lunar crater and die, in agony, on YouTube?
  • That’s a lot to hang on one Japanese probe landing two tiny ‘bots on an asteroid in 2018, I know.  But Hayabusa2 is such a signal event that it becomes a fine story to think through.
Ed Webb

Artificial intelligence, immune to fear or favour, is helping to make China's foreign p... - 0 views

  • Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.
  • China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population. The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risk
  • researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision
  • ...10 more annotations...
  • “Human beings can never get rid of the interference of hormones or glucose.”
  • “It would not even consider the moral factors that conflict with strategic goals,”
  • “If one side of the strategic game has artificial intelligence technology, and the other side does not, then this kind of strategic game is almost a one-way, transparent confrontation,” he said. “The actors lacking the assistance of AI will be at an absolute disadvantage in many aspects such as risk judgment, strategy selection, decision making and execution efficiency, and decision-making reliability,” he said.
  • “The entire strategic game structure will be completely out of balance.”
  • “AI can think many steps ahead of a human. It can think deeply in many possible scenarios and come up with the best strategy,”
  • A US Department of State spokesman said the agency had “many technological tools” to help it make decisions. There was, however, no specific information on AI that could be shared with the public,
  • The system, also known as the geopolitical environment simulation and prediction platform, was used to vet “nearly all foreign investment projects” in recent years
  • One challenge to the development of the AI policymaker is data sharing among Chinese government agencies. The foreign ministry, for instance, had been unable to get some data sets it needed because of administrative barriers
  • China is aggressively pushing AI into many sectors. The government is building a nationwide surveillance system capable of identifying any citizen by face within seconds. Research is also under way to introduce AI in nuclear submarines to help commanders make faster, more accurate decisions in battle.
  • “AI can help us get more prepared for unexpected events. It can help find a scientific, rigorous solution within a short time.
Ed Webb

China's New "Social Credit Score" Brings Dystopian Science Fiction to Life - 1 views

  • The Chinese government is taking a controversial step in security, with plans to implement a system that gives and collects financial, social, political, and legal credit ratings of citizens into a social credit score
  • Proponents of the idea are already testing various aspects of the system — gathering digital records of citizens, specifically financial behavior. These will then be used to create a social credit score system, which will determine if a citizen can avail themselves of certain services based on his or her social credit rating
  • it’s going to be like an episode from Black Mirror — the social credit score of citizens will be the basis for access to services ranging from travel and education to loans and insurance coverage.
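The mechanics described above — several component ratings rolled into one score that gates access to services — can be sketched as a toy calculation. Everything in this sketch is a made-up illustration (the weight percentages, the 0–100 rating scale, the access threshold), not the actual Chinese system:

```python
# Illustrative toy only, not the real system: combine hypothetical
# financial, social, political and legal ratings (each 0-100) into one
# score, then gate a service on a threshold. Weights are invented.
WEIGHTS = {"financial": 40, "social": 20, "political": 20, "legal": 20}  # percent

def social_credit_score(ratings: dict) -> float:
    """Weighted average of the component ratings."""
    return sum(WEIGHTS[k] * ratings[k] for k in WEIGHTS) / 100

def can_access(service_threshold: float, ratings: dict) -> bool:
    """Service access is granted only above the threshold."""
    return social_credit_score(ratings) >= service_threshold

citizen = {"financial": 80, "social": 60, "political": 50, "legal": 90}
print(social_credit_score(citizen))   # 72.0
print(can_access(75, citizen))        # False: denied at a threshold of 75
```

The point of the toy is the Black Mirror dynamic the highlight describes: one opaque aggregate number, fed by many domains of life, deciding eligibility for travel, loans or insurance.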
crawforz

China officials 'bought corpses' - 1 views

  • However, the Chinese government has encouraged cremations to save land for farming and development.
Ed Webb

Wearing a mask won't stop facial recognition anymore - The coronavirus is prompting fac... - 0 views

  • expanding this system to a wider group of people would be hard. When a population reaches a certain scale, the system is likely to encounter people with similar eyes. This might be why most commercial facial recognition systems that can identify masked faces seem limited to small-scale applications
  • Many residential communities, especially in areas hit hardest by the virus, have been limiting entry to residents only. Minivision introduced the new algorithm to its facial recognition gate lock systems in communities in Nanjing to quickly recognize residents without the need to take off masks.
  • SenseTime, which announced the rollout of its face mask-busting tech last week, explained that its algorithm is designed to read 240 facial feature key points around the eyes, mouth and nose. It can make a match using just the parts of the face that are visible.
  • ...1 more annotation...
  • New forms of facial recognition can now recognize not just people wearing masks over their mouths, but also people in scarves and even with fake beards. And the technology is already rolling out in China because of one unexpected event: The coronavirus outbreak.
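The matching approach described — comparing only the facial key points that remain visible around the eyes — can be sketched in a few lines. This is a hypothetical illustration, not SenseTime's algorithm: the 240-point count comes from the article, but the choice of which points count as "visible" and the cosine-similarity matching are assumptions:

```python
import numpy as np

# Hypothetical sketch of masked-face matching: each face is encoded as
# 240 (x, y) key points (the count cited for SenseTime's system), and a
# match is scored using only the points not hidden by a mask.
N_POINTS = 240
VISIBLE = slice(0, 96)  # assumption: the first 96 points lie around the eyes

def similarity(face_a: np.ndarray, face_b: np.ndarray, visible=VISIBLE) -> float:
    """Cosine similarity over the visible subset of key points."""
    a = face_a[visible].ravel()
    b = face_b[visible].ravel()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
enrolled = rng.normal(size=(N_POINTS, 2))                       # stored template
same_person = enrolled + rng.normal(scale=0.01, size=(N_POINTS, 2))
stranger = rng.normal(size=(N_POINTS, 2))

# The enrolled person still scores far higher than a stranger even though
# the mouth and nose points are ignored.
assert similarity(enrolled, same_person) > similarity(enrolled, stranger)
```

This also illustrates the scaling limit quoted above: with fewer points in play, distinct people with similar eye regions become harder to tell apart as the enrolled population grows.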
Ed Webb

Iran Says Face Recognition Will ID Women Breaking Hijab Laws | WIRED - 0 views

  • After Iranian lawmakers suggested last year that face recognition should be used to police hijab law, the head of an Iranian government agency that enforces morality law said in a September interview that the technology would be used “to identify inappropriate and unusual movements,” including “failure to observe hijab laws.” Individuals could be identified by checking faces against a national identity database to levy fines and make arrests, he said.
  • Iran’s government has monitored social media to identify opponents of the regime for years, Grothe says, but if government claims about the use of face recognition are true, it’s the first instance she knows of a government using the technology to enforce gender-related dress law.
  • Mahsa Alimardani, who researches freedom of expression in Iran at the University of Oxford, has recently heard reports of women in Iran receiving citations in the mail for hijab law violations despite not having had an interaction with a law enforcement officer. Iran’s government has spent years building a digital surveillance apparatus, Alimardani says. The country’s national identity database, built in 2015, includes biometric data like face scans and is used for national ID cards and to identify people considered dissidents by authorities.
  • ...5 more annotations...
  • Decades ago, Iranian law required women to take off headscarves in line with modernization plans, with police sometimes forcing women to do so. But hijab wearing became compulsory in 1979 when the country became a theocracy.
  • Shajarizadeh and others monitoring the ongoing outcry have noticed that some people involved in the protests are confronted by police days after an alleged incident—including women cited for not wearing a hijab. “Many people haven't been arrested in the streets,” she says. “They were arrested at their homes one or two days later.”
  • Some face recognition in use in Iran today comes from Chinese camera and artificial intelligence company Tiandy. Its dealings in Iran were featured in a December 2021 report from IPVM, a company that tracks the surveillance and security industry.
  • US Department of Commerce placed sanctions on Tiandy, citing its role in the repression of Uyghur Muslims in China and the provision of technology originating in the US to Iran’s Revolutionary Guard. The company previously used components from Intel, but the US chipmaker told NBC last month that it had ceased working with the Chinese company.
  • When Steven Feldstein, a former US State Department surveillance expert, surveyed 179 countries between 2012 and 2020, he found that 77 now use some form of AI-driven surveillance. Face recognition is used in 61 countries, more than any other form of digital surveillance technology, he says.
Ed Webb

BlackBerry's Security Approach Leads to Theories of Secret Deals - NYTimes.com - 0 views

  • R.I.M. officials flatly denied last week that the company had cut deals with certain countries to grant authorities special access to the BlackBerry system. They also said R.I.M. would not compromise the security of its system. At the same time, R.I.M. says it complies with regulatory requirements around the world.
  • law-enforcement agencies in the United States had an advantage over their counterparts overseas because many of the most popular e-mail services — Gmail, Hotmail and Yahoo — are based here, and so are subject to court orders. That means the government can often see messages in unencrypted forms, even if sent from a BlackBerry
  • “R.I.M. could be technically correct that they are not giving up anything,” said Lee Tien, a senior staff lawyer at the Electronic Frontier Foundation, a San Francisco group that promotes civil liberties online. “But their systems are not necessarily more secure because there are other places for authorities to go to.” When China first allowed BlackBerry service in the last few years, sales were restricted to hand-held devices linked to enterprise servers within the country. Many security experts say Chinese security agencies have direct access to all data stored on those servers, which are often owned by government-controlled corporations.
  • ...1 more annotation...
  • a recently changed Indian law that gives the government the power to intercept any “computer communication” without court order to carry out criminal investigations
Ed Webb

AFP: Beijing officials trained in social media: report - 2 views

  • Chinese web users frequently refer to the "50 cent army", rumoured to be a group of freelance propagandists who post pro-Communist Party entries on blogs and websites, posing as ordinary members of the public.
Ed Webb

We've Only Got America A - NYTimes.com - 1 views

  • Cyberpunk has been predicting this stuff for a long time...
Ed Webb

Goodbye petabytes, hello zettabytes | Technology | The Guardian - 0 views

  • Every man, woman and child on the planet using micro-blogging site Twitter for a century. For many people that may sound like a vision of hell, but for watchers of the tremendous growth of digital communications it is a neat way of presenting the sheer scale of the so-called digital universe.
  • Mobile phones have dramatically widened the range of people who can create, store and share digital information. "China now has more visible devices out on the streets being used by individuals than the US does," said McDonald. "We are seeing the democratisation and commoditisation of the use and creation of information."
  • experts estimate that all human language used since the dawn of time would take up about 5,000 petabytes if stored in digital form, which is less than 1% of the digital content created since someone first switched on a computer.
  • ...6 more annotations...
  • A zettabyte, incidentally, is roughly half a million times the entire collections of all the academic libraries in the United States.
  • the growing desire of corporations and governments to know and store ever more data about everyone
  • About 70% of the digital universe is generated by individuals, but its storage is then predominantly the job of corporations. From emails and blogs to mobile phone calls, it is corporations that are storing information on behalf of consumers.
  • actions in the offline world that individuals carry out which result in digital content being created by organisations – from cashpoint transactions which a bank must record to walking along the pavement, which is likely to result in CCTV footage
  • "unstructured"
  • "You talk to a kid these days and they have no idea what a kilobyte is. The speed things progress, we are going to need many words beyond zettabyte."
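The scales quoted in these highlights are easy to sanity-check with decimal SI prefixes (a petabyte is 10^15 bytes, a zettabyte 10^21):

```python
# Back-of-the-envelope check of the storage scales quoted above.
PB = 10**15   # petabyte
ZB = 10**21   # zettabyte

# One zettabyte is a million petabytes.
assert ZB // PB == 1_000_000

# The article's estimate for all human language ever spoken, in digital form:
speech_estimate = 5_000 * PB
print(speech_estimate / ZB)   # 0.005 -> half a percent of one zettabyte
```

Which is consistent with the claim that 5,000 petabytes is under 1% of the digital content created to date, once the digital universe is measured in zettabytes.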
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities, but some predict the opposite: that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • ...18 more annotations...
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements, to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

Could fully automated luxury communism ever work? - 0 views

  • Having achieved a seamless, pervasive commodification of online sociality, Big Tech companies have turned their attention to infrastructure. Attempts by Google, Amazon and Facebook to achieve market leadership, in everything from AI to space exploration, risk a future defined by the battle for corporate monopoly.
  • The technologies are coming. They’re already here in certain instances. It’s the politics that surrounds them. We have alternatives: we can have public ownership of data in the citizen’s interest or it could be used as it is in China where you have a synthesis of corporate and state power
  • the two alternatives that big data allows is an all-consuming surveillance state where you have a deep synthesis of capitalism with authoritarian control, or a reinvigorated welfare state where more and more things are available to everyone for free or very low cost
  • ...4 more annotations...
  • we can’t begin those discussions until we say, as a society, we want to at least try subordinating these potentials to the democratic project, rather than allow capitalism to do what it wants
  • I say in FALC that this isn’t a blueprint for utopia. All I’m saying is that there is a possibility for the end of scarcity, the end of work, a coming together of leisure and labour, physical and mental work. What do we want to do with it? It’s perfectly possible something different could emerge where you have this aggressive form of social value.
  • I think the thing that’s been beaten out of everyone since 2010 is one of the prevailing tenets of neoliberalism: work hard, you can be whatever you want to be, that you’ll get a job, be well paid and enjoy yourself.  In 2010, that disappeared overnight, the rules of the game changed. For the status quo to continue to administer itself,  it had to change common sense. You see this with Jordan Peterson; he’s saying you have to know your place and that’s what will make you happy. To me that’s the only future for conservative thought, how else do you mediate the inequality and unhappiness?
  • I don’t think we can rapidly decarbonise our economies without working people understanding that it’s in their self-interest. A green economy means better quality of life. It means more work. Luxury populism feeds not only into the green transition, but the rollout of Universal Basic Services and even further.
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • ...9 more annotations...
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.