
Dystopias: Group items tagged "assessment"


Ed Webb

A Rubric for Evaluating Student Blogs - ProfHacker - The Chronicle of Higher Education - 0 views

  • Rating Characteristics:
    - 4 Exceptional. The blog post is focused and coherently integrates examples with explanations or analysis. The post demonstrates awareness of its own limitations or implications, and it considers multiple perspectives when appropriate. The entry reflects in-depth engagement with the topic.
    - 3 Satisfactory. The blog post is reasonably focused, and explanations or analysis are mostly based on examples or other evidence. Fewer connections are made between ideas, and though new insights are offered, they are not fully developed. The post reflects moderate engagement with the topic.
    - 2 Underdeveloped. The blog post is mostly description or summary, without consideration of alternative perspectives, and few connections are made between ideas. The post reflects passing engagement with the topic.
    - 1 Limited. The blog post is unfocused, or simply rehashes previous comments, and displays no evidence of student engagement with the topic.
    - 0 No Credit. The blog post is missing or consists of one or two disconnected sentences.
  •  
    Does this strike you as a reasonable rubric for assessing blog posts?
Ed Webb

Artificial meat? Food for thought by 2050 | Environment | The Guardian - 0 views

  • even with new technologies such as genetic modification and nanotechnology, hundreds of millions of people may still go hungry owing to a combination of climate change, water shortages and increasing food consumption.
  • Many low-tech ways to effectively increase yields are being considered, such as reducing the 30-40% food waste that occurs in both rich and poor countries. If developing countries had better storage facilities and supermarkets and consumers in rich countries bought only what they needed, there would be far more food available.
  • Two "wild cards" could transform global meat and milk production. "One is artificial meat, which is made in a giant vat, and the other is nanotechnology, which is expected to become more important as a vehicle for delivering medication to livestock."
  • ...4 more annotations...
  • One of the gloomiest assessments comes from a team of British and South African economists who say that a vast effort must be made in agricultural research to create a new green revolution, but that seven multinational corporations, led by Monsanto, now dominate the global technology field.
  • a threat to the global commons in agricultural technology on which the green revolution has depended
  • Up to 70% of the energy needed to grow and supply food at present is fossil-fuel based which in turn contributes to climate change
  • The 21 papers published today in a special open-access edition of the Royal Society's Philosophical Transactions (royalsociety.org) are part of a UK government Foresight study on the future of the global food industry. The final report will be published later this year in advance of the UN climate talks in Cancun, Mexico.
Ed Webb

On the Web's Cutting Edge, Anonymity in Name Only - WSJ.com - 0 views

  • A Wall Street Journal investigation into online privacy has found that the analytical skill of data handlers like [x+1] is transforming the Internet into a place where people are becoming anonymous in name only. The findings offer an early glimpse of a new, personalized Internet where sites have the ability to adjust many things—look, content, prices—based on the kind of person they think you are.
  • The technology raises the prospect that different visitors to a website could see different prices as well. Price discrimination is generally legal, so long as it's not based on race, gender or geography, which can be deemed "redlining."
  • marketplaces for online data sprang up
  • ...3 more annotations...
  • In a fifth of a second, [x+1] says it can access and analyze thousands of pieces of information about a single user
  • When he saw the 3,748 lines of code that passed in an instant between his computer and Capital One's website, Mr. Burney said: "There's a shocking amount of information there."
  • [x+1]'s assessment of Mr. Burney's location and Nielsen demographic segment are specific enough that it comes extremely close to identifying him as an individual—that is, "de-anonymizing" him—according to Peter Eckersley, staff scientist at the Electronic Frontier Foundation, a privacy-advocacy group.
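[x+1]'s profiling methods are proprietary, but the arithmetic behind de-anonymization of this kind is simple: each coarse attribute an observer learns intersects away most of the population. A minimal sketch, using entirely hypothetical attributes and a made-up population (nothing here reflects [x+1]'s or Nielsen's actual data or methods):

```python
import itertools

# Toy illustration: each coarse attribute a tracker observes -- ZIP code,
# age band, demographic segment -- filters the population, and the
# intersection of just a few such filters can shrink to a single person.
zips = [f"802{n:02d}" for n in range(10)]
age_bands = ["18-29", "30-39", "40-49", "50-64", "65+"]
segments = ["Young Digerati", "Suburban Mix", "Kids & Cul-de-Sacs",
            "Money & Brains"]

# 10 * 5 * 4 = 200 distinct profiles, one simulated person per profile.
population = [{"zip": z, "age_band": a, "segment": s}
              for z, a, s in itertools.product(zips, age_bands, segments)]

def anonymity_set(people, **observed):
    """Everyone still consistent with the attributes observed so far."""
    return [p for p in people
            if all(p[k] == v for k, v in observed.items())]

print(len(anonymity_set(population)))                  # 200 candidates
print(len(anonymity_set(population, zip="80203")))     # 20 candidates
print(len(anonymity_set(population, zip="80203",
                        age_band="30-39")))            # 4 candidates
print(len(anonymity_set(population, zip="80203",
                        age_band="30-39",
                        segment="Young Digerati")))    # 1 candidate
```

Three coarse observations shrink the matching set from 200 to 1, which is why Eckersley describes such profiles as coming "extremely close" to identifying an individual.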
Ed Webb

News: Cheating and the Generational Divide - Inside Higher Ed - 0 views

  • such attitudes among students can develop from the notion that all of education can be distilled into performance on a test -- which today's college students have absorbed from years of schooling under No Child Left Behind -- and not that education is a process in which one grapples with difficult material.
    • Ed Webb
       
      Exactly so. If the focus of education is moved away from testing regurgitated factoids and toward building genuine skills of critical analysis and effective communication, the apparent 'gap' in understanding of what cheating is will surely go away.
  •  
    I'd love to know what you Dystopians think about this.
  •  
    Institutional education puts far too much pressure on students to do well on tests. This, I believe, forces students to cheat, because if you do not perform well in this one form of evaluation you are clearly not educated well enough, not trying hard enough, or just plain dumb. I doubt there are many instances outside of institutional education where you would need to memorize a number of facts for a short period of time with your very future at stake. To me the only cheating is plagiarism. If you're taking a standardized test and you don't know the answer to question 60 but the student next to you does, how would it hurt anyone to share that answer? You're learning the answer to question 60. It's the same knowledge you'll learn when you get the test back and realize the answer to 60 was A, not B. Again, though, when will this scenario occur outside of schooling?
Ed Webb

Our Digitally Undying Memories - The Chronicle Review - The Chronicle of Higher Education - 0 views

  • as Viktor Mayer-Schönberger argues convincingly in his book Delete: The Virtue of Forgetting in the Digital Age (Princeton University Press, 2009), the costs of such powerful collective memory are often higher than we assume.
  • "Total recall" renders context, time, and distance irrelevant. Something that happened 40 years ago—whether youthful or scholarly indiscretion—still matters and can come back to harm us as if it had happened yesterday.
  • an important "third wave" of work about the digital environment. In the late 1990s and early 2000s, we saw books like Nicholas Negroponte's Being Digital (Knopf, 1995) and Howard Rheingold's The Virtual Community: Homesteading on the Electronic Frontier (Addison-Wesley, 1993) and Smart Mobs: The Next Social Revolution (Perseus, 2002), which idealistically described the transformative powers of digital networks. Then we saw shallow blowback, exemplified by Susan Jacoby's The Age of American Unreason (Pantheon, 2008).
  • ...14 more annotations...
  • For most of human history, forgetting was the default and remembering the challenge.
  • Chants, songs, monasteries, books, libraries, and even universities were established primarily to overcome our propensity to forget over time. The physical and economic limitations of all of those technologies and institutions served us well. Each acted not just as memory aids but also as filters or editors. They helped us remember much by helping us discard even more.
    • Ed Webb
       
      Excellent point, well made.
  • Just because we have the vessels, we fill them.
  • Even 10 years ago, we did not consider that words written for a tiny audience could reach beyond, perhaps to someone unforgiving, uninitiated in a community, or just plain unkind.
  • Remembering to forget, as Elvis argued, is also essential to getting over heartbreak. And, as Jorge Luis Borges wrote in his 1942 (yep, I Googled it to find the date) story "Funes el memorioso," it is just as important to the act of thinking. Funes, the young man in the story afflicted with an inability to forget anything, can't make sense of it. He can't think abstractly. He can't judge facts by relative weight or seriousness. He is lost in the details. Painfully, Funes cannot rest.
  • Our use of the proliferating data and rudimentary filters in our lives renders us incapable of judging, discriminating, or engaging in deductive reasoning. And inductive reasoning, which one could argue is entering a golden age with the rise of huge databases and the processing power needed to detect patterns and anomalies, is beyond the reach of lay users of the grand collective database called the Internet.
  • the default habits of our species: to record, retain, and release as much information as possible
  • Perhaps we just have to learn to manage wisely how we digest, discuss, and publicly assess the huge archive we are building. We must engender cultural habits that ensure perspective, calm deliberation, and wisdom. That's hard work.
  • we choose the nature of technologies. They don't choose us. We just happen to choose unwisely with some frequency
  • surveillance as the chief function of electronic government
  • critical information studies
  • Siva Vaidhyanathan is an associate professor of media studies and law at the University of Virginia. His next book, The Googlization of Everything, is forthcoming from the University of California Press.
  • Nietzsche's _On the Use and Disadvantage of History for Life_
  • Google compresses, if not eliminates, temporal context. This is likely only to exacerbate the existing problem in politics of taking one's statements out of context. A politician whose views on a subject have evolved quite logically over decades in light of changing knowledge and/or circumstances is held up in attack ads as a flip-flopper because consecutive Google entries have him/her saying two opposite things about the same subject -- and never mind that between the two statements, the Berlin Wall may have fallen or the economy crashed harder than at any other time since 1929.
Ed Webb

The Web Means the End of Forgetting - NYTimes.com - 1 views

  • for a great many people, the permanent memory bank of the Web increasingly means there are no second chances — no opportunities to escape a scarlet letter in your digital past. Now the worst thing you’ve done is often the first thing everyone knows about you.
  • a collective identity crisis. For most of human history, the idea of reinventing yourself or freely shaping your identity — of presenting different selves in different contexts (at home, at work, at play) — was hard to fathom, because people’s identities were fixed by their roles in a rigid social hierarchy. With little geographic or social mobility, you were defined not as an individual but by your village, your class, your job or your guild. But that started to change in the late Middle Ages and the Renaissance, with a growing individualism that came to redefine human identity. As people perceived themselves increasingly as individuals, their status became a function not of inherited categories but of their own efforts and achievements. This new conception of malleable and fluid identity found its fullest and purest expression in the American ideal of the self-made man, a term popularized by Henry Clay in 1832.
  • the dawning of the Internet age promised to resurrect the ideal of what the psychiatrist Robert Jay Lifton has called the “protean self.” If you couldn’t flee to Texas, you could always seek out a new chat room and create a new screen name. For some technology enthusiasts, the Web was supposed to be the second flowering of the open frontier, and the ability to segment our identities with an endless supply of pseudonyms, avatars and categories of friendship was supposed to let people present different sides of their personalities in different contexts. What seemed within our grasp was a power that only Proteus possessed: namely, perfect control over our shifting identities. But the hope that we could carefully control how others view us in different contexts has proved to be another myth. As social-networking sites expanded, it was no longer quite so easy to have segmented identities: now that so many people use a single platform to post constant status updates and photos about their private and public activities, the idea of a home self, a work self, a family self and a high-school-friends self has become increasingly untenable. In fact, the attempt to maintain different selves often arouses suspicion.
  • ...20 more annotations...
  • All around the world, political leaders, scholars and citizens are searching for responses to the challenge of preserving control of our identities in a digital world that never forgets. Are the most promising solutions going to be technological? Legislative? Judicial? Ethical? A result of shifting social norms and cultural expectations? Or some mix of the above?
  • These approaches share the common goal of reconstructing a form of control over our identities: the ability to reinvent ourselves, to escape our pasts and to improve the selves that we present to the world.
  • many technological theorists assumed that self-governing communities could ensure, through the self-correcting wisdom of the crowd, that all participants enjoyed the online identities they deserved. Wikipedia is one embodiment of the faith that the wisdom of the crowd can correct most mistakes — that a Wikipedia entry for a small-town mayor, for example, will reflect the reputation he deserves. And if the crowd fails — perhaps by turning into a digital mob — Wikipedia offers other forms of redress
  • In practice, however, self-governing communities like Wikipedia — or algorithmically self-correcting systems like Google — often leave people feeling misrepresented and burned. Those who think that their online reputations have been unfairly tarnished by an isolated incident or two now have a practical option: consulting a firm like ReputationDefender, which promises to clean up your online image. ReputationDefender was founded by Michael Fertik, a Harvard Law School graduate who was troubled by the idea of young people being forever tainted online by their youthful indiscretions. “I was seeing articles about the ‘Lord of the Flies’ behavior that all of us engage in at that age,” he told me, “and it felt un-American that when the conduct was online, it could have permanent effects on the speaker and the victim. The right to new beginnings and the right to self-definition have always been among the most beautiful American ideals.”
  • In the Web 3.0 world, Fertik predicts, people will be rated, assessed and scored based not on their creditworthiness but on their trustworthiness as good parents, good dates, good employees, good baby sitters or good insurance risks.
  • “Our customers include parents whose kids have talked about them on the Internet — ‘Mom didn’t get the raise’; ‘Dad got fired’; ‘Mom and Dad are fighting a lot, and I’m worried they’ll get a divorce.’ ”
  • as facial-recognition technology becomes more widespread and sophisticated, it will almost certainly challenge our expectation of anonymity in public
  • Ohm says he worries that employers would be able to use social-network-aggregator services to identify people’s book and movie preferences and even Internet-search terms, and then fire or refuse to hire them on that basis. A handful of states — including New York, California, Colorado and North Dakota — broadly prohibit employers from discriminating against employees for legal off-duty conduct like smoking. Ohm suggests that these laws could be extended to prevent certain categories of employers from refusing to hire people based on Facebook pictures, status updates and other legal but embarrassing personal information. (In practice, these laws might be hard to enforce, since employers might not disclose the real reason for their hiring decisions, so employers, like credit-reporting agents, might also be required by law to disclose to job candidates the negative information in their digital files.)
  • research group’s preliminary results suggest that if rumors spread about something good you did 10 years ago, like winning a prize, they will be discounted; but if rumors spread about something bad that you did 10 years ago, like driving drunk, that information has staying power
  • many people aren’t worried about false information posted by others — they’re worried about true information they’ve posted about themselves when it is taken out of context or given undue weight. And defamation law doesn’t apply to true information or statements of opinion. Some legal scholars want to expand the ability to sue over true but embarrassing violations of privacy — although it appears to be a quixotic goal.
  • Researchers at the University of Washington, for example, are developing a technology called Vanish that makes electronic data “self-destruct” after a specified period of time. Instead of relying on Google, Facebook or Hotmail to delete the data that is stored “in the cloud” — in other words, on their distributed servers — Vanish encrypts the data and then “shatters” the encryption key. To read the data, your computer has to put the pieces of the key back together, but they “erode” or “rust” as time passes, and after a certain point the document can no longer be read.
  • Plenty of anecdotal evidence suggests that young people, having been burned by Facebook (and frustrated by its privacy policy, which at more than 5,000 words is longer than the U.S. Constitution), are savvier than older users about cleaning up their tagged photos and being careful about what they post.
  • norms are already developing to recreate off-the-record spaces in public, with no photos, Twitter posts or blogging allowed. Milk and Honey, an exclusive bar on Manhattan’s Lower East Side, requires potential members to sign an agreement promising not to blog about the bar’s goings on or to post photos on social-networking sites, and other bars and nightclubs are adopting similar policies. I’ve been at dinners recently where someone has requested, in all seriousness, “Please don’t tweet this” — a custom that is likely to spread.
  • There’s already a sharp rise in lawsuits known as Twittergation — that is, suits to force Web sites to remove slanderous or false posts.
  • strategies of “soft paternalism” that might nudge people to hesitate before posting, say, drunken photos from Cancún. “We could easily think about a system, when you are uploading certain photos, that immediately detects how sensitive the photo will be.”
  • It’s sobering, now that we live in a world misleadingly called a “global village,” to think about privacy in actual, small villages long ago. In the villages described in the Babylonian Talmud, for example, any kind of gossip or tale-bearing about other people — oral or written, true or false, friendly or mean — was considered a terrible sin because small communities have long memories and every word spoken about other people was thought to ascend to the heavenly cloud. (The digital cloud has made this metaphor literal.) But the Talmudic villages were, in fact, far more humane and forgiving than our brutal global village, where much of the content on the Internet would meet the Talmudic definition of gossip: although the Talmudic sages believed that God reads our thoughts and records them in the book of life, they also believed that God erases the book for those who atone for their sins by asking forgiveness of those they have wronged. In the Talmud, people have an obligation not to remind others of their past misdeeds, on the assumption they may have atoned and grown spiritually from their mistakes. “If a man was a repentant [sinner],” the Talmud says, “one must not say to him, ‘Remember your former deeds.’ ” Unlike God, however, the digital cloud rarely wipes our slates clean, and the keepers of the cloud today are sometimes less forgiving than their all-powerful divine predecessor.
  • On the Internet, it turns out, we’re not entitled to demand any particular respect at all, and if others don’t have the empathy necessary to forgive our missteps, or the attention spans necessary to judge us in context, there’s nothing we can do about it.
  • Gosling is optimistic about the implications of his study for the possibility of digital forgiveness. He acknowledged that social technologies are forcing us to merge identities that used to be separate — we can no longer have segmented selves like “a home or family self, a friend self, a leisure self, a work self.” But although he told Facebook, “I have to find a way to reconcile my professor self with my having-a-few-drinks self,” he also suggested that as all of us have to merge our public and private identities, photos showing us having a few drinks on Facebook will no longer seem so scandalous. “You see your accountant going out on weekends and attending clown conventions, that no longer makes you think that he’s not a good accountant. We’re coming to terms and reconciling with that merging of identities.”
  • a humane society values privacy, because it allows people to cultivate different aspects of their personalities in different contexts; and at the moment, the enforced merging of identities that used to be separate is leaving many casualties in its wake.
  • we need to learn new forms of empathy, new ways of defining ourselves without reference to what others say about us and new ways of forgiving one another for the digital trails that will follow us forever
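The Vanish system annotated above is described as encrypting data and "shattering" the key so that the data becomes unreadable once enough key pieces erode. The sketch below is a toy version of that idea, not the University of Washington implementation: real Vanish scatters Shamir threshold shares across a peer-to-peer DHT, where natural node churn erodes them, whereas this sketch uses simple n-of-n XOR key splitting and a stand-in XOR cipher to show the core mechanism that losing key shares destroys access to the ciphertext.

```python
import os
from functools import reduce

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split key into n XOR shares; all n are needed to reconstruct."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    # Final share = key XOR all the random shares.
    last = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                  shares, key)
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    """XOR all shares together; yields the key only if none are missing."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """One-time-pad-style stand-in cipher, purely for illustration."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet at noon"
key = os.urandom(len(message))
ciphertext = xor_encrypt(message, key)

shares = split_key(key, 5)   # in Vanish, these would live in a DHT
assert xor_encrypt(ciphertext, combine(shares)) == message  # still readable

shares.pop()                 # time passes; one share "erodes"
# The surviving shares no longer reconstruct the key, so the message
# has effectively self-destructed (true except with negligible probability).
assert xor_encrypt(ciphertext, combine(shares)) != message
```

The design point the article highlights is that no one has to remember to delete anything: expiry is a side effect of the key shares decaying on their own.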
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS:
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities, but some predict the opposite: that people's deepening dependence on machine-driven networks will erode their ability to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of life due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • ...18 more annotations...
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS:
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements, joining forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at 'humanness' and the common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans 'race with the robots'. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame... - 0 views

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • ...9 more annotations...
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.