
Dystopias: Group items tagged "education technology"


Ed Webb

Clear backpacks, monitored emails: life for US students under constant surveillance | E...

  • This level of surveillance is “not too over-the-top”, Ingrid said, and she feels her classmates are generally “accepting” of it.
  • One leading student privacy expert estimated that as many as a third of America’s roughly 15,000 school districts may already be using technology that monitors students’ emails and documents for phrases that might flag suicidal thoughts, plans for a school shooting, or a range of other offenses. [A naive sketch of this kind of phrase flagging appears after this list.]
  • When Dapier talks with other teen librarians about the issue of school surveillance, “we’re very alarmed,” he said. “It sort of trains the next generation that [surveillance] is normal, that it’s not an issue. What is the next generation’s Mark Zuckerberg going to think is normal?”
  • Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing
  • “They’re so unclear that I’ve just decided to cut off the research completely, to not do any of it.”
  • As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.
  • “They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved. Any time his kids complete school work through a digital platform, they are generating huge amounts of very personal, and potentially very valuable, data. The platforms know what time his kids do their homework, and whether it’s done early or at the last minute. They know what kinds of mistakes his kids make on math problems.
  • Felix, now 12, said he is frustrated that the school “doesn’t really [educate] students on what is OK and what is not OK. They don’t make it clear when they are tracking you, or not, or what platforms they track you on. “They don’t really give you a list of things not to do,” he said. “Once you’re in trouble, they act like you knew.”
  • “It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”
  • Many parents also said that they wanted more transparency and more parental control over surveillance. A few years ago, Ben, a tech professional from Maryland, got a call from his son’s principal to set up an urgent meeting. His son, then about nine or 10 years old, had opened up a school Google document and typed “I want to kill myself.” It was not until he and his son were in a serious meeting with school officials that Ben found out what happened: his son had typed the words on purpose, curious about what would happen. “The smile on his face gave away that he was testing boundaries, and not considering harming himself,” Ben said. (He asked that his last name and his son’s school district not be published, to preserve his son’s privacy.) The incident was resolved easily, he said, in part because Ben’s family already had close relationships with the school administrators.
  • there is still no independent evaluation of whether this kind of surveillance technology actually works to reduce violence and suicide.
  • Certain groups of students could easily be targeted by the monitoring more intensely than others, she said. Would Muslim students face additional surveillance? What about black students? Her daughter, who is 11, loves hip-hop music. “Maybe some of that language could be misconstrued, by the wrong ears or the wrong eyes, as potentially violent or threatening,” she said.
  • The Parent Coalition for Student Privacy was founded in 2014, in the wake of parental outrage over the attempt to create a standardized national database that would track hundreds of data points about public school students, from their names and social security numbers to their attendance, academic performance, and disciplinary and behavior records, and share the data with education tech companies. The effort, which had been funded by the Gates Foundation, collapsed in 2014 after fierce opposition from parents and privacy activists.
  • “More and more parents are organizing against the onslaught of ed tech and the loss of privacy that it entails. But at the same time, there’s so much money and power and political influence behind these groups,”
  • some privacy experts – and students – said they are concerned that surveillance at school might actually be undermining students’ wellbeing
  • “I do think the constant screen surveillance has affected our anxiety levels and our levels of depression.” “It’s over-guarding kids,” she said. “You need to let them make mistakes, you know? That’s kind of how we learn.”
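A concrete way to see why such tools generate the false alarms described above (the curious nine-year-old, the misread hip-hop lyrics) is to sketch the core mechanism. The Python below is a deliberately naive illustration, not any vendor's actual product; the watchlist and example messages are invented. Commercial systems are presumably more elaborate, but any simple pattern match shares the same blindness to context.

# Minimal sketch of phrase-based student monitoring (hypothetical watchlist).
FLAGGED_PHRASES = ["kill myself", "shoot", "bomb"]

def flag(document: str) -> list[str]:
    """Return every watchlist phrase that appears in the document."""
    text = document.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in text]

# A boundary-testing child and two innocuous idioms all trip the filter:
print(flag("I want to kill myself"))                   # ['kill myself']
print(flag("that photographer can shoot me anytime"))  # ['shoot']
print(flag("this mixtape is the bomb"))                # ['bomb']

Substring matching has no model of intent, which is why a lyric, a joke, or a deliberate test reads the same to the system as a genuine threat.
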
Ed Webb

Stephen Downes: A World to Change

  • we need, first, to take charge of our own learning, and next, help others take charge of their own learning. We need to move beyond the idea that an education is something that is provided for us, and toward the idea that an education is something that we create for ourselves. It is time, in other words, that we change our attitude toward learning and the educational system in general. That is not to advocate throwing learners off the bus to fend for themselves. It is hard to be self-reliant, to take charge of one's own learning, and people shouldn't have to do it alone. It is instead to articulate a way we as a society approach education and learning, beginning with an attitude, through the development of supports and a system, through to the techniques and technologies that support that.
  • For those interested in blogging further about education, more food for thought.
Ed Webb

Why Doesn't Anyone Pay Attention Anymore? | HASTAC

  • We also need to distinguish what scientists know about human neurophysiology from our all-too-human discomfort with cultural and social change.  I've been an English professor for over twenty years and have heard how students don't pay attention, can't read a long novel anymore, and are in decline against some unspecified norm of an idealized past quite literally every year that I have been in this profession. In fact, how we educators should address this dire problem was the focus of the very first faculty meeting I ever attended.
  • Whenever I hear about attentional issues in debased contemporary society, whether blamed on television, VCR's, rock music, or the desktop, I assume that the critic was probably, like me, the one student who actually read Moby Dick and who had little awareness that no one else did.
  • This is not really a discussion about the biology of attention; it is about the sociology of change.
  • The brain is always changed by what it does.  That's how we learn, from infancy on, and that's how a baby born in New York has different cultural patterns of behavior, language, gesture, interaction, socialization, and attention than a baby born the same day in Beijing. That's as true for the historical moment into which we are born as it is for the geographical location.  Our attention is shaped by all we do, and reshaped by all we do.  That is what learning is.  The best we can do as educators is find ways to improve our institutions of learning to help our kids be prepared for their future--not for our past.
  • I didn't find the article nearly as stigmatizing and retrograde as I do the knee-jerk Don't Tread on Me reactions of everyone I've seen respond--most of which amount to foolish technolibertarian celebrations of the anonymous savior Technology (Cathy, you don't do that there, even if you also have nothing good to say about the NYT piece). If anything, the article showed that these kids (like all of us!) are profoundly distressed by today's media ecology. They seem to have a far more subtle perspective on things than most others. Frankly I'm a bit gobstopped that everyone hates this article so much. As for the old chestnut that "we need new education for the information age," it's worth pointing out that there was no formal, standardized education system before the industrial age. Compulsory education is a century-old experiment. And yes, it ought to be discarded. But that's a frightening prospect for almost everyone, including those who advocate for it. I wonder how many of the intelligentsia who raise their fists and cry, "We need a different education system!" still partake of the old system for their own kids. We don't in my house, for what it's worth, and it's a huge pain in the ass.
  • Cathy -- I really appreciate the distinctions you make between the "the biology of attention" and "the sociology of change." And I agree that more complex and nuanced conversations about technology's relationship to attention, diversion, focus, and immersion will be more productive (than either nostalgia or utopic futurism). For example, it seems like a strange oversight (in the NYT piece) to bemoan the ability of "kids these days" to focus, read immersively, or Pay Attention, yet report without comment that these same kids can edit video for hours on end -- creative, immersive work which, I would imagine, requires more than a little focus. It seems that perhaps the question is not whether we can still pay attention or focus, but what those diverse forms of immersion within different media (will) look like.
  • I recommend both this commentary and the original NYT piece to which it links and on which it comments.
Ed Webb

Does the Digital Classroom Enfeeble the Mind? - NYTimes.com

  • My father would have been unable to “teach to the test.” He once complained about errors in a sixth-grade math textbook, so he had the class learn math by designing a spaceship. My father would have been spat out by today’s test-driven educational regime.
  • A career in computer science makes you see the world in its terms. You start to see money as a form of information display instead of as a store of value. Money flows are the computational output of a lot of people planning, promising, evaluating, hedging and scheming, and those behaviors start to look like a set of algorithms. You start to see the weather as a computer processing bits tweaked by the sun, and gravity as a cosmic calculation that keeps events in time and space consistent. This way of seeing is becoming ever more common as people have experiences with computers. While it has its glorious moments, the computational perspective can at times be uniquely unromantic. Nothing kills music for me as much as having some algorithm calculate what music I will want to hear. That seems to miss the whole point. Inventing your musical taste is the point, isn’t it? Bringing computers into the middle of that is like paying someone to program a robot to have sex on your behalf so you don’t have to. And yet it seems we benefit from shining an objectifying digital light to disinfect our funky, lying selves once in a while. It’s heartless to have music chosen by digital algorithms. But at least there are fewer people held hostage to the tastes of bad radio D.J.’s than there once were. The trick is being ambidextrous, holding one hand to the heart while counting on the digits of the other.
  • The future of education in the digital age will be determined by our judgment of which aspects of the information we pass between generations can be represented in computers at all. If we try to represent something digitally when we actually can’t, we kill the romance and make some aspect of the human condition newly bland and absurd. If we romanticize information that shouldn’t be shielded from harsh calculations, we’ll suffer bad teachers and D.J.’s and their wares.
  • Some of the top digital designs of the moment, both in school and in the rest of life, embed the underlying message that we understand the brain and its workings. That is false. We don’t know how information is represented in the brain. We don’t know how reason is accomplished by neurons. There are some vaguely cool ideas floating around, and we might know a lot more about these things any moment now, but at this moment, we don’t. You could spend all day reading literature about educational technology without being reminded that this frontier of ignorance lies before us. We are tempted by the demons of commercial and professional ambition to pretend we know more than we do.
  • Outside school, something similar happens. Students spend a lot of time acting as trivialized relays in giant schemes designed for the purposes of advertising and other revenue-minded manipulations. They are prompted to create databases about themselves and then trust algorithms to assemble streams of songs and movies and stories for their consumption. We see the embedded philosophy bloom when students assemble papers as mash-ups from online snippets instead of thinking and composing on a blank piece of screen. What is wrong with this is not that students are any lazier now or learning less. (It is probably even true, I admit reluctantly, that in the presence of the ambient Internet, maybe it is not so important anymore to hold an archive of certain kinds of academic trivia in your head.) The problem is that students could come to conceive of themselves as relays in a transpersonal digital structure. Their job is then to copy and transfer data around, to be a source of statistics, whether to be processed by tests at school or by advertising schemes elsewhere.
  • If students don’t learn to think, then no amount of access to information will do them any good.
  • To the degree that education is about the transfer of the known between generations, it can be digitized, analyzed, optimized and bottled or posted on Twitter. To the degree that education is about the self-invention of the human race, the gargantuan process of steering billions of brains into unforeseeable states and configurations in the future, it can continue only if each brain learns to invent itself. And that is beyond computation because it is beyond our comprehension.
  • Roughly speaking, there are two ways to use computers in the classroom. You can have them measure and represent the students and the teachers, or you can have the class build a virtual spaceship. Right now the first way is ubiquitous, but the virtual spaceships are being built only by tenacious oddballs in unusual circumstances. More spaceships, please.
  • How do we get this right - use the tech for what it can do well, develop our brains for what the tech can't do? Who's up for building a spaceship?
Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame...

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business. [A toy illustration of pattern-based text generation appears after this list.]
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
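To make the pattern-matching description above concrete, here is a toy bigram text generator in Python. It is a minimal sketch, nothing like a production language model in scale, but it exhibits the property the authors criticize: output is recombined from whatever text the system was trained on, fluently and with no citations or model of truth. The corpus is invented for illustration.

import random

# Toy bigram "language model": learn which word follows which, then sample.
corpus = ("the system learns patterns from text and "
          "the system generates text from patterns").split()

# Table mapping each word to the words observed to follow it.
follows: dict[str, list[str]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit plausible-looking word sequences with no source and no ground truth."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))  # e.g. "the system learns patterns from text and the system"

Scale the table up from one sentence to a crawl of the internet and the output becomes plausible enough to mistake for reliable information, which is exactly the "synthetic text spill" worry above.
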
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals’ cognitive, social and survival skills. Many see AI as augmenting human capacities but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

The Coronavirus and Our Future | The New Yorker

  • I’ve spent my life writing science-fiction novels that try to convey some of the strangeness of the future. But I was still shocked by how much had changed, and how quickly.
  • the change that struck me seemed more abstract and internal. It was a change in the way we were looking at things, and it is still ongoing. The virus is rewriting our imaginations. What felt impossible has become thinkable. We’re getting a different sense of our place in history. We know we’re entering a new world, a new era. We seem to be learning our way into a new structure of feeling.
  • The Anthropocene, the Great Acceleration, the age of climate change—whatever you want to call it, we’ve been out of synch with the biosphere, wasting our children’s hopes for a normal life, burning our ecological capital as if it were disposable income, wrecking our one and only home in ways that soon will be beyond our descendants’ ability to repair. And yet we’ve been acting as though it were 2000, or 1990—as though the neoliberal arrangements built back then still made sense. We’ve been paralyzed, living in the world without feeling it.
  • We realize that what we do now, well or badly, will be remembered later on. This sense of enacting history matters. For some of us, it partly compensates for the disruption of our lives.
  • Actually, we’ve already been living in a historic moment. For the past few decades, we’ve been called upon to act, and have been acting in a way that will be scrutinized by our descendants. Now we feel it. The shift has to do with the concentration and intensity of what’s happening. September 11th was a single day, and everyone felt the shock of it, but our daily habits didn’t shift, except at airports; the President even urged us to keep shopping. This crisis is different. It’s a biological threat, and it’s global. Everyone has to change together to deal with it. That’s really history.
  • There are 7.8 billion people alive on this planet—a stupendous social and technological achievement that’s unnatural and unstable. It’s made possible by science, which has already been saving us. Now, though, when disaster strikes, we grasp the complexity of our civilization—we feel the reality, which is that the whole system is a technical improvisation that science keeps from crashing down
  • Today, in theory, everyone knows everything. We know that our accidental alteration of the atmosphere is leading us into a mass-extinction event, and that we need to move fast to dodge it. But we don’t act on what we know. We don’t want to change our habits. This knowing-but-not-acting is part of the old structure of feeling.
  • remember that you must die. Older people are sometimes better at keeping this in mind than younger people. Still, we’re all prone to forgetting death. It never seems quite real until the end, and even then it’s hard to believe. The reality of death is another thing we know about but don’t feel.
  • it is the first of many calamities that will likely unfold throughout this century. Now, when they come, we’ll be familiar with how they feel.
  • water shortages. And food shortages, electricity outages, devastating storms, droughts, floods. These are easy calls. They’re baked into the situation we’ve already created, in part by ignoring warnings that scientists have been issuing since the nineteen-sixties
  • Imagine what a food scare would do. Imagine a heat wave hot enough to kill anyone not in an air-conditioned space, then imagine power failures happening during such a heat wave.
  • science fiction is the realism of our time
  • Science-fiction writers don’t know anything more about the future than anyone else. Human history is too unpredictable; from this moment, we could descend into a mass-extinction event or rise into an age of general prosperity. Still, if you read science fiction, you may be a little less surprised by whatever does happen. Often, science fiction traces the ramifications of a single postulated change; readers co-create, judging the writers’ plausibility and ingenuity, interrogating their theories of history. Doing this repeatedly is a kind of training. It can help you feel more oriented in the history we’re making now. This radical spread of possibilities, good to bad, which creates such a profound disorientation; this tentative awareness of the emerging next stage—these are also new feelings in our time.
  • Do we believe in science? Go outside and you’ll see the proof that we do everywhere you look. We’re learning to trust our science as a society. That’s another part of the new structure of feeling.
  • This mixture of dread and apprehension and normality is the sensation of plague on the loose. It could be part of our new structure of feeling, too.
  • there are charismatic mega-ideas. “Flatten the curve” could be one of them. Immediately, we get it. There’s an infectious, deadly plague that spreads easily, and, although we can’t avoid it entirely, we can try to avoid a big spike in infections, so that hospitals won’t be overwhelmed and fewer people will die. It makes sense, and it’s something all of us can help to do. When we do it—if we do it—it will be a civilizational achievement: a new thing that our scientific, educated, high-tech species is capable of doing. Knowing that we can act in concert when necessary is another thing that will change us.
  • People who study climate change talk about “the tragedy of the horizon.” The tragedy is that we don’t care enough about those future people, our descendants, who will have to fix, or just survive on, the planet we’re now wrecking. We like to think that they’ll be richer and smarter than we are and so able to handle their own problems in their own time. But we’re creating problems that they’ll be unable to solve. You can’t fix extinctions, or ocean acidification, or melted permafrost, no matter how rich or smart you are. The fact that these problems will occur in the future lets us take a magical view of them. We go on exacerbating them, thinking—not that we think this, but the notion seems to underlie our thinking—that we will be dead before it gets too serious. The tragedy of the horizon is often something we encounter, without knowing it, when we buy and sell. The market is wrong; the prices are too low. Our way of life has environmental costs that aren’t included in what we pay, and those costs will be borne by our descendants. We are operating a multigenerational Ponzi scheme.
  • We’ve decided to sacrifice over these months so that, in the future, people won’t suffer as much as they would otherwise. In this case, the time horizon is so short that we are the future people.
  • Amid the tragedy and death, this is one source of pleasure. Even though our economic system ignores reality, we can act when we have to. At the very least, we are all freaking out together. To my mind, this new sense of solidarity is one of the few reassuring things to have happened in this century. If we can find it in this crisis, to save ourselves, then maybe we can find it in the big crisis, to save our children and theirs.
  • Thatcher said that “there is no such thing as society,” and Ronald Reagan said that “government is not the solution to our problem; government is the problem.” These stupid slogans marked the turn away from the postwar period of reconstruction and underpin much of the bullshit of the past forty years
  • We are individuals first, yes, just as bees are, but we exist in a larger social body. Society is not only real; it’s fundamental. We can’t live without it. And now we’re beginning to understand that this “we” includes many other creatures and societies in our biosphere and even in ourselves. Even as an individual, you are a biome, an ecosystem, much like a forest or a swamp or a coral reef. Your skin holds inside it all kinds of unlikely coöperations, and to survive you depend on any number of interspecies operations going on within you all at once. We are societies made of societies; there are nothing but societies. This is shocking news—it demands a whole new world view.
  • It’s as if the reality of citizenship has smacked us in the face.
  • The neoliberal structure of feeling totters. What might a post-capitalist response to this crisis include? Maybe rent and debt relief; unemployment aid for all those laid off; government hiring for contact tracing and the manufacture of necessary health equipment; the world’s militaries used to support health care; the rapid construction of hospitals.
  • If the project of civilization—including science, economics, politics, and all the rest of it—were to bring all eight billion of us into a long-term balance with Earth’s biosphere, we could do it. By contrast, when the project of civilization is to create profit—which, by definition, goes to only a few—much of what we do is actively harmful to the long-term prospects of our species.
  • Economics is a system for optimizing resources, and, if it were trying to calculate ways to optimize a sustainable civilization in balance with the biosphere, it could be a helpful tool. When it’s used to optimize profit, however, it encourages us to live within a system of destructive falsehoods. We need a new political economy by which to make our calculations. Now, acutely, we feel that need.
  • We’ll remember this even if we pretend not to. History is happening now, and it will have happened. So what will we do with that?
  • How we feel is shaped by what we value, and vice versa. Food, water, shelter, clothing, education, health care: maybe now we value these things more, along with the people whose work creates them. To survive the next century, we need to start valuing the planet more, too, since it’s our only home.
Ed Webb

Our Digitally Undying Memories - The Chronicle Review - The Chronicle of Higher Education

  • as Viktor Mayer-Schönberger argues convincingly in his book Delete: The Virtue of Forgetting in the Digital Age (Princeton University Press, 2009), the costs of such powerful collective memory are often higher than we assume.
  • "Total recall" renders context, time, and distance irrelevant. Something that happened 40 years ago—whether youthful or scholarly indiscretion—still matters and can come back to harm us as if it had happened yesterday.
  • an important "third wave" of work about the digital environment. In the late 1990s and early 2000s, we saw books like Nicholas Negroponte's Being Digital (Knopf, 1995) and Howard Rheingold's The Virtual Community: Homesteading on the Electronic Frontier (Addison-Wesley, 1993) and Smart Mobs: The Next Social Revolution (Perseus, 2002), which idealistically described the transformative powers of digital networks. Then we saw shallow blowback, exemplified by Susan Jacoby's The Age of American Unreason (Pantheon, 2008).
  • For most of human history, forgetting was the default and remembering the challenge.
  • Chants, songs, monasteries, books, libraries, and even universities were established primarily to overcome our propensity to forget over time. The physical and economic limitations of all of those technologies and institutions served us well. Each acted not just as memory aids but also as filters or editors. They helped us remember much by helping us discard even more.
    • Ed Webb: Excellent point, well made.
  • Just because we have the vessels, we fill them.
  • Even 10 years ago, we did not consider that words written for a tiny audience could reach beyond, perhaps to someone unforgiving, uninitiated in a community, or just plain unkind.
  • Remembering to forget, as Elvis argued, is also essential to getting over heartbreak. And, as Jorge Luis Borges wrote in his 1942 (yep, I Googled it to find the date) story "Funes el memorioso," it is just as important to the act of thinking. Funes, the young man in the story afflicted with an inability to forget anything, can't make sense of it. He can't think abstractly. He can't judge facts by relative weight or seriousness. He is lost in the details. Painfully, Funes cannot rest.
  • Our use of the proliferating data and rudimentary filters in our lives renders us incapable of judging, discriminating, or engaging in deductive reasoning. And inductive reasoning, which one could argue is entering a golden age with the rise of huge databases and the processing power needed to detect patterns and anomalies, is beyond the reach of lay users of the grand collective database called the Internet.
  • the default habits of our species: to record, retain, and release as much information as possible
  • Perhaps we just have to learn to manage wisely how we digest, discuss, and publicly assess the huge archive we are building. We must engender cultural habits that ensure perspective, calm deliberation, and wisdom. That's hard work.
  • we choose the nature of technologies. They don't choose us. We just happen to choose unwisely with some frequency
  • surveillance as the chief function of electronic government
  • critical information studies
  • Siva Vaidhyanathan is an associate professor of media studies and law at the University of Virginia. His next book, The Googlization of Everything, is forthcoming from the University of California Press.
  • Nietzsche's _On the Use and Disadvantage of History for Life_
  • Google compresses, if not eliminates, temporal context. This is likely only to exacerbate the existing problem in politics of taking one's statements out of context. A politician whose views on a subject have evolved quite logically over decades in light of changing knowledge and/or circumstances is held up in attack ads as a flip-flopper because consecutive Google entries have him/her saying two opposite things about the same subject -- and never mind that between the two statements, the Berlin Wall may have fallen or the economy crashed harder than at any other time since 1929.
Ed Webb

Programmed for Love: The Unsettling Future of Robotics - The Chronicle Review - The Chr...

  • Her prediction: Companies will soon sell robots designed to baby-sit children, replace workers in nursing homes, and serve as companions for people with disabilities. All of which to Turkle is demeaning, "transgressive," and damaging to our collective sense of humanity. It's not that she's against robots as helpers—building cars, vacuuming floors, and helping to bathe the sick are one thing. She's concerned about robots that want to be buddies, implicitly promising an emotional connection they can never deliver.
  • We are already cyborgs, reliant on digital devices in ways that many of us could not have imagined just a few years ago
  • "We are hard-wired that if something meets extremely primitive standards, either eye contact or recognition or very primitive mutual signaling, to accept it as an Other because as animals that's how we're hard-wired—to recognize other creatures out there."
  • "Can a broken robot break a child?" they asked. "We would not consider the ethics of having children play with a damaged copy of Microsoft Word or a torn Raggedy Ann doll. But sociable robots provoke enough emotion to make this ethical question feel very real."
  • "The concept of robots as baby sitters is, intellectually, one that ought to appeal to parents more than the idea of having a teenager or similarly inexperienced baby sitter responsible for the safety of their infants," he writes. "Their smoke-detection capabilities will be better than ours, and they will never be distracted for the brief moment it can take an infant to do itself some terrible damage or be snatched by a deranged stranger."
  • "What if we get used to relationships that are made to measure?" Turkle asks. "Is that teaching us that relationships can be just the way we want them?" After all, if a robotic partner were to become annoying, we could just switch it off.
  • We've reached a moment, she says, when we should make "corrections"—to develop social norms to help offset the feeling that we must check for messages even when that means ignoring the people around us. "Today's young people have a special vulnerability: Although always connected, they feel deprived of attention," she writes. "Some, as children, were pushed on swings while their parents spoke on cellphones. Now these same parents do their e-mail at the dinner table." One 17-year-old boy even told her that at least a robot would remember everything he said, contrary to his father, who often tapped at a BlackBerry during conversations.
Ed Webb

A woman first wrote the prescient ideas Huxley and Orwell made famous - Quartzy

  • In 1919, a British writer named Rose Macaulay published What Not, a novel about a dystopian future—a brave new world if you will—where people are ranked by intelligence, the government mandates mind training for all citizens, and procreation is regulated by the state. You’ve probably never heard of Macaulay or What Not. However, Aldous Huxley, author of the science fiction classic Brave New World, hung out in the same London literary circles as her and his 1932 book contains many concepts that Macaulay first introduced in her work. In 2019, you’ll be able to read Macaulay’s book yourself and compare the texts as the British publisher Handheld Press is planning to re-release the forgotten novel in March. It’s been out of print since the year it was first released.
  • The resurfacing of What Not also makes this a prime time to consider another work that influenced Huxley’s Brave New World, the 1923 novel We by Yevgeny Zamyatin. What Not and We are lost classics about a future that foreshadows our present. Notably, they are also hidden influences on some of the most significant works of 20th century fiction, Brave New World and George Orwell’s 1984.
  • In Macaulay’s book—which is a hoot and well worth reading—a democratically elected British government has been replaced with a “United Council, five minds with but a single thought—if that,” as she put it. Huxley’s Brave New World is run by a similarly small group of elites known as “World Controllers.”
  • citizens of What Not are ranked based on their intelligence from A to C3 and can’t marry or procreate with someone of the same rank to ensure that intelligence is evenly distributed
  • Brave New World is more futuristic and preoccupied with technology than What Not. In Huxley’s world, procreation and education have become completely mechanized and emotions are strictly regulated pharmaceutically. Macaulay’s Britain is just the beginning of this process, and its characters are not yet completely indoctrinated into the new ways of the state—they resist it intellectually and question its endeavors, like the newly-passed Mental Progress Act. She writes: He did not like all this interfering, socialist what-not, which was both upsetting the domestic arrangements of his tenants and trying to put into their heads more learning than was suitable for them to have. For his part he thought every man had a right to be a fool if he chose, yes, and to marry another fool, and to bring up a family of fools too.
  • Where Huxley pairs dumb but pretty and “pneumatic” ladies with intelligent gentlemen, Macaulay’s work is decidedly less sexist.
  • We was published in French, Dutch, and German. An English version was printed and sold only in the US. When Orwell wrote about We in 1946, it was only because he’d managed to borrow a hard-to-find French translation.
  • While Orwell never indicated that he read Macaulay, he shares her subversive and subtle linguistic skills and satirical sense. His protagonist, Winston—like Kitty—works for the government in its Ministry of Truth, or Minitrue in Newspeak, where he rewrites historical records to support whatever Big Brother currently says is good for the regime. Macaulay would no doubt have approved of Orwell’s wit. And his state ministries bear a striking similarity to those she wrote about in What Not.
  • Orwell was familiar with Huxley’s novel and gave it much thought before writing his own blockbuster. Indeed, in 1946, before the release of 1984, he wrote a review of Zamyatin’s We (pdf), comparing the Russian novel with Huxley’s book. Orwell declared Huxley’s text derivative, writing in his review of We in The Tribune: The first thing anyone would notice about We is the fact—never pointed out, I believe—that Aldous Huxley’s Brave New World must be partly derived from it. Both books deal with the rebellion of the primitive human spirit against a rationalised, mechanized, painless world, and both stories are supposed to take place about six hundred years hence. The atmosphere of the two books is similar, and it is roughly speaking the same kind of society that is being described, though Huxley’s book shows less political awareness and is more influenced by recent biological and psychological theories.
  • In We, the story is told by D-503, a male engineer, while in Brave New World we follow Bernard Marx, a protagonist with a proper name. Both characters live in artificial worlds, separated from nature, and they recoil when they first encounter people who exist outside of the state’s constructed and controlled cities.
  • Although We is barely known compared to Orwell and Huxley’s later works, I’d argue that it’s among the best literary science fictions of all time, and it’s highly relevant, as it was when first written. Noam Chomsky calls it “more perceptive” than both 1984 and Brave New World. Zamyatin’s futuristic society was so on point, he was exiled from the Soviet Union because it was such an accurate description of life in a totalitarian regime, though he wrote it before Stalin took power.
  • Macaulay’s work is more subtle and funny than Huxley’s. Despite being a century old, What Not is remarkably relevant and readable, a satire that only highlights how little has changed in the years since its publication and how dangerous and absurd state policies can be. In this sense then, What Not reads more like George Orwell’s 1949 novel 1984 
  • Orwell was critical of Zamyatin’s technique. “[We] has a rather weak and episodic plot which is too complex to summarize,” he wrote. Still, he admired the work as a whole. “[Its] intuitive grasp of the irrational side of totalitarianism—human sacrifice, cruelty as an end in itself, the worship of a Leader who is credited with divine attributes—[…] makes Zamyatin’s book superior to Huxley’s,”
  • Like our own tech magnates and nations, the United State of We is obsessed with going to space.
  • Perhaps in 2019 Macaulay’s What Not, a clever and subversive book, will finally get its overdue recognition.
Ed Webb

China's New "Social Credit Score" Brings Dystopian Science Fiction to Life

  • The Chinese government is taking a controversial step in security, with plans to implement a system that gives and collects financial, social, political, and legal credit ratings of citizens into a social credit score
  • Proponents of the idea are already testing various aspects of the system — gathering digital records of citizens, specifically financial behavior. These will then be used to create a social credit score system, which will determine if a citizen can avail themselves of certain services based on his or her social credit rating
  • it’s going to be like an episode from Black Mirror — the social credit score of citizens will be the basis for access to services ranging from travel and education to loans and insurance coverage.
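As a sketch of the architecture being described, consider the following Python. Every field, weight, and cutoff is invented for illustration; reported details of the actual pilot systems vary. The structural point is that a single opaque number, aggregated from unrelated domains of life, gates access to services.

from dataclasses import dataclass

@dataclass
class CitizenRecord:
    financial: float  # 0-100: e.g. repayment history (hypothetical field)
    social: float     # 0-100: e.g. rated online behavior (hypothetical field)
    legal: float      # 0-100: e.g. infractions on file (hypothetical field)

WEIGHTS = {"financial": 0.5, "social": 0.3, "legal": 0.2}  # invented weights

def social_credit_score(r: CitizenRecord) -> float:
    """One opaque number aggregated from unrelated domains of life."""
    return (WEIGHTS["financial"] * r.financial
            + WEIGHTS["social"] * r.social
            + WEIGHTS["legal"] * r.legal)

def may_buy_rail_ticket(r: CitizenRecord) -> bool:
    return social_credit_score(r) >= 60  # invented cutoff

# Good finances cannot offset poorly rated "social" behavior:
print(may_buy_rail_ticket(CitizenRecord(financial=70, social=30, legal=50)))  # False

Once the score is the interface, the services enforcing it never see, and never have to justify, the underlying judgments.
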
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still...

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters. [A minimal sketch after this list shows how under-representation alone can produce such bias.]
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
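To see how a well-intentioned pipeline goes wrong, here is a self-contained Python sketch. The data and groups are synthetic and the "model" is a single threshold, both invented for illustration; the mechanism is the point: when one group supplies 95% of the training examples, the fitted model serves that group and quietly fails the other.

import random
random.seed(0)

def person(group: str) -> tuple[float, int]:
    """Synthetic example: (feature, true label). The feature-label
    boundary sits at 0.6 for group A but at 0.4 for group B."""
    center = 0.6 if group == "A" else 0.4
    x = random.gauss(center, 0.1)
    return x, int(x > center)

# Unrepresentative training set: 95% group A, 5% group B.
train = [person("A") for _ in range(950)] + [person("B") for _ in range(50)]

# "Model": the single decision threshold that best fits the pooled data.
# Because A dominates, the learned cutoff lands near A's boundary (~0.6).
_, threshold = max(
    (sum((x > t) == bool(y) for x, y in train), t)
    for t in [i / 100 for i in range(101)]
)

def accuracy(group: str, n: int = 2000) -> float:
    tests = [person(group) for _ in range(n)]
    return sum((x > threshold) == bool(y) for x, y in tests) / n

print(f"learned threshold: {threshold:.2f}")
print(f"accuracy on group A: {accuracy('A'):.2f}")  # near perfect
print(f"accuracy on group B: {accuracy('B'):.2f}")  # barely better than chance

On a typical run the learned threshold sits near group A's boundary, so group A is classified almost perfectly while group B does little better than a coin flip. Aggregate training accuracy stays high throughout, which is why this kind of bias is, as the article says, hard to detect and harder to eliminate.
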