
Dystopias: Group items tagged skills


Ed Webb

Students Lack Basic Research Skills, Study Finds - Wired Campus - The Chronicle of Higher Education - 0 views

  • today’s students struggle with a feeling of information overload. “They feel overwhelmed, and they’re developing a strategy for not drowning in all information out there,” she said. “They’re basically taking how they learned to research in high school with them to college, since it’s worked for them in the past.”
  • college students approach research as a hunt for the right answer instead of a process of evaluating different arguments and coming up with their own interpretation.
  • Comments?
Ed Webb

Hayabusa2 and the unfolding future of space exploration | Bryan Alexander - 0 views

  • What might this tell us about the future?  Let’s consider Ryugu as a datapoint or story for where space exploration might head next.
  • robots continue to be cheap, far easier to operate, capable of enduring awful stresses, and happy to send gorgeous data back our way
  • Hayabusa is a Japanese project, not an American one, and national interest counts for a lot.  No humans were involved, so human interest and story are absent.  Perhaps the whole project looks too science-y for a culture that spins into post-truthiness, contains some serious anti-science and anti-technology strands, or just finds science stories too dry.  Or maybe the American media outlets think Americans just aren’t that into space in particular in 2018.
  • Hayabusa2 reminds us that space exploration is more multinational and more disaggregated than ever.  Besides JAXA there are space programs being built up by China and India, including robot craft, astronauts (taikonauts for China, vyomanauts for India), and space stations.  The Indian Mars Orbiter still circles the fourth planet. The European Space Agency continues to develop satellites and launch rockets, like the JUICE (JUpiter ICy moons Explorer).  Russia is doing some mixture of commercial spaceflight, ISS maintenance, exploration, and geopoliticking.  For these nations space exploration holds out a mixture of prestige, scientific and engineering development, and possible commercial return.
  • Bezos, Musk, and others live out a Robert Heinlein story by building up their own personal space efforts.  This is, among other things, a sign of how far American wealth has grown, and how much of the elite are connected to technical skills (as opposed to inherited wealth).  It’s an effect of plutocracy, as I’ve said before.  Yuri Milner might lead the first interstellar mission with his Breakthrough Starshot plan.
  • Privatization of space seems likely to continue.
  • Uneven development is also likely, as different programs struggle to master different stations in the space path.  China may assemble a space station while Japan bypasses orbital platforms for the moon, private cubesats head into the deep solar system and private companies keep honing their Earth orbital launch skills.
  • Surely the challenges of getting humans and robots further into space will elicit interesting projects that can be used Earthside.  Think about health breakthroughs needed to keep humans alive in environments scoured by radiation, or AI to guide robots through complex situations.
  • There isn’t a lot of press coverage beyond Japan (ah, once again I wish I read Japanese), if I go by Google News headlines.  There’s nothing on the CNN.com homepage now, other than typical spatters of dread and celebrity; the closest I can find is a link to a story about Musk’s space tourism project, which a Japanese billionaire will ride.  Nothing on Fox News or MSNBC’s main pages.  BBC News at least has a link halfway down its main page.
  • Japan seems committed to creating a lunar colony.  Musk and Bezos burn with the old science fiction and NASA hunger for shipping humans into the great dark.  The lure of Mars seems to be a powerful one, and a multinational, private versus public race could seize the popular imagination.  Older people may experience a rush of nostalgia for the glorious space race of their youth.
  • This competition could turn malign, of course.  Recall that the 20th century’s space races grew out of warfare, and included many plans for combat and destruction. Nayef Al-Rodhan hints at possible strains in international cooperation: “The possible fragmentation of outer space research activities in the post-ISS period would constitute a break-up of an international alliance that has fostered unprecedented cooperation between engineers and scientists from rival geopolitical powers – aside from China. The ISS represents perhaps the pinnacle of post-Cold War cooperation and has allowed for the sharing and streamlining of work methods and differing norms. In a current period of tense relations, it is worrying that the US and Russia may be ending an important phase of cooperation.”
  • Space could easily become the ground for geopolitical struggles once more, and possibly a flashpoint as well.  Nationalism, neonationalism, nativism could power such stresses
  • Enough of an off-Earth settlement could lead to further forays, once we bypass the terrible problem of getting off the planet’s surface, and if we can develop new ways to fuel and sustain craft in space.  The desire to connect with that domain might help spur the kind of space elevator which will ease Earth-to-orbit challenges.
  • The 1960s space race saw the emergence of a kind of astronaut cult.  The Soviet space program’s Russian roots included a mystical tradition.  We could see a combination of nostalgia from older folks and can-do optimism from younger people, along with further growth in STEM careers and interest.  Dialectically we should expect the opposite.  A look back at the US-USSR space race shows criticism and opposition ranging from the arts (Gil Scott-Heron’s “Whitey on the Moon”, Jello Biafra’s “Why I’m Glad the Space Shuttle Blew Up”) to opinion polls (in the US NASA only won real support for the year around Apollo 11, apparently).  We can imagine all kinds of political opposition to a 21st century space race, from people repeating the old Earth versus space spending canard to nationalistic statements (“Let Japan land on Deimos.  We have enough to worry about here in Chicago”) to environmental concerns to religious ones.  Concerns about vast wealth and inequality could well target space.
  • How will we respond when, say, twenty space tourists crash into a lunar crater and die, in agony, on YouTube?
  • That’s a lot to hang on one Japanese probe landing two tiny ‘bots on an asteroid in 2018, I know.  But Hayabusa2 is such a signal event that it becomes a fine story to think through.
Ed Webb

The Digital Maginot Line - 0 views

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loath to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

On the Web's Cutting Edge, Anonymity in Name Only - WSJ.com - 0 views

  • A Wall Street Journal investigation into online privacy has found that the analytical skill of data handlers like [x+1] is transforming the Internet into a place where people are becoming anonymous in name only. The findings offer an early glimpse of a new, personalized Internet where sites have the ability to adjust many things—look, content, prices—based on the kind of person they think you are.
  • The technology raises the prospect that different visitors to a website could see different prices as well. Price discrimination is generally legal, so long as it's not based on race, gender or geography, which can be deemed "redlining."
  • marketplaces for online data sprang up
  • In a fifth of a second, [x+1] says it can access and analyze thousands of pieces of information about a single user
  • When he saw the 3,748 lines of code that passed in an instant between his computer and Capital One's website, Mr. Burney said: "There's a shocking amount of information there."
  • [x+1]'s assessment of Mr. Burney's location and Nielsen demographic segment are specific enough that it comes extremely close to identifying him as an individual—that is, "de-anonymizing" him—according to Peter Eckersley, staff scientist at the Electronic Frontier Foundation, a privacy-advocacy group.
Ed Webb

Sleep isn't priority on campus, but experts say it should be | Connect2Mason - 0 views

  • Sleep deprivation affects 80 to 90 percent of college students, and getting a good night’s sleep is essential to staying healthy but is often overlooked
  • sleep deprivation has short-term effects such as increased blood pressure and desires for fatty foods, a weakened immune system, a harder time remembering things and a decrease in sense of humor
  • the idea that staying up late or pulling all nighters will help prepare for a test is a misconception because it negatively affects cognitive skills. “Pride in not sleeping much is like pride in not exercising," Gartenberg said. “It doesn’t make sense.”
  • people should only go to sleep if they are tired, caffeine and exercising before bed disturbs sleep, and alcohol does not help people fall asleep, but rather disrupts sleep and decreases its quality
Ed Webb

News: Cheating and the Generational Divide - Inside Higher Ed - 0 views

  • such attitudes among students can develop from the notion that all of education can be distilled into performance on a test -- which today's college students have absorbed from years of schooling under No Child Left Behind -- and not that education is a process in which one grapples with difficult material.
    • Ed Webb: Exactly so. If the focus of education is moved away from testing regurgitated factoids and toward building genuine skills of critical analysis and effective communication, the apparent 'gap' in understanding of what cheating is will surely go away.
  • I'd love to know what you Dystopians think about this.
  • Institutional education puts far too much pressure on students to do well in tests. This, I believe, forces students to cheat, because if you do not perform well in this one form of evaluation you are clearly not educated well enough, not trying hard enough, or just plain dumb. I doubt there are many instances outside of institutional education where you would need to memorize a number of facts for a short period of time with your very future at stake. To me the only cheating is plagiarism. If you're taking a standardized test and you don't know the answer to question 60 but the student next to you does, how would it hurt anyone to share that answer? You're learning the answer to question 60. It's the same knowledge you'll learn when you get the test back and realize the answer to 60 was A, not B. Again, though, when will this scenario occur outside of schooling?
Ed Webb

The stories of Ray Bradbury. - By Nathaniel Rich - Slate Magazine - 0 views

  • Thanks to Fahrenheit 451, now required reading for every American middle-schooler, Bradbury is generally thought of as a writer of novels, but his talents—particularly his mastery of the diabolical premise and the brain-exploding revelation—are best suited to the short form.
  • The best stories have a strange familiarity about them. They're like long-forgotten acquaintances—you know you've met them somewhere before. There is, for instance, the tale of the time traveler who goes back into time and accidentally steps on a butterfly, thereby changing irrevocably the course of history ("A Sound of Thunder"). There's the one about the man who buys a robotic husband to live with his wife so that he can be free to travel and pursue adventure—that's "Marionettes, Inc." (Not to be confused with "I Sing the Body Electric!" about the man who buys a robotic grandmother to comfort his children after his wife dies.) Or "The Playground," about the father who changes places with his son so that he can spare his boy the cruelty of childhood—forgetting exactly how cruel childhood can be. The stories are familiar because they've been adapted, and plundered from, by countless other writers—in books, television shows, and films. To the extent that there is a mythology of our age, Bradbury is one of its creators.
  • "But Bradbury's skill is in evoking exactly how soul-annihilating that world is."    Of course, this also displays one of the key facts of Bradbury's work -- and a trend in science fiction that is often ignored. He's a reactionary of the first order, deeply distrustful of technology and even the notion of progress. Many science fiction writers had begun to rewrite the rules of women in space by the time Bradbury had women in long skirts hauling pots and pans over the Martian landscape. And even he wouldn't disagree. In his famous Playboy interview he responded to a question about predicting the future with, "It's 'prevent the future', that's the way I put it. Not predict it, prevent it."
  • And for the record, I've never understood why a writer who recognizes technology is labeled a "sci-fi writer", as if being a "sci-fi writer" were equal to being some sort of substandard, second-rate hack. The great Kurt Vonnegut managed to get stuck in that drawer after he recognized technology in his 1st novel "Player Piano". No matter that he turned out to be (imo) one of the greatest authors of the 20th century, period.
  • it's chilling how prescient he was about modern media culture in Fahrenheit 451. It's not a Luddite screed against TV. It's a speculative piece on what happens when we become divorced from the past and more attuned to images on the screen than we are to each other.
  • A favorite author of mine since I was in elementary school way back when mammoths roamed the earth. To me, he was an ardent enthusiast of technology, but also recognized its potential for separating us from one another while at the same time seemingly making us more "connected" in a superficial and transitory way.
  • Bradbury is undeniably skeptical of technology and the risks it brings, particularly the risk that what we'd now call "virtualization" will replace actual emotional, intellectual or physical experience. On the other hand, however, I don't think there's anybody who rhapsodizes about the imaginative possibilities of rocketships and robots the way Bradbury does, and he's built entire setpieces around the idea of technological wonders creating new experiences.    I'm not saying he doesn't have a Luddite streak, more that he has feet in both camps and is harder to pin down than a single label allows. And I'll also add that in his public pronouncements of late, the Luddite streak has come out more strongly--but I tend to put much of that down to the curmudgeonliness of a ninety-year-old man.
  • I don't think he is a luddite so much as he is the little voice that whispers "be careful what you wish for." We have been sold the beautiful myth that technology will buy us free time, but we are busier than ever. TV was supposed to enlighten the masses, instead we have "reality TV" and a news network that does not let facts get in the way of its ideological agenda. We romanticize childhood, ignoring children's aggressive impulses, then feed them on a steady diet of violent video games.  
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center - 0 views

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    Dependence lock-in: Reduction of individuals’ cognitive, social and survival skills. Many see AI as augmenting human capacities but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    Values-based system: Develop policies to assure AI will be directed at ‘humanness’ and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    Prioritize people: Alter economic and political systems to better help humans ‘race with the robots’. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

A woman first wrote the prescient ideas Huxley and Orwell made famous - Quartzy - 1 views

  • In 1919, a British writer named Rose Macaulay published What Not, a novel about a dystopian future—a brave new world if you will—where people are ranked by intelligence, the government mandates mind training for all citizens, and procreation is regulated by the state. You’ve probably never heard of Macaulay or What Not. However, Aldous Huxley, author of the science fiction classic Brave New World, hung out in the same London literary circles as her and his 1932 book contains many concepts that Macaulay first introduced in her work. In 2019, you’ll be able to read Macaulay’s book yourself and compare the texts as the British publisher Handheld Press is planning to re-release the forgotten novel in March. It’s been out of print since the year it was first released.
  • The resurfacing of What Not also makes this a prime time to consider another work that influenced Huxley’s Brave New World, the 1923 novel We by Yevgeny Zamyatin. What Not and We are lost classics about a future that foreshadows our present. Notably, they are also hidden influences on some of the most significant works of 20th century fiction, Brave New World and George Orwell’s 1984.
  • In Macaulay’s book—which is a hoot and well worth reading—a democratically elected British government has been replaced with a “United Council, five minds with but a single thought—if that,” as she put it. Huxley’s Brave New World is run by a similarly small group of elites known as “World Controllers.”
  • citizens of What Not are ranked based on their intelligence from A to C3 and can’t marry or procreate with someone of the same rank to ensure that intelligence is evenly distributed
  • Brave New World is more futuristic and preoccupied with technology than What Not. In Huxley’s world, procreation and education have become completely mechanized and emotions are strictly regulated pharmaceutically. Macaulay’s Britain is just the beginning of this process, and its characters are not yet completely indoctrinated into the new ways of the state—they resist it intellectually and question its endeavors, like the newly-passed Mental Progress Act. She writes: “He did not like all this interfering, socialist what-not, which was both upsetting the domestic arrangements of his tenants and trying to put into their heads more learning than was suitable for them to have. For his part he thought every man had a right to be a fool if he chose, yes, and to marry another fool, and to bring up a family of fools too.”
  • Where Huxley pairs dumb but pretty and “pneumatic” ladies with intelligent gentlemen, Macaulay’s work is decidedly less sexist.
  • We was published in French, Dutch, and German. An English version was printed and sold only in the US. When Orwell wrote about We in 1946, it was only because he’d managed to borrow a hard-to-find French translation.
  • While Orwell never indicated that he read Macaulay, he shares her subversive and subtle linguistic skills and satirical sense. His protagonist, Winston—like Kitty—works for the government in its Ministry of Truth, or Minitrue in Newspeak, where he rewrites historical records to support whatever Big Brother currently says is good for the regime. Macaulay would no doubt have approved of Orwell’s wit. And his state ministries bear a striking similarity to those she wrote about in What Not.
  • Orwell was familiar with Huxley’s novel and gave it much thought before writing his own blockbuster. Indeed, in 1946, before the release of 1984, he wrote a review of Zamyatin’s We (pdf), comparing the Russian novel with Huxley’s book. Orwell declared Huxley’s text derivative, writing in his review of We in The Tribune: “The first thing anyone would notice about We is the fact—never pointed out, I believe—that Aldous Huxley’s Brave New World must be partly derived from it. Both books deal with the rebellion of the primitive human spirit against a rationalised, mechanized, painless world, and both stories are supposed to take place about six hundred years hence. The atmosphere of the two books is similar, and it is roughly speaking the same kind of society that is being described, though Huxley’s book shows less political awareness and is more influenced by recent biological and psychological theories.”
  • In We, the story is told by D-503, a male engineer, while in Brave New World we follow Bernard Marx, a protagonist with a proper name. Both characters live in artificial worlds, separated from nature, and they recoil when they first encounter people who exist outside of the state’s constructed and controlled cities.
  • Although We is barely known compared to Orwell and Huxley’s later works, I’d argue that it’s among the best literary science fictions of all time, and it’s highly relevant, as it was when first written. Noam Chomsky calls it “more perceptive” than both 1984 and Brave New World. Zamyatin’s futuristic society was so on point, he was exiled from the Soviet Union because it was such an accurate description of life in a totalitarian regime, though he wrote it before Stalin took power.
  • Macaulay’s work is more subtle and funny than Huxley’s. Despite being a century old, What Not is remarkably relevant and readable, a satire that only highlights how little has changed in the years since its publication and how dangerous and absurd state policies can be. In this sense then, What Not reads more like George Orwell’s 1949 novel 1984 
  • Orwell was critical of Zamyatin’s technique. “[We] has a rather weak and episodic plot which is too complex to summarize,” he wrote. Still, he admired the work as a whole. “[Its] intuitive grasp of the irrational side of totalitarianism—human sacrifice, cruelty as an end in itself, the worship of a Leader who is credited with divine attributes—[…] makes Zamyatin’s book superior to Huxley’s,”
  • Like our own tech magnates and nations, the United State of We is obsessed with going to space.
  • Perhaps in 2019 Macaulay’s What Not, a clever and subversive book, will finally get its overdue recognition.
Ed Webb

Beware thought leaders and the wealthy purveying answers to our social ills - 0 views

  • “Just as the worst slave-owners were those who were kind to their slaves, and so prevented the horror of the system being realized by those who suffered from it, and understood by those who contemplated it,” Wilde wrote, “so, in the present state of things in England, the people who do most harm are the people who try to do most good.”
  • “For when elites assume leadership of social change, they are able to reshape what social change is — above all, to present it as something that should never threaten winners,”
  • to question the system that allows people to make money in predatory ways and compensate for that through philanthropy. “Instead of asking them to make their firms less monopolistic, greedy or harmful to children, it urged them to create side hustles to ‘change the world,’ ”
  • Andrew Carnegie, the famed American industrialist, who advocated that people be as aggressive as possible in their pursuit of wealth and then give it back through private philanthropy
  • “the poor might not need so much help had they been better paid.”
  • “MarketWorld.” In essence, this is the cultlike belief that intractable social problems can be solved in market-friendly ways that result in “win-wins” for everyone involved, and that those who have succeeded under the status quo are also those best equipped to fix the world’s problems.
  • Among the denizens of MarketWorld are so-called “thought leaders,” the speakers who populate the conference circuit, like TED, PopTech and, of course, the Clinton Global Initiative. (When you pause to think about it, “thought leader” is appallingly Orwellian.)
  • Giridharadas argues that the rise of thought leaders, whose views are sanctioned and sanitized by their patrons — the big corporations that support conferences — has come at the expense of public intellectuals, who are willing to voice controversial arguments that shake up the system and don’t have easy solutions. Thought leaders, on the other hand, always offer a small but actionable “tweak,” one that makes conference-goers feel like they’ve learned something but that doesn’t actually threaten anyone.
  • giving MarketWorld what it craved in a thinker: a way of framing a problem that made it about giving bits of power to those who lack it without taking power away from those who hold it
  • In a nod to Wilde, he argues that the person who “seeks to ‘change the world’ by doing what can be done within a bad system, but who is relatively silent about that system” is “putting himself in the difficult moral position of the kindhearted slave master.”
  • He’s come to big conclusions: that MarketWorld, along with its philosophical antecedents, like Carnegie-ism and neoliberalism (which anthropologist David Harvey defines as the idea that “human well being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong property rights, free markets and free trade”), has been an abject failure
  • His key idea is to reinvigorate governments, which he believes could fix the world’s problems if they just had enough power and money. For readers who are cynical about the private sector but also versed enough in history to be cynical about governments, the book would have been more powerful if Giridharadas had stayed within his definition of an old-school public intellectual: someone who is willing to throw bombs at the current state of affairs, but lacks the arrogance and self-righteousness that comes with believing you have the solution