Dystopias / Group items tagged AI

Ed Webb

AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype - Scientific Ame...

  • Wrongful arrests, an expanding surveillance dragnet, defamation and deep-fake pornography are all actually existing dangers of so-called “artificial intelligence” tools currently on the market. That, and not the imagined potential to wipe out humanity, is the real threat from artificial intelligence.
  • Beneath the hype from many AI firms, their technology already enables routine discrimination in housing, criminal justice and health care, as well as the spread of hate speech and misinformation in non-English languages. Already, algorithmic management programs subject workers to run-of-the-mill wage theft, and these programs are becoming more prevalent.
  • Corporate AI labs justify this posturing with pseudoscientific research reports that misdirect regulatory attention to such imaginary scenarios using fear-mongering terminology, such as “existential risk.”
  • Because the term “AI” is ambiguous, it makes having clear discussions more difficult. In one sense, it is the name of a subfield of computer science. In another, it can refer to the computing techniques developed in that subfield, most of which are now focused on pattern matching based on large data sets and the generation of new media based on those patterns. Finally, in marketing copy and start-up pitch decks, the term “AI” serves as magic fairy dust that will supercharge your business.
  • output can seem so plausible that without a clear indication of its synthetic origins, it becomes a noxious and insidious pollutant of our information ecosystem
  • Not only do we risk mistaking synthetic text for reliable information, but also that noninformation reflects and amplifies the biases encoded in its training data—in this case, every kind of bigotry exhibited on the Internet. Moreover the synthetic text sounds authoritative despite its lack of citations back to real sources. The longer this synthetic text spill continues, the worse off we are, because it gets harder to find trustworthy sources and harder to trust them when we do.
  • the people selling this technology propose that text synthesis machines could fix various holes in our social fabric: the lack of teachers in K–12 education, the inaccessibility of health care for low-income people and the dearth of legal aid for people who cannot afford lawyers, just to name a few
  • the systems rely on enormous amounts of training data that are stolen without compensation from the artists and authors who created it in the first place
  • the task of labeling data to create “guardrails” that are intended to prevent an AI system’s most toxic output from seeping out is repetitive and often traumatic labor carried out by gig workers and contractors, people locked in a global race to the bottom for pay and working conditions.
  • employers are looking to cut costs by leveraging automation, laying off people from previously stable jobs and then hiring them back as lower-paid workers to correct the output of the automated systems. This can be seen most clearly in the current actors’ and writers’ strikes in Hollywood, where grotesquely overpaid moguls scheme to buy eternal rights to use AI replacements of actors for the price of a day’s work and, on a gig basis, hire writers piecemeal to revise the incoherent scripts churned out by AI.
  • too many AI publications come from corporate labs or from academic groups that receive disproportionate industry funding. Much is junk science—it is nonreproducible, hides behind trade secrecy, is full of hype and uses evaluation methods that lack construct validity
  • We urge policymakers to instead draw on solid scholarship that investigates the harms and risks of AI—and the harms caused by delegating authority to automated systems, which include the unregulated accumulation of data and computing power, climate costs of model training and inference, damage to the welfare state and the disempowerment of the poor, as well as the intensification of policing against Black and Indigenous families. Solid research in this domain—including social science and theory building—and solid policy based on that research will keep the focus on the people hurt by this technology.
Ed Webb

Artificial Intelligence and the Future of Humans | Pew Research Center

  • experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities
  • most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.
  • CONCERNS
    - Human agency: Individuals are experiencing a loss of control over their lives. Decision-making on key aspects of digital life is automatically ceded to code-driven, "black box" tools. People lack input and do not learn the context about how the tools work. They sacrifice independence, privacy and power over choice; they have no control over these processes. This effect will deepen as automated systems become more prevalent and complex.
    - Data abuse: Data use and surveillance in complex systems is designed for profit or for exercising power. Most AI tools are and will be in the hands of companies striving for profits or governments striving for power. Values and ethics are often not baked into the digital systems making people's decisions for them. These systems are globally networked and not easy to regulate or rein in.
    - Job loss: The AI takeover of jobs will widen economic divides, leading to social upheaval. The efficiencies and other economic advantages of code-based machine intelligence will continue to disrupt all aspects of human work. While some expect new jobs will emerge, others worry about massive job losses, widening economic divides and social upheavals, including populist uprisings.
    - Dependence lock-in: Reduction of individuals' cognitive, social and survival skills. Many see AI as augmenting human capacities but some predict the opposite - that people's deepening dependence on machine-driven networks will erode their abilities to think for themselves, take action independent of automated systems and interact effectively with others.
    - Mayhem: Autonomous weapons, cybercrime and weaponized information. Some predict further erosion of traditional sociopolitical structures and the possibility of great loss of lives due to accelerated growth of autonomous military applications and the use of weaponized information, lies and propaganda to dangerously destabilize human groups. Some also fear cybercriminals' reach into economic systems.
  • AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons
  • “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”
  • SUGGESTED SOLUTIONS
    - Global good is No. 1: Improve human collaboration across borders and stakeholder groups. Digital cooperation to serve humanity's best interests is the top priority. Ways must be found for people around the world to come to common understandings and agreements - to join forces to facilitate the innovation of widely accepted approaches aimed at tackling wicked problems and maintaining control over complex human-digital networks.
    - Values-based system: Develop policies to assure AI will be directed at 'humanness' and common good. Adopt a 'moonshot mentality' to build inclusive, decentralized intelligent digital networks 'imbued with empathy' that help humans aggressively ensure that technology meets social and ethical responsibilities. Some new level of regulatory and certification process will be necessary.
    - Prioritize people: Alter economic and political systems to better help humans 'race with the robots'. Reorganize economic and political systems toward the goal of expanding humans' capacities and capabilities in order to heighten human/AI collaboration and staunch trends that would compromise human relevance in the face of programmed intelligence.
  • As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging
  • We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals?
  • AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment
  • The record to date is that convenience overwhelms privacy
  • “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”
  • AI will eventually cause a large number of people to be permanently out of work
  • Newer generations of citizens will become more and more dependent on networked AI structures and processes
  • there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control
  • As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified
  • Given historical precedent, one would have to assume it will be our worst qualities that are augmented
  • Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing
  • We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully
  • the Orwellian nightmare realised
  • “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”
  • The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education
Ed Webb

Artificial intelligence, immune to fear or favour, is helping to make China's foreign p...

  • Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.
  • China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population. The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risk
  • researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision
  • “Human beings can never get rid of the interference of hormones or glucose.”
  • “It would not even consider the moral factors that conflict with strategic goals,”
  • “If one side of the strategic game has artificial intelligence technology, and the other side does not, then this kind of strategic game is almost a one-way, transparent confrontation,” he said. “The actors lacking the assistance of AI will be at an absolute disadvantage in many aspects such as risk judgment, strategy selection, decision making and execution efficiency, and decision-making reliability,” he said.
  • “The entire strategic game structure will be completely out of balance.”
  • “AI can think many steps ahead of a human. It can think deeply in many possible scenarios and come up with the best strategy,”
  • A US Department of State spokesman said the agency had “many technological tools” to help it make decisions. There was, however, no specific information on AI that could be shared with the public,
  • The system, also known as geopolitical environment simulation and prediction platform, was used to vet “nearly all foreign investment projects” in recent years
  • One challenge to the development of the AI policymaker is data sharing among Chinese government agencies. The foreign ministry, for instance, had been unable to get some data sets it needed because of administrative barriers
  • China is aggressively pushing AI into many sectors. The government is building a nationwide surveillance system capable of identifying any citizen by face within seconds. Research is also under way to introduce AI in nuclear submarines to help commanders make faster, more accurate decisions in battle.
  • “AI can help us get more prepared for unexpected events. It can help find a scientific, rigorous solution within a short time.
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still...

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female, and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
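The failure modes listed in the annotations above — biased training data hiding behind inscrutable parameters — can be made concrete in a few lines. The sketch below is a hypothetical toy under invented assumptions (the groups, thresholds, "skill" score, and proxy feature are all made up for illustration), not the author's actual 25-year-old algorithm:

```python
import random

# A model that never sees the group attribute still reproduces biased
# historical labels through a correlated proxy feature (think zip code
# standing in for group). All names and numbers here are invented.
random.seed(1)

def example():
    group = random.choice(["g1", "g2"])
    skill = random.random()                    # qualification, identically distributed
    proxy = 1 if group == "g1" else 0          # feature correlated with group
    label = skill > (0.5 if group == "g1" else 0.7)  # g2 faced a stricter bar
    return proxy, skill, label

train = [example() for _ in range(20000)]

def fit_threshold(rows):
    # pick the skill cutoff that best reproduces the historical labels
    return max(
        (i / 100 for i in range(101)),
        key=lambda t: sum((s > t) == lab for s, lab in rows),
    )

learned = {
    p: fit_threshold([(s, lab) for prox, s, lab in train if prox == p])
    for p in (0, 1)
}
# the fitted model applies a stricter bar when proxy == 0 (mostly g2),
# so two equally skilled applicants get different decisions
print(learned)
```

Dropping the group column accomplished nothing here: the proxy carries it, which is one reason the bias in such systems is hard to detect and even harder to eliminate.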
Ed Webb

An Algorithm Summarizes Lengthy Text Surprisingly Well - MIT Technology Review

  • As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents. And it may become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you.
  • Parsing language remains one of the grand challenges of artificial intelligence (see “AI’s Language Problem”). But it’s a challenge with enormous commercial potential. Even limited linguistic intelligence—the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways—could transform personal computing. In many specialist fields—like medicine, scientific research, and law—condensing information and extracting insights could have huge commercial benefits.
  • The system experiments in order to generate summaries of its own using a process called reinforcement learning. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). Those working on conversational interfaces are increasingly now looking at reinforcement learning as a way to improve their systems.
  • “At some point, we have to admit that we need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,”
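The reinforcement-learning idea in the annotations above — positive feedback for actions that lead toward an objective — can be sketched as a toy epsilon-greedy bandit. This is a hypothetical illustration, not the summarization system the article describes; the actions, payoffs, and parameters are all invented:

```python
import random

# Toy epsilon-greedy agent: it learns which of three actions earns the
# most reward purely from trial-and-error feedback, the core loop of
# reinforcement learning. Payoffs are invented for illustration.
random.seed(0)

true_reward = {"a": 0.2, "b": 0.8, "c": 0.5}   # hidden payoff per action
value = {a: 0.0 for a in true_reward}          # agent's running estimates
counts = {a: 0 for a in true_reward}

for step in range(2000):
    if random.random() < 0.1:                  # explore occasionally
        action = random.choice(list(true_reward))
    else:                                      # otherwise exploit the best estimate
        action = max(value, key=value.get)
    reward = 1 if random.random() < true_reward[action] else 0
    counts[action] += 1
    # incremental average: feedback nudges the estimate toward the true payoff
    value[action] += (reward - value[action]) / counts[action]

best = max(value, key=value.get)
print(best)
```

With enough feedback the agent settles on the highest-payoff action; a summarizer trained this way runs the same loop, with a summary-quality score standing in for the reward.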
Ed Webb

Zoom urged by rights groups to rule out 'creepy' AI emotion tech

  • Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination
  • "If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices,"
  • The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls
  • "This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,"
Ed Webb

AI Tweets "Little Beetles Is An Arthropod," and Other Facts About The World, As It Lear...

  • By saying that NELL has "adopted" the human behaviour of tweeting, you are misleading the reader. It is more likely that the software was specifically programmed to do so and therefore has "adopted" no "human behavior". FAIL.
  • sloppy journalism
weismans95

Ex Machina movie trailer

  • Yet another take on the cyborg/robot/AI story
Ed Webb

The Digital Maginot Line

  • The Information World War has already been going on for several years. We called the opening skirmishes “media manipulation” and “hoaxes”, assuming that we were dealing with ideological pranksters doing it for the lulz (and that lulz were harmless). In reality, the combatants are professional, state-employed cyberwarriors and seasoned amateur guerrillas pursuing very well-defined objectives with military precision and specialized tools. Each type of combatant brings a different mental model to the conflict, but uses the same set of tools.
  • There are also small but highly-skilled cadres of ideologically-motivated shitposters whose skill at information warfare is matched only by their fundamental incomprehension of the real damage they’re unleashing for lulz. A subset of these are conspiratorial — committed truthers who were previously limited to chatter on obscure message boards until social platform scaffolding and inadvertently-sociopathic algorithms facilitated their evolution into leaderless cults able to spread a gospel with ease.
  • There’s very little incentive not to try everything: this is a revolution that is being A/B tested.
  • The combatants view this as a Hobbesian information war of all against all and a tactical arms race; the other side sees it as a peacetime civil governance problem.
  • Our most technically-competent agencies are prevented from finding and countering influence operations because of the concern that they might inadvertently engage with real U.S. citizens as they target Russia’s digital illegals and ISIS’ recruiters. This capability gap is eminently exploitable; why execute a lengthy, costly, complex attack on the power grid when there is relatively no cost, in terms of dollars as well as consequences, to attack a society’s ability to operate with a shared epistemology? This leaves us in a terrible position, because there are so many more points of failure
  • Cyberwar, most people thought, would be fought over infrastructure — armies of state-sponsored hackers and the occasional international crime syndicate infiltrating networks and exfiltrating secrets, or taking over critical systems. That’s what governments prepared and hired for; it’s what defense and intelligence agencies got good at. It’s what CSOs built their teams to handle. But as social platforms grew, acquiring standing audiences in the hundreds of millions and developing tools for precision targeting and viral amplification, a variety of malign actors simultaneously realized that there was another way. They could go straight for the people, easily and cheaply. And that’s because influence operations can, and do, impact public opinion. Adversaries can target corporate entities and transform the global power structure by manipulating civilians and exploiting human cognitive vulnerabilities at scale. Even actual hacks are increasingly done in service of influence operations: stolen, leaked emails, for example, were profoundly effective at shaping a national narrative in the U.S. election of 2016.
  • The substantial time and money spent on defense against critical-infrastructure hacks is one reason why poorly-resourced adversaries choose to pursue a cheap, easy, low-cost-of-failure psy-ops war instead
  • Information war combatants have certainly pursued regime change: there is reasonable suspicion that they succeeded in a few cases (Brexit) and clear indications of it in others (Duterte). They’ve targeted corporations and industries. And they’ve certainly gone after mores: social media became the main battleground for the culture wars years ago, and we now describe the unbridgeable gap between two polarized Americas using technological terms like filter bubble. But ultimately the information war is about territory — just not the geographic kind. In a warm information war, the human mind is the territory. If you aren’t a combatant, you are the territory. And once a combatant wins over a sufficient number of minds, they have the power to influence culture and society, policy and politics.
  • This shift from targeting infrastructure to targeting the minds of civilians was predictable. Theorists  like Edward Bernays, Hannah Arendt, and Marshall McLuhan saw it coming decades ago. As early as 1970, McLuhan wrote, in Culture is our Business, “World War III is a guerrilla information war with no division between military and civilian participation.”
  • The 2014-2016 influence operation playbook went something like this: a group of digital combatants decided to push a specific narrative, something that fit a long-term narrative but also had a short-term news hook. They created content: sometimes a full blog post, sometimes a video, sometimes quick visual memes. The content was posted to platforms that offer discovery and amplification tools. The trolls then activated collections of bots and sockpuppets to blanket the biggest social networks with the content. Some of the fake accounts were disposable amplifiers, used mostly to create the illusion of popular consensus by boosting like and share counts. Others were highly backstopped personas run by real human beings, who developed standing audiences and long-term relationships with sympathetic influencers and media; those accounts were used for precision messaging with the goal of reaching the press. Israeli company Psy Group marketed precisely these services to the 2016 Trump Presidential campaign; as their sales brochure put it, “Reality is a Matter of Perception”.
  • If an operation is effective, the message will be pushed into the feeds of sympathetic real people who will amplify it themselves. If it goes viral or triggers a trending algorithm, it will be pushed into the feeds of a huge audience. Members of the media will cover it, reaching millions more. If the content is false or a hoax, perhaps there will be a subsequent correction article – it doesn’t matter, no one will pay attention to it.
  • Combatants are now focusing on infiltration rather than automation: leveraging real, ideologically-aligned people to inadvertently spread real, ideologically-aligned content instead. Hostile state intelligence services in particular are now increasingly adept at operating collections of human-operated precision personas, often called sockpuppets, or cyborgs, that will escape punishment under the bot laws. They will simply work harder to ingratiate themselves with real American influencers, to join real American retweet rings. If combatants need to quickly spin up a digital mass movement, well-placed personas can rile up a sympathetic subreddit or Facebook Group populated by real people, hijacking a community in the way that parasites mobilize zombie armies.
  • Attempts to legislate away 2016 tactics primarily have the effect of triggering civil libertarians, giving them an opportunity to push the narrative that regulators just don’t understand technology, so any regulation is going to be a disaster.
  • The entities best suited to mitigate the threat of any given emerging tactic will always be the platforms themselves, because they can move fast when so inclined or incentivized. The problem is that many of the mitigation strategies advanced by the platforms are the information integrity version of greenwashing; they’re a kind of digital security theater, the TSA of information warfare
  • Algorithmic distribution systems will always be co-opted by the best resourced or most technologically capable combatants. Soon, better AI will rewrite the playbook yet again — perhaps the digital equivalent of  Blitzkrieg in its potential for capturing new territory. AI-generated audio and video deepfakes will erode trust in what we see with our own eyes, leaving us vulnerable both to faked content and to the discrediting of the actual truth by insinuation. Authenticity debates will commandeer media cycles, pushing us into an infinite loop of perpetually investigating basic facts. Chronic skepticism and the cognitive DDoS will increase polarization, leading to a consolidation of trust in distinct sets of right and left-wing authority figures – thought oligarchs speaking to entirely separate groups
  • platforms aren’t incentivized to engage in the profoundly complex arms race against the worst actors when they can simply point to transparency reports showing that they caught a fair number of the mediocre actors
  • What made democracies strong in the past — a strong commitment to free speech and the free exchange of ideas — makes them profoundly vulnerable in the era of democratized propaganda and rampant misinformation. We are (rightfully) concerned about silencing voices or communities. But our commitment to free expression makes us disproportionately vulnerable in the era of chronic, perpetual information war. Digital combatants know that once speech goes up, we are loathe to moderate it; to retain this asymmetric advantage, they push an all-or-nothing absolutist narrative that moderation is censorship, that spammy distribution tactics and algorithmic amplification are somehow part of the right to free speech.
  • We need an understanding of free speech that is hardened against the environment of a continuous warm war on a broken information ecosystem. We need to defend the fundamental value from itself becoming a prop in a malign narrative.
  • Unceasing information war is one of the defining threats of our day. This conflict is already ongoing, but (so far, in the United States) it’s largely bloodless and so we aren’t acknowledging it despite the huge consequences hanging in the balance. It is as real as the Cold War was in the 1960s, and the stakes are staggeringly high: the legitimacy of government, the persistence of societal cohesion, even our ability to respond to the impending climate crisis.
  • Influence operations exploit divisions in our society using vulnerabilities in our information ecosystem. We have to move away from treating this as a problem of giving people better facts, or stopping some Russian bots, and move towards thinking about it as an ongoing battle for the integrity of our information infrastructure – easily as critical as the integrity of our financial markets.
Ed Webb

Can Sci-Fi Writers Prepare Us for an Uncertain Future? | WIRED

  • a growing contingent of sci-fi writers being hired by think tanks, politicians, and corporations to imagine—and predict—the future
  • Harvard Business Review made the corporate case for reading sci-fi years ago, and mega consulting firm PricewaterhouseCoopers published a guide on how to use sci-fi to “explore innovation.” The New Yorker has touted “better business through sci-fi.” As writer Brian Merchant put it, “Welcome to the Sci-Fi industrial complex.”
  • The use of sci-fi has bled into government and public policy spheres. The New America Foundation recently held an all-day event discussing “What Sci-Fi Futures Can (and Can't) Teach Us About AI Policy.” And Nesta, an organization that generates speculative fiction, has committed $24 million to grow “new models of public services” in collaboration with the UK government
  • Some argue that there is power in narrative stories that can’t be found elsewhere. Others assert that in our quest for imagination and prediction, we’re deluding ourselves into thinking that we can predict what’s coming
  • The World Future Society and the Association of Professional Futurists represent a small but growing group of professionals, many of whom have decades of experience thinking about long-term strategy and “scenario planning”—a method used by organizations to try and prepare for possible futures.
  • true Futurism is often pretty unsexy. It involves sifting through a lot of data and research and models and spreadsheets. Nobody is going to write a profile of your company or your government project based on a dry series of models outlining carefully caveated possibilities. On the other hand, worldbuilding—the process of imagining a universe in which your fictional stories can exist—is fun. People want stories, and science fiction writers can provide them.
  • Are those who write epic space operas (no matter how good those space operas might be) really the right people to ask about the future of work or water policy or human rights?
  • critics worry that writers are so good at spinning stories that they might even convince you those stories are true. In actuality, history shows us that predictions are nearly impossible to make and that humans are catastrophically bad at guessing what the future will hold
  • it's important to distinguish between prediction and impact. Did Star Trek anticipate the cell phone, or were the inventors of the cell phone inspired by Star Trek? Listicles of “all the things sci-fi has predicted” are largely exercises in cherry picking—they never include the things that sci-fi got wrong
  • In this line of work, specifics matter. It’s one thing to write a book about a refugee crisis, but quite another to predict exactly how the Syrian refugee crisis unfolded
  • It’s tempting to turn to storytelling in times of crisis—and it’s hard to argue that we’re not in a time of crisis now. Within dystopian pieces of fiction there are heroes and survivors, characters we can identify with who come out the other side and make it out OK. Companies and governments and individuals all want to believe that they will be among those lucky few, the heroes of the story. And science fiction writers can deliver that, for a fee.
Ed Webb

Saudi Crown Prince Asks: What if a City, But It's a 105-Mile Line - 0 views

  • Vicious Saudi autocrat Mohamed bin Salman has a new vision for Neom, his plan for a massive, $500 billion, AI-powered, nominally legally independent city-state of the future on the border with Egypt and Jordan. When we last left the crown prince, he had reportedly commissioned 2,300 pages’ worth of proposals from Boston Consulting Group, McKinsey & Co. and Oliver Wyman boasting of possible amenities like holographic schoolteachers, cloud seeding to create rain, flying taxis, glow-in-the-dark beaches, a giant NASA-built artificial moon, and lots of robots: maids, cage fighters, and dinosaurs.
  • Now Salman has a bold new idea: One of the cities in Neom is a line. A line roughly 105-miles (170-kilometers) long and a five-minute walk wide, to be exact. No, really, it’s a line. The proposed city is a line that stretches across all of Saudi Arabia. That’s the plan.
  • “With zero cars, zero streets, and zero carbon emissions, you can fulfill all your daily requirements within a five minute walk,” the crown prince continued. “And you can travel from end to end within 20 minutes.” The end-to-end in 20 minutes boast likely refers to some form of mass transit that doesn’t yet exist. That works out to a transit system running at about 317 mph (510 kph). That would be much faster than Japan’s famous Shinkansen train network, which is capped at 200 mph (321 kph). Some Japanese rail companies have tested maglev trains that have gone up to 373 mph (600 kph), though it’s nowhere near ready for primetime.
  • ...3 more annotations...
  • According to Bloomberg, Saudi officials project the Line will cost around $100-$200 billion of the $500 billion planned to be spent on Neom and will have a population of 1 million with 380,000 jobs by the year 2030. It will have one of the biggest airports in the world for some reason, which seems like a strange addition to a supposedly climate-friendly city.
  • The site also makes numerous hand-wavy and vaguely menacing claims, including that “all businesses and communities” will have “over 90%” of their data processed by AI and robots:
  • Don’t pay attention to Saudi war crimes in Yemen, the prince’s brutal crackdowns on dissent, the hit squad that tortured journalist Jamal Khashoggi to death, and the other habitual human rights abuses that allow the Saudi monarchy to remain in power. Also, ignore that obstacles facing Neom include budgetary constraints, the forced eviction of tens of thousands of existing residents such as the Huwaitat tribe, coronavirus and oil shock, investor flight over human rights concerns, and the lingering questions of whether the whole project is a distraction from pressing domestic issues and/or a mirage conjured up by consulting firms pandering to the crown prince’s ego and hungry for lucrative fees. Never mind that there are numerous ways we could ensure the cities people already live in are prepared for climate change rather than blowing billions of dollars on a vanity project.
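The article's transit-speed figures are easy to sanity-check. A minimal back-of-envelope sketch, assuming only the stated 170 km (105 mi) length and the claimed 20-minute end-to-end trip:

```python
# Sanity check of the Line's claimed end-to-end transit speed.
# Inputs are the figures quoted in the article; nothing else is assumed.

LINE_LENGTH_KM = 170      # stated length of the Line
TRIP_MINUTES = 20         # claimed end-to-end travel time
KM_PER_MILE = 1.609344

# Average speed required to cover the full length in the claimed time
speed_kph = LINE_LENGTH_KM / (TRIP_MINUTES / 60)
speed_mph = speed_kph / KM_PER_MILE

print(f"Required average speed: {speed_kph:.0f} kph ({speed_mph:.0f} mph)")

# Comparison points from the article
SHINKANSEN_CAP_KPH = 321   # Shinkansen in-service speed cap
MAGLEV_TEST_KPH = 600      # Japanese maglev test runs

print(speed_kph > SHINKANSEN_CAP_KPH)  # faster than any train in service
print(speed_kph < MAGLEV_TEST_KPH)     # below maglev test records only
```

This reproduces the article's ~510 kph / ~317 mph figure and confirms that only experimental maglev systems have exceeded it in testing.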