
Home / Dystopias / Group items matching "machines" in title, tags, annotations or URL

Ed Webb

Piper at the Gates of Hell: An Interview with Cyberpunk Legend John Shirley | Motherboard

    • Ed Webb
       
      City Come A-Walkin' is one of the most punk of the cyberpunk novels and short stories I have ever read, and I have read quite a few...
  • I'll press your buttons here by positing that if "we" (humankind) are too dumb to self-regulate our own childbirth output, too dim to recognize that we are polluting ourselves and our neighbors out of sustainable existence, we are, in fact, a ridiculous parasite on this Earth, and the planet on which we live will simply slough us off—as it well should—and will bounce back without evidence of us ever being here, come two or three thousand years. Your thoughts (in as much detail as you wish)? I would recommend reading my "the next 50 years" piece here. Basically I think that climate change, which in this case genuinely is caused mostly by humanity, is just one part of the environmental problem. Overfishing, toxification of the seas, pesticide use, weedkillers, prescription drugs in water, fracking, continued air pollution, toxicity in food, destruction of animal habitat, attrition on bee colonies—all this is converging. And we'll be facing the consequences for several hundred years.
  • I believe humanity will survive, and it won't be surviving like Road Warrior or the Morlocks from The Time Machine, but I think we'll have some cruelly ugly social consequences. We'll have famines the like of which we've never seen before, along with higher risk of wars—I do predict a third world war in the second half of this century but I don't think it will be a nuclear war—and I think we'll suffer so hugely we'll be forced to have a change in consciousness to adapt.
  • We may end up having to "terraform" the Earth itself, to some extent.
Ed Webb

Artificial intelligence, immune to fear or favour, is helping to make China's foreign policy | South China Morning Post

  • Several prototypes of a diplomatic system using artificial intelligence are under development in China, according to researchers involved or familiar with the projects. One early-stage machine, built by the Chinese Academy of Sciences, is already being used by the Ministry of Foreign Affairs.
  • China’s ambition to become a world leader has significantly increased the burden and challenge to its diplomats. The “Belt and Road Initiative”, for instance, involves nearly 70 countries with 65 per cent of the world’s population. The unprecedented development strategy requires up to a US$900 billion investment each year for infrastructure construction, some in areas with high political, economic or environmental risk.
  • researchers said the AI “policymaker” was a strategic decision support system, with experts stressing that it will be humans who will make any final decision
  • “Human beings can never get rid of the interference of hormones or glucose.”
  • “It would not even consider the moral factors that conflict with strategic goals,”
  • “If one side of the strategic game has artificial intelligence technology, and the other side does not, then this kind of strategic game is almost a one-way, transparent confrontation,” he said. “The actors lacking the assistance of AI will be at an absolute disadvantage in many aspects such as risk judgment, strategy selection, decision making and execution efficiency, and decision-making reliability,” he said.
  • “The entire strategic game structure will be completely out of balance.”
  • “AI can think many steps ahead of a human. It can think deeply in many possible scenarios and come up with the best strategy,”
  • A US Department of State spokesman said the agency had “many technological tools” to help it make decisions. There was, however, no specific information on AI that could be shared with the public.
  • The system, also known as geopolitical environment simulation and prediction platform, was used to vet “nearly all foreign investment projects” in recent years
  • One challenge to the development of an AI policymaker is data sharing among Chinese government agencies. The foreign ministry, for instance, had been unable to get some data sets it needed because of administrative barriers.
  • China is aggressively pushing AI into many sectors. The government is building a nationwide surveillance system capable of identifying any citizen by face within seconds. Research is also under way to introduce AI in nuclear submarines to help commanders make faster, more accurate decisions in battle.
  • “AI can help us get more prepared for unexpected events. It can help find a scientific, rigorous solution within a short time.”
Ed Webb

An Algorithm Summarizes Lengthy Text Surprisingly Well - MIT Technology Review

  • As information overload grows ever worse, computers may become our only hope for handling a growing deluge of documents. And it may become routine to rely on a machine to analyze and paraphrase articles, research papers, and other text for you.
  • Parsing language remains one of the grand challenges of artificial intelligence (see “AI’s Language Problem”). But it’s a challenge with enormous commercial potential. Even limited linguistic intelligence—the ability to parse spoken or written queries, and to respond in more sophisticated and coherent ways—could transform personal computing. In many specialist fields—like medicine, scientific research, and law—condensing information and extracting insights could have huge commercial benefits.
  • The system experiments in order to generate summaries of its own using a process called reinforcement learning. Inspired by the way animals seem to learn, this involves providing positive feedback for actions that lead toward a particular objective. Reinforcement learning has been used to train computers to do impressive new things, like playing complex games or controlling robots (see “10 Breakthrough Technologies 2017: Reinforcement Learning”). Those working on conversational interfaces are increasingly now looking at reinforcement learning as a way to improve their systems.
  • “At some point, we have to admit that we need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,”
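The reinforcement-learning loop described in the excerpt above (reward the actions that move toward an objective) can be illustrated with a toy sketch. This is a generic multi-armed-bandit example with made-up reward values, not the summarization system the article describes:

```python
import random

# Toy reinforcement learning: an agent tries actions and reinforces the
# ones that earn reward, mirroring the "positive feedback" idea above.
def run_bandit(true_rewards, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)   # learned value of each action
    counts = [0] * len(true_rewards)
    for _ in range(steps):
        if rng.random() < epsilon:          # explore occasionally
            action = rng.randrange(len(true_rewards))
        else:                               # otherwise exploit the best estimate
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        reward = true_rewards[action] + rng.gauss(0, 0.1)  # noisy feedback
        counts[action] += 1
        # incremental running-average update of the action's value
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

est = run_bandit([0.2, 0.8, 0.5])
print(max(range(3), key=lambda a: est[a]))  # the agent ends up favouring action 1
```

A real summarizer rewards whole generated summaries (e.g. by similarity to human-written ones) rather than bandit arms, but the feedback loop is the same shape.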
Ed Webb

Sci-Fi Author J.G. Ballard Predicts the Rise of Social Media (1977) | Open Culture

  • Ballard was a brilliant futurist and his dystopian novels and short stories anticipated the 80s cyberpunk of William Gibson, exploring with a twisted sense of humor what Jean-François Lyotard famously dubbed in 1979 The Postmodern Condition: a state of ideological, scientific, personal, and social disintegration under the reign of a technocratic, hypercapitalist, “computerized society.” Ballard had his own term for it: “media landscape,” and his dark visions of the future often correspond to the virtual world we inhabit today.
  • Ballard made several disturbingly accurate predictions in interviews he gave over the decades (collected in a book titled Extreme Metaphors)
  • he gave an interview to I-D magazine in which he predicted the internet as “invisible streams of data pulsing down lines to produce an invisible loom of world commerce and information.” This may not seem especially prescient (see, for example, E.M. Forster’s 1909 “The Machine Stops” for a chilling futuristic scenario much further ahead of its time). But Ballard went on to describe in detail the rise of the YouTube celebrity: “Every home will be transformed into its own TV studio. We'll all be simultaneously actor, director and screenwriter in our own soap opera. People will start screening themselves. They will become their own TV programmes.”
  • ten years earlier, in an essay for Vogue, he described in detail the spread of social media and its totalizing effects on our lives. In the technological future, he wrote, “each of us will be both star and supporting player.” “Every one of our actions during the day, across the entire spectrum of domestic life, will be instantly recorded on video-tape. In the evening we will sit back to scan the rushes, selected by a computer trained to pick out only our best profiles, our wittiest dialogue, our most affecting expressions filmed through the kindest filters, and then stitch these together into a heightened re-enactment of the day. Regardless of our place in the family pecking order, each of us within the privacy of our own rooms will be the star in a continually unfolding domestic saga, with parents, husbands, wives and children demoted to an appropriate supporting role.”
  • this description almost perfectly captures the behavior of the average user of Facebook, Instagram, etc.
  • Ballard wrote a 1977 short story called “The Intensive Care Unit,” in which, writes the site Ballardian, “ordinances are in place to prevent people from meeting in person. All interaction is mediated through personal cameras and TV screens.”
  • “Now everybody can document themselves in a way that was inconceivable 30, 40, 50 years ago,” Ballard notes, “I think this reflects a tremendous hunger among people for ‘reality’—for ordinary reality. It’s very difficult to find the ‘real,’ because the environment is totally manufactured.” Like Jean Baudrillard, another prescient theorist of postmodernity, Ballard saw this loss of the "real" coming many decades ago. As he told I-D in 1987, “in the media landscape it’s almost impossible to separate fact from fiction.”
Ed Webb

How ethical is it for advertisers to target your mood? | Emily Bell | Opinion | The Guardian

  • The effectiveness of psychographic targeting is one bet being made by an increasing number of media companies when it comes to interrupting your viewing experience with advertising messages.
  • “Across the board, articles that were in top emotional categories, such as love, sadness and fear, performed significantly better than articles that were not.”
  • ESPN and USA Today are also using psychographic rather than demographic targeting to sell to advertisers, including, in ESPN’s case, the decision not to show you advertising at all if your team is losing.
  • Media companies using this technology claim it is now possible for the “mood” of the reader or viewer to be tracked in real time and the content of the advertising to be changed accordingly
  • ads targeted at readers based on their predicted moods rather than their previous behaviour improved the click-through rate by 40%.
  • Given that the average click-through rate (the share of ad impressions that anyone actually clicks on) is about 0.4%, this number (in gross terms) is probably less impressive than it sounds.
  • Cambridge Analytica, the company that misused Facebook data and, according to its own claims, helped Donald Trump win the 2016 election, used psychographic segmentation.
  • For many years “contextual” ads served by not very intelligent algorithms were the bane of digital editors’ lives. Improvements in machine learning should help eradicate the horrible business of showing insurance advertising to readers in the middle of an article about a devastating fire.
  • The words “brand safety” are increasingly used by publishers when demonstrating products such as Project Feels. It is a way publishers can compete on micro-targeting with platforms such as Facebook and YouTube by pointing out that their targeting will not land you next to a conspiracy theory video about the dangers of chemtrails.
  • the exploitation of psychographics is not limited to the responsible and transparent scientists at the NYT. While publishers were showing these shiny new tools to advertisers, Amazon was advertising for a managing editor for its surveillance doorbell, Ring, which contacts your device when someone is at your door. An editor for a doorbell, how is that going to work? In all kinds of perplexing ways according to the ad. It’s “an exciting new opportunity within Ring to manage a team of news editors who deliver breaking crime news alerts to our neighbours. This position is best suited for a candidate with experience and passion for journalism, crime reporting, and people management.” So, instead of thinking about crime articles inspiring fear and advertising doorbells in the middle of them, what if you took the fear that the surveillance-device-cum-doorbell inspires and layered a crime reporting newsroom on top of it, to make sure the fear is properly engaging?
  • The media has arguably already played an outsized role in making sure that people are irrationally scared, and now that practice is being strapped to the considerably more powerful engine of an Amazon product.
  • This will not be the last surveillance-based newsroom we see. Almost any product that produces large data feeds can also produce its own “news”. Imagine the Fitbit newsroom or the managing editor for traffic reports from dashboard cams – anything that has a live data feed emanating from it, in the age of the Internet of Things, can produce news.
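The click-through numbers quoted above can be sanity-checked with a little arithmetic. A quick sketch, assuming the article's 0.4% baseline and 40% relative lift, and a hypothetical 10,000-impression campaign:

```python
# Checking the arithmetic quoted above: a 40% relative improvement on a
# typical 0.4% click-through rate is still a very small absolute gain.
baseline_ctr = 0.004          # ~0.4% average click-through rate (from the article)
lift = 0.40                   # 40% relative improvement from mood targeting
improved_ctr = baseline_ctr * (1 + lift)
extra_clicks_per_10k = (improved_ctr - baseline_ctr) * 10_000

print(f"{improved_ctr:.2%}")              # 0.56%
print(round(extra_clicks_per_10k, 1))     # 16.0 extra clicks per 10,000 impressions
```

In other words, the "40% better" headline figure amounts to roughly 16 additional clicks per 10,000 ads served, which is why the gross number is "less impressive than it sounds."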
Ed Webb

I unintentionally created a biased AI algorithm 25 years ago - tech companies are still making the same mistake

  • How and why do well-educated, well-intentioned scientists produce biased AI systems? Sociological theories of privilege provide one useful lens.
  • Their training data is biased. They are designed by an unrepresentative group. They face the mathematical impossibility of treating all categories equally. They must somehow trade accuracy for fairness. And their biases are hiding behind millions of inscrutable numerical parameters.
  • fairness can still be the victim of competitive pressures in academia and industry. The flawed Bard and Bing chatbots from Google and Microsoft are recent evidence of this grim reality. The commercial necessity of building market share led to the premature release of these systems.
  • Scientists also face a nasty subconscious dilemma when incorporating diversity into machine learning models: Diverse, inclusive models perform worse than narrow models.
  • biased AI systems can still be created unintentionally and easily. It’s also clear that the bias in these systems can be harmful, hard to detect and even harder to eliminate.
  • with North American computer science doctoral programs graduating only about 23% female and 3% Black and Latino students, there will continue to be many rooms and many algorithms in which underrepresented groups are not represented at all.
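The "biased training data" mechanism in the excerpts above can be sketched with made-up numbers (the groups, centres, and counts here are hypothetical, not from the article): when one group dominates the training set, a decision rule tuned for overall accuracy can serve the under-represented group noticeably worse.

```python
import random

rng = random.Random(42)

def sample(group, label):
    # Group B's classes sit closer together, so a cutoff fit to group A works poorly for B.
    centre = {("A", 0): 0.0, ("A", 1): 2.0, ("B", 0): 0.5, ("B", 1): 1.2}[(group, label)]
    return centre + rng.gauss(0, 0.3)

# Skewed training set: 900 examples from group A, only 100 from group B.
data = [("A", y, sample("A", y)) for y in (0, 1) for _ in range(450)] \
     + [("B", y, sample("B", y)) for y in (0, 1) for _ in range(50)]

# Pick the single threshold that maximises *overall* accuracy.
best = max((sum((x > t) == bool(y) for _, y, x in data), t)
           for t in [i / 100 for i in range(-100, 300)])
threshold = best[1]

def accuracy(group):
    pts = [(y, x) for g, y, x in data if g == group]
    return sum((x > threshold) == bool(y) for y, x in pts) / len(pts)

print(f"A: {accuracy('A'):.2f}  B: {accuracy('B'):.2f}")  # A scores noticeably higher than B
```

The same qualitative gap appears in far more complex models: the globally "best" parameters are dominated by whoever supplies most of the data, which is one reason the accuracy-for-fairness trade-off mentioned above arises at all.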
Ed Webb

Zoom urged by rights groups to rule out 'creepy' AI emotion tech

  • Human rights groups have urged video-conferencing company Zoom to scrap research on integrating emotion recognition tools into its products, saying the technology can infringe users' privacy and perpetuate discrimination
  • "If Zoom advances with these plans, this feature will discriminate against people of certain ethnicities and people with disabilities, hardcoding stereotypes into millions of devices,"
  • The company has already built tools that purport to analyze the sentiment of meetings based on text transcripts of video calls
  • "This move to mine users for emotional data points based on the false idea that AI can track and analyze human emotions is a violation of privacy and human rights,"