
Mindamp: Group items tagged "scenarios"


Charles van der Haegen

Internet Society (ISOC) All About The Internet: Legal Guide - 1 views

  •  
    "About the Internet: ISOC has gathered links and resources on a range of important Internet topics. Please explore this collection of background material.
    Internet Scenarios: The Internet Society has developed four future Internet scenarios that reveal plausible courses of events that could impact the health of the Internet in the future.
    Histories of the Internet: All about the history of the Internet, with articles from various organisations and personalities.
    Guide to Internet Law: The Internet Society provides this guide as a public service for all interested parties. The guide offers links to many useful legal research sites on the Internet, along with brief descriptions.
    Market Research/Statistics: Statistics, surveys, and market research regarding the Internet.
    Internet Infrastructure: Descriptions of the Internet's infrastructure. How is the Internet organised? What are the bodies involved at different levels?
    Internet Code of Conduct: Guidelines on conduct and use of the Internet."
David McGavock

Do Babies Have a Moral Compass? Debate Heats Up | LiveScience - 0 views

  • In the original study, conducted by Yale researchers in 2007, groups of 6-month-olds and 10-month-olds watched a puppet show with neutral wooden figures, where one figure, the climber, was trying to get up a hill. In one scenario, one of the other figures, called the helper, assisted the climber up the hill. In the other scenario, a third figure, called the hinderer, pushed the climber down. Babies were then presented with the helper and hinderer figures so they could pick which one they preferred, and 14 out of 16 babies in the older group (10 months old) and all 12 of the 6-month-olds picked the helper. The study, which was published in the journal Nature, seemed to imply that infants could be good judges of character.
  • discrepancies would seem to make it tricky for infants to know that the climber needed help, and if they did, for them to know that the helper was helping. As such, it's possible the infants in the new study looked to these other variables (collisions and bounces) to make their decisions, Hamlin suggests.
  • Even if flaws did exist in their study, Hamlin and her colleagues point to various independent studies, one of which uses a similar setup without the "bouncing" of the climber, that support the "babies have a moral compass" theory. The researchers go on to note they have replicated their findings, that infants prefer prosocial others, in a range of social scenarios that don't include climbing, colliding or bouncing. Hamlin's other studies have shown babies are good judges of character.
  • ...1 more annotation...
  • "On the help and hinder trials, the toys collided with one another, an event we thought infants may not like," lead researcher Damian Scarf said in a statement from New Zealand's University of Otago. "Furthermore, only on the help trials, the climber bounced up and down at the top of the hill, an event we thought infants may enjoy."
  •  
    "An experiment five years ago suggested that babies are equipped with an innate moral compass, which drives them to choose good individuals over the bad in a wooden puppet show. But new research casts doubt on those findings, demonstrating that a baby's apparent preference for what's right might just reflect a fondness for bouncy things."
Charles van der Haegen

TEDx Duke Talk is Posted! « Learning Matters! Tony O'Driscoll - 2 views

  •  
    Here is the talk I gave at TEDx Duke: "Preparing Your Children for a World You Can Barely Imagine." Some of the images are a bit pixelated, but you can get a copy of the slides in my previous post.
  •  
    I viewed this presentation again... Love it so much... Wanted to share my admiration and joy with you... It's all about how we can manage Jerk with MindAmp.
David McGavock

The Myth Of AI | Edge.org - 1 views

  • what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.
  • it adds a layer of religious thinking to what otherwise should be a technical field.
  • we can talk about pattern classification.
  • ...38 more annotations...
  • But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous
  • I'm going to go through a couple of layers of how the mythology does harm.
  • this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
  • If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.
  • our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on
  • people often accept that
  • all this overpromising that AIs will be about to do this or that. It might be to become fully autonomous driving vehicles instead of only partially autonomous, or it might be being able to fully have a conversation as opposed to only having a useful part of a conversation to help you interface with the device.
  • other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good.
  • there's no way to tell where the border is between measurement and manipulation in these systems.
  • if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.
  • it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened
  • What's not clear is where the boundary is.
  • If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There's no way to know.
  • we don't know to what degree they're measurement versus manipulation.
  • If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore
  • not so much a rise of evil as a rise of nonsense.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth.
  • Cortana or a Siri
  • This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable
    • David McGavock
       
      Key relationship between automation of tasks, downsides, and expectation for AI
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous.
  • It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything.
  • I'm going to give you two scenarios.
  • let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.
  • some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it.
    • David McGavock
       
      Another key - focus on the actuator, not the agent that exploits it.
  • the part that causes the problem is the actuator. It's the interface to physicality
  • not so much whether it's a bunch of teenagers or terrorists behind it or some AI
  • The sad fact is that, as a society, we have to do something to not have little killer drones proliferate.
  • What we don't have to worry about is the AI algorithm running them, because that's speculative.
  • another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people.
  • There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science,
  • You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist.
  • To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.
  • The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity.
  • In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
    • David McGavock
       
      Technical priesthood.
  • If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.
  • A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.
  •  
    "The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture (the most wealthy, prolific, and influential subculture in the technical world) that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us."