Home/ Mindamp/ Group items tagged augmentation

David McGavock

New Device Allows Brain to Bypass Spinal Cord, Move Paralyzed Limbs - ScienceNewsline

  • For the first time ever, a paralyzed man can move his fingers and hand with his own thoughts thanks to an innovative partnership between The Ohio State University Wexner Medical Center and Battelle.
  • “It’s much like a heart bypass, but instead of bypassing blood, we’re actually bypassing electrical signals,” said Chad Bouton, research leader at Battelle. “We’re taking those signals from the brain, going around the injury, and actually going directly to the muscles.”
    • David McGavock
       
      Like bypass
  • During a three-hour surgery on April 22, Rezai implanted a chip smaller than a pea onto the motor cortex of Burkhart’s brain. The tiny chip interprets brain signals and sends them to a computer, which recodes and sends them to the high-definition electrode stimulation sleeve that stimulates the proper muscles to execute his desired movements. Within a tenth of a second, Burkhart’s thoughts are translated into action.
  • ...3 more annotations...
  • Ian’s brain signals bypass his injured spinal cord and move his hand, hence the name Neurobridge.
  • Battelle also developed a non-invasive neurostimulation technology in the form of a wearable sleeve that allows for precise activation of small muscle segments in the arm to enable individual finger movement, along with software that forms a ‘virtual spinal cord’ to allow for coordination of dynamic hand and wrist movements.
  • As part of the study, Burkhart worked for months using the electrode sleeve to stimulate his forearm to rebuild his atrophied muscles so they would be more responsive to the electric stimulation.
  •  
    Example of innovation in technology and biology
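The annotations above describe a three-stage pipeline: a cortical chip reads brain signals, a computer decodes and recodes them, and an electrode sleeve stimulates the right muscles, all within a tenth of a second. A minimal sketch of that decode-and-stimulate loop is below. Everything here is invented for illustration (the feature vectors, templates, and electrode names are hypothetical); the real Neurobridge decoder is far more sophisticated.

```python
# Illustrative "neural bypass" pipeline: decode a motor-cortex feature
# vector into an intended movement, then map that intent to a stimulation
# pattern for a muscle-stimulation sleeve. All names and numbers are
# hypothetical placeholders, not the actual Neurobridge design.

def decode_intent(signal_window):
    """Toy decoder: nearest-template matching on a 3-element feature vector."""
    templates = {
        "open_hand":  [1.0, 0.0, 0.0],
        "close_hand": [0.0, 1.0, 0.0],
        "rest":       [0.0, 0.0, 1.0],
    }
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda k: dist(signal_window, templates[k]))

def stimulation_pattern(intent):
    """Toy mapping from decoded intent to sleeve electrode activations."""
    patterns = {
        "open_hand":  {"extensors": 0.8, "flexors": 0.0},
        "close_hand": {"extensors": 0.0, "flexors": 0.8},
        "rest":       {"extensors": 0.0, "flexors": 0.0},
    }
    return patterns[intent]

# One pass through the bypass: brain signal in, muscle stimulation out.
window = [0.1, 0.9, 0.05]        # pretend cortical feature vector
intent = decode_intent(window)   # -> "close_hand"
print(intent, stimulation_pattern(intent))
```

In the real system this loop would run continuously against streaming neural data, with the decoding step trained per patient rather than hard-coded.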
David McGavock

The Myth Of AI | Edge.org

  • what I'm proposing is that if AI was a real thing, then it probably would be less of a threat to us than it is as a fake thing.
  • it adds a layer of religious thinking to what otherwise should be a technical field.
  • we can talk about pattern classification.
  • ...38 more annotations...
  • But when you add to it this religious narrative that's a version of the Frankenstein myth, where you say well, but these things are all leading to a creation of life, and this life will be superior to us and will be dangerous
  • I'm going to go through a couple of layers of how the mythology does harm.
  • this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
  • If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that.
  • our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on
  • people often accept that
  • all this overpromising that AIs will be about to do this or that. It might be to become fully autonomous driving vehicles instead of only partially autonomous, or it might be being able to fully have a conversation as opposed to only having a useful part of a conversation to help you interface with the device.
  • other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good.
  • there's no way to tell where the border is between measurement and manipulation in these systems.
  • if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.
  • it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened
  • What's not clear is where the boundary is.
  • If you ask: is a recommendation engine like Amazon more manipulative, or more of a legitimate measurement device? There's no way to know.
  • we don't know to what degree they're measurement versus manipulation.
  • If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore
  • not so much a rise of evil as a rise of nonsense.
  • because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth.
  • Cortana or a Siri
  • This pattern—of AI only working when there's what we call big data, but then using big data in order to not pay large numbers of people who are contributing—is a rising trend in our civilization, which is totally non-sustainable
    • David McGavock
       
      Key relationship between automation of tasks, downsides, and expectation for AI
  • If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I've just gone over, which include acceptance of bad user interfaces, where you can't tell if you're being manipulated or not, and everything is ambiguous.
  • It creates incompetence, because you don't know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you're gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work.
  • I'm going to give you two scenarios.
  • let's suppose somebody comes up with a way to 3-D print a little assassination drone that can go buzz around and kill somebody. Let's suppose that these are cheap to make.
  • Having said all that, let's address directly this problem of whether AI is going to destroy civilization and people, and take over the planet and everything.
  • some disaffected teenagers, or terrorists, or whoever start making a bunch of them, and they go out and start killing people randomly
  • This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there's some actuator that can do harm, we have to figure out some way that people don't do harm with it.
    • David McGavock
       
      Another key - focus on the actuator, not the agent that exploits it.
  • the part that causes the problem is the actuator. It's the interface to physicality
  • not so much whether it's a bunch of teenagers or terrorists behind it or some AI
  • The sad fact is that, as a society, we have to do something to not have little killer drones proliferate.
  • What we don't have to worry about is the AI algorithm running them, because that's speculative.
  • another one where there's so-called artificial intelligence, some kind of big data scheme, that's doing exactly the same thing, that is self-directed and taking over 3-D printers, and sending these things off to kill people.
  • There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science,
  • You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist.
  • To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world.
  • The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity.
  • In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
    • David McGavock
       
      Technical priesthood.
  • If AI means this mythology of this new creature we're creating, then it's just a stupid mess that's confusing everybody, and harming the future of the economy. If what we're talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I'm very interested, and I'm very much a participant in the community that's improving those things.
  • A lot of people in the religious world are just great, and I respect and like them. That goes hand-in-hand with my feeling that some of the mythology in big religion still leads us into trouble that we impose on ourselves and don't need.
  •  
    "The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There's always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn't be a program. There has been a domineering subculture—that's been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there's an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we're inevitably making computers that will be smarter and better than us and will take over from us."
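Lanier's "momentum" worry from the highlights above — a recommender trained on clicks it induced itself, never going back to a virgin population — can be made concrete with a toy simulation. The numbers are invented for illustration: per simulated step, 7 of every 10 users simply accept the recommendation and 3 follow their true preference.

```python
# Toy model of the recommendation-momentum problem: an engine whose
# training data consists largely of clicks it induced itself. All
# parameters are invented for illustration.

ITEMS = ["a", "b", "c"]
TRUE_PREF = "b"        # what a "virgin", un-manipulated population would pick
counts = {item: 1 for item in ITEMS}   # engine's click counts (smoothed)

def recommend(counts):
    """Recommend the item with the most recorded clicks."""
    return max(counts, key=counts.get)

for _ in range(1000):
    rec = recommend(counts)
    counts[rec] += 7        # 7 of every 10 users accept the recommendation
    counts[TRUE_PREF] += 3  # 3 of every 10 follow their true preference

# Item "a" ends up dominating purely through early momentum, even though
# every user's underlying preference is "b" — and the final counts alone
# cannot tell you how much popularity the engine manufactured itself.
print(counts)   # {'a': 7001, 'b': 3001, 'c': 1}
```

This is the measurement-versus-manipulation ambiguity in miniature: once the engine's own output feeds its input, "which manipulations work" and "what people actually want" become indistinguishable in the data.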
David McGavock

Tip for Getting More Organized: Don't - Michael Schrage - Harvard Business Review

  • When it comes to investing time, thought and effort into productively organizing oneself, less is more. In fact, not only is less more, research suggests it may be faster, better and cheaper.
  • IBM researchers observed that email users who “searched” rather than set up files and folders for their correspondence typically found what they were looking for faster and with fewer errors. Time and overhead associated with creating and managing email folders were, effectively, a waste.
  • The personal productivity issue knowledge workers and effective executives need to ponder is whether habits of efficiency that once improved performance have decayed into mindless ruts that delay or undermine desired outcomes.
  • ...6 more annotations...
  • what would really prove more personally productive — folders that sort 15% faster? Or key phrase search capabilities that were 20% better?
  • Ongoing improvement in email/document/desktop and cloud-centric search frees them from legacy information management behaviors like filing.
  • They’re “organizing” for flexibility, adaptiveness and immediate response. More accurately, their technologies exist to give them greater speed and flexibility. Their personal organizational ethos reflects a Toyota Production System “just-in-time” attitude.
  • Instead of better tools for better organizing, people want their organization done for them. Organizing is wasteful; getting its benefits is productivity.
  • They want what I’ve described earlier as “promptware” — a cue and intervention that creates measurable value in the moment, rather than promised efficiencies in the future.
  • We’ll likely get more done better if we give less time and thought to organization and greater reflection and care to desired outcomes. Our job today and tomorrow isn’t to organize ourselves better; it’s to get the right technologies that respond to our personal productivity needs. It’s not that we’re becoming too dependent on our technologies to organize us; it’s that we haven’t become dependent enough.
  •  
    Suggests that we use just-in-time features built into our smart devices rather than take time to manually organize files and folders.
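The IBM finding cited above — that searching beats filing — comes down to replacing a hand-maintained folder hierarchy with one flat list and a keyword search. A minimal sketch, with made-up messages, of what that flat-search approach looks like:

```python
# Minimal illustration of search-over-folders: a flat message list plus a
# keyword search stands in for a hand-maintained folder hierarchy. The
# messages are invented examples.

emails = [
    {"subject": "Q3 budget review", "body": "numbers attached"},
    {"subject": "Team offsite", "body": "vote on dates"},
    {"subject": "Budget follow-up", "body": "revised numbers"},
]

def search(messages, phrase):
    """Return messages whose subject or body contains the phrase."""
    phrase = phrase.lower()
    return [m for m in messages
            if phrase in m["subject"].lower() or phrase in m["body"].lower()]

hits = search(emails, "budget")
print([m["subject"] for m in hits])  # ['Q3 budget review', 'Budget follow-up']
```

The point of the article is that maintaining the folder structure costs time up front, while a query like this costs nothing until the moment you actually need the message.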