
Home/ Artificial Intelligence Research/ Group items tagged science


Matvey Ezhov

On Biological and Digital Intelligence - 0 views

  • In essence, Hawkins argues that, to whatever extent the concept of “consciousness” can’t be boiled down to brain theory, it’s simply a bunch of hooey.
    • Matvey Ezhov
       
      Not true!
  • in which conscious experience is more foundational than physical systems or linguistic communications
  • Conscious experiences are associated with patterns, and patterns are associated with physical systems, but none of these is fully reducible to the other. 
  • ...22 more annotations...
  • He makes the correct point that roughly-human-level AI’s will have dramatically different strengths and weaknesses from human beings, due to different sensors and actuators and different physical infrastructures for their cognitive dynamics.  But he doesn’t even touch the notion of self-modifying AI – the concept that once an AI gets smart enough to modify its own code, it’s likely to get exponentially smarter and smarter until it’s left us humans in the dust.
    • Matvey Ezhov
       
      Completely beside the point: Hawkins's approach scales easily to super- and even super-super-superhuman intelligence.
  • therefore if AI closely enough emulates the human brain it won’t radically self-modify either
  • Rather, I think the problem is that the field of AI has come to focus on “narrow AI” – programs that solve particularly, narrowly-defined problems – rather than “artificial general intelligence” (AGI). 
  • cognitive science, artificial general intelligence, philosophy of mind and abstract mathematics
    • Matvey Ezhov
       
      In other words, Goertzel admits that he doesn't consider it necessary to take neuroscience into account at all, i.e. he relies only on empirical notions of how consciousness works.
  • So what we’re doing is creating commercial narrow AI programs, using the software framework that we’re building out with our AGI design in mind.
    • Matvey Ezhov
       
      And this is the big difference from Hawkins's platform, which has the same structure across all of its applications.
  • I tend to largely agree with his take on the brain
  • I think he oversimplifies some things fairly seriously – giving them very brief mention when they’re actually quite long and complicated stories.  And some of these omissions, in my view, are not mere “biological details” but are rather points of serious importance for his program of abstracting principles from brain science and then re-concretizing these principles in the context of digital software.
  • One point Hawkins doesn’t really cover is how a mind/brain chooses which predictions to make, from among the many possible predictions that exist.
    • Matvey Ezhov
       
      Here he seems to be right...
  • Hawkins proposes that there are neurons or neuronal groups that represent patterns as “tokens,” and that these tokens are then incorporated along with other neurons or neuronal groups into larger groupings representing more abstract patterns.  This seems clearly to be correct, but he doesn’t give much information on how these tokens are supposed to be formed. 
  • So, what’s wrong with Hawkins’ picture of brain function?  Nothing’s exactly wrong with it, so far as I can tell.
  • But Edelman then takes the concept one step further and talks about “neural maps” – assemblies of neuronal groups that carry out particular perception, cognition or action functions.  Neural maps, in essence, are sets of neuronal groups that host attractors of neurodynamics.  And Edelman then observes, astutely, that the dynamics of the population of neuronal groups, over time, is likely to obey a form of evolution by natural selection.
  • How fascinating if the brain also operates in this way!
    • Matvey Ezhov
       
      Oh, come on... I'm speechless.
  • Hawkins argues that creativity is essentially just metaphorical thinking, generalization based on memory.  While this is true in a grand sense, it’s not a very penetrating statement.
  • Evolutionary learning is the most powerful general search mechanism known to computer science, and is also hypothesized by Edelman to underlie neural intelligence.  This sort of idea, it seems to me, should be part of any synthetic approach to brain function.
  • Hawkins mentions the notion, and observes correctly that Hebbian learning in the brain is a lot subtler than the simple version that Donald Hebb laid out in the late 1940s.   But he largely portrays these variations as biological details, and then shifts focus to the hierarchical architecture of the cortex. 
  • Hawkins’ critique of AI, which in my view is overly harsh.  He dismisses work on formal logic based reasoning as irrelevant to “real intelligence.” 
  • So – to sum up – I think Hawkins’ statements about brain function are pretty much correct
  • What he omits are, for instance: the way the brain displays evolutionary learning as a consequence of the dynamics of multiple attractors involving sets of neural clusters; and the way the brain may emergently give rise to probabilistic reasoning via the statistical coordination of Hebbian learning.
  • Learning of predictive patterns requires an explicit or implicit search through a large space of predictive patterns; evolutionary learning provides one approach to this problem, with computer science foundations and plausible connections to brain function; again, Hawkins does not propose any concrete alternative.
  • crucial question of how far one has to abstract away from brain function, to get to something that can be re-specialized into efficient computer software.  My intuition is that this will require a higher level of abstraction than Hawkins seems to believe.  But I stress that this is a matter of intuitive judgment – neither of us really knows.
  • Of course, to interpret the Novamente design as an “abstraction from the brain” is to interpret this phrase in a fairly extreme sense – we’re abstracting general processes like probabilistic inference and evolutionary learning and general properties like hierarchical structure from the brain, rather than particular algorithms. 
    • Matvey Ezhov
       
      He finally said it.
  • Although I’m (unsurprisingly) most psyched about the Novamente approach, I think it’s also quite worthwhile to pursue AGI approaches that are closer to the brain level – there’s a large space between detailed brain simulation and Novamente, including neuron-level simulations, neural-cluster-level simulations, and so forth. 
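Goertzel's point that evolutionary learning is a general search mechanism with computer-science foundations can be made concrete with a minimal sketch. The following toy genetic algorithm maximizes an arbitrary "one-max" fitness function (count of 1-bits); the function, parameters, and operators are illustrative stand-ins, not anything from Novamente or Hawkins's actual systems.

```python
import random

def evolve(fitness, genome_len=20, pop_size=50, generations=100, mut_rate=0.02):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and bit-flip mutation over binary genomes."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # Pick two candidates at random; the fitter one reproduces.
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = random.randrange(1, genome_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Flip each bit with small probability (mutation).
            child = [g ^ 1 if random.random() < mut_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy fitness: number of 1-bits ("one-max").
best = evolve(sum)
```

The same select-recombine-mutate loop is what Edelman's "neural Darwinism" analogy maps onto populations of neuronal groups, which is why Goertzel treats its omission as more than a biological detail.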
Matvey Ezhov

Recursive Self-Improvement - The Transhumanist Wiki - 2 views

  • True Artificial Intelligence would bypass problems of biological complexity and ethics, growing up on a substrate ideal for initiating Recursive Self-Improvement. (fully reprogrammable, ultrafast, the AI's "natural habitat".) This Artificial Intelligence would be based upon: 1) our current understanding of the central algorithms of intelligence, 2) our current knowledge of the brain, obtained through high-resolution fMRI and delicate Cognitive Science experiments, and 3) the kind of computing hardware available to AI designers.
  • Humans cannot conduct any of these enhancements to ourselves; the inherent structure of our biology and the limited level of our current technology makes this impossible.
  • Recursive Self-Improvement is the ability of a mind to genuinely improve its own intelligence. This might be accomplished through a variety of means; speeding up one's own hardware, redesigning one's own cognitive architecture for optimal intelligence, adding new components into one's own hardware, custom-designing specialized modules for recurrent tasks, and so on.
  • ...2 more annotations...
  • Unfortunately, the neurological structures corresponding to human intelligence are likely to be highly intricate, delicate, and biologically very complex (unnecessarily so; evolution exhibits no foresight, and most of the brain evolved in the absence of human General Intelligence).
  • 2) advances in Cognitive Science that indicate the complexity of certain brain areas is largely extraneous to intelligence,
    • Matvey Ezhov
       
      A very serious assumption, and it may be wrong. We know that all areas of the cortex participate in forming an individual's model of the world, and therefore of consciousness.
davidjones29

FICC 2018 - Future of Information and Communication Conference (FICC) 2018 - 0 views

  •  
    FICC 2018 aims to provide a forum for researchers from both academia and industry to share their latest research contributions and exchange knowledge with the common goal of shaping the future of Information and Communication. Join us, April 5-6, to explore discovery, progress, and achievements related to Communication, Data Science, Computing and Internet of Things. ficc@saiconference.com saiconference.com/ficc https://groups.diigo.com/group/communication-conference https://youtu.be/7Qw-ovNd7A8
mikhail-miguel

WolframAlpha - Compute expert-level answers in Math, Science, Society, Culture & Everyd... - 0 views

  •  
    Wolfram|Alpha brings expert-level knowledge and capabilities to the broadest possible range of people-spanning all professions and education levels. WolframAlpha: Compute expert-level answers in Math, Science, Society, Culture & Everyday Life (wolframalpha.com).
mikhail-miguel

Coach Marlee - Artificial Intelligence coach using science-based methods to help everyo... - 0 views

  •  
    Coach Marlee: Artificial Intelligence coach using science-based methods to help everyone achieve success (fingerprintforsuccess.com).
thinkahol *

Being No One - The MIT Press - 0 views

  •  
    According to Thomas Metzinger, no such things as selves exist in the world: nobody ever had or was a self. All that exists are phenomenal selves, as they appear in conscious experience. The phenomenal self, however, is not a thing but an ongoing process; it is the content of a "transparent self-model." In Being No One, Metzinger, a German philosopher, draws strongly on neuroscientific research to present a representationalist and functional analysis of what a consciously experienced first-person perspective actually is. Building a bridge between the humanities and the empirical sciences of the mind, he develops new conceptual toolkits and metaphors; uses case studies of unusual states of mind such as agnosia, neglect, blindsight, and hallucinations; and offers new sets of multilevel constraints for the concept of consciousness. Metzinger's central question is: How exactly does strong, consciously experienced subjectivity emerge out of objective events in the natural world? His epistemic goal is to determine whether conscious experience, in particular the experience of being someone that results from the emergence of a phenomenal self, can be analyzed on subpersonal levels of description. He also asks if and how our Cartesian intuitions that subjective experiences as such can never be reductively explained are themselves ultimately rooted in the deeper representational structure of our conscious minds.
thinkahol *

YouTube - Jeff Hawkins on Artificial Intelligence - Part 1/5 - 0 views

  •  
    June 23, 2008 - The founder of Palm, Jeff Hawkins, solves the mystery of Artificial Intelligence and presents his theory at the RSA Conference 2008. He gives a brief tutorial on the neocortex and then explains how the brain stores memory and then describes how to use that knowledge to create artificial intelligence. This lecture is insightful and his theory will revolutionize computer science.
thinkahol *

A: This Computer Could Defeat You at 'Jeopardy!' Q: What is Watson? - 0 views

  •  
    From: PBS NewsHour | February 14, 2011. Read the transcript: http://to.pbs.org/enCpW3. NewsHour science correspondent Miles O'Brien goes head-to-circuit-board with IBM's computer Watson on the game show "Jeopardy!" to explore the limits of language and artificial intelligence for machines.
davidjones29

cfp communication conference - 0 views

FICC 2018 aims to provide a forum for researchers from both academia and industry to share their latest research contributions and exchange knowledge with the common goal of shaping the future of I...

ai science intelligence artificial technology future-technology

started by davidjones29 on 28 Jun 17 no follow-up yet
mikhail-miguel

@ Huggingface.co - 0 views

  •  
    We're on a journey to advance and democratize artificial intelligence through open source and open science.
mikhail-miguel

Artificial Intelligence: A Guide for Thinking Humans - 0 views

  •  
    This program includes an introduction read by the author. No recent scientific enterprise has proved as alluring, terrifying, and filled with extravagant promise and frustrating setbacks as artificial intelligence. The award-winning author Melanie Mitchell, a leading computer scientist, now reveals its turbulent history and the recent surge of apparent successes, grand hopes, and emerging fears that surround AI. In Artificial Intelligence, Mitchell turns to the most urgent questions concerning AI today: How intelligent - really - are the best AI programs? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us? Along the way, she introduces the dominant methods of modern AI and machine learning, describing cutting-edge AI programs, their human inventors, and the historical lines of thought that led to recent achievements. She meets with fellow experts like Douglas Hofstadter, the cognitive scientist and Pulitzer Prize - winning author of the modern classic Gödel, Escher, Bach, who explains why he is "terrified" about the future of AI. She explores the profound disconnect between the hype and the actual achievements in AI, providing a clear sense of what the field has accomplished and how much farther it has to go. Interweaving stories about the science and the people behind it, Artificial Intelligence brims with clear-sighted, captivating, and approachable accounts of the most interesting and provocative modern work in AI, flavored with Mitchell's humor and personal observations. This frank, lively book will prove an indispensable guide to understanding today's AI, its quest for "human-level" intelligence, and its impacts on all of our futures. PLEASE NOTE: When you purchase this title, the accompanying PDF will be available in your Audible Library along with the audio.
Matvey Ezhov

Memristor minds: The future of artificial intelligence - tech - 08 July 2009 - New Scie... - 0 views

  •  
    Memristor minds
Matvey Ezhov

Mapping the brain - MIT news - 2 views

  • To find connectomes, researchers will need to employ vast computing power to process images of the brain. But first, they need to teach the computers what to look for.
  • to manually trace connections between neurons
  • want to speed up the process dramatically by enlisting the help of high-powered computers.
  • ...5 more annotations...
  • To do that, they are teaching the computers to analyze the brain slices, using a common computer science technique called automated machine learning, which allows computers to change their behavior in response to new data.
  • With machine learning, the researchers teach computers to learn by example. They feed their computer electron micrographs as well as human tracings of these images. The computer then searches for an algorithm that allows it to imitate human performance.
  • Their eventual goal is to use computers to process the bulk of the images needed to create connectomes, but they expect that humans will still need to proofread the computers’ work.
  • Last year, the National Institutes of Health announced a five-year, $30 million Human Connectome Project to develop new techniques to figure out the connectivity of the human brain. That project is focused mainly on higher level, region-to-region connections. Sporns says he believes that a good draft of higher-level connections could be achieved within the five-year timeline of the NIH project, and that significant progress will also be made toward a neuron-to-neuron map.
    • Matvey Ezhov
       
      draft of human connectome within five years
  • Though only a handful of labs around the world are working on the connectome right now, Jain and Turaga expect that to change as tools for diagramming the brain improve. “It’s a common pattern in neuroscience: A few people will come up with new technology and pioneer some applications, and then everybody else will start to adopt it,” says Jain.
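The machine-learning step described above — human tracings serving as labeled examples, with the computer searching for a rule that imitates the human tracer — can be sketched in miniature. This toy version uses synthetic stand-in "patches" and plain logistic regression; the data generator and all parameters are invented for illustration and do not reflect the labs' actual pipelines.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=200, d=16):
    """Synthetic stand-in for (micrograph patch, human label) pairs:
    'membrane' patches are simply brighter on average."""
    labels = rng.integers(0, 2, n)
    patches = rng.normal(loc=labels[:, None] * 1.0, scale=1.0, size=(n, d))
    return patches, labels

def train_logistic(X, y, lr=0.1, steps=500):
    """Fit a logistic-regression pixel classifier by gradient descent."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted membrane probability
        w -= lr * X.T @ (p - y) / len(y)          # gradient of the log-loss
        b -= lr * np.mean(p - y)
    return w, b

X, y = make_data()              # "human tracings" as training labels
w, b = train_logistic(X, y)
pred = (X @ w + b) > 0          # classifier imitating the human tracer
accuracy = np.mean(pred == y)
```

In the real setting the inputs are electron micrographs and the models are far richer, but the structure is the same: the algorithm is chosen by how well it reproduces human performance, with humans retained to proofread the output.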