
Artificial Intelligence Research group: items matching "neural" in title, tags, annotations, or URL

mikhail-miguel

Neural.love Art Generator - Artificial Intelligence creates art from your words: 5M+ images made (Neural.love). - 0 views

mikhail-miguel

Neural frames - Transform words into motion with Neural Frames, the Artificial Intelligence text-to-video tool (Neuralframes.com). - 0 views

Matvey Ezhov

On Biological and Digital Intelligence - 0 views

  • In essence, Hawkins argues that, to whatever extent the concept of “consciousness” can’t be boiled down to brain theory, it’s simply a bunch of hooey.
    • Matvey Ezhov
       
      Not true!
  • in which conscious experience is more foundational than physical systems or linguistic communications
  • Conscious experiences are associated with patterns, and patterns are associated with physical systems, but none of these is fully reducible to the other. 
  • He makes the correct point that roughly-human-level AI’s will have dramatically different strengths and weaknesses from human being, due to different sensors and actuators and different physical infrastructures for their cognitive dynamics.  But he doesn’t even touch the notion of self-modifying AI – the concept that once an AI gets smart enough to modify its own code, it’s likely to get exponentially smarter and smarter until it’s left us humans in the dust.
    • Matvey Ezhov
       
      Completely beside the point; Hawkins' approach scales easily to super- and super-super-superhuman intelligence.
  • therefore if AI closely enough emulates the human brain it won’t radically self-modify either
  • Rather, I think the problem is that the field of AI has come to focus on “narrow AI” – programs that solve particularly, narrowly-defined problems – rather than “artificial general intelligence” (AGI). 
  • cognitive science, artificial general intelligence, philosophy of mind and abstract mathematics
    • Matvey Ezhov
       
      So Goertzel admits that he does not take neuroscience into account at all and does not consider it necessary to; in other words, he relies only on empirical notions of how consciousness works.
  • So what we’re doing is creating commercial narrow AI programs, using the software framework that we’re building out with our AGI design in mind.
    • Matvey Ezhov
       
      and this is a major difference from Hawkins' platform, which has the same structure across all of its applications
  • I tend to largely agree with his take on the brain
  • I think he oversimplifies some things fairly seriously – giving them very brief mention when they’re actually quite long and complicated stories.  And some of these omissions, in my view, are not mere “biological details” but are rather points of serious importance for his program of abstracting principles from brain science and then re-concretizing these principles in the context of digital software.
  • One point Hawkins doesn’t really cover is how a mind/brain chooses which predictions to make, from among the many possible predictions that exist.
    • Matvey Ezhov
       
      here he seems to be right...
  • Hawkins proposes that there are neurons or neuronal groups that represent patterns as “tokens,” and that these tokens are then incorporated along with other neurons or neuronal groups into larger groupings representing more abstract patterns.  This seems clearly to be correct, but he doesn’t give much information on how these tokens are supposed to be formed. 
  • So, what’s wrong with Hawkins’ picture of brain function?  Nothing’s exactly wrong with it, so far as I can tell.
  • But Edelman then takes the concept one step further and talks about “neural maps” – assemblies of neuronal groups that carry out particular perception, cognition or action functions.  neural maps, in essence, are sets of neuronal groups that host attractors of neurodynamics.  And Edelman then observes, astutely, that the dynamics of the population of neuronal groups, over time, is likely to obey a form of evolution by natural selection.
  • How fascinating if the brain also operates in this way!
    • Matvey Ezhov
       
      no, not at all... I'm at a loss for words
  • Hawkins argues that creativity is essentially just metaphorical thinking, generalization based on memory.  While this is true in a grand sense, it’s not a very penetrating statement.
  • Evolutionary learning is the most powerful general search mechanism known to computer science, and is also hypothesized by Edelman to underly neural intelligence.  This sort of idea, it seems to me, should be part of any synthetic approach to brain function.
  • Hawkins mentions the notion, and observes correctly that Hebbian learning in the brain is a lot subtler than the simple version that Donald Hebb laid out in the late 40’s.   But he largely portrays these variations as biological details, and then shifts focus to the hierarchical architecture of the cortex. 
  • Hawkins’ critique of AI, which in my view is overly harsh.  He dismisses work on formal logic based reasoning as irrelevant to “real intelligence.” 
  • So – to sum up – I think Hawkins’ statements about brain function are pretty much correct
  • What he omits are, for instance: the way the brain displays evolutionary learning as a consequence of the dynamics of multiple attractors involving sets of neural clusters, and the way the brain may emergently give rise to probabilistic reasoning via the statistical coordination of Hebbian learning
  • Learning of predictive patterns requires an explicit or implicit search through a large space of predictive patterns; evolutionary learning provides one approach to this problem, with computer science foundations and plausible connections to brain function; again, Hawkins does not propose any concrete alternative.
  • crucial question of how far one has to abstract away from brain function, to get to something that can be re-specialized into efficient computer software.  My intuition is that this will require a higher level of abstraction than Hawkins seems to believe.  But I stress that this is a matter of intuitive judgment – neither of us really knows.
  • Of course, to interpret the Novamente design as an “abstraction from the brain” is to interpret this phrase in a fairly extreme sense – we’re abstracting general processes like probabilistic inference and evolutionary learning and general properties like hierarchical structure from the brain, rather than particular algorithms. 
    • Matvey Ezhov
       
      he finally said it
  • Although I’m (unsurprisingly) most psyched about the Novamente approach, I think it’s also quite worthwhile to pursue AGI approaches that are closer to the brain level – there’s a large space between detailed brain simulation and Novamente, including neuron-level simulations, neural-cluster-level simulations, and so forth. 
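
A note on the evolutionary-learning highlights above: the sketch below is a minimal, generic genetic algorithm illustrating evolution as a search mechanism, in the spirit of Edelman's neural Darwinism as Goertzel invokes it. The target "pattern", population size, and mutation rate are all made up for illustration; this is not Novamente's or Edelman's actual algorithm.

```python
# A generic genetic-algorithm toy: a population of candidate patterns is
# scored, the fittest survive, and mutation produces variants. All values
# here are illustrative; this is not Novamente's or Edelman's algorithm.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]   # stand-in for a "predictive pattern" to be discovered

def fitness(candidate):
    """Number of positions where the candidate matches the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

# Random initial population of candidate patterns.
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)      # selection pressure
    if fitness(population[0]) == len(TARGET):
        print(f"target found at generation {generation}")
        break
    survivors = population[:10]                     # the fittest half survives
    offspring = [mutate(random.choice(survivors)) for _ in range(10)]
    population = survivors + offspring              # next generation
```
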
Matvey Ezhov

Python in neuroscience - 1 views

  • Some already exist specifically for neural data analysis and simulation, such as PyMVPA2 and Brian3 respectively.
  • A widely used open-source programming language, Python is becoming the language of choice for neural data analysis and simulation.
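
As a concrete pointer for the excerpt above, here is a minimal spiking-neuron simulation using Brian (version 2), one of the packages the annotation names. The leaky integrate-and-fire model and its parameters are illustrative placeholders, not taken from the bookmarked paper.

```python
# A single leaky integrate-and-fire neuron with a constant drive, simulated
# with Brian 2. Parameter values are assumptions chosen for illustration.
from brian2 import NeuronGroup, SpikeMonitor, run, ms

tau = 10 * ms                                    # membrane time constant (assumed)
eqs = 'dv/dt = (1.2 - v) / tau : 1'              # dimensionless membrane equation

neuron = NeuronGroup(1, eqs, threshold='v > 1', reset='v = 0', method='exact')
spikes = SpikeMonitor(neuron)

run(100 * ms)                                    # simulate 100 ms of activity
print(f"{spikes.num_spikes} spikes recorded")
```
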
Matvey Ezhov

Technology Review: Intelligence Explained (!) - 0 views

  • "Scientists are now able to switch the focus from particular regions of the brain to the connections between those regions," says Sherif Karama, a psychiatrist and a neuroscientist at McGill University's Montreal Neurological Institute.
  • A quantifiable "general intelligence factor," known as g, can be statistically extracted from scores on a battery of intelligence tests.
  • In 2001, Thompson showed that it is correlated with volume in the frontal cortex, a result consistent with a number of studies that have linked intelligence to overall brain size.
  • In 2007, Jung and Richard Haier, now professor emeritus of psychology at the University of California, Irvine, developed the first comprehensive theory drawn from neuroimaging of how the brain gives rise to intelligence.
    • Matvey Ezhov
       
      Attention! To Research.
  • As we "evolved from worms to humans," says George Bartzokis, a professor of psychiatry at UCLA, the number of non-neural cells in the brain increased 50 times more than the number of neurons. He adds, "My hypothesis has always been that what gives us our cognitive capacity is not actually the number of neurons, which can vary tremendously between human individuals, but rather the quality of our connections."
  • The type of MRI typically used for medical scans does not show the finer details of the brain's white matter. But with a technique called diffusion tensor imaging (DTI), which uses the scanner's magnet to track the movement of water molecules in the brain, scientists have developed ways to map out neural wiring in detail. While water moves randomly within most brain tissue, it flows along the insulated neural fibers like current through a wire.
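
As a small illustration of how the anisotropy described above is quantified, the sketch below computes fractional anisotropy (FA), a standard scalar derived from the three eigenvalues of a voxel's diffusion tensor: FA is 0 for isotropic diffusion and approaches 1 when diffusion is strongly directional, as along insulated fibers. The eigenvalues used here are made-up numbers.

```python
# Fractional anisotropy from the three eigenvalues of a diffusion tensor.
# The example eigenvalues are invented for illustration, not real data.
import math

def fractional_anisotropy(l1, l2, l3):
    numerator = math.sqrt((l1 - l2) ** 2 + (l2 - l3) ** 2 + (l3 - l1) ** 2)
    denominator = math.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    return math.sqrt(0.5) * numerator / denominator

print(fractional_anisotropy(1.7, 0.3, 0.2))   # elongated diffusion profile: high FA
print(fractional_anisotropy(0.7, 0.7, 0.7))   # isotropic diffusion: FA = 0.0
```
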
mikhail-miguel

Topaz Video Artificial Intelligence - Unlimited access to production-grade neural networks for video optimization (topazlabs.com). - 0 views

mikhail-miguel

Neuralangelo by NVIDIA - Neuralangelo, a new Artificial Intelligence model by NVIDIA Research for 3D reconstruction using Neural networks (blogs.nvidia.com). - 0 views

Nikolay Sibirtsev

Neural Network for Recognition of Handwritten Digits - CodeProject - 1 views

  • …spending time to backpropagate small errors. In practice, the demo program calculates the error for each pattern. If the error is smaller…
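
The excerpt describes the demo skipping backpropagation when a pattern's error is already small. Below is a hedged sketch of that idea, not the CodeProject author's actual code: a per-pattern error check gates the (expensive) weight update. The tiny linear "network", threshold, and learning rate are invented for illustration.

```python
# Compute each pattern's output error first; only run the update step when
# the error exceeds a threshold. Everything here is a made-up illustration.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 2))      # one linear layer: 10 inputs -> 2 outputs
ERROR_THRESHOLD = 0.05                        # skip the update below this mean squared error
LEARNING_RATE = 0.1

def train_pattern(x, target):
    global W
    output = x @ W                            # forward pass
    error = np.mean((output - target) ** 2)   # per-pattern mean squared error
    if error < ERROR_THRESHOLD:
        return error                          # error already small: skip backpropagation
    grad = np.outer(x, output - target)       # gradient of the squared error w.r.t. W (up to a constant factor)
    W -= LEARNING_RATE * grad                 # weight update
    return error

x, target = rng.normal(size=10), np.array([1.0, 0.0])
print(train_pattern(x, target))
```
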
Matvey Ezhov

Prefrontal cortex - Wikipedia, the free encyclopedia - 0 views

  • Miller and Cohen propose an Integrative Theory of Prefrontal Cortex Function. The two theorize that “cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represents goals and means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task” (Miller & Cohen, 2001). Essentially the two theorize that the prefrontal cortex guides the inputs and connections which allows for cognitive control of our actions.
Matvey Ezhov

PLoS Biology: Towards a Mathematical Theory of Cortical Micro-circuits (about Hawkins' HTM) - 1 views

  • The theoretical setting of hierarchical Bayesian inference is gaining acceptance as a framework for understanding cortical computation.
    • Matvey Ezhov
       
      Statement needs checking
  • Friston recently expanded on this to suggest an inversion method for hierarchical Bayesian dynamic models and to point out that the brain, in principle, has the infrastructure needed to invert hierarchical dynamic models [6].
  • In a recent review, Hegde and Felleman pointed out that the “Bayesian framework is not yet a neural model. [The Bayesian] framework currently helps explain the computations that underlie various brain functions, but not how the brain implements these computations” [2]. This paper is an attempt to fill this gap by deriving a computational model for cortical circuits based on the mathematics of Bayesian belief propagation in the context of a particular Bayesian framework called Hierarchical Temporal Memory (HTM).
  • This paper's other author, George, recognized that the Memory-Prediction framework could be formulated in Bayesian terms and given a proper mathematical foundation [8],[9].
  • Several researchers have proposed detailed models for cortical circuits [10]–[12].
  • Other researchers [4],[13] have proposed detailed mechanisms by which Bayesian belief propagation techniques can be implemented in neurons.
    • Matvey Ezhov
       
      To Nikolay Sibirtsev: this is exactly what you were looking for
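
For readers new to the framework discussed in these highlights, here is a minimal numerical sketch of the core operation behind Bayesian belief propagation: a prior belief over hidden causes (a top-down message) is combined with the likelihood of the observed evidence (a bottom-up message) and renormalized. This is a single Bayes update with made-up numbers, not the HTM message-passing equations derived in the paper.

```python
# One Bayes update over three hypothetical hidden causes.
import numpy as np

prior = np.array([0.5, 0.3, 0.2])         # belief over the causes before the observation
likelihood = np.array([0.1, 0.7, 0.4])    # P(observation | cause) for each cause

unnormalized = prior * likelihood          # combine top-down and bottom-up information
posterior = unnormalized / unnormalized.sum()
print(posterior)                           # the middle cause now dominates the belief
```
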
Matvey Ezhov

Technology Review: Intelligence Explained - page 2 - 1 views

  • In 2007, Jung and Richard Haier, now professor emeritus of psychology at the University of California, Irvine, developed the first comprehensive theory drawn from neuroimaging of how the brain gives rise to intelligence.
    • Matvey Ezhov
       
      we need to find them
  • Applying existing theories of how information flows in the brain, Jung and Haier hypothesized that neural signals travel from nodes near the back of the brain, where sensory data is collected and synthesized, to those in the frontal lobes, which are responsible for decision making and planning. The connections between these nodes, they argued, are just as critical as the nodes themselves.