
Matvey Ezhov

PLoS Computational Biology: Qualia: The Geometry of Integrated Information - 1 views

  •  
    According to the integrated information theory, the quantity of consciousness is the amount of integrated information generated by a complex of elements, and the quality of experience is specified by the informational relationships it generates. This paper outlines a framework for characterizing the informational relationships generated by such systems. Qualia space (Q) is a space having an axis for each possible state (activity pattern) of a complex. Within Q, each submechanism specifies a point corresponding to a repertoire of system states. Arrows between repertoires in Q define informational relationships. Together, these arrows specify a quale: a shape that completely and univocally characterizes the quality of a conscious experience. Φ, the height of this shape, is the quantity of consciousness associated with the experience. Entanglement measures how irreducible informational relationships are to their component relationships, specifying concepts and modes. Several corollaries follow from these premises. The quale is determined by both the mechanism and the state of the system. Thus, two different systems having identical activity patterns may generate different qualia. Conversely, the same quale may be generated by two systems that differ in both activity and connectivity. Both active and inactive elements specify a quale, but elements that are inactivated do not. Also, the activation of an element affects experience by changing the shape of the quale. The subdivision of experience into modalities and submodalities corresponds to subshapes in Q. In principle, different aspects of experience may be classified as different shapes in Q, and the similarity between experiences reduces to similarities between shapes. Finally, specific qualities, such as the "redness" of red, while generated by a local mechanism, cannot be reduced to it, but require considering the entire quale. Ultimately, the present framework may offer a principled way for translating qualia…
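The "repertoires" the abstract describes are probability distributions over the possible states of a complex, and the informational relationship carried by an "arrow" in Q can be quantified as the relative entropy between two repertoires. A loose numerical sketch of that intuition, with distributions invented for illustration rather than taken from the paper:

```python
import math

def kl_divergence(p, q):
    """Relative entropy D(p || q) in bits between two repertoires
    (probability distributions over the same set of system states)."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A complex with 4 possible states (activity patterns).
# The maximum-entropy repertoire: no mechanism, all states equally likely.
uniform = [0.25, 0.25, 0.25, 0.25]
# A hypothetical repertoire specified by some submechanism: it rules out
# two states and concentrates probability on the remaining two.
actual = [0.7, 0.3, 0.0, 0.0]

# The "length" of the arrow from the uniform repertoire to the actual one:
# how much information the mechanism generates by being in its state.
print(kl_divergence(actual, uniform))  # ≈ 1.12 bits
```

This is only the entry point of the framework; the paper's Φ involves how irreducible such relationships are to their components, which this toy does not attempt.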
mikhail-miguel

The AI Product Manager's Handbook (+Free PDF Ed.) - 0 views

  •  
    Master the skills required to become an AI product manager and drive the successful development and deployment of AI products to deliver value to your organization. Purchase of the print or Kindle book includes a free PDF eBook. Key Features Build products that leverage AI for the common good and commercial success Take macro data and use it to show your customers you're a source of truth Best practices and common pitfalls that impact companies while developing AI products Book Description Product managers working with artificial intelligence will be able to put their knowledge to work with this practical guide to applied AI. This book covers everything you need to know to drive product development and growth in the AI industry. From understanding AI and machine learning to developing and launching AI products, it provides the strategies, techniques, and tools you need to succeed. The first part of the book focuses on establishing a foundation of the concepts most relevant to maintaining AI pipelines. The next part focuses on building an AI-native product, and the final part guides you in integrating AI into existing products. You'll learn about the types of AI, how to integrate AI into a product or business, and the infrastructure to support the exhaustive and ambitious endeavor of creating AI products or integrating AI into existing products. You'll gain practical knowledge of managing AI product development processes, evaluating and optimizing AI models, and navigating complex ethical and legal considerations associated with AI products. With the help of real-world examples and case studies, you'll stay ahead of the curve in the rapidly evolving field of AI and ML. By the end of this book, you'll have understood how to navigate the world of AI from a product perspective. What you will learn Build AI products for the future using minimal resources Identify opportunities where AI can be leveraged to meet business needs Collaborate with cross-functional…
mikhail-miguel

Artificial Intelligence: A Guide for Thinking Humans - 0 views

  •  
    This program includes an introduction read by the author. No recent scientific enterprise has proved as alluring, terrifying, and filled with extravagant promise and frustrating setbacks as artificial intelligence. The award-winning author Melanie Mitchell, a leading computer scientist, now reveals its turbulent history and the recent surge of apparent successes, grand hopes, and emerging fears that surround AI. In Artificial Intelligence, Mitchell turns to the most urgent questions concerning AI today: How intelligent, really, are the best AI programs? How do they work? What can they actually do, and when do they fail? How humanlike do we expect them to become, and how soon do we need to worry about them surpassing us? Along the way, she introduces the dominant methods of modern AI and machine learning, describing cutting-edge AI programs, their human inventors, and the historical lines of thought that led to recent achievements. She meets with fellow experts like Douglas Hofstadter, the cognitive scientist and Pulitzer Prize-winning author of the modern classic Gödel, Escher, Bach, who explains why he is "terrified" about the future of AI. She explores the profound disconnect between the hype and the actual achievements in AI, providing a clear sense of what the field has accomplished and how much farther it has to go. Interweaving stories about the science and the people behind it, Artificial Intelligence brims with clear-sighted, captivating, and approachable accounts of the most interesting and provocative modern work in AI, flavored with Mitchell's humor and personal observations. This frank, lively book will prove an indispensable guide to understanding today's AI, its quest for "human-level" intelligence, and its impacts on all of our futures. PLEASE NOTE: When you purchase this title, the accompanying PDF will be available in your Audible Library along with the audio.
Matvey Ezhov

On Biological and Digital Intelligence - 0 views

  • In essence, Hawkins argues that, to whatever extent the concept of “consciousness” can’t be boiled down to brain theory, it’s simply a bunch of hooey.
    • Matvey Ezhov
       
      Not true!
  • in which conscious experience is more foundational than physical systems or linguistic communications
  • Conscious experiences are associated with patterns, and patterns are associated with physical systems, but none of these is fully reducible to the other. 
  • ...22 more annotations...
  • He makes the correct point that roughly-human-level AIs will have dramatically different strengths and weaknesses from human beings, due to different sensors and actuators and different physical infrastructures for their cognitive dynamics.  But he doesn't even touch the notion of self-modifying AI – the concept that once an AI gets smart enough to modify its own code, it's likely to get exponentially smarter and smarter until it's left us humans in the dust.
    • Matvey Ezhov
       
      Completely irrelevant to the point: Hawkins's approach scales easily to super- and even super-super-superhuman intelligence.
  • therefore if AI closely enough emulates the human brain it won’t radically self-modify either
  • Rather, I think the problem is that the field of AI has come to focus on "narrow AI" – programs that solve particular, narrowly defined problems – rather than "artificial general intelligence" (AGI). 
  • cognitive science, artificial general intelligence, philosophy of mind and abstract mathematics
    • Matvey Ezhov
       
      In other words, Goertzel admits that he does not, and does not consider it necessary to, take neuroscience into account; i.e., he relies only on empirical notions of how consciousness works.
  • So what we’re doing is creating commercial narrow AI programs, using the software framework that we’re building out with our AGI design in mind.
    • Matvey Ezhov
       
      And this is his big difference from Hawkins's platform, which has the same structure across all of its applications.
  • I tend to largely agree with his take on the brain
  • I think he oversimplifies some things fairly seriously – giving them very brief mention when they’re actually quite long and complicated stories.  And some of these omissions, in my view, are not mere “biological details” but are rather points of serious importance for his program of abstracting principles from brain science and then re-concretizing these principles in the context of digital software.
  • One point Hawkins doesn’t really cover is how a mind/brain chooses which predictions to make, from among the many possible predictions that exist.
    • Matvey Ezhov
       
      Here he does seem to be right...
  • Hawkins proposes that there are neurons or neuronal groups that represent patterns as “tokens,” and that these tokens are then incorporated along with other neurons or neuronal groups into larger groupings representing more abstract patterns.  This seems clearly to be correct, but he doesn’t give much information on how these tokens are supposed to be formed. 
  • So, what’s wrong with Hawkins’ picture of brain function?  Nothing’s exactly wrong with it, so far as I can tell.
  • But Edelman then takes the concept one step further and talks about “neural maps” – assemblies of neuronal groups that carry out particular perception, cognition or action functions.  Neural maps, in essence, are sets of neuronal groups that host attractors of neurodynamics.  And Edelman then observes, astutely, that the dynamics of the population of neuronal groups, over time, is likely to obey a form of evolution by natural selection.
  • How fascinating if the brain also operates in this way!
    • Matvey Ezhov
       
      No way... I'm at a loss for words.
  • Hawkins argues that creativity is essentially just metaphorical thinking, generalization based on memory.  While this is true in a grand sense, it’s not a very penetrating statement.
  • Evolutionary learning is the most powerful general search mechanism known to computer science, and is also hypothesized by Edelman to underlie neural intelligence.  This sort of idea, it seems to me, should be part of any synthetic approach to brain function.
  • Hawkins mentions the notion, and observes correctly that Hebbian learning in the brain is a lot subtler than the simple version that Donald Hebb laid out in the late 40’s.   But he largely portrays these variations as biological details, and then shifts focus to the hierarchical architecture of the cortex. 
  • Hawkins’ critique of AI, which in my view is overly harsh.  He dismisses work on formal logic based reasoning as irrelevant to “real intelligence.” 
  • So – to sum up – I think Hawkins’ statements about brain function are pretty much correct
  • What he omits are, for instance: the way the brain displays evolutionary learning as a consequence of the dynamics of multiple attractors involving sets of neural clusters, and the way the brain may emergently give rise to probabilistic reasoning via the statistical coordination of Hebbian learning.
  • Learning of predictive patterns requires an explicit or implicit search through a large space of predictive patterns; evolutionary learning provides one approach to this problem, with computer science foundations and plausible connections to brain function; again, Hawkins does not propose any concrete alternative.
  • crucial question of how far one has to abstract away from brain function, to get to something that can be re-specialized into efficient computer software.  My intuition is that this will require a higher level of abstraction than Hawkins seems to believe.  But I stress that this is a matter of intuitive judgment – neither of us really knows.
  • Of course, to interpret the Novamente design as an “abstraction from the brain” is to interpret this phrase in a fairly extreme sense – we’re abstracting general processes like probabilistic inference and evolutionary learning and general properties like hierarchical structure from the brain, rather than particular algorithms. 
    • Matvey Ezhov
       
      He finally said it.
  • Although I’m (unsurprisingly) most psyched about the Novamente approach, I think it’s also quite worthwhile to pursue AGI approaches that are closer to the brain level – there’s a large space between detailed brain simulation and Novamente, including neuron-level simulations, neural-cluster-level simulations, and so forth. 
Matvey Ezhov

Is this a unified theory of the brain? (Bayesian theory in New Scientist) - 1 views

  • Neuroscientist Karl Friston and his colleagues have proposed a mathematical law that some are claiming is the nearest thing yet to a grand unified theory of the brain. From this single law, Friston’s group claims to be able to explain almost everything about our grey matter.
  • Friston’s ideas build on an existing theory known as the “Bayesian brain”, which conceptualises the brain as a probability machine that constantly makes predictions about the world and then updates them based on what it senses.
  • A crucial element of the approach is that the probabilities are based on experience, but they change when relevant new information, such as visual information about the object’s location, becomes available. “The brain is an inferential agent, optimising its models of what’s going on at this moment and in the future,” says Friston. In other words, the brain runs on Bayesian probability.
  • ...6 more annotations...
  • “In short, everything that can change in the brain will change to suppress prediction errors, from the firing of neurons to the wiring between them, and from the movements of our eyes to the choices we make in daily life,” he says.
  • Friston created a computer simulation of the cortex with layers of “neurons” passing signals back and forth. Signals going from higher to lower levels represent the brain’s internal predictions, while signals going the other way represent sensory input. As new information comes in, the higher neurons adjust their predictions according to Bayesian theory.
  • Volunteers watched two sets of moving dots, which sometimes moved in synchrony and at others more randomly, to change the predictability of the stimulus. The patterns of brain activity matched Friston’s model of the visual cortex reasonably well.
  • Friston’s results have earned praise for bringing together so many disparate strands of neuroscience. “It is quite certainly the most advanced conceptual framework regarding an application of these ideas to brain function in general,” says Wennekers. Marsel Mesulam, a cognitive neurologist from Northwestern University in Chicago, adds: “Friston’s work is pivotal. It resonates entirely with the sort of model that I would like to see emerge.”
  • “The final equation you write on a T-shirt will be quite simple,” Friston predicts.
  • There’s work still to be done, but for now Friston’s is the most promising approach we’ve got. “It will take time to spin off all of the consequences of the theory – but I take that property as a sure sign that this is a very important theory,” says Dehaene. “Most other models, including mine, are just models of one small aspect of the brain, very limited in their scope. This one falls much closer to a grand theory.”
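The "Bayesian brain" machinery the article describes, priors built from experience and updated as new sensory evidence arrives, is the repeated application of Bayes' rule. A minimal sketch, with hypotheses and likelihoods invented purely for illustration:

```python
def bayes_update(prior, likelihood):
    """Update beliefs over hypotheses given the likelihood of new evidence."""
    posterior = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Prior belief about where an object is (learned from experience).
belief = {"left": 0.5, "right": 0.5}
# New visual evidence: a glimpse that is 4x more likely if the object is left.
glimpse = {"left": 0.8, "right": 0.2}

belief = bayes_update(belief, glimpse)
print(belief)   # left: 0.8, right: 0.2
# A second, identical glimpse sharpens the prediction further.
belief = bayes_update(belief, glimpse)
print(belief)   # left ≈ 0.94, right ≈ 0.06
```

In Friston's framing, minimizing prediction error corresponds to driving this posterior toward whatever best explains the incoming signal; this toy shows only the belief-update step, not the full free-energy account.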
mikhail-miguel

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World - 0 views

  •  
    One of The Sunday Times Business Books of the Year Artificial intelligence is smarter than humans. It can process information at lightning speed and remain focused on specific tasks without distraction. AI can see into the future, predicting outcomes and even using sensors to see around physical and virtual corners. So why does AI frequently get it so wrong? The answer is us. Humans design the algorithms that define the way that AI works, and the processed information reflects an imperfect world. Does that mean we are doomed? In Scary Smart, Mo Gawdat, the internationally best-selling author of Solve for Happy, draws on his considerable expertise to answer this question and to show what we can all do now to teach ourselves and our machines how to live better. With more than 30 years' experience working at the cutting edge of technology and his former role as chief business officer of Google [X], no one is better placed than Mo Gawdat to explain how the Artificial Intelligence of the future works. By 2049, AI will be a billion times more intelligent than humans. Scary Smart explains how to fix the current trajectory now, to make sure that the AI of the future can preserve our species. This book offers a blueprint, pointing the way to what we can do to safeguard ourselves, those we love and the planet itself.
mikhail-miguel

The Art of Prompt Engineering with ChatGPT: Accessible Edition (Learn AI Tools the Fun ... - 0 views

  •  
    Accessible Edition To make 'The Art of Prompt Engineering with ChatGPT' as beautiful as possible we designed the layout and published it here as a PDF. However, this wasn't the best option for those who used a Kindle to read, or for those who had accessibility needs. So this is the accessible edition, rebuilt using a reflowable format. If you bought the original and need to use the accessibility features, just email us at nathan@ChatGPTtrainings.com. Let's move beyond basic examples and 'test this prompt.' ChatGPT is an amazing AI tool that can change the way we work. Bill Gates recently said that ChatGPT is as important an invention as the internet, and it could change the world. To make the most of ChatGPT and go beyond simple uses, you need to master prompt engineering. Check out a sample chapter: www.ChatGPTtrainings.com March Update - 2 New Sections with 34 More Pages of Content For this monthly update, we added a new section on Advanced Prompt Engineering to help you take your ChatGPT skills to a higher level once you've learned the basics. This section covers the co-creation approach, where you take control, and [format] your output, where you'll learn my favorite way to get the exact results you want. We also included a new section on GPT-4, with information on how to start, debunking past hype, and looking at some new improvements. This book will keep evolving as ChatGPT grows, making sure that everything you read and learn stays up-to-date and relevant. All updates are free and automatic for Kindle copies, and if you bought a hardcopy, you can email me at Nathan@ChatGPTtrainings.com with your proof of purchase to get a PDF update. Why This Book? This book helps you learn the art of working with ChatGPT to get much better results. This skill, prompt engineering, is what sets good apart from great when using ChatGPT. Learn 4 key techniques and tools for writing better prompts Master 2 advanced prompt engineering tools to take your skills further…
thinkahol *

Being No One - The MIT Press - 0 views

  •  
    According to Thomas Metzinger, no such things as selves exist in the world: nobody ever had or was a self. All that exists are phenomenal selves, as they appear in conscious experience. The phenomenal self, however, is not a thing but an ongoing process; it is the content of a "transparent self-model." In Being No One, Metzinger, a German philosopher, draws strongly on neuroscientific research to present a representationalist and functional analysis of what a consciously experienced first-person perspective actually is. Building a bridge between the humanities and the empirical sciences of the mind, he develops new conceptual toolkits and metaphors; uses case studies of unusual states of mind such as agnosia, neglect, blindsight, and hallucinations; and offers new sets of multilevel constraints for the concept of consciousness. Metzinger's central question is: How exactly does strong, consciously experienced subjectivity emerge out of objective events in the natural world? His epistemic goal is to determine whether conscious experience, in particular the experience of being someone that results from the emergence of a phenomenal self, can be analyzed on subpersonal levels of description. He also asks if and how our Cartesian intuitions that subjective experiences as such can never be reductively explained are themselves ultimately rooted in the deeper representational structure of our conscious minds.
Matvey Ezhov

Mapping the brain - MIT news - 2 views

  • To find connectomes, researchers will need to employ vast computing power to process images of the brain. But first, they need to teach the computers what to look for.
  • to manually trace connections between neurons
  • want to speed up the process dramatically by enlisting the help of high-powered computers.
  • ...5 more annotations...
  • To do that, they are teaching the computers to analyze the brain slices, using a common computer science technique called automated machine learning, which allows computers to change their behavior in response to new data.
  • With machine learning, the researchers teach computers to learn by example. They feed their computer electron micrographs as well as human tracings of these images. The computer then searches for an algorithm that allows it to imitate human performance.
  • Their eventual goal is to use computers to process the bulk of the images needed to create connectomes, but they expect that humans will still need to proofread the computers’ work.
  • Last year, the National Institutes of Health announced a five-year, $30 million Human Connectome Project to develop new techniques to figure out the connectivity of the human brain. That project is focused mainly on higher level, region-to-region connections. Sporns says he believes that a good draft of higher-level connections could be achieved within the five-year timeline of the NIH project, and that significant progress will also be made toward a neuron-to-neuron map.
    • Matvey Ezhov
       
      draft of human connectome within five years
  • Though only a handful of labs around the world are working on the connectome right now, Jain and Turaga expect that to change as tools for diagramming the brain improve. “It’s a common pattern in neuroscience: A few people will come up with new technology and pioneer some applications, and then everybody else will start to adopt it,” says Jain.
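The learn-by-example loop described above (feed the computer micrographs plus human tracings, then have it imitate human performance) is ordinary supervised learning. A toy stand-in using a nearest-centroid classifier on made-up "pixel feature" data; real connectomics pipelines use far richer features and models:

```python
import numpy as np

# Made-up training data: each row is a pixel's feature vector (e.g.,
# intensity, local contrast), labeled 1 if a human tracer marked it as
# part of a neuron boundary, 0 otherwise.
features = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
labels = np.array([1, 1, 0, 0])

# "Training": remember the average feature vector of each class.
centroids = {c: features[labels == c].mean(axis=0) for c in (0, 1)}

def classify(pixel):
    """Label a new pixel by its nearest class centroid, imitating the tracers."""
    return min(centroids, key=lambda c: np.linalg.norm(pixel - centroids[c]))

print(classify(np.array([0.85, 0.85])))  # 1 (boundary-like pixel)
print(classify(np.array([0.15, 0.15])))  # 0 (background-like pixel)
```

The proofreading step mentioned in the article corresponds to humans correcting the classifier's mistakes, which then become new training examples.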
Matvey Ezhov

Technology Review: Intelligence Explained (!) - 0 views

  • "Scientists are now able to switch the focus from particular regions of the brain to the connections between those regions," says Sherif Karama, a psychiatrist and a neuroscientist at McGill University's Montreal Neurological Institute.
  • A quantifiable "general intelligence factor," known as g, can be statistically extracted from scores on a battery of intelligence tests.
  • In 2001, Thompson showed that it is correlated with volume in the frontal cortex, a result consistent with a number of studies that have linked intelligence to overall brain size.
  • ...3 more annotations...
  • In 2007, Jung and Richard Haier, now professor emeritus of psychology at the University of California, Irvine, developed the first comprehensive theory drawn from neuroimaging of how the brain gives rise to intelligence.
    • Matvey Ezhov
       
      Attention! To Research.
  • As we "evolved from worms to humans," says George Bartzokis, a professor of psychiatry at UCLA, the number of non-neural cells in the brain increased 50 times more than the number of neurons. He adds, "My hypothesis has always been that what gives us our cognitive capacity is not actually the number of neurons, which can vary tremendously between human individuals, but rather the quality of our connections."
  • The type of MRI typically used for medical scans does not show the finer details of the brain's white matter. But with a technique called diffusion tensor imaging (DTI), which uses the scanner's magnet to track the movement of water molecules in the brain, scientists have developed ways to map out neural wiring in detail. While water moves randomly within most brain tissue, it flows along the insulated neural fibers like current through a wire.
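The statistical extraction of g mentioned earlier is classically done by taking the first principal component (or common factor) of the correlation matrix of test scores. A minimal sketch with fabricated scores, included only to show the mechanics:

```python
import numpy as np

# Fabricated scores for 6 people on 3 intelligence tests (columns).
# The tests are built to correlate positively, as real cognitive test
# batteries tend to (the "positive manifold").
scores = np.array([
    [12, 14, 11],
    [18, 17, 19],
    [ 9, 10,  8],
    [15, 16, 14],
    [20, 19, 21],
    [11, 12, 10],
], dtype=float)

# Standardize each test, then take the first principal component of the
# correlation matrix; loadings on it approximate g.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # ascending eigenvalues
g_loadings = eigvecs[:, -1]               # eigenvector of largest eigenvalue
g_scores = z @ g_loadings                 # each person's g estimate

print(eigvals[-1] / eigvals.sum())        # share of score variance g explains
```

With strongly correlated tests the first component dominates, which is exactly why a single number can summarize performance across a battery.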
mikhail-miguel

Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th Edi... - 0 views

  •  
    The fifth edition of this book continues to provide a balanced perspective on the language schools, theories, and applications of artificial intelligence. These diverse branches are unified through detailed discussions of AI's theoretical foundations. The book is broken down into six parts to provide readers with complete coverage of AI. It begins by introducing AI concepts, moves into a discussion of the research tools needed for AI problem solving, and then demonstrates representations for AI and knowledge-sensitive problem solving. The second half of the book offers an extensive presentation of issues in machine learning, continues presenting important AI application areas, and presents Lisp and Prolog to the reader. This book is appropriate for programmers both as an introduction to and a reference for the theoretical foundations of artificial intelligence.
mikhail-miguel

Opera ("Aria") browser - 0 views

  •  
    Opera features an integrated AI called Aria that you can access from the sidebar. You can use a keyboard shortcut (CTRL or Command and /) to start using Aria as well. The AI is also available in Opera's Android browser. The AI stems from Opera's partnership with ChatGPT creator OpenAI. Aria connects to GPT to help answer users' queries. The AI incorporates live information from the web, and it can generate text or code and answer support questions regarding Opera products. In addition, Opera One can generate contextual prompts for Aria when you right-click or highlight text in the browser. If you prefer to use ChatGPT or ChatSonic, you can access those from the Opera One sidebar too. Opera says users don't have to engage with the browser's AI features if they don't want to. For one thing, you'll need to be logged into an Opera account to use Aria.
Matvey Ezhov

Time-keeping Brain Neurons Discovered - 3 views

  • An MIT team led by Institute Professor Ann Graybiel has found groups of neurons in the primate brain that code time with extreme precision.
  • The neurons are located in the prefrontal cortex and the striatum, both of which play important roles in learning, movement and thought control.
  • The research team trained two macaque monkeys to perform a simple eye-movement task. After receiving the "go" signal, the monkeys were free to perform the task at their own speed. The researchers found neurons that consistently fired at specific times -- 100 milliseconds, 110 milliseconds, 150 milliseconds and so on -- after the "go" signal.
  •  
    It would be surprising if neurons of that kind had not been discovered; obviously, we have millions of them in our brains. To make time-keeping neurons we need (in the simplest case) only 2 neurons with reciprocal connections. More units in the circle means more delay, i.e., more time to "keep". So think not of single "time-keeping neurons" but of time-keeping circuits. Such a clear understanding of processes at the neuronal level is completely impossible without Brainbug play experience. Think about it!
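The annotator's sketch, a handful of neurons wired in a loop so that activity takes a fixed number of steps to travel around, can be simulated directly; the ring's length sets the delay. A minimal toy, not a biophysical model:

```python
def ring_delay(n_units, steps):
    """Propagate a single spike around a ring of n_units neurons,
    one synapse per time step; return which unit is active at each step."""
    active = 0                            # the spike starts at unit 0
    trace = [active]
    for _ in range(steps):
        active = (active + 1) % n_units   # circular (reciprocal) connection
        trace.append(active)
    return trace

# A 5-unit ring "keeps" time in multiples of 5 steps: the spike returns
# to unit 0 every 5 steps, like a 5-tick clock.
print(ring_delay(5, 10))  # [0, 1, 2, 3, 4, 0, 1, 2, 3, 4, 0]
```

Reading out "100 ms elapsed" then amounts to detecting when the spike arrives back at a particular unit, which loosely mirrors the consistently timed firing the MIT team observed.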
Matvey Ezhov

Whole Brain Project™ - 1 views

  •  
    "Simultaneous revolutions in neuroscience research and next generation software tools are merged in the Whole Brain Project™. The project joins neuroscientists and software engineers to employ experimental techniques to visualize and explore the burgeoning new discoveries about the brain's structure and function. Despite rapid progress in development of new experimental methods, our ability to simultaneously study the brain across all these scales remains quite limited. The Whole Brain Project looks to provide open source networks to help unify the disparate and heterogeneous data of neuroscientists."
  •  
    Wooooohooo!!!!!!
  •  
    Nothing to look at yet... though it may grow into something decent over time.
Matvey Ezhov

Perceptual Learning Relies On Local Motion Signals To Learn Global Motion - 0 views

  • The brain first perceives changes in visual input (local motion) in the primary visual cortex. The local motion signals are then integrated in the later visual processing stages and interpreted as global motion in the higher-level processes. But when subjects in a recent experiment using moving dots were asked to detect global motion (the overall direction of the dots moving together), the results show that their learning relied on more local motion processes (the movement of dots in small areas) than global motion areas.
  • show that the improvement in detection of global motion is not due to learning of the global motion but to learning of local motion of the moving dots in the test.
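The local-to-global integration described above can be caricatured as vector averaging: each small patch reports a local motion vector, and the "global motion" readout is their mean direction. A toy sketch with invented vectors, purely to illustrate the pooling step:

```python
import math

# Invented local motion vectors (dx, dy) from small patches of the display:
# most dots drift rightward, one moves off in a random direction.
local_motion = [(1.0, 0.1), (0.9, -0.2), (1.1, 0.0), (-0.3, 0.8), (1.0, 0.2)]

# Global motion = the average of the local signals.
gx = sum(dx for dx, _ in local_motion) / len(local_motion)
gy = sum(dy for _, dy in local_motion) / len(local_motion)

angle = math.degrees(math.atan2(gy, gx))
print(f"global direction ≈ {angle:.1f}° from horizontal")
```

The experiment's finding can be read against this picture: training improved the quality of the individual local vectors rather than the pooling stage that averages them.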
mikhail-miguel

LMSYS Chatbot Arena Vision (Multimodal): Benchmarking LLMs and VLMs in the Wild - 0 views

  •  
    The Chatbot Arena has launched a new beta feature supporting images, allowing users to interact with chatbots through images. Each conversation can include the submission of one image, as long as it is under 15MB. The Chatbot Arena logs user requests, including the images submitted, for research purposes. Although this data is not currently publicly disclosed, there may be a possibility of doing so in the future. Therefore, it is recommended that users avoid sending confidential or personal information through this feature. This feature is in its early development stage, so there may be issues or bugs. Users are encouraged to report any issues through the Chatbot Arena communication channels.
Matvey Ezhov

Recursive Self-Improvement - The Transhumanist Wiki - 2 views

  • True Artificial Intelligence would bypass problems of biological complexity and ethics, growing up on a substrate ideal for initiating Recursive Self-Improvement. (fully reprogrammable, ultrafast, the AI's "natural habitat".) This Artificial Intelligence would be based upon: 1) our current understanding of the central algorithms of intelligence, 2) our current knowledge of the brain, obtained through high-resolution fMRI and delicate Cognitive Science experiments, and 3) the kind of computing hardware available to AI designers.
  • Humans cannot conduct any of these enhancements to ourselves; the inherent structure of our biology and the limited level of our current technology makes this impossible.
  • Recursive Self-Improvement is the ability of a mind to genuinely improve its own intelligence. This might be accomplished through a variety of means; speeding up one's own hardware, redesigning one's own cognitive architecture for optimal intelligence, adding new components into one's own hardware, custom-designing specialized modules for recurrent tasks, and so on.
  • ...2 more annotations...
  • Unfortunately, the neurological structures corresponding to human intelligence are likely to be highly intricate, delicate, and biologically very complex (unnecessarily so; evolution exhibits no foresight, and most of the brain evolved in the absence of human General Intelligence).
  • 2) advances in Cognitive Science that indicate the complexity of certain brain areas is largely extraneous to intelligence,
    • Matvey Ezhov
       
      A very serious assumption, and one that may be mistaken. We know that all areas of the cortex participate in forming an individual's model of the world, and therefore in consciousness.
thinkahol *

YouTube - Jeff Hawkins on Artificial Intelligence - Part 1/5 - 0 views

  •  
    June 23, 2008 - The founder of Palm, Jeff Hawkins, solves the mystery of Artificial Intelligence and presents his theory at the RSA Conference 2008. He gives a brief tutorial on the neocortex and then explains how the brain stores memory and then describes how to use that knowledge to create artificial intelligence. This lecture is insightful and his theory will revolutionize computer science.
Matvey Ezhov

You can control your Marilyn Monroe neuron - 2 views

  • Another experiment designed to test how well the subjects could control the single neurons was a fade experiment in which the subject was shown a combined image of two faces: Josh Brolin (star of Goonies) and Marilyn Monroe, and told to think of Josh Brolin. The electrodes sent data on the Josh Brolin and Marilyn Monroe neurons to the computer, which brightened the image of the one causing most neuron firing. As the subject thought of Brolin, the image of Monroe faded out.
    • Matvey Ezhov
       
      This suggests that grandmother cells are correlates of consciousness, right?
  •  
    Things are even more complicated with my grandmother's neurons than I thought.
  •  
    nope. Non-conscious species can control it too
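The fade experiment is a closed feedback loop: compare the firing rates of the two recorded neurons and set the blend of the two photos accordingly. A schematic toy with fabricated firing rates, not real electrode data:

```python
def blend_weight(brolin_rate, monroe_rate):
    """Fraction of the blended image given to the Brolin photo,
    proportional to his neuron's share of the total firing."""
    total = brolin_rate + monroe_rate
    return brolin_rate / total if total else 0.5

# Fabricated firing rates (spikes/s) as the subject thinks of Brolin:
# his neuron ramps up while the Monroe neuron stays near baseline,
# so the Monroe image progressively fades out.
for brolin, monroe in [(10, 10), (20, 10), (40, 10), (80, 10)]:
    print(f"Brolin image opacity: {blend_weight(brolin, monroe):.2f}")
```

The real system adds spike sorting and smoothing between the electrode and the display, but the control law is this simple comparison at its core.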