Artificial Intelligence Research / Group items tagged "things"

thinkahol *

Being No One - The MIT Press - 0 views

  •  
    According to Thomas Metzinger, no such things as selves exist in the world: nobody ever had or was a self. All that exists are phenomenal selves, as they appear in conscious experience. The phenomenal self, however, is not a thing but an ongoing process; it is the content of a "transparent self-model." In Being No One, Metzinger, a German philosopher, draws strongly on neuroscientific research to present a representationalist and functional analysis of what a consciously experienced first-person perspective actually is. Building a bridge between the humanities and the empirical sciences of the mind, he develops new conceptual toolkits and metaphors; uses case studies of unusual states of mind such as agnosia, neglect, blindsight, and hallucinations; and offers new sets of multilevel constraints for the concept of consciousness. Metzinger's central question is: How exactly does strong, consciously experienced subjectivity emerge out of objective events in the natural world? His epistemic goal is to determine whether conscious experience, in particular the experience of being someone that results from the emergence of a phenomenal self, can be analyzed on subpersonal levels of description. He also asks if and how our Cartesian intuitions that subjective experiences as such can never be reductively explained are themselves ultimately rooted in the deeper representational structure of our conscious minds.
Coupon Finder

Futuristic | Internet of Things, Artificial Intelligence, Gadgets - 0 views

  •  
    Internet of Things, Artificial Intelligence, Gadgets
mikhail-miguel

Bottell - Your Artificial Intelligence assistant for all things parenting (bottell.ai). - 0 views

  •  
    Bottell: Your Artificial Intelligence assistant for all things parenting (bottell.ai).
aliamalhotra

Is Robotics and Artificial Intelligence the Same Thing? - 1 views

  •  
    Robotics and artificial intelligence are not the same thing at all. In fact, the two fields are largely independent of each other.
mikhail-miguel

Opera ("Aria") browser - 0 views

  •  
    Opera features an integrated AI called Aria that you can access from the sidebar. You can also start Aria with a keyboard shortcut (Ctrl+/ or Cmd+/). The AI is available in Opera's Android browser as well. Aria stems from Opera's partnership with ChatGPT creator OpenAI and connects to GPT to help answer users' queries. It incorporates live information from the web, and it can generate text or code and answer support questions about Opera products. In addition, Opera One can generate contextual prompts for Aria when you right-click or highlight text in the browser. If you prefer to use ChatGPT or ChatSonic, you can access those from the Opera One sidebar too. Opera says users don't have to engage with the browser's AI features if they don't want to; note, however, that you'll need to be logged into an Opera account to use Aria.
Coupon Finder

IoT lots happening - Cool companies that are monetizing the IoT - Internet of Things | I... - 0 views

  •  
    Sharing a very interesting write-up. For other useful info you may check http://bit.ly/gigstooutsource or http://bit.ly/wearableVR
Matvey Ezhov

On Biological and Digital Intelligence - 0 views

  • In essence, Hawkins argues that, to whatever extent the concept of “consciousness” can’t be boiled down to brain theory, it’s simply a bunch of hooey.
    • Matvey Ezhov
       
      Not true!
  • in which conscious experience is more foundational than physical systems or linguistic communications
  • Conscious experiences are associated with patterns, and patterns are associated with physical systems, but none of these is fully reducible to the other. 
  • ...22 more annotations...
  • He makes the correct point that roughly-human-level AIs will have dramatically different strengths and weaknesses from human beings, due to different sensors and actuators and different physical infrastructures for their cognitive dynamics.  But he doesn’t even touch the notion of self-modifying AI – the concept that once an AI gets smart enough to modify its own code, it’s likely to get exponentially smarter and smarter until it’s left us humans in the dust.
    • Matvey Ezhov
       
      Completely beside the point: Hawkins's approach scales easily to super- and even super-super-superhuman intelligence.
  • therefore if AI closely enough emulates the human brain it won’t radically self-modify either
  • Rather, I think the problem is that the field of AI has come to focus on “narrow AI” – programs that solve particularly, narrowly-defined problems – rather than “artificial general intelligence” (AGI). 
  • cognitive science, artificial general intelligence, philosophy of mind and abstract mathematics
    • Matvey Ezhov
       
      In other words, Goertzel admits that he essentially does not take neuroscience into account and does not consider it necessary to; that is, he relies only on empirical notions of how consciousness works.
  • So what we’re doing is creating commercial narrow AI programs, using the software framework that we’re building out with our AGI design in mind.
    • Matvey Ezhov
       
      And this is the big difference from Hawkins's platform, which has the same structure across all of its applications.
  • I tend to largely agree with his take on the brain
  • I think he oversimplifies some things fairly seriously – giving them very brief mention when they’re actually quite long and complicated stories.  And some of these omissions, in my view, are not mere “biological details” but are rather points of serious importance for his program of abstracting principles from brain science and then re-concretizing these principles in the context of digital software.
  • One point Hawkins doesn’t really cover is how a mind/brain chooses which predictions to make, from among the many possible predictions that exist.
    • Matvey Ezhov
       
      Here he does seem to be right...
  • Hawkins proposes that there are neurons or neuronal groups that represent patterns as “tokens,” and that these tokens are then incorporated along with other neurons or neuronal groups into larger groupings representing more abstract patterns.  This seems clearly to be correct, but he doesn’t give much information on how these tokens are supposed to be formed. 
  • So, what’s wrong with Hawkins’ picture of brain function?  Nothing’s exactly wrong with it, so far as I can tell.
  • But Edelman then takes the concept one step further and talks about “neural maps” – assemblies of neuronal groups that carry out particular perception, cognition or action functions.  Neural maps, in essence, are sets of neuronal groups that host attractors of neurodynamics.  And Edelman then observes, astutely, that the dynamics of the population of neuronal groups, over time, is likely to obey a form of evolution by natural selection.
  • How fascinating if the brain also operates in this way!
    • Matvey Ezhov
       
      Not at all... words fail me.
  • Hawkins argues that creativity is essentially just metaphorical thinking, generalization based on memory.  While this is true in a grand sense, it’s not a very penetrating statement.
  • Evolutionary learning is the most powerful general search mechanism known to computer science, and is also hypothesized by Edelman to underlie neural intelligence.  This sort of idea, it seems to me, should be part of any synthetic approach to brain function. [A minimal illustrative sketch of evolutionary search over predictive patterns appears after this list.]
  • Hawkins mentions the notion, and observes correctly that Hebbian learning in the brain is a lot subtler than the simple version that Donald Hebb laid out in the late 40’s.   But he largely portrays these variations as biological details, and then shifts focus to the hierarchical architecture of the cortex. 
  • Hawkins’ critique of AI, which in my view is overly harsh.  He dismisses work on formal logic based reasoning as irrelevant to “real intelligence.” 
  • So – to sum up – I think Hawkins’ statements about brain function are pretty much correct
  • What he omits are, for instance: the way the brain displays evolutionary learning as a consequence of the dynamics of multiple attractors involving sets of neural clusters, and the way the brain may emergently give rise to probabilistic reasoning via the statistical coordination of Hebbian learning.
  • Learning of predictive patterns requires an explicit or implicit search through a large space of predictive patterns; evolutionary learning provides one approach to this problem, with computer science foundations and plausible connections to brain function; again, Hawkins does not propose any concrete alternative.
  • crucial question of how far one has to abstract away from brain function, to get to something that can be re-specialized into efficient computer software.  My intuition is that this will require a higher level of abstraction than Hawkins seems to believe.  But I stress that this is a matter of intuitive judgment – neither of us really knows.
  • Of course, to interpret the Novamente design as an “abstraction from the brain” is to interpret this phrase in a fairly extreme sense – we’re abstracting general processes like probabilistic inference and evolutionary learning and general properties like hierarchical structure from the brain, rather than particular algorithms. 
    • Matvey Ezhov
       
      Finally, he said it.
  • Although I’m (unsurprisingly) most psyched about the Novamente approach, I think it’s also quite worthwhile to pursue AGI approaches that are closer to the brain level – there’s a large space between detailed brain simulation and Novamente, including neuron-level simulations, neural-cluster-level simulations, and so forth. 
Matvey Ezhov

Is this a unified theory of the brain? (Bayesian theory in New Scientist) - 1 views

  • Neuroscientist Karl Friston and his colleagues have proposed a mathematical law that some are claiming is the nearest thing yet to a grand unified theory of the brain. From this single law, Friston’s group claims to be able to explain almost everything about our grey matter.
  • Friston’s ideas build on an existing theory known as the “Bayesian brain”, which conceptualises the brain as a probability machine that constantly makes predictions about the world and then updates them based on what it senses.
  • A crucial element of the approach is that the probabilities are based on experience, but they change when relevant new information, such as visual information about the object’s location, becomes available. “The brain is an inferential agent, optimising its models of what’s going on at this moment and in the future,” says Friston. In other words, the brain runs on Bayesian probability.
  • ...6 more annotations...
  • “In short, everything that can change in the brain will change to suppress prediction errors, from the firing of neurons to the wiring between them, and from the movements of our eyes to the choices we make in daily life,” he says.
  • Friston created a computer simulation of the cortex with layers of “neurons” passing signals back and forth. Signals going from higher to lower levels represent the brain’s internal predictions, while signals going the other way represent sensory input. As new information comes in, the higher neurons adjust their predictions according to Bayesian theory. [A minimal illustrative sketch of this kind of prediction-error updating appears after this list.]
  • Volunteers watched two sets of moving dots, which sometimes moved in synchrony and at others more randomly, to change the predictability of the stimulus. The patterns of brain activity matched Friston’s model of the visual cortex reasonably well.
  • Friston’s results have earned praise for bringing together so many disparate strands of neuroscience. “It is quite certainly the most advanced conceptual framework regarding an application of these ideas to brain function in general,” says Wennekers. Marsel Mesulam, a cognitive neurologist from Northwestern University in Chicago, adds: “Friston’s work is pivotal. It resonates entirely with the sort of model that I would like to see emerge.”
  • “The final equation you write on a T-shirt will be quite simple,” Friston predicts.
  • There’s work still to be done, but for now Friston’s is the most promising approach we’ve got. “It will take time to spin off all of the consequences of the theory – but I take that property as a sure sign that this is a very important theory,” says Dehaene. “Most other models, including mine, are just models of one small aspect of the brain, very limited in their scope. This one falls much closer to a grand theory.”
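The quoted passages describe the "Bayesian brain" picture: top-down predictions are compared against sensory input, and the resulting prediction errors drive belief updates. Below is a minimal, illustrative Python sketch of that loop for a single hidden quantity, assuming Gaussian beliefs and Gaussian sensory noise. It is not Friston's actual model; every variable name and parameter is an assumption made for the example.

```python
# Illustrative sketch of precision-weighted Bayesian updating driven by
# prediction error, in the spirit of the "Bayesian brain" idea quoted above.
import random

true_position = 2.0        # hidden state to be inferred
sensory_noise = 0.5        # standard deviation of sensory samples
belief = 0.0               # current prediction (prior mean)
prior_precision = 1.0      # confidence in the current belief
sensory_precision = 1.0 / sensory_noise ** 2  # confidence in the senses

for step in range(50):
    sample = random.gauss(true_position, sensory_noise)  # noisy sensory input
    prediction_error = sample - belief                   # input minus top-down prediction
    # Precision-weighted update: the new belief is a Bayesian compromise
    # between the prior prediction and the sensory evidence.
    gain = sensory_precision / (prior_precision + sensory_precision)
    belief += gain * prediction_error
    # Having absorbed the sample, the belief becomes more precise.
    prior_precision += sensory_precision

print(f"inferred position ~= {belief:.2f} (true value {true_position})")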
davidjones29

cfp communication conference - 0 views

FICC 2018 aims to provide a forum for researchers from both academia and industry to share their latest research contributions and exchange knowledge with the common goal of shaping the future of I...

ai science intelligence artificial technology future-technology

started by davidjones29 on 28 Jun 17 no follow-up yet
davidjones29

FICC 2018 - Future of Information and Communication Conference (FICC) 2018 - 0 views

  •  
    FICC 2018 aims to provide a forum for researchers from both academia and industry to share their latest research contributions and exchange knowledge with the common goal of shaping the future of Information and Communication. Join us, April 5-6, to explore discovery, progress, and achievements related to Communication, Data Science, Computing and Internet of Things. ficc@saiconference.com saiconference.com/ficc https://groups.diigo.com/group/communication-conference https://youtu.be/7Qw-ovNd7A8
arianamaurya

How To Develop A Finance App Like ZOGO - 0 views

  •  
    Embarking on the journey to develop a finance app like ZOGO? This bookmark is your go-to resource for all things related to fintech app development. Learn about the key features, technologies, and strategies that will help you create a powerful financial app that empowers users and sets you on the path to success. Stay ahead of the curve in the world of personal finance apps.