Advanced Concepts Team / Group items tagged: recognition

Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me when people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important (a minimal sketch of this idea follows at the end of this thread). With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul: While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't mean the rest gets lost. Basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, or experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as any other alternative assumption you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstractly you can form concepts and ideas), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and the level of detail in which you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to a random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
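To make the "recognition as behaviour" point above a bit more concrete, here is a minimal sketch in the spirit of Beer's categorical-perception agents. Everything in it (the ray-sensor model, the tiny controller, the object sizes) is an illustrative assumption, not Beer's actual setup; in his work the controller is a continuous-time recurrent network whose weights are evolved.

```python
# Minimal sketch of "recognition as behaviour" in the spirit of Beer's
# categorical-perception agents (hypothetical parameters, not Beer's setup).
# An agent slides along the floor while an object falls towards it; it should
# end up under one object class (catch) and away from the other (avoid). The
# "decision" is never computed explicitly: it is read off the final distance.
import numpy as np

def ray_sensors(agent_x, obj_x, obj_y, obj_width, n_rays=5, fov=1.0):
    """Crude proxy for Beer's ray sensors: each ray reports 1 if the object
    overlaps the ray's horizontal position at the object's current height."""
    offsets = np.linspace(-fov, fov, n_rays) * obj_y      # rays fan out upwards
    ray_x = agent_x + offsets
    return (np.abs(ray_x - obj_x) < obj_width / 2).astype(float)

def run_trial(weights, obj_width, steps=200, dt=0.05):
    """Simulate one falling object; return |agent - object| at impact."""
    agent_x, obj_x, obj_y = 0.0, np.random.uniform(-1, 1), 10.0
    w_in, w_out = weights
    for _ in range(steps):
        s = ray_sensors(agent_x, obj_x, obj_y, obj_width)
        hidden = np.tanh(w_in @ s)                       # tiny feedforward controller
        agent_x += dt * float(np.tanh(w_out @ hidden))   # horizontal velocity
        obj_y -= dt * 2.0                                # object falls
        if obj_y <= 0:
            break
    return abs(agent_x - obj_x)

# In Beer's work these weights are *evolved* so that narrow objects are caught
# and wide ones avoided; random weights are used here only to keep the sketch
# runnable and to show where the behavioural readout happens.
rng = np.random.default_rng(0)
weights = (rng.normal(size=(4, 5)), rng.normal(size=4))
print("final distance, narrow object:", run_trial(weights, obj_width=0.4))
print("final distance, wide object:  ", run_trial(weights, obj_width=1.6))
```

With an evolved weight set the narrow-object distance would be small and the wide-object distance large; the point of the sketch is only that the "category" is expressed in the behaviour, not in an internal label.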
johannessimon81

IBM Speech Recognition, 1986 - 0 views

  •  
    Interesting historical perspective. Progress since the late '80s really seems to be fairly slow. Do we need to wait for the singularity until speech recognition works without flaws?
  • ...1 more comment...
  •  
    funny - tried just yesterday the one built in on mavericks: sending one email took three times as long at least as typing it And now my speech PowerPoint Funny, trade trust yesterday they're built in speech recognition in Mavericks sending one e-mail to at least three times a talk as long as typing it. Well this was actually quite okay and relatively fast cheers nice evening
  •  
    "I thought I would give it a try on my android sexy seems to work pretty well and I'm speaking more less at normal speed" Actually I was speaking as fast as I could because it was for the google search input - if you make a pause it will think you finished your input and start the query. Also you might notice that Android thinks it is "android sexy" - this was meant to be "on my Android. THIS seems to work...". Still it is not too bad - maybe in a year or two they have it working. Of course it might also be that I just use the word "sexy" randomly... :-\
  •  
    The problem is that we don't yet understand how speech in humans actually works. As long as we merely build either inference or statistical language models, we'll never get perfect speech recognition. A lot of recognition in humans has a predictive/expectational basis that stems from our understanding of higher-level concepts and context awareness. Sadly, I suspect that as long as machines remain unembodied in their perceptual abilities, their ability to properly recognize sounds/speech or objects and other features will never reach perfection.
Luís F. Simões

Stochastic Pattern Recognition Dramatically Outperforms Conventional Techniques - Techn... - 2 views

  • A stochastic computer, designed to help an autonomous vehicle navigate, outperforms a conventional computer by three orders of magnitude, say computer scientists
  • These guys have applied stochastic computing to the process of pattern recognition. The problem here is to compare an input signal with a reference signal to determine whether they match.   In the real world, of course, input signals are always noisy so a system that can cope with noise has an obvious advantage.  Canals and co use their technique to help an autonomous vehicle navigate its way through a simple environment for which it has an internal map. For this task, it has to measure the distance to the walls around it and work out where it is on the map. It then computes a trajectory taking it to its destination.
  • Although the idea of stochastic computing has been around for half a century, attempts to exploit it have only just begun. Clearly there's much work to be done. And since one line of thought is that the brain might be a stochastic computer, at least in part, there could be exciting times ahead.
  • ...1 more annotation...
  • Ref: arxiv.org/abs/1202.4495: Stochastic-Based Pattern Recognition Analysis
  •  
    hey! This is essentially the Probabilistic Computing Ariadna
  •  
    The link is there, but my understanding of our purpose is different from what I understood from the abstract. In any case, the authors are from Palma de Mallorca, Balearic Islands, Spain; "somebody" should somehow make them aware of the Ariadna study ... e.g. somebody no longer in the team :-)
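For readers unfamiliar with the idea summarized in the annotation above, here is a toy sketch of the basic stochastic-computing trick under my own simplified assumptions (it is not the circuit from the arXiv paper): values in [0, 1] are encoded as random bit streams, products reduce to AND gates, and noise tolerance comes from the averaging.

```python
# Toy stochastic-computing sketch: a probability p in [0, 1] becomes a random
# bit stream whose fraction of ones is p, and multiplication of two values
# reduces to a bitwise AND of two independent streams.
import numpy as np

rng = np.random.default_rng(42)
N = 100_000                       # stream length: longer stream = more precision

def to_stream(p, n=N):
    """Encode p in [0, 1] as a Bernoulli(p) bit stream."""
    return rng.random(n) < p

def from_stream(bits):
    """Decode a bit stream back into a probability estimate."""
    return bits.mean()

def stochastic_similarity(x, ref):
    """Mean of element-wise products, computed with AND gates only; used here
    as a crude 'does the input match the reference?' score."""
    products = [from_stream(to_stream(a) & to_stream(b)) for a, b in zip(x, ref)]
    return float(np.mean(products))

reference = [0.9, 0.1, 0.8, 0.2]
clean     = [0.9, 0.1, 0.8, 0.2]
noisy     = list(np.clip(np.array(clean) + rng.normal(0, 0.05, 4), 0, 1))
other     = [0.1, 0.9, 0.2, 0.8]

print("match, clean input:", stochastic_similarity(clean, reference))
print("match, noisy input:", stochastic_similarity(noisy, reference))
print("match, wrong input:", stochastic_similarity(other, reference))
```

The matching score barely moves when the input is perturbed by noise, which is the property the article highlights; precision is traded against stream length.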
LeopoldS

Never Forgetting a Face - 0 views

  •  
    Whether society embraces face recognition on a larger scale will ultimately depend on how legislators, companies and consumers resolve the argument about its singularity. Is faceprinting as innocuous as photography, an activity that people may freely perform? Or is a faceprint a unique indicator, like a fingerprint or a DNA sequence, that should require a person's active consent before it can be collected, matched, shared or sold?

    Dr. Atick is firmly in the second camp.
  •  
    Actually these sort of things are also quite easy to exploit. Print a picture of Osama bin Laden on your t-shirt and have the entire police force scared out of their wits.
  •  
    I saw so many bin laden t-shirts already ... they must have better filters than this
johannessimon81

There will be pizza on Mars - NASA awards $125,000 for 3D-printed food - 0 views

  •  
    Tea, Earl Grey, hot...
  •  
    Let's see which part of the replicator will work first: the actual 3D printing of the tea or the language recognition software...
  •  
    Oh yes, I forgot about the language recognition... The abominations it might produce... :-[]
LeopoldS

PLOS ONE: Galactic Cosmic Radiation Leads to Cognitive Impairment and Increas... - 1 views

  •  
    Galactic Cosmic Radiation consisting of high-energy, high-charged (HZE) particles poses a significant threat to future astronauts in deep space. Aside from cancer, concerns have been raised about late degenerative risks, including effects on the brain. In this study we examined the effects of 56Fe particle irradiation in an APP/PS1 mouse model of Alzheimer's disease (AD). We demonstrated 6 months after exposure to 10 and 100 cGy 56Fe radiation at 1 GeV/u, that APP/PS1 mice show decreased cognitive abilities measured by contextual fear conditioning and novel object recognition tests. Furthermore, in male mice we saw acceleration of Aβ plaque pathology using Congo red and 6E10 staining, which was further confirmed by ELISA measures of Aβ isoforms. Increases were not due to higher levels of amyloid precursor protein (APP) or increased cleavage as measured by levels of the β C-terminal fragment of APP. Additionally, we saw no change in microglial activation levels judging by CD68 and Iba-1 immunoreactivities in and around Aβ plaques or insulin degrading enzyme, which has been shown to degrade Aβ. However, immunohistochemical analysis of ICAM-1 showed evidence of endothelial activation after 100 cGy irradiation in male mice, suggesting possible alterations in Aβ trafficking through the blood brain barrier as a possible cause of plaque increase. Overall, our results show for the first time that HZE particle radiation can increase Aβ plaque pathology in an APP/PS1 mouse model of AD.
johannessimon81

Neural network speech recognition - 4 views

  •  
    On Android speech recognition but also with a very nice video: direct translation of English voice input to Chinese audio. Looks like it might be really useful eventually.
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    And here comes my standard question: how can we use this for space? Fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other weapons you don't want look like on satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one could train one of those image recognition algorithms to do it automatically; a tiling sketch follows at the end of this thread. Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year there was a big trend of CUDA GPU vs FPGA for hardware-accelerated image processing. Most of it orbited around discussing which was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist, working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands, about the solution they were working on (GPU CUDA). I gathered that NVIDIA GPUs suit best those applications that do not rely on dedicated hardware, having the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored, 'harsh' programming languages. I don't know what the level of rad-hardness of NVIDIA's GPUs is... Therefore FPGAs are indeed the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based applications (radio astronomy or other types of telescopes). I think that for a specific purpose like the one you mentioned, this FPGA vs GPU question should be assessed first before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels worth of power while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task.... for now.
  •  
    The discussion of how to make FPGA hardware acceleration solutions easier to use for the 'layman' is starting, btw: http://reconfigurablecomputing4themasses.net/.
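As mentioned above, a crude way to automate the seal counting would be a tile-by-tile classifier. The sketch below shows only the plumbing, with an assumed tile size, an untrained toy CNN and a random stand-in image; it is not the pipeline of the cited paper, which counted the seals manually.

```python
# Hedged sketch of the "count seals from satellite tiles" idea: slide a window
# over a large image, run a small CNN classifier on each tile, and sum the
# positive detections. Architecture, tile size and the random "image" are
# illustrative assumptions only.
import torch
import torch.nn as nn

TILE = 64  # pixels per tile side (assumed)

class TileClassifier(nn.Module):
    """Tiny CNN mapping a grayscale tile to P(tile contains a seal)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * (TILE // 4) ** 2, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.head(h))

def count_detections(image, model, threshold=0.5):
    """Tile the image, classify each tile, return the number of positives."""
    model.eval()
    count = 0
    with torch.no_grad():
        for i in range(0, image.shape[0] - TILE + 1, TILE):
            for j in range(0, image.shape[1] - TILE + 1, TILE):
                tile = image[i:i + TILE, j:j + TILE].reshape(1, 1, TILE, TILE)
                count += int(model(tile).item() > threshold)
    return count

# Untrained model on a random "satellite image" -- this only shows the data
# path; in practice the model would be trained on labelled GeoEye-1 tiles.
image = torch.rand(512, 512)
print("detections:", count_detections(image, TileClassifier()))
```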
pacome delva

Neural Networks Designed to 'See' are Quite Good at 'Hearing' As Well - 2 views

  • Neural networks -- collections of artificial neurons or nodes set up to behave like the neurons in the brain -- can be trained to carry out a variety of tasks, often having something to do with pattern or sequence recognition. As such, they have shown great promise in image recognition systems. Now, research coming out of the University of Hong Kong has shown that neural networks can hear as well as see. A neural network there has learned the features of sound, classifying songs into specific genres with 87 percent accuracy.
  • Similar networks based on auditory cortexes have been rewired for vision, so it would appear these kinds of neural networks are quite flexible in their functions. As such, it seems they could potentially be applied to all sorts of perceptual tasks in artificial intelligence systems, the possibilities of which have only begun to be explored.
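A minimal sketch of the "vision network that hears" idea, under my own assumptions (synthetic signals, untrained toy network, hand-rolled spectrogram): the audio is turned into a spectrogram and then treated exactly like an image. The Hong Kong study itself used real songs and reported roughly 87% genre accuracy.

```python
# Turn audio into a spectrogram and hand it to an ordinary image-style network.
# The two synthetic "genres" and the tiny classifier are stand-ins only.
import numpy as np
import torch
import torch.nn as nn

def spectrogram(signal, win=256, hop=128):
    """Magnitude STFT computed with plain numpy: frames x frequency bins."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).astype(np.float32)

# Two toy "genres": a low-pitched hum vs. band-unlimited noise (assumption).
t = np.linspace(0, 1, 8000)
hum   = np.sin(2 * np.pi * 220 * t)
noise = np.random.default_rng(0).normal(size=8000)

classifier = nn.Sequential(              # an image-style network, unchanged
    nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(4 * 8 * 8, 2),
)

for name, sig in [("hum", hum), ("noise", noise)]:
    spec = torch.from_numpy(spectrogram(sig))[None, None]  # (1, 1, time, freq)
    logits = classifier(spec)            # untrained: shows the data path only
    print(name, "->", logits.detach().numpy().round(2))
```

The design point is simply that nothing in the network knows it is "hearing": once sound is laid out as a time-frequency image, the same convolutional machinery applies.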
ESA ACT

Method and apparatus for verifying an individual's identity - US Patent 4805222 - 0 views

  •  
    pattern recognition for identity
ESA ACT

MRS Issue on Molecular biomimetics - 0 views

  •  
    In nature, the molecular-recognition ability of peptides and, consequently, their functions are evolved through successive cycles of mutation and selection. Using biology as a guide, we can now select, tailor, and control peptide-solid interactions and ex
ESA ACT

Official Google Blog: A picture of a thousand words? - 0 views

  •  
    Interestingly, Google uses optical character recognition (OCR) on PDFs that its bots find online, and then converts them to HTML to make them searchable.
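On a much smaller scale, the same PDF-to-searchable-HTML idea can be sketched with off-the-shelf OCR tools. This is not Google's pipeline; it assumes the third-party pytesseract and pdf2image packages (and the Tesseract and Poppler binaries they wrap) are installed, and the file names are made up.

```python
# Render each PDF page to an image, OCR it, and wrap the recovered text in
# minimal HTML so a search engine (or Ctrl+F) can index it.
import html
import pytesseract
from pdf2image import convert_from_path

def pdf_to_html(pdf_path, out_path):
    pages = convert_from_path(pdf_path)           # render each page as an image
    body = []
    for n, page in enumerate(pages, start=1):
        text = pytesseract.image_to_string(page)  # OCR one page image
        body.append(f"<h2>Page {n}</h2>\n<pre>{html.escape(text)}</pre>")
    with open(out_path, "w", encoding="utf-8") as f:
        f.write("<html><body>\n" + "\n".join(body) + "\n</body></html>")

# pdf_to_html("scanned.pdf", "scanned.html")   # example filenames (assumed)
```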
ESA ACT

Billboards with facial-recognition software trickling out - Engadget - 0 views

  •  
    frightening!!
LeopoldS

physicists explain what AI researchers are actually doing - 5 views

  •  
    love this one ... it seems to take physicists to explain to the AI crowd what they are actually doing ... Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping between the variational renormalization group, first introduced by Kadanoff, and deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data.
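For a feel of the RBM side of the mapping, here is a toy contrastive-divergence (CD-1) training loop on correlated 1D spin chains, a simplified stand-in for the paper's Ising experiments; the chain generator and all hyperparameters are my own assumptions, not the authors' exact setup.

```python
# Train a small Restricted Boltzmann Machine with one step of contrastive
# divergence (CD-1) on nearest-neighbour-correlated binary chains.
import numpy as np

rng = np.random.default_rng(1)
n_vis, n_hid, lr = 16, 4, 0.05

def sample_chains(n, flip_prob=0.2):
    """Ising-like toy data: each spin copies its left neighbour unless flipped."""
    chains = np.zeros((n, n_vis))
    chains[:, 0] = rng.integers(0, 2, n)
    for i in range(1, n_vis):
        flip = rng.random(n) < flip_prob
        chains[:, i] = np.where(flip, 1 - chains[:, i - 1], chains[:, i - 1])
    return chains

sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
W = rng.normal(0, 0.1, (n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

for epoch in range(200):                      # CD-1 training loop
    v0 = sample_chains(64)
    ph0 = sigmoid(v0 @ W + b_h)               # hidden activations given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    pv1 = sigmoid(h0 @ W.T + b_v)             # one-step reconstruction
    ph1 = sigmoid(pv1 @ W + b_h)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)

# Each row of W.T is a hidden unit's "receptive field" over the 16 spins --
# the object the paper relates to a coarse-graining (block-spin) step.
print(np.round(W.T, 2))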
Marcus Maertens

The GoGo Chicken program in China is adding poultry to the blockchain with facial recog... - 2 views

  •  
    Is this the future of blockchain?
Dario Izzo

Detexify LaTeX handwritten symbol recognition - 2 views

  •  
    For hardcore LaTeX users (btw ... implemented in Haskell ... a classical machine learning app, but useful)
  •  
    Also available as an Android app (not sure if it's called "Texify" or "Detexify").
  •  
    works actually quite well!!
Guido de Croon

Convolutional networks start to rule the world! - 2 views

  •  
    Recently, many competitions in the computer vision domain have been won by huge convolutional networks. In the ImageNet competition, the convolutional network approach halves the error from ~30% to ~15%! Key changes that make this happen: weight sharing to reduce the search space, and training with a massive GPU approach; a small illustration of the weight-sharing point follows at the end of this thread. (See also the work at IDSIA: http://www.idsia.ch/~juergen/vision.html) This should please Francisco :)
  • ...1 more comment...
  •  
    where is Francisco when one needs him ...
  •  
    ...mmmmm... they use 60 million parameters and 650,000 neurons on a task that one can somehow consider easier than (say) predicting a financial crisis ... still they get 15% of errors .... reminds me of a comic we saw once ... cat http://www.sarjis.info/stripit/abstruse-goose/496/the_singularity_is_way_over_there.png
  •  
    I think the ultimate solution is still to put a human brain in a jar and use it for pattern recognition. Maybe we should get a stagiaire for this..?
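To put a number on the weight-sharing argument from the first comment in this thread, the snippet below compares the parameter count of a convolutional layer with that of a dense layer covering the same input and output sizes. The layer dimensions are illustrative (loosely modelled on an AlexNet-style first layer), not an exact reproduction of the winning network.

```python
# Weight sharing in one comparison: the same 11x11 filter bank is reused at
# every image position, so the convolution needs only a few thousand weights.
import torch.nn as nn

conv = nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2)  # 224x224 RGB in, 55x55x64 out
print("conv layer parameters:", sum(p.numel() for p in conv.parameters()))  # ~23 thousand

# A dense layer with the same input (3*224*224) and output (64*55*55) sizes
# would need one weight per input-output pair -- far too many to even allocate:
in_features, out_features = 3 * 224 * 224, 64 * 55 * 55
print("equivalent dense parameters:", in_features * out_features + out_features)  # ~29 billion
```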
dejanpetkow

Gesture recognition via Doppler effect (article in German) - 1 views

  •  
    How the microphone and speaker of a laptop are used to recognize movement in front of the computer. The article is in German but the video is in English.
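A rough sketch of how the Doppler trick can be detected in software, using a synthetic signal rather than a real microphone capture (the carrier frequency, reflection strength and thresholds are all assumptions): the speaker plays an inaudible tone, and a hand moving towards the laptop shows up as extra energy just above the carrier in the microphone spectrum.

```python
# The laptop plays an inaudible ~19 kHz tone; a hand moving towards the
# machine reflects it slightly upshifted, so the FFT of the microphone signal
# has extra energy just above the carrier (and below it when moving away).
import numpy as np

fs, f0, dur = 48_000, 19_000, 0.1          # sample rate, pilot tone, window
t = np.arange(int(fs * dur)) / fs

def mic_signal(hand_speed):
    """Carrier plus a weak reflection Doppler-shifted by a moving hand."""
    shift = 2 * hand_speed / 343.0 * f0     # two-way shift off a moving target
    return (np.sin(2 * np.pi * f0 * t)
            + 0.1 * np.sin(2 * np.pi * (f0 + shift) * t))

def doppler_direction(x):
    """Compare spectral energy just above vs. just below the carrier."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    above = spec[(freqs > f0 + 30) & (freqs < f0 + 300)].sum()
    below = spec[(freqs < f0 - 30) & (freqs > f0 - 300)].sum()
    if above > 2 * below:
        return "towards"
    if below > 2 * above:
        return "away"
    return "still"

for v in (+0.5, -0.5, 0.0):                 # hand speed in m/s (assumed)
    print(f"hand speed {v:+.1f} m/s ->", doppler_direction(mic_signal(v)))
```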
Lionel Jacques

Wasps Can Recognize Faces - 2 views

  •  
    Again, amazing insects! Scientists have discovered that Polistes fuscatus paper wasps can recognize and remember each other's faces with sharp accuracy, a new study suggests.