
Advanced Concepts Team / Group items tagged: classification


Thijs Versloot

Spotting East African Mammals in Open Savannah from Space - 1 views

  •  
    A hybrid image classification method was employed for this specific purpose, incorporating the advantages of both pixel-based and object-based image classification approaches. This was performed in two steps: first, a pixel-based image classification method, i.e. an artificial neural network, was applied to classify potential targets with similar spectral reflectance at the pixel level; then, an object-based image classification method was used to further differentiate animal targets from the surrounding landscape through the application of expert knowledge. As a result, the large animals in two pilot study areas were successfully detected with an average count error of 8.2%, omission error of 6.6% and commission error of 13.7%. The results of the study show for the first time that it is feasible to perform automated detection and counting of large wild animals in open savannahs from space.
  •  
    And Paul, it includes neural networks!
  •  
    I kept telling you guys but you just laughed and laughed :))
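The two-step pipeline from the highlight can be sketched in a few lines of Python. This is a toy illustration only: a simple threshold stands in for the trained artificial neural network, a blob-size rule stands in for the expert knowledge, and the scene and all size limits below are made up.

```python
import numpy as np

def pixel_classify(image, threshold=0.5):
    """Step 1: pixel-based classification (a threshold stands in for
    the trained artificial neural network used in the paper)."""
    return image > threshold

def label_objects(mask):
    """4-connected component labelling via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return labels, current

def object_filter(labels, n, min_size=2, max_size=6):
    """Step 2: object-based filtering with an 'expert' size rule --
    blobs too small (noise) or too large (bushes, shadows) are rejected."""
    kept = [k for k in range(1, n + 1)
            if min_size <= np.sum(labels == k) <= max_size]
    return len(kept)

# Toy scene: two animal-sized blobs plus a single-pixel false positive.
scene = np.zeros((8, 8))
scene[1:3, 1:3] = 0.9   # animal 1 (4 px)
scene[5:7, 4:6] = 0.8   # animal 2 (4 px)
scene[0, 7] = 0.9       # noise (1 px)
mask = pixel_classify(scene)
labels, n = label_objects(mask)
print(object_filter(labels, n))  # → 2
```

Only blob-shaped, animal-sized objects survive both steps; the single-pixel noise that the pixel classifier lets through is rejected at the object level.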
LeopoldS

QuiBids - intriguing auction type - 7 views

  •  
    Did any of you already try this type of auction? Was it in the classification of Matthias?
pacome delva

Galaxy Zoo 2 : The Story So Far - 3 views

  • The original Galaxy Zoo was launched in July 2007, with a data set made up of a million galaxies imaged with the robotic telescope of the Sloan Digital Sky Survey. With so many galaxies, the team thought that it might take at least two years for visitors to the site to work through them all. Within 24 hours of launch, the site was receiving 70,000 classifications an hour, and more than 50 million classifications were received by the project during its first year, from almost 150,000 people.
  •  
    this is what I call a nice example of crowdsourcing or citizen scientists .... (remember my idea in the ideastorm ?? :-)
LeopoldS

CREAX - Function Database - 0 views

shared by LeopoldS on 24 Jun 09
  •  
    Functional classification of knowledge is a very effective way of stripping away boundaries between different industries and scientific disciplines. The function database provides descriptions, examples and animations for all known effects that can produce a function.
Dario Izzo

Bold title ..... - 3 views

  •  
    I got a fever. And the only prescription is more cat faces! ...../\_/\ ...(=^_^) ..\\(___) The article sounds quite interesting, though. I think the idea of a "fake" agent that tries to trick the classifier while both co-evolve is nice, as it allows the classifier to first cope with the lower-order complexity of the problem. As the fake agent mimics the real agent better and better, the classifier has time to add complexity to itself instead of trying to do it all at once. It would be interesting if this is later reflected in the neural net's structure, i.e. having core regions that deal with lower-order approximation / classification and peripheral regions (added at a later stage) that deal with nuances as they become apparent. Also, this approach will develop not just a classifier for agent behavior but at the same time a model of it. The latter may be useful in itself and might in some cases be the actual goal of the "researcher". I suspect, however, that the problem of producing / evolving the "fake agent" model might in most cases be at least as hard as producing a working classifier...
  •  
    This paper from 2014 seems to describe something pretty similar (except for not using physical robots, etc...): https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
  •  
    Yes, this IS basically adversarial learning. Except that the generator, instead of being a neural net, is some kind of swarm parametrization. I just love how they rebranded it, though. :))
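The co-evolution discussed in the comments is essentially the adversarial loop from the linked paper, and it fits in a deliberately tiny numpy sketch: a one-parameter "fake agent" (generator) versus a logistic-regression classifier (discriminator) on a scalar feature. All numbers here are illustrative, not from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def real_samples(n):
    # "Real agent" behaviour: a scalar feature clustered around 3.0
    return 3.0 + 0.1 * rng.standard_normal(n)

mu = 0.0            # "fake agent": one-parameter generator, starts far off
w, b = 1.0, 0.0     # classifier: logistic regression on the scalar feature
disc_lr, gen_lr = 0.1, 0.002
mus = []

for step in range(4000):
    x_real = real_samples(32)
    x_fake = mu + 0.1 * rng.standard_normal(32)

    # Classifier ascent: push real samples towards label 1, fakes towards 0.
    p_real, p_fake = sigmoid(w * x_real + b), sigmoid(w * x_fake + b)
    w += disc_lr * (np.mean((1 - p_real) * x_real) - np.mean(p_fake * x_fake))
    b += disc_lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator ascent: shift mu so the fakes look "real" to the classifier.
    p_fake = sigmoid(w * x_fake + b)
    mu += gen_lr * np.mean(1 - p_fake) * w
    mus.append(mu)

print(f"fake-agent mean after training: {mu:.2f}")
```

With the classifier learning much faster than the generator, mu climbs from 0 towards the real agent's mean of 3.0, with the classifier adding discriminative power only as the mimicry improves; the exact end value depends on the noise seed.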
Luís F. Simões

Poison Attacks Against Machine Learning - Slashdot - 1 views

  • Support Vector Machines (SVMs) are fairly simple but powerful machine learning systems. They learn from data and are usually trained before being deployed.
  • In many cases they need to continue to learn as they do the job and this raised the possibility of feeding it with data that causes it to make bad decisions. Three researchers have recently demonstrated how to do this with the minimum poisoned data to maximum effect. What they discovered is that their method was capable of having a surprisingly large impact on the performance of the SVMs tested. They also point out that it could be possible to direct the induced errors so as to produce particular types of error.
  •  
    http://arxiv.org/abs/1206.6389v2 for Guido; an interesting example of "takeover" research
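The linked paper's attack computes gradients of the validation error with respect to the poison points; the much cruder label-flipping sketch below (an online perceptron standing in for the continuously trained SVM, with hand-placed poison points) only illustrates why continued learning after deployment opens the door.

```python
import numpy as np

rng = np.random.default_rng(1)

def train_perceptron(X, y, epochs=20):
    """Plain online perceptron; stands in for a classifier that keeps
    learning on the job (this is NOT the paper's gradient-based attack)."""
    w = np.zeros(X.shape[1] + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w[0] + w[1:] @ xi) <= 0:
                w[1:] += yi * xi
                w[0] += yi
    return w

def accuracy(w, X, y):
    return np.mean(np.sign(w[0] + X @ w[1:]) == y)

# Clean, linearly separable 2-D data: two well-separated clusters.
X = np.vstack([rng.normal([-2, 0], 0.3, (50, 2)),
               rng.normal([+2, 0], 0.3, (50, 2))])
y = np.array([-1] * 50 + [+1] * 50)

w_clean = train_perceptron(X, y)

# Poisoning: a few far-out points with flipped labels, appended to the stream.
X_poison = np.vstack([X, [[8, 0]] * 5])
y_poison = np.append(y, [-1] * 5)   # label -1 deep inside the +1 region
w_poisoned = train_perceptron(X_poison, y_poison)

print(accuracy(w_clean, X, y), accuracy(w_poisoned, X, y))
```

Five poisoned points out of 105 are enough to wreck accuracy on the clean data, which is the qualitative point of the Slashdot summary: small, targeted corruption of the training stream has an outsized effect.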
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes are built over the years, as those massive amounts of data slowly get integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept built out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement "all information gets processed" is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you could make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you form can be), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
jcunha

Training and operation of an integrated neural network based on memristors - 0 views

  •  
    Almost in time for the workshop last week! This new Nature paper (e-mail me for the full paper) claims training and usage of a neural network implemented with metal-oxide memristors, without CMOS selectors. They used it to implement a delta-rule algorithm for classification of 3x3-pixel black-and-white letters. Very impressive work!!!!
  •  
    For those not that much into the topic, see Nature's News & Views piece www.nature.com/nature/journal/v521/n7550/full/521037a.html?WT.ec_id=NATURE-20150507 where they feature this article.
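A delta-rule classifier for 3x3 black-and-white patterns fits in a few lines of numpy. In the paper the weight matrix is physically a crossbar of memristor conductances; here it is just an array, and the letter bitmaps below are made up for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical 3x3 binary "letters" (illustrative pixel maps only).
letters = {
    "T": [[1, 1, 1],
          [0, 1, 0],
          [0, 1, 0]],
    "L": [[1, 0, 0],
          [1, 0, 0],
          [1, 1, 1]],
    "X": [[1, 0, 1],
          [0, 1, 0],
          [1, 0, 1]],
}

X = np.array([np.ravel(p) for p in letters.values()], dtype=float)
T = np.eye(len(letters))   # one-hot targets, one class per letter
W = np.zeros((3, 9))       # one weight row per class (in hardware: a
b = np.zeros(3)            # crossbar of memristor conductances)
lr = 0.1

# Delta rule: each weight moves by lr * (target - output) * input,
# here with a linear output unit and full-batch updates.
for _ in range(200):
    out = X @ W.T + b
    err = T - out
    W += lr * err.T @ X
    b += lr * err.sum(axis=0)

pred = np.argmax(X @ W.T + b, axis=1)
print(pred)  # → [0 1 2]: each pattern maps back to its own class
```

The hardware version performs the same weight increments by applying voltage pulses that nudge each memristor's conductance, which is what makes the in-situ training result notable.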
Luís F. Simões

Lockheed Martin buys first D-Wave quantum computing system - 1 views

  • D-Wave develops computing systems that leverage the physics of quantum mechanics in order to address problems that are hard for traditional methods to solve in a cost-effective amount of time. Examples of such problems include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing and bioinformatics.
  •  
    According to the company's Wikipedia page, the computer costs $10 million. Can we then declare that Quantum Computing has officially arrived?! Quotes from elsewhere on the site: "first commercial quantum computing system on the market"; "our current superconducting 128-qubit processor chip is housed inside a cryogenics system within a 10 square meter shielded room". Link to the company's scientific publications. Interestingly, this company seems to have been running a BOINC project, AQUA@home, to "predict the performance of superconducting adiabatic quantum computers on a variety of hard problems arising in fields ranging from materials science to machine learning. AQUA@home uses Internet-connected computers to help design and analyze quantum computing algorithms, using Quantum Monte Carlo techniques". List of papers coming out of it.
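For context: D-Wave-style annealers minimise a QUBO objective, E(x) = x^T Q x over binary vectors x, and the listed applications all reduce to that form. The brute-force toy solver below only shows the problem format (the Q matrix is an arbitrary example); the hardware's selling point is tackling variable counts where enumeration is hopeless.

```python
import itertools
import numpy as np

# A tiny upper-triangular QUBO matrix: linear terms on the diagonal,
# pairwise couplings above it (values chosen arbitrarily for illustration).
Q = np.array([[-1,  2,  0],
              [ 0, -1,  2],
              [ 0,  0, -1]])

def qubo_energy(x, Q):
    x = np.asarray(x)
    return x @ Q @ x

# Exhaustive search over all 2^3 binary assignments.
best = min(itertools.product([0, 1], repeat=3), key=lambda x: qubo_energy(x, Q))
print(best, qubo_energy(best, Q))  # → (1, 0, 1) -2
```

The couplings of +2 penalise switching on adjacent variables together, so the minimum-energy assignment activates the two non-interacting ones; a quantum annealer is meant to find such minima by physical relaxation rather than enumeration.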
ESA ACT

Home | Galaxy Zoo - 0 views

  •  
    An alternative to curiosity cloning: Let millions of people have a look at the pictures...
Juxi Leitner

Convolutional Neural Networks for Visual Recognition - 3 views

  •  
    pretty impressive stuff!
  • ...3 more comments...
  •  
    Amazing how some guys from some other university also did pretty much the same thing (although they didn't use the bidirectional stuff) and published it just last month. Just goes to show you can dump pretty much anything into an RNN and train it for long enough and it'll produce magic. http://arxiv.org/pdf/1410.1090v1.pdf
  •  
    Seems like quite the trend. And the fact that google still tries to use LSTMs is even more surprising.
  •  
    LSTMs: that was also the first thing in the paper that caught my attention! :) I hadn't seen them in the wild in years... My oversight most likely. The paper seems to be getting ~100 citations a year. Someone's using them.
  •  
    There are a few papers on them. Though you have to be lucky to get them to work. The backprop is horrendous.
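For reference, one step of a standard LSTM cell in numpy: the three gates below are exactly what makes backpropagation through time so fiddly to implement by hand. Weights are random here, purely to show the structure; sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8

# One shared weight matrix producing all four pre-activations (i, f, o, g).
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
bias = np.zeros(4 * n_hid)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c):
    z = W @ np.concatenate([x, h]) + bias
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g          # cell state: the "constant error carousel"
    h_new = o * np.tanh(c_new)     # hidden state exposed to the next layer
    return h_new, c_new

h = c = np.zeros(n_hid)
for t in range(5):                 # unroll over a short random input sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c)
print(h.shape)  # → (8,)
```

The backward pass has to route gradients through every gate at every timestep while keeping the additive cell-state path intact, which is why hand-rolled LSTM backprop earned its reputation before autodiff frameworks took over.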
Juxi Leitner

Game-playing software holds lessons for neuroscience : Nature News & Comment - 4 views

  •  
    DeepMind actually got a comp-sci paper into nature...