
Advanced Concepts Team: Group items tagged "training"


Athanasia Nikolaou

The weather of 2013 recapped in an 8' video - 0 views

  •  
    Very comprehensive, thanks to the narrator from the EUMETSAT training office (plus aesthetically pleasing)
Daniel Hennes

A.I. XPRIZE - 3 views

  •  
    TED is sponsoring an A.I. XPRIZE. The goal? Develop an artificial intelligence that jumps on stage and gives a 3-minute talk on a random topic...
  •  
    I am going to propose that the rules also include something practical - like washing the dishes... If we are to foster progress, let's finally do so in the right direction...
  •  
    This sort of reminds me of Hinton's paper from some years ago: http://www.cs.utoronto.ca/~ilya/pubs/2011/LANG-RNN.pdf Train it on previous TED talks and let it produce TED-talk-like gibberish. It would probably be of similar value. He had a nice one on the meaning of life, but I can't find it anymore.
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard at the end of the 80s / beginning of the 90s.

    However, embodiment goes further than gathering information and "mental" recognition. In this respect the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds for everything from extremely simple creatures to more advanced ones, but it is an interesting idea nonetheless.

    This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul: While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. Building a simulated brain that resembles (parts of) the human one in its final state, on the other hand, seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention; whatever does is a highly reduced representation of the data you get. But the rest doesn't get lost. Basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensorimotor system that puts your legs in motion, actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands - reflexes that were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain.

    But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form an abstract concept built out of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn...
  •  
    All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement "all information gets processed" is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. Until you show its superiority experimentally, it's as good as any alternative assumption you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you form can be), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and in what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as energy usage and training time).

    "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions - a constraint space, if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Dario Izzo

Climate scientists told to 'cover up' the fact that the Earth's temperature hasn't rise... - 5 views

  •  
    This is becoming a mess :)
  • ...2 more comments...
  •  
    I would avoid reading climate science from political journals, for a less selective / dramatic picture :-) . Here is a good start: http://www.realclimate.org/ And an article on why climate understanding should be approached hierarchically (which is not the way it is done in the IPCC) - an insightful view from 8 years ago: http://www.princeton.edu/aos/people/graduate_students/hill/files/held2005.pdf
  •  
    True, but funding is allocated to climate modelling 'science' on the basis of political decisions, not solid and boring scientific truisms such as 'all models are wrong'. The reason so many people got trained in this area in the past years is that resources were allocated to climate science on the basis of the dramatic picture depicted by some scientists, when it was indeed convenient for them to be dramatic.
  •  
    I see your point, and I agree that funding was also promoted through the energy players and their political influence - a coincident parallel interest which is irrelevant to the fact that the question remains vital: how do we affect climate, and how does it respond? It is a huge, complex system to analyse, one that responds on various time scales which can obscure the trend. What if we made a conceptual parallel with the L'Aquila case: is the scientific method guilty, or the interpretation of uncertainty in terms of societal mobilization? Should we leave the humanitarian aspect outside any scientific activity?
  •  
    I do not think anyone is arguing that the question is not interesting and complex. The debate, instead, addresses the predictive value of the models produced so far. Are they good enough to be used outside of the scientific process aimed at improving them? Or should one wait for "the scientific method" to bring forth substantial improvements to the current understanding and only then start using its results? One can take both standpoints, but some recent developments will bring many towards the second approach.
Athanasia Nikolaou

Neural Networks (!) in OLCI - ocean colour sensor onboard Sentinel 3 - 3 views

  •  
    Not an easily digestible piece of ESA documentation, but it proves Paul's point. And yes, they have already planned to train neural networks on a database of different water types, so that the satellite figures out from the combined retrieval of backscattering and absorption = f(λ) which type of water it is looking at. The type of water relates to the optical clarity of the water, a variable called turbidity. We could do this as well for mapping iron fertilization locations if we find their spectral signature (a toy sketch of such a classifier follows below). Lab time?????
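    To make the idea concrete, here is a minimal sketch of such a water-type classifier. Everything in it is hypothetical: the spectra are synthetic stand-ins for the retrieved backscattering/absorption = f(λ), the band count is a placeholder for OLCI's, and a random forest stands in for whatever network is actually trained.

        # Hypothetical sketch: classify water type from retrieved optical spectra.
        # Synthetic data only - a real version would use the OLCI band set and a
        # labelled database of water types (this is NOT the actual ESA pipeline).
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n_bands, n_samples = 16, 600   # n_bands is a stand-in for OLCI's bands

        # Fake spectra: three "water types" differing in spectral slope, plus noise.
        labels = rng.integers(0, 3, n_samples)
        slopes = np.array([0.5, 1.0, 2.0])[labels][:, None]
        wavelengths = np.linspace(400.0, 900.0, n_bands)   # nm
        spectra = (slopes * np.exp(-wavelengths / 500.0)
                   + 0.05 * rng.standard_normal((n_samples, n_bands)))

        X_train, X_test, y_train, y_test = train_test_split(spectra, labels, random_state=0)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
        print("held-out accuracy:", clf.score(X_test, y_test))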
Paul N

Animal brains connected up to make mind-melded computer - 2 views

  •  
    Parallel processing in computing - Brainet. The team sent electrical pulses to all four rats and rewarded them when they synchronised their brain activity. After 10 training sessions, the rats were able to do this 61 per cent of the time. This synchronous brain activity can be put to work as a computer to perform tasks like information storage and pattern recognition, says Nicolelis. "We send a message to the brains, the brains incorporate that message, and we can retrieve the message later," he says. Dividing the computing of a task between multiple brains is similar to sharing computations between multiple processors in modern computers. "If you could collaboratively solve common problems [using a brainet], it would be a way to leverage the skills of different individuals for a common goal."
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    And here comes my standard question: how can we use this for space? Fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other weapons you don't want look like in satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one can train one of those image recognition algorithms to do it automatically (a toy sketch follows at the end of this thread). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, there was a big trend of CUDA GPU vs FPGA for hardware-accelerated image processing. Most of it orbited around discussing who was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing with the radio telescope institute in the Netherlands about the solution they were working on (GPU CUDA).

    I gathered that NVIDIA GPUs best suit applications that do not rely on custom hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored 'harsh' programming languages. I don't know what the level of rad-hardness of NVIDIA's GPUs is... FPGAs are therefore the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy or other types of telescopes). I think that for a specific purpose such as the one you mentioned, this FPGA vs GPU question should be assessed first before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels' worth of power, while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion of how to make FPGA hardware-acceleration solutions easier to use for the 'layman' is starting, btw: http://reconfigurablecomputing4themasses.net/
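    As promised above, a toy sketch of the seal-counting idea: a small CNN that classifies fixed-size b/w image patches as seal / no-seal, with the count obtained by tiling the satellite scene into patches and summing detections. The tensors below are random noise, so this shows the plumbing only, not a working detector.

        # Toy patch classifier for the hypothetical seal-counting project.
        import torch
        import torch.nn as nn

        class PatchClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(),
                    nn.Linear(16 * 8 * 8, 2),   # 32x32 input -> 8x8 after two pools
                )

            def forward(self, x):
                return self.net(x)

        model = PatchClassifier()
        patches = torch.randn(64, 1, 32, 32)   # 64 b/w 32x32 patches (random stand-ins)
        labels = torch.randint(0, 2, (64,))    # 1 = seal present
        loss = nn.CrossEntropyLoss()(model(patches), labels)
        loss.backward()                        # one illustrative training step
        print(loss.item())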
Alexander Wittig

Neural Networks: Computers paint like van Gogh - 1 views

  •  
    A neural network trained to paint the scene of a given photograph in the style of Kandinsky, van Gogh, or Munch. Their results look quite impressive. Unfortunately the article is in German, but the English paper (with plenty of pictures) is here: http://arxiv.org/pdf/1508.06576v2.pdf From the German teaser: painting like Kandinsky, like van Gogh, like Munch, from nothing but a photograph? Of course there are talented art forgers who can do that. But now computers can do it too, and in a most impressive way. Three researchers from the University of Tübingen have managed to teach a so-called artificial neural network how to paint.
  •  
    Impressive stuff indeed. The paper came out one week ago, and multiple independent implementations have popped up since then:
    * https://github.com/Lasagne/Recipes/blob/master/examples/styletransfer/Art%20Style%20Transfer.ipynb
    * https://github.com/jcjohnson/neural-style
    * https://github.com/kaishengtai/neuralart
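    For anyone curious what those implementations do under the hood: the linked paper optimizes the generated image under two losses - match the photo's CNN feature maps (content) and match the Gram matrices of the artwork's feature maps (style). A minimal sketch, with random tensors standing in for real VGG activations and illustrative (not the paper's) weights:

        # Core of neural style transfer in miniature: content loss + Gram-based style loss.
        import torch

        def gram(features):
            # features: (channels, height, width) -> (channels, channels) correlations
            c, h, w = features.shape
            f = features.reshape(c, h * w)
            return f @ f.t() / (c * h * w)

        content_feats = torch.randn(64, 32, 32)   # stand-in: features of the photo
        style_feats = torch.randn(64, 32, 32)     # stand-in: features of the painting
        gen_feats = torch.randn(64, 32, 32, requires_grad=True)  # generated image's features

        alpha, beta = 1.0, 1000.0                 # illustrative content/style weights
        loss = (alpha * ((gen_feats - content_feats) ** 2).mean()
                + beta * ((gram(gen_feats) - gram(style_feats)) ** 2).sum())
        loss.backward()    # in the real method, gradients flow back to the image pixels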
Ma Ru

Train like an astronaut - 4 views

  •  
    Nice initiative by ESA...
pacome delva

Sensitivity training for LISA - 1 views

  • De Vine and colleagues therefore designed a laboratory-scale version of LISA and showed they were able to suppress two potentially damaging noise sources: phase fluctuations in the clocks that synchronize the measurements, and frequency fluctuations in the lasers.
  •  
    good news for LISA...!
pacome delva

Neural Networks Designed to 'See' are Quite Good at 'Hearing' As Well - 2 views

  • Neural networks -- collections of artificial neurons or nodes set up to behave like the neurons in the brain -- can be trained to carry out a variety of tasks, often having something to do with pattern or sequence recognition. As such, they have shown great promise in image recognition systems. Now, research coming out of the University of Hong Kong has shown that neural networks can hear as well as see. A neural network there has learned the features of sound, classifying songs into specific genres with 87 percent accuracy.
  • Similar networks based on auditory cortexes have been rewired for vision, so it would appear these kinds of neural networks are quite flexible in their functions. As such, it seems they could potentially be applied to all sorts of perceptual tasks in artificial intelligence systems, the possibilities of which have only begun to be explored.
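    A hedged sketch of what "a vision network hearing" amounts to in practice: a song's spectrogram is just an image, so the same classification machinery applies. The spectrograms below are random arrays (so accuracy sits at chance); only the pipeline shape is the point.

        # Toy genre classifier on (fake) spectrograms, treated as flattened images.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(1)
        n_songs, n_mels, n_frames = 400, 40, 100
        spectrograms = rng.random((n_songs, n_mels, n_frames))  # stand-ins for real audio
        genres = rng.integers(0, 5, n_songs)                    # 5 synthetic genres

        X = spectrograms.reshape(n_songs, -1)   # flatten, as a simple vision model would
        X_train, X_test, y_train, y_test = train_test_split(X, genres, random_state=1)
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print("accuracy (chance level on random data):", clf.score(X_test, y_test))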
Ma Ru

Train Like an Astronaut ESA Initiative - 2 views

  •  
    You know what to do...
Francesco Biscani

What Should We Teach New Software Developers? Why? | January 2010 | Communications of t... - 3 views

shared by Francesco Biscani on 15 Jan 10
Dario Izzo liked it
  • Industry wants to rely on tried-and-true tools and techniques, but is also addicted to dreams of "silver bullets," "transformative breakthroughs," "killer apps," and so forth.
  • This leads to immense conservatism in the choice of basic tools (such as programming languages and operating systems) and a desire for monocultures (to minimize training and deployment costs).
  • The idea of software development as an assembly line manned by semi-skilled interchangeable workers is fundamentally flawed and wasteful.
  •  
    Nice opinion piece by the creator of C++ Bjarne Stroustrup. Substitute "industry" with "science" and many considerations still apply :)
  •  
    "for many, "programming" has become a strange combination of unprincipled hacking and invoking other people's libraries (with only the vaguest idea of what's going on). The notions of "maintenance" and "code quality" are typically forgotten or poorly understood. " ... seen so many of those students :( and ad "My suggestion is to define a structure of CS education based on a core plus specializations and application areas", I am not saying the austrian university system is good, but e.g. the CS degrees in Vienna are done like this, there is a core which is the same for everybody 4-5 semester, and then you specialise in e.g. software engineering or computational mgmt and so forth, and then after 2 semester you specialize again into one of I think 7 or 8 master degrees ... It does not make it easy for industry to hire people, as I have noticed, they sometimes really have no clue what the difference between Software Engineering is compared to Computational Intelligence, at least in HR :/
pacome delva

Electronic Nose Knows a Good Smell - 1 views

  • Most of these devices have been able to identify and distinguish only between specific odors they've previously been trained to recognize, however, says neuroscientist Rafi Haddad of the Weizmann Institute of Science in Rehovot, Israel. If an artificial nose is ever to replace the real thing, he says, it will have to be able to classify odors it has never encountered before.
  •  
    for Eduardo !
  •  
    Smells awesome! Thanks dude...
Joris _

The Associated Press: Daunting space task _ send astronauts to asteroid - 1 views

  • NASA leaders say civilization may depend on it
  • NASA is thinking about jetpacks, tethers, bungees, nets and spiderwebs to allow explorers to float just above the surface of it while attached to a smaller mini-spaceship.
  • At the moment, there are only a handful of asteroid options and they all have names like 1999AO10 or 2009OS5.
  • ...2 more annotations...
  • NASA is pursuing its concept for a mini-spaceship exploration vehicle, about the size of a minivan. And it's planning an underwater lab for training, an effort to mimic an asteroid mission's challenges.
  • "There's a lot of things we need to invent and build between now and then."
ESA ACT

JoVE: Journal of Visualized Experiments - Biological Experiments and Protocols on Video - 0 views

  •  
    Amazing: you can now publish lecture videos of experiments and special techniques.
johannessimon81

42 - a constant of nature - 3 views

  •  
    It turns out that falling along any straight line through the Earth takes 42 minutes (the gravity train). I don't think this has been offered as an explanation of Douglas Adams' 42, but the fact is definitely quite beautiful.
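    The 42 minutes drops out of simple harmonic motion in a uniform-density Earth (a simplification; the real density profile shifts the number slightly). Inside such a sphere, gravity at radius r is g(r) = g r / R, and along any straight tunnel the component parallel to the tunnel, at distance x from the chord's midpoint, gives

        \[ \ddot{x} = -\frac{g}{R}\,x \quad\Rightarrow\quad \omega = \sqrt{\frac{g}{R}}, \]

    independent of which chord you pick. The one-way trip is half a period:

        \[ t = \pi\sqrt{\frac{R}{g}} = \pi\sqrt{\frac{6.371\times 10^{6}\,\mathrm{m}}{9.81\,\mathrm{m\,s^{-2}}}} \approx 2.5\times 10^{3}\,\mathrm{s} \approx 42\,\mathrm{min}. \]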
Juxi Leitner

Convolutional Neural Networks for Visual Recognition - 3 views

  •  
    pretty impressive stuff!
  • ...3 more comments...
  •  
    Amazing how some guys from some other university also did pretty much the same thing (although they didn't use the bidirectional stuff) and published it just last month. Just goes to show you can dump pretty much anything into an RNN and train it for long enough and it'll produce magic. http://arxiv.org/pdf/1410.1090v1.pdf
  •  
    Seems like quite the trend. And the fact that Google still tries to use LSTMs is even more surprising.
  •  
    LSTMs: that was also the first thing in the paper that caught my attention! :) I hadn't seen them in the wild in years... My oversight most likely. The paper seems to be getting ~100 citations a year. Someone's using them.
  •  
    There are a few papers on them, though you have to be lucky to get them to work - the backprop is horrendous. (A minimal sketch of such a model follows below.)
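    Since LSTMs came up: a minimal sketch of a character-level LSTM in the spirit of the papers above (a generic toy, not either paper's actual architecture). Modern autodiff frameworks do the backprop-through-time that the last comment calls horrendous:

        # Tiny character-level LSTM language model (illustrative only).
        import torch
        import torch.nn as nn

        vocab, embed, hidden = 128, 32, 256   # ASCII vocabulary

        class CharLSTM(nn.Module):
            def __init__(self):
                super().__init__()
                self.emb = nn.Embedding(vocab, embed)
                self.lstm = nn.LSTM(embed, hidden, batch_first=True)
                self.out = nn.Linear(hidden, vocab)

            def forward(self, x):
                h, _ = self.lstm(self.emb(x))
                return self.out(h)

        model = CharLSTM()
        seq = torch.randint(0, vocab, (8, 100))   # 8 random "sentences" of 100 chars
        logits = model(seq[:, :-1])               # predict each next character
        loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab), seq[:, 1:].reshape(-1))
        loss.backward()                           # backprop through time, via autograd
        print(loss.item())                        # ~ln(128) ≈ 4.85 on random data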
Thijs Versloot

Real-Time Recognition and Profiling of Home Appliances through a Single Electricity Sensor - 3 views

  •  
    A personal interest of mine that I want to explore a bit more in the future. I just bought a ZigBee electricity monitor and I am wondering whether from the mains signal one could (reliably) detect the oven turning on, the lights, etc. It probably requires neural network training (a toy sketch of the detection idea follows at the end of this thread). The idea would be to make a simple device which basically saves you money by telling you how much electricity you are wasting. Then again, it's probably already done by Google...
  • ...3 more comments...
  •  
    nice project!
  •  
    For those interested, this is what/where I ordered.. http://openenergymonitor.org/emon/
  •  
    Update two: the RF chip is faulty and tonight I have to solder a new chip into place... That's open-source hardware for you!
  •  
    haha, yep, that's it... but we can do better than that right! :)
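    Closing the loop on the first comment in this thread, a toy sketch of the detection step: look for step changes in the aggregate mains signal and match them against per-appliance wattage signatures. The signal and the signatures are made up, and a real system would want trained models (as suggested above) rather than nearest-wattage matching.

        # Toy appliance detector: threshold step changes in a synthetic mains signal.
        import numpy as np

        rng = np.random.default_rng(2)
        power = 150 + 5 * rng.standard_normal(600)     # one reading/s, baseline watts
        power[120:480] += 2000                         # oven on at t=120s, off at t=480s

        signatures = {"oven": 2000, "kettle": 1200, "lights": 60}  # hypothetical steps, W

        steps = np.diff(power)
        events = np.where(np.abs(steps) > 30)[0]       # threshold out sensor noise
        for i in events:
            delta = steps[i]
            name = min(signatures, key=lambda k: abs(abs(delta) - signatures[k]))
            state = "on" if delta > 0 else "off"
            print(f"t={i}s: {name} switched {state} (step {delta:+.0f} W)")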
jcunha

Computer model matches humans at predicting how objects move - 0 views

  •  
    We humans take for granted our remarkable ability to predict things that happen around us. Here, a deep learning model trained on real-world videos and coupled with a 3D graphics engine was able to infer the physical properties of objects on par with humans.