Advanced Concepts Team / Group items tagged: vision

Annalisa Riccardi

Dynamic Vision Sensors - 5 views

  • New vision sensor from a Swiss company that seems to go beyond Elementary Motion Detectors (the ones inspired by insect vision).
  • Nice stuff!
ESA ACT

CNN.com Specials - Vision - 0 views

  • Visions of the future collected by CNN.
Guido de Croon

Convolutional networks start to rule the world! - 2 views

  • Recently, many competitions in the computer vision domain have been won by huge convolutional networks. In the ImageNet competition, the convolutional network approach halves the error from ~30% to ~15%! Key changes that make this happen: weight-sharing to reduce the search space, and training with a massive GPU approach. (See also the work at IDSIA: http://www.idsia.ch/~juergen/vision.html) This should please Francisco :) (A minimal sketch of the weight-sharing idea follows these comments.)
  • where is Francisco when one needs him ...
  • ...mmmmm... they use 60 million parameters and 650,000 neurons on a task that one could consider easier than (say) predicting a financial crisis ... and still they get 15% errors .... reminds me of a comic we saw once ... cat: http://www.sarjis.info/stripit/abstruse-goose/496/the_singularity_is_way_over_there.png
  • I think the ultimate solution is still to put a human brain in a jar and use it for pattern recognition. Maybe we should get a stagiaire for this..?
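To make the weight-sharing point concrete, here is a minimal Python/numpy sketch (an editorial illustration, not code from the bookmarked work; all sizes and names are made up). One small kernel is reused at every image location, so the layer has 9 free parameters where a dense layer mapping the same input to an output of the same size would need roughly a million:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: the same small kernel (shared weights)
    is applied at every location of the image."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

image = np.random.rand(32, 32)
kernel = np.random.randn(3, 3)                # 9 shared parameters
feature_map = conv2d(image, kernel)           # 30 x 30 output

# a dense layer producing the same-sized output needs one weight
# per (input pixel, output pixel) pair:
dense_params = image.size * feature_map.size  # 1024 * 900 = 921,600
print(f"conv parameters: {kernel.size}, dense equivalent: {dense_params}")
```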
Nina Nadine Ridder

Surprising similarity in fly and mouse motion vision - 2 views

  • Loosely related to an old ACT project on optical flow (if I remember correctly; but even if not, still an interesting read I think): "At first glance, the eyes of mammals and those of insects do not seem to have much in common. However, a comparison of the neural circuits for detecting motion shows surprising parallels between flies and mice. Scientists have learned a lot about the visual perception of both animals in recent years."
nikolas smyrlakis

BBC News - European space missions given cost warning - 1 views

  • Cosmic Vision missions, some of which are to be selected before the end of 2011. Favorite phrase: "Mindful of the recent criticism the agency has received from member states on the issue of cost overruns, Professor David Southwood, Esa's director of science and robotics, told the teams: 'Industry and the science community need to get to work on this; it's a collective responsibility.'" :-> reference class forecasting!
Joris _

Panel Picks 3 Finalists for ESA's Cosmic Vision M-class Missions | SpaceNews.com - 3 views

  • “superb from a science standpoint,” but beyond Europe’s current budget
  • There was some questioning of the cost estimate for Euclid, but at some point you have to decide: Either you don’t believe the estimates that [ESA science program managers] produced, or you assume their estimates are credible. Euclid was the one mission where costs were debated, but the consensus was to use the cost estimates presented to us.
  • Finally, Cross-Scale is not selected; it was the only Cosmic Vision mission with multi-satellite formation flying (since the concept for XEUS has changed). There is still Swarm, but it's not really formation flying... So ESA is missing something here...
jcunha

Nature Optics: Super vision - 6 views

  • Taking images through opaque, light-scattering layers is a vital capability and an essential diagnostic tool in many applications. The research group of Prof. Mosk at the University of Twente started doing experiments shooting optical lasers into opaque materials in 2007, and to everyone's surprise, it turned out that the light intensity behind the opaque material in their experiments was orders of magnitude bigger than expected. Following these results, they succeeded in taking sharp, non-invasive pictures of objects hidden behind an opaque screen: the "super vision" referred to in this Nature overview article. (A toy sketch of the underlying wavefront-shaping idea follows these comments.)
  • very nice!!!
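As a rough illustration of how wavefront shaping makes such "super vision" possible, here is a toy Python sketch (an editorial simplification, not the group's actual method; the transmission row and segment count are invented). The opaque layer is modelled as a random complex transmission vector, and tuning each input segment's phase to maximise the intensity at one output spot brightens it far beyond the unshaped level:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64  # controllable phase segments on the spatial light modulator

# one row of a random transmission matrix models the scattering layer
t = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2 * n)

def intensity(phases):
    """Light intensity at the chosen output spot behind the scatterer."""
    return abs(np.sum(t * np.exp(1j * phases))) ** 2

phases = np.zeros(n)
print("before shaping:", intensity(phases))

# sequential wavefront shaping: optimise one segment's phase at a time
candidates = np.linspace(0.0, 2.0 * np.pi, 32, endpoint=False)
for k in range(n):
    def trial(p, k=k):
        trial_phases = phases.copy()
        trial_phases[k] = p
        return intensity(trial_phases)
    phases[k] = max(candidates, key=trial)

print("after shaping:", intensity(phases))  # roughly (pi/4) * n times brighter
```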
Marcus Maertens

amzn/computer-vision-basics-in-microsoft-excel: Computer Vision Basics in Microsoft Exc... - 2 views

  • One of the best use cases for MS Excel so far.
LeopoldS

Helix Nebula - Helix Nebula Vision - 0 views

  • The partnership brings together leading IT providers and three of Europe's leading research centres (CERN, EMBL and ESA) in order to provide computing capacity and services that elastically meet big science's growing demand for computing power.

    Helix Nebula provides an unprecedented opportunity for the global cloud services industry to work closely on the Large Hadron Collider through the large-scale, international ATLAS experiment, as well as with the molecular biology and Earth observation communities. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed.

    This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.

    Initially based on the needs of European big-science, Helix Nebula ultimately paves the way for a Cloud Computing platform that offers a unique resource to governments, businesses and citizens.
  • "Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed." And here I was thinking cloud computing was old news 3 years ago :)
LeopoldS

Physicists twist water into knots : Nature News & Comment - 3 views

  • More than a century after the idea was first floated, physicists have finally figured out how to tie water in knots in the laboratory. The gnarly feat, described today in Nature Physics, paves the way for scientists to experimentally study twists and turns in a range of phenomena - ionized gases like that of the Sun's outer atmosphere, superconductive materials, liquid crystals and quantum fields that describe elementary particles.

    Lord Kelvin proposed that atoms were knotted "vortex rings" - essentially tornadoes bent into closed loops and knotted around themselves, as Daniel Lathrop and Barbara Brawn-Cinani write in an accompanying commentary. In Kelvin's vision, the fluid was the theoretical 'aether' then thought to pervade all of space. Each type of atom would be represented by a different knot.

    Kelvin's interpretation of the periodic table never went anywhere, but his ideas led to the blossoming of the mathematical theory of knots, part of the field of topology. Meanwhile, scientists also have come to realize that knots have a key role in a host of physical processes.
Marcus Maertens

World's first telescopic contact lens gives you Superman-like vision | ExtremeTech - 1 views

  • Now we just need an X-ray mode and we are done.
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  • Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  • The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  • The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  • I like how he attacks Moore's Law. It always looks a bit naive to me when people start to (ab)use it to make their point. No strong opinion about embodiment.
  • @Paul: How would embodiment be done RIGHT?
  • Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  • @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what, to a computer scientist, looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  • Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  • Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation of each step from its desired trajectory using the complicated system of nerve endings and motor commands. Reflexes are built over the years, as those massive amounts of data slowly get integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of a strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept built out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  • All information *that gets processed* gets processed; but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  • These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions (a constraint space, if you will); failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to a random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Tom Gheysens

Dragonflies can see by switching 'on' and 'off' - 0 views

  • Researchers at the University of Adelaide have discovered a novel and complex visual circuit in a dragonfly's brain that could one day help to improve vision systems for robots.
anonymous

See-Through Vision at UCSB - 5 views

  • Determining the "volume" and position of objects inside buildings using Wi-Fi signals.
  • Now this is impressive
  • Another example of Bayesian inversion (a toy sketch follows these comments).
  • "The objects on the other side do not even have to move to be detected." Very American way of saying "The objects have to be stationary to be detected"...
Tom Gheysens

Direct brain-to-brain communication demonstrated in human subjects -- ScienceDaily - 2 views

  • In a first-of-its-kind study, an international team of neuroscientists and robotics engineers has demonstrated the viability of direct brain-to-brain communication in humans.
  • Was just about to post it... :) It seems that after transferring the EEG signals of one person, converting them to bits and stimulating some brain activity using transcranial magnetic stimulation (TMS), the receiving person actually sees 'flashes of light' in their peripheral vision. So it's using your vision sense to get the information across. Would it not be better to see if you can generate some kind of signal in the part of the brain that is connected to hearing? Or is this me thinking too naively?
  • "transferring the EEG signals of one person, converting it to bits and stimulating some brain activity using magnetic stimulation (TMS)" How is this "direct"?
LeopoldS

Worden Discusses Futuristic Vision CBS News | SpaceNews.com - 2 views

  • he has always been between genius and crazy .... but I like it ...
Juxi Leitner

Real-Life Cyborg Astrobiologists to Search for Signs of Life on Future Mars Missions - 0 views

  • The EuroGeo team developed a wearable-computer platform for testing computer-vision exploration algorithms in real time at geological or astrobiological field sites, focusing on the concept of "uncommon mapping" in order to identify contrasting areas in an image of a planetary surface. Recently, the system was made more ergonomic and easier to use by porting it to a phone-cam platform connected to a remote server.
  • a second computer-vision exploration algorithm using a neural network in order to remember aspects of previous images and to perform novelty detection (a toy sketch of the novelty-detection idea follows these comments)
  • Well, a bit of a misleading title...
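The article gives no implementation details, so here is a toy Python stand-in for the novelty-detection idea (a nearest-neighbour memory over crude histogram descriptors, not the neural network the team actually used; the threshold and image sizes are invented). An image is flagged as novel when it is sufficiently unlike everything remembered so far:

```python
import numpy as np

def features(img):
    """Crude image descriptor: a normalised 8-bin intensity histogram."""
    hist, _ = np.histogram(img, bins=8, range=(0.0, 1.0))
    return hist / hist.sum()

class NoveltyDetector:
    """Remembers descriptors of past images; flags images unlike all of them."""
    def __init__(self, threshold=0.15):
        self.memory = []
        self.threshold = threshold

    def is_novel(self, img):
        f = features(img)
        novel = (not self.memory or
                 min(np.abs(f - m).sum() for m in self.memory) > self.threshold)
        if novel:
            self.memory.append(f)  # remember it for future comparisons
        return novel

rng = np.random.default_rng(2)
detector = NoveltyDetector()
plain_a = rng.uniform(0.4, 0.6, size=(64, 64))  # bland, uniform terrain
plain_b = rng.uniform(0.4, 0.6, size=(64, 64))  # more of the same terrain
outcrop = rng.uniform(0.0, 1.0, size=(64, 64))  # high-contrast patch
print(detector.is_novel(plain_a))  # True: the first image is always new
print(detector.is_novel(plain_b))  # False: similar terrain was already seen
print(detector.is_novel(outcrop))  # True: its histogram differs markedly
```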
Francesco Biscani

DIRECT - Wikipedia, the free encyclopedia - 0 views

  • DIRECT is a proposed alternative Shuttle-Derived Launch Vehicle architecture supporting NASA's Vision for Space Exploration, which would replace the space agency's planned Ares I and Ares V rockets with a family of launch vehicles named "Jupiter."
  • DIRECT is advocated by a group of space enthusiasts that asserts it represents a broader team of dozens of NASA and space industry engineers who actively work on the proposal on an anonymous, volunteer basis in their spare time.
  • Just read about this; it looks like an interesting example of bottom-up innovation and self-organization.
LeopoldS

Twitter-brain interface offers terrifying vision of the future - 0 views

  • another one for Luca ... :-)