
Advanced Concepts Team: group items tagged "computer vision"


LeopoldS

Helix Nebula - Helix Nebula Vision - 0 views

  •  
    The partnership brings together leading IT providers and three of Europe's leading research centres, CERN, EMBL and ESA, in order to provide computing capacity and services that elastically meet big science's growing demand for computing power.

    Helix Nebula provides an unprecedented opportunity for the global cloud services industry to work closely with CERN on the large-scale, international ATLAS experiment at the Large Hadron Collider, as well as with the molecular biology and Earth observation communities. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two-year pilot phase, during which procurement processes and governance issues for the public/private partnership will be addressed.

    This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.

    Initially based on the needs of European big science, Helix Nebula ultimately paves the way for a cloud computing platform that offers a unique resource to governments, businesses and citizens.
  •  
    "Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed." And here I was thinking cloud computing was old news 3 years ago :)
Marcus Maertens

amzn/computer-vision-basics-in-microsoft-excel: Computer Vision Basics in Microsoft Exc... - 2 views

  •  
    One of the best use cases for MS Excel so far.
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me when people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle), a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea, therefore, is that it is the body that helps to deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention; whatever does is a highly reduced representation of the data you get. That doesn't mean the rest gets lost, however. Basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation of each step from its desired trajectory using the complicated system of nerve endings and motor commands: reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color, and maybe the sound it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept built out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as any other alternative assumption you could make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson (2002) as a crash course.
  •  
    These deal with different aspects of human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data within an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Guido de Croon

Convolutional networks start to rule the world! - 2 views

  •  
    Recently, many competitions in the computer vision domain have been won by huge convolutional networks. In the ImageNet competition, the convolutional network approach halved the error from ~30% to ~15%! Key changes that make this happen: weight-sharing to reduce the search space (a minimal sketch follows after these comments), and training with a massive GPU approach. (See also the work at IDSIA: http://www.idsia.ch/~juergen/vision.html) This should please Francisco :)
  • ...1 more comment...
  •  
    where is Francisco when one needs him ...
  •  
    ...mmmmm... they use 60 million parameters and 650,000 neurons on a task that one can somehow consider easier than (say) predicting a financial crisis ... still they get 15% errors .... reminds me of a comic we saw once ... cat http://www.sarjis.info/stripit/abstruse-goose/496/the_singularity_is_way_over_there.png
  •  
    I think the ultimate solution is still to put a human brain in a jar and use it for pattern recognition. Maybe we should get a stagiaire for this..?
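To make the weight-sharing point from the first comment concrete, here is a minimal PyTorch sketch; the layer sizes are illustrative, not those of the ImageNet-winning network. A convolution reuses one small kernel across the whole image, where a dense layer mapping the same input to the same output needs a separate weight for every pixel pair.

```python
# Toy comparison: parameter count of a conv layer vs. an unshared dense layer
# with the same input/output sizes (illustrative sizes, 32x32 RGB input).
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=5, padding=2)
dense = nn.Linear(3 * 32 * 32, 16 * 32 * 32)  # same I/O shape, no weight-sharing

n_conv = sum(p.numel() for p in conv.parameters())    # 1,216 parameters
n_dense = sum(p.numel() for p in dense.parameters())  # ~50 million parameters
print(f"conv: {n_conv:,}  dense equivalent: {n_dense:,}")

x = torch.randn(1, 3, 32, 32)   # one fake RGB image
print(conv(x).shape)            # torch.Size([1, 16, 32, 32])
```

The roughly 40,000-fold reduction in free parameters is the "reduced search space" the comment refers to: the same 5x5 kernel is applied at every image location.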
Juxi Leitner

Real-Life Cyborg Astrobiologists to Search for Signs of Life on Future Mars Missions - 0 views

  • The EuroGeo team developed a wearable-computer platform for testing computer-vision exploration algorithms in real time at geological or astrobiological field sites, focusing on the concept of "uncommon mapping" in order to identify contrasting areas in an image of a planetary surface. Recently, the system was made more ergonomic and easy to use by porting it to a phone-cam platform connected to a remote server. (A toy sketch of the uncommon-mapping idea follows below.)
  • a second computer-vision exploration algorithm using a neural network in order to remember aspects of previous images and to perform novelty detection
  •  
    well a bit misleading title...
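For illustration, a toy sketch of the "uncommon mapping" idea from the excerpt above: score each image tile by how far its colour histogram lies from the average tile, so rare-looking regions stand out against common terrain. The tiling, histogram size, and distance measure here are assumptions, not the EuroGeo team's actual algorithm.

```python
# Flag the most "uncommon" tile of an image via colour-histogram rarity.
import numpy as np

def uncommon_map(image, tile=32, bins=8):
    h, w, _ = image.shape
    hists, coords = [], []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = image[y:y + tile, x:x + tile]
            # Joint histogram over quantized R, G, B values of the tile.
            q = (patch.reshape(-1, 3) * (bins / 256)).astype(int)
            hist = np.zeros((bins, bins, bins))
            for r, g, b in q:
                hist[r, g, b] += 1
            hists.append(hist.ravel() / hist.sum())
            coords.append((y, x))
    hists = np.array(hists)
    scores = np.linalg.norm(hists - hists.mean(axis=0), axis=1)  # distance from "common"
    return coords[int(scores.argmax())], scores

image = (np.random.rand(128, 128, 3) * 255).astype(np.uint8)
(y, x), scores = uncommon_map(image)
print(f"most uncommon tile at ({y}, {x})")
```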
LeopoldS

physicists explain what AI researchers are actually doing - 5 views

  •  
    love this one ... it seems to take physicists to explain to the AI crowd what they are actually doing: "Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping between the variational renormalization group, first introduced by Kadanoff, and deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data."
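As a toy illustration of the coarse-graining the abstract describes, here is plain Kadanoff block-spin RG on a random 2D Ising configuration: each 2x2 block of +/-1 spins is replaced by its majority sign, keeping the large-scale feature and discarding short-wavelength detail, loosely analogous to a pooling layer. This is a sketch of the general idea only, not the paper's variational-RG-to-RBM construction.

```python
# Block-spin (majority-rule) coarse-graining of a 2D Ising configuration.
import numpy as np

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(8, 8))   # random 8x8 Ising configuration

def block_spin(s):
    h, w = s.shape
    blocks = s.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))  # sum each 2x2 block
    # Majority rule; break zero-sum ties randomly so output stays in {-1, +1}.
    ties = rng.choice([-1, 1], size=blocks.shape)
    return np.where(blocks != 0, np.sign(blocks), ties).astype(int)

coarse = block_spin(spins)     # 8x8 -> 4x4
coarser = block_spin(coarse)   # 4x4 -> 2x2
print(spins, coarse, coarser, sep="\n\n")
```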
jaihobah

Europe Unveils Its Vision for a Quantum Future - 0 views

  •  
    "...the European Commission announced in 2016 that it was investing one billion euros in a research effort known as the Quantum Technology Flagship. The goal for this project is to develop four technologies: quantum communication, quantum simulation, quantum computing, and quantum sensing. After almost two years, how is it going?" arxiv link to the actual report: http://arxiv.org/abs/1712.03773
johannessimon81

Computational Imaging: The Next Mobile Battlefield - 2 views

  •  
    Wired article giving an opinion on future trends in mobile computing (e.g. SLAM, 3D vision, ...)
johannessimon81

New Metamaterial Camera Has Super-Fast Microwave Vision - 1 views

  •  
    "The metamaterial aperature is only 40 centimeters long and it doesn't move. It's a circuit-board-like structure consisting of two copper plates separated by a piece of plastic. One of the plates is etched with repeating boxy structures, units about 2 millimeters long that permit different lengths of microwaves to pass through. Scanning the scene at various microwave frequencies allows the computer to capture all the information necessary to reproduce a scene."
  •  
    where is Luzi's comment when one needs it ???
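Computationally, the scheme quoted above boils down to a linear inverse problem: each frequency contributes one measurement that mixes the whole scene through a known (calibrated) sensing pattern, and the scene is recovered by solving y = Hx. The following is a generic least-squares toy; the sizes, random patterns, and noise level are assumptions, not the actual metamaterial camera's pipeline.

```python
# Recover a sparse 1D "scene" from frequency-diverse linear measurements.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_freqs = 64, 200                 # illustrative sizes

scene = np.zeros(n_pixels)
scene[[5, 30, 48]] = 1.0                    # three point reflectors

H = rng.standard_normal((n_freqs, n_pixels))          # one sensing pattern per frequency
y = H @ scene + 0.01 * rng.standard_normal(n_freqs)   # noisy measurements

x_hat, *_ = np.linalg.lstsq(H, y, rcond=None)         # least-squares reconstruction
print("recovered peaks:", sorted(np.argsort(x_hat)[-3:]))  # expect [5, 30, 48]
```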
Lionel Jacques

DARPA's Shredder Challenge is solved ahead of schedule - 3 views

  •  
    The San Francisco-based team, which beat out approximately 9,000 competitors, used "custom-coded, computer-vision algorithms to suggest fragment pairings to human assemblers for verification."
  •  
    amusing team name
Juxi Leitner

The BCI X PRIZE: This Time It's Inner Space | h+ Magazine - 3 views

  • The Brain-Computer Interface X PRIZE will reward a team that provides vision to the blind, new bodies to disabled people...
  •  
    nice! are they studying our website?
Nicholas Lan

rapid 3D model acquisition with a webcam from Cambridge uni - 0 views

  •  
    impressive, particularly if it works like it does in the video the whole time. paper here http://mi.eng.cam.ac.uk/~qp202/
  •  
    Well, impressive indeed... have to try it out...
Juxi Leitner

Convolutional Neural Networks for Visual Recognition - 3 views

  •  
    pretty impressive stuff!
  • ...3 more comments...
  •  
    Amazing how some guys from some other university also did pretty much the same thing (although they didn't use the bidirectional stuff) and published it just last month. Just goes to show you can dump pretty much anything into an RNN and train it for long enough and it'll produce magic. http://arxiv.org/pdf/1410.1090v1.pdf
  •  
    Seems like quite the trend. And the fact that Google still tries to use LSTMs is even more surprising.
  •  
    LSTMs: that was also the first thing in the paper that caught my attention! :) I hadn't seen them in the wild in years... My oversight most likely. The paper seems to be getting ~100 citations a year. Someone's using them.
  •  
    There are a few papers on them. Though you have to be lucky to get them to work. The backprop is horrendous.
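For reference, here is a minimal hand-written forward pass of one LSTM cell; the gating below is exactly what makes deriving backprop-through-time by hand, as lamented above, so messy. This is the standard textbook formulation written from scratch, not any particular paper's variant.

```python
# One step of a standard LSTM cell, NumPy only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    """W: (4*hidden, input+hidden), b: (4*hidden,)."""
    z = W @ np.concatenate([x, h]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c_new = f * c + i * g          # gated cell-state update
    h_new = o * np.tanh(c_new)     # gated output
    return h_new, c_new

rng = np.random.default_rng(2)
n_in, n_hid = 3, 5
W = 0.1 * rng.standard_normal((4 * n_hid, n_in + n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for t in range(4):                 # unroll over a short input sequence
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, b)
print(h)
```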
Juxi Leitner

Game-playing software holds lessons for neuroscience : Nature News & Comment - 4 views

  •  
    DeepMind actually got a comp-sci paper into Nature...
jaihobah

The New Science of Seeing Around Corners - 3 views

  •  
    Computer vision researchers have uncovered a world of visual signals hiding in our midst, including subtle motions that betray what's being said and faint images of what's around a corner.
Marion Nachon

NASA Next Mars Rover Mission: new landing technology - 3 views

JPL is also developing a crucial new landing technology called terrain-relative navigation. As the descent stage approaches the Martian surface, it will use computer vision to compare the landscape...

technology space

started by Marion Nachon on 15 Jan 18 no follow-up yet
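The core map-matching step behind terrain-relative navigation can be sketched as normalized cross-correlation: locate the descent-camera view inside a stored orbital map. The toy below uses random synthetic terrain and a brute-force search; it is a sketch of the general technique, not JPL's actual Lander Vision System.

```python
# Find a noisy camera view inside a reference map via normalized cross-correlation.
import numpy as np

rng = np.random.default_rng(3)
orbital_map = rng.standard_normal((128, 128))     # stored reference terrain
true_y, true_x = 73, 88
view = orbital_map[true_y:true_y + 32, true_x:true_x + 32].copy()
view += 0.1 * rng.standard_normal(view.shape)     # simulated camera noise

def match(map_img, template):
    t = (template - template.mean()) / template.std()
    best, best_yx = -np.inf, None
    H, W = template.shape
    for y in range(map_img.shape[0] - H + 1):
        for x in range(map_img.shape[1] - W + 1):
            patch = map_img[y:y + H, x:x + W]
            score = np.sum((patch - patch.mean()) / patch.std() * t)
            if score > best:
                best, best_yx = score, (y, x)
    return best_yx

print(match(orbital_map, view))   # expect (73, 88)
```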
jcunha

Compact single-shot metalens depth sensors inspired by eyes of jumping spiders - 4 views

  •  
    Making a jumping spider eye with nanophotonics and computer vision: depth from defocus with an integrated metalens sensor! Some in the ACT might be familiar with the concept...
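A rough sketch of the depth-from-defocus principle behind the sensor: two images focused at different planes are compared, and the ratio of local sharpness between them serves as a depth cue. Everything below (the Gaussian blur levels standing in for the two metalens images, the gradient-energy sharpness proxy) is an assumed toy model, not the actual published pipeline.

```python
# Toy depth-from-defocus: compare local sharpness of two differently focused images.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(4)
scene = rng.standard_normal((64, 64))            # textured scene

img_near = gaussian_filter(scene, sigma=1.0)     # image focused at plane A
img_far = gaussian_filter(scene, sigma=2.5)      # image focused at plane B

def sharpness(img):
    # Smoothed gradient energy as a local sharpness proxy.
    gy, gx = np.gradient(img)
    return gaussian_filter(gx**2 + gy**2, sigma=3)

# The sharpness ratio shifts monotonically as an object moves between the
# two focal planes; here the whole scene sits at one depth, so it is uniform.
depth_cue = sharpness(img_near) / (sharpness(img_far) + 1e-9)
print(depth_cue.mean(), depth_cue.std())
```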