Computer Vision System Helps Diagnose Autism in Infants | MIT Technology Review - 2 views
Will robots be smarter than humans by 2029? - 2 views
-
Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
- ...9 more comments...
-
All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. in the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. Until you show its superiority experimentally, it's as good as any other alternative assumption you could make. All I wanted to point out is that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson (2002) as a crash course.
-
These deal with different aspects of human intelligence. One is the depth of intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are). Another is the breadth of intelligence (how well you can actually generalize, how encompassing those concepts are, and at what level of detail you perceive all the information you have). And another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment, and is ultimately linked to survival). As far as I can see, these form the pillars of human intelligence, and of the intelligence of biological beings in general. They are in tension with each other, mainly due to physical constraints (such as energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; a failure of the human mind (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data, in an environmental coupling. IMO, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you obtain different manifestations of intelligence, from cockroach to cat to Google to a random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Convolutional networks start to rule the world! - 2 views
-
Recently, many competitions in the computer vision domain have been won by huge convolutional networks. In the ImageNet competition, the convolutional network approach halves the error from ~30% to ~15%! Key changes that make this happen: weight sharing to reduce the search space, and training at massive scale on GPUs. (See also the work at IDSIA: http://www.idsia.ch/~juergen/vision.html) This should please Francisco :)
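The weight-sharing point can be made concrete with a toy sketch (my own illustration, not the competition code): one small kernel is reused at every spatial position, so the layer's parameter count is independent of the image size.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image ('valid' convolution).

    The same few kernel weights are reused at every position; this
    weight sharing is what shrinks the search space.
    """
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(32, 32)
kernel = np.random.rand(3, 3)          # just 9 shared weights
feature_map = conv2d_valid(image, kernel)
print(feature_map.shape)               # (30, 30)
# A fully connected layer mapping 32x32 -> 30x30 would need
# 32*32*30*30 weights; the convolutional layer reuses the same 9.
```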
- ...1 more comment...
-
...mmmmm... they use 60 million parameters and 650,000 neurons on a task that one could consider easier than (say) predicting a financial crisis ... and they still get a 15% error rate ... reminds me of a comic we saw once ... http://www.sarjis.info/stripit/abstruse-goose/496/the_singularity_is_way_over_there.png
-
I think the ultimate solution is still to put a human brain in a jar and use it for pattern recognition. Maybe we should get a stagiaire for this..?
Real-Life Cyborg Astrobiologists to Search for Signs of Life on Future Mars Missions - 0 views
-
The EuroGeo team developed a wearable-computer platform for testing computer-vision exploration algorithms in real time at geological or astrobiological field sites, focusing on the concept of "uncommon mapping" in order to identify contrasting areas in an image of a planetary surface. Recently, the system was made more ergonomic and easier to use by porting it to a phone-cam platform connected to a remote server.
-
A second computer-vision exploration algorithm uses a neural network to remember aspects of previous images and to perform novelty detection.
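A minimal sketch of the novelty-detection idea, assuming nothing about the EuroGeo system's actual algorithm: a tiny linear autoencoder is trained online on feature vectors of "previous images", and a new image whose reconstruction error is high relative to what the network has memorized is flagged as novel.

```python
import numpy as np

class NoveltyDetector:
    """Hypothetical illustration: online linear autoencoder with an
    Oja-style subspace update; high reconstruction error = novel."""

    def __init__(self, dim, hidden=4, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((hidden, dim))
        self.lr = lr

    def error(self, x):
        h = self.W @ x                                   # encode
        return float(np.mean((x - self.W.T @ h) ** 2))   # decode, compare

    def observe(self, x):
        # pull the reconstruction toward x (Oja's subspace rule)
        h = self.W @ x
        self.W += self.lr * np.outer(h, x - self.W.T @ h)

rng = np.random.default_rng(1)
familiar = rng.standard_normal(16)       # stand-in for a known terrain type
det = NoveltyDetector(dim=16)
for _ in range(500):
    det.observe(familiar + 0.05 * rng.standard_normal(16))

novel = rng.standard_normal(16)          # an unseen scene
print(det.error(familiar) < det.error(novel))  # familiar reconstructs better
```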
physicists explain what AI researchers are actually doing - 5 views
-
love this one ... it seems to take physicists to explain to the AI crowd what they are actually doing ... From the abstract: "Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping from the variational renormalization group, first introduced by Kadanoff, to deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data."
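For intuition on the RG side of the analogy, here is a tiny worked sketch (my own, not the paper's RBM construction): decimating every other spin of the 1D nearest-neighbor Ising chain gives the exact coupling recursion K' = (1/2) ln cosh(2K). Iterating it coarse-grains away irrelevant detail, and K flows to the trivial K = 0 fixed point, reflecting the absence of an ordered phase in 1D.

```python
import numpy as np

def decimate(K):
    """One exact RG (decimation) step for the 1D Ising coupling K.

    Summing over every other spin in exp(K * s_i * s_{i+1}) yields an
    Ising chain on the remaining spins with K' = 0.5 * ln(cosh(2K)).
    """
    return 0.5 * np.log(np.cosh(2.0 * K))

K = 1.0
flow = [K]
for _ in range(6):
    K = decimate(K)
    flow.append(K)

print([round(k, 4) for k in flow])   # K shrinks toward 0 at every step
```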
Europe Unveils Its Vision for a Quantum Future - 0 views
-
"...the European Commission announced in 2016 that it was investing one billion euros in a research effort known as the Quantum Technology Flagship. The goal for this project is to develop four technologies: quantum communication, quantum simulation, quantum computing, and quantum sensing. After almost two years, how is it going?" arxiv link to the actual report: http://arxiv.org/abs/1712.03773
New Metamaterial Camera Has Super-Fast Microwave Vision - 1 views
-
"The metamaterial aperture is only 40 centimeters long and it doesn't move. It's a circuit-board-like structure consisting of two copper plates separated by a piece of plastic. One of the plates is etched with repeating boxy structures, units about 2 millimeters long that permit different lengths of microwaves to pass through. Scanning the scene at various microwave frequencies allows the computer to capture all the information necessary to reproduce a scene."
-
where is Luzi's comment when one needs it ???
The BCI X PRIZE: This Time It's Inner Space | h+ Magazine - 3 views
-
The Brain-Computer Interface X PRIZE will reward a team that provides vision to the blind, new bodies to disabled people...
rapid 3D model acquisition with a webcam from Cambridge uni - 0 views
-
Impressive, particularly if it works the whole time like it does in the video. Paper here: http://mi.eng.cam.ac.uk/~qp202/
-
Well, impressive indeed... have to try it out...
Convolutional Neural Networks for Visual Recognition - 3 views
-
pretty impressive stuff!
- ...3 more comments...
-
LSTMs: that was also the first thing in the paper that caught my attention! :) I hadn't seen them in the wild in years... My oversight most likely. The paper seems to be getting ~100 citations a year. Someone's using them.
-
There are a few papers on them. Though you have to be lucky to get them to work. The backprop is horrendous.
Game-playing software holds lessons for neuroscience : Nature News & Comment - 4 views
The New Science of Seeing Around Corners - 3 views
NASA Next Mars Rover Mission: new landing technology - 3 views
JPL is also developing a crucial new landing technology called terrain-relative navigation. As the descent stage approaches the Martian surface, it will use computer vision to compare the landscape...
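A toy sketch of the terrain-relative idea (my own illustration, not JPL's flight code): a patch seen by the descent camera is matched against an onboard orbital map by normalized cross-correlation, and the best-matching offset gives the lander's position over the map.

```python
import numpy as np

def best_match(mapimg, patch):
    """Exhaustive normalized cross-correlation: return the (row, col)
    offset in mapimg where patch matches best."""
    ph, pw = patch.shape
    p = (patch - patch.mean()) / patch.std()
    best, score = (0, 0), -np.inf
    for i in range(mapimg.shape[0] - ph + 1):
        for j in range(mapimg.shape[1] - pw + 1):
            w = mapimg[i:i + ph, j:j + pw]
            wn = (w - w.mean()) / (w.std() + 1e-12)
            s = float(np.mean(p * wn))        # NCC score in [-1, 1]
            if s > score:
                best, score = (i, j), s
    return best

rng = np.random.default_rng(0)
orbital_map = rng.standard_normal((40, 40))        # stand-in terrain texture
patch = orbital_map[12:12 + 8, 25:25 + 8].copy()   # what the camera "sees"
print(best_match(orbital_map, patch))              # → (12, 25)
```

A real system matches under illumination changes, scale, and perspective distortion, which is exactly why computer vision, rather than plain correlation, is needed.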
Helix Nebula provides an unprecedented opportunity for the global cloud-services industry to work closely with the Large Hadron Collider community through the large-scale, international ATLAS experiment, as well as with the molecular biology and earth-observation communities. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two-year pilot phase, during which procurement processes and governance issues for the public/private partnership will be addressed.
This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.
Initially based on the needs of European big-science, Helix Nebula ultimately paves the way for a Cloud Computing platform that offers a unique resource to governments, businesses and citizens.