"…the habit of mind which leads to a search for relationships between facts becomes of the highest importance in the production of ideas."
Should that be bedtime reading for all ACT members? :)
"Magnetic resonance imaging (MRI) of the eyes and brains of 27 astronauts who have spent prolonged periods of time in space revealed optical abnormalities similar to those that can occur in intracranial hypertension of unknown cause, a potentially serious condition in which pressure builds within the skull."
Great introduction to the Bayesian view of the workings of the brain, a view that has been successful in explaining many psychological phenomena, visual illusions, etc.
One possible criticism of this view is that it neatly separates perception from action.
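For concreteness, here is a minimal sketch of the core Bayesian idea - the textbook Gaussian case, with made-up numbers - showing how a prior belief and a noisy measurement merge into a percept:

```python
# Minimal sketch of the textbook Gaussian case: a prior belief and a noisy
# measurement are merged by precision weighting. All numbers are made up.

def combine(mu_prior, var_prior, mu_obs, var_obs):
    """Posterior for a Gaussian prior combined with a Gaussian observation."""
    w = var_obs / (var_prior + var_obs)          # weight given to the prior
    mu_post = w * mu_prior + (1 - w) * mu_obs    # precision-weighted average
    var_post = var_prior * var_obs / (var_prior + var_obs)
    return mu_post, var_post

# Vague prior: object ~10 deg to the left; sharper measurement: 2 deg right.
mu, var = combine(mu_prior=-10.0, var_prior=25.0, mu_obs=2.0, var_obs=4.0)
print(f"posterior: mean = {mu:.2f} deg, variance = {var:.2f}")
# -> the percept (~0.34 deg) is pulled toward the more reliable cue; biases
#    of exactly this kind are the Bayesian explanation for many illusions.
```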
Mantis shrimp seem to have 12 types of photoreceptors - but this does not really improve their ability to discriminate between colors. The speculation is that they serve as a form of pre-processing of visual information: the brain does not need to decode full color information from just a few channels, which would allow for a smaller brain.
I guess technologically the two extremes of light detection would be RGB cameras, which are like our eyes and offer good spatial resolution, and spectrometers, which have a large number of color channels but at the cost of spatial resolution. It seems the mantis shrimp uses something in between RGB cameras and spectrometers. Could there be a use for this in space?
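To make the speculation concrete, here is a toy model (entirely my own construction, not from the paper - the Gaussian sensitivities and all numbers are made up) of why many narrow channels might need less decoding than a few broad ones:

```python
import numpy as np

# Toy model of the trade-off guessed at above: many narrow channels let you
# read off wavelength with a cheap look-up, while few broad channels need
# genuine decoding work.

wl = np.arange(400, 701)                     # wavelengths in nm

def bank(centres, width):
    """Idealised Gaussian spectral sensitivities, one row per sensor."""
    return np.exp(-0.5 * ((wl[None, :] - centres[:, None]) / width) ** 2)

broad_centres = np.array([450.0, 550.0, 600.0])
narrow_centres = np.linspace(410, 690, 12)
broad = bank(broad_centres, width=50)        # 3 RGB-camera-like channels
narrow = bank(narrow_centres, width=12)      # 12 mantis-like channels

stim = 530                                   # monochromatic light, nm
r3, r12 = broad[:, stim - 400], narrow[:, stim - 400]

# 12 narrow channels: a single argmax gives a coarse wavelength estimate.
print(f"narrow-band estimate: {narrow_centres[r12.argmax()]:.0f} nm")

# 3 broad channels: the response triple has to be matched against the
# responses to *all* wavelengths - i.e. actual decoding/inversion.
err = ((broad - r3[:, None]) ** 2).sum(axis=0)
print(f"broad-band estimate:  {wl[err.argmin()]} nm")
```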
> RGB cameras which are like our eyes
...apart from the fact that the spectral response of the eyes is completely different from "RGB" cameras (http://en.wikipedia.org/wiki/File:Cones_SMJ2_E.svg)
... and that the eyes have 4 types of light-sensitive cells, not three (http://en.wikipedia.org/wiki/File:Cone-response.svg)
... and that, unlike cameras, the human eye is sharp only in a very narrow central region (http://en.wikipedia.org/wiki/Fovea)
...hmm, apart from relying on tri-stimulus colour perception it seems human eyes are in fact completely different from "RGB cameras" :-)
OK sorry for picking on this - that's just the colour science geek in me :-)
Now seriously: on the one hand, the article abstract sounds very interesting, but on the other, the statement "Why use 12 color channels when three or four are sufficient for fine color discrimination?" reveals so much ignorance of the very basics of colour science that I'm completely puzzled. In the end, it's a Science article, so it should be reasonably scientifically sound, right?
Pity I can't access the full text... The interesting thing is that more channels mean more information, and should therefore require *more* power to process - which is exactly the opposite of their theory (as far as I can tell from the abstract...). So the key is to understand *what* information about light these mantises are collecting and why - it's definitely not "colour" in the sense of human perceptual experience.
But in any case - yes, spectrometry has its uses in space :-)
Echoing this, in 2009 Google researchers Alon Halevy, Peter Norvig, and Fernando Pereira penned an article under the title The Unreasonable Effectiveness of Data. In it, they describe the surprising insight that given enough data, often the choice of mathematical model stops being as important - that particularly for their task of automated language translation, "simple models and a lot of data trump more elaborate models based on less data."
If we make the leap and assume that this insight can be at least partially extended to fields beyond natural language processing, what we can expect is a situation in which domain knowledge is increasingly trumped by "mere" data-mining skills. I would argue that this prediction has already begun to pan out: in a wide array of academic fields, the ability to effectively process data is superseding other, more classical modes of research.
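As a quick, purely synthetic illustration of that claim (my own toy experiment, not from the article):

```python
import numpy as np

# A dumb nearest-neighbour predictor vs a fixed cubic fit, on noisy samples
# of an arbitrary nonlinear function. Watch what happens as the data grows.

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.3 * x        # the unknown "truth"
x_test = np.linspace(0, 3, 200)

def nn_predict(x_train, y_train, x):
    """1-nearest-neighbour: about the simplest model imaginable."""
    idx = np.abs(x_train[None, :] - x[:, None]).argmin(axis=1)
    return y_train[idx]

for n in (20, 200, 2000):
    x_tr = rng.uniform(0, 3, n)
    y_tr = f(x_tr) + rng.normal(0, 0.1, n)
    cubic = np.polynomial.Polynomial.fit(x_tr, y_tr, deg=3)  # "elaborate"
    mse = lambda pred: float(np.mean((pred - f(x_test)) ** 2))
    print(f"n={n:4d}  cubic MSE={mse(cubic(x_test)):.3f}  "
          f"1-NN MSE={mse(nn_predict(x_tr, y_tr, x_test)):.3f}")
# The mis-specified cubic plateaus at its approximation bias no matter how
# much data it sees; the crude nearest-neighbour keeps improving with n.
```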
Researchers at the University of Adelaide have discovered a novel and complex visual circuit in a dragonfly's brain that could one day help to improve vision systems for robots.
I have no idea if an algorithm based on this already exists, but it would certainly be a good one for autonomous AI, I think.
I think an algorithm based on this should be able to select its own input parameters, rejecting them if they are not stimulated any further or integrating them into the algorithm if they are continuously stimulated... this could enable self-learning, etc.
By steering the neurons back to an intermediate activity level, the mechanism probably optimizes their efficiency within the network (after all, a neuron that fires all the time is just as useless as one that never fires).
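To make that idea tangible, here is a toy sketch (my own construction, not the Adelaide circuit) of a neuron that homeostatically steers its own firing rate back to an intermediate target:

```python
import numpy as np

# Toy homeostasis: a neuron slowly adapts its threshold so that its
# long-run firing rate returns to an intermediate target.

rng = np.random.default_rng(1)
target_rate = 0.5   # aim to fire on ~50% of time steps
eta = 0.01          # adaptation speed of the threshold
threshold = 0.0
rate_est = 0.0      # running estimate of the firing rate

for step in range(20_000):
    drive = rng.normal(2.0, 1.0)               # strongly biased input that
    fired = 1.0 if drive > threshold else 0.0  # would saturate a fixed neuron
    rate_est += 0.01 * (fired - rate_est)      # leaky average of activity
    threshold += eta * (fired - target_rate)   # too active -> raise threshold,
                                               # too quiet -> lower it
print(f"threshold = {threshold:.2f}, firing rate ≈ {rate_est:.2f}")
# The threshold drifts to ~2.0 (the input mean), pinning the rate near 0.5:
# a neuron stuck at always-on or always-off carries no information, so this
# keeps it in its useful operating range.
```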
Built in silicon (Samsung's 28 nm process), its capacity is measured as one million neurons and 256 million synapses. With 5.4 billion transistors it is the largest IBM chip in these terms. All this said, it consumes less than 100 mW!!
"These systems can efficiently process high-dimensional, noisy sensory data in real time, while consuming orders of magnitude less power than conventional computer architectures." IBM is working with initLabs to integrate the DVS retinal camera with these chips = real time image neuro-like image processing.
In what seems to be a very successful project heavily funded by DARPA, "Our sights are now set high on the ambitious goal of integrating 4,096 chips in a single rack with 4 billion neurons and 1 trillion synapses while consuming ~4kW of power."
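Just for fun, a back-of-envelope check of those numbers (my own arithmetic, using only the figures quoted above):

```python
# Scaling check using only the quoted figures:
# 1e6 neurons, 256e6 synapses, <100 mW per chip, 4,096 chips per rack.

chips = 4096
neurons = chips * 1_000_000      # 4.096e9 -> the "4 billion neurons"
synapses = chips * 256_000_000   # ~1.05e12 -> the "1 trillion synapses"
chip_power_w = 0.1               # upper bound per chip
print(f"{neurons / 1e9:.1f} B neurons, {synapses / 1e12:.2f} T synapses")
print(f"raw chip power <= {chips * chip_power_w:.0f} W "
      f"(the ~4 kW rack figure presumably includes I/O, memory and cooling)")
```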