thanks - I had already wondered several times what happened to the technique he used in that talk we watched when it was first uploaded ... good that they have made it open source! are they easy to use?
the easiest way to use them is:
Google Docs > open/create a spreadsheet > Insert > Gadget > Charts > Motion Chart !! :)
Here is a tutorial describing all the steps to get it running.
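If you'd rather script it than click through the spreadsheet gadget, the same Gapminder-style animated bubble chart idea can be reproduced in Python with plotly - just a sketch using plotly's bundled sample data, not the Docs gadget itself:

```python
# Sketch of a "motion chart" style animated bubble chart with plotly
# (assumption: plotly is installed; px.data.gapminder() is its sample dataset).
import plotly.express as px

df = px.data.gapminder()
fig = px.scatter(
    df, x="gdpPercap", y="lifeExp",
    size="pop", color="continent", hover_name="country",
    animation_frame="year", animation_group="country",  # one frame per year
    log_x=True, size_max=55, range_y=[20, 90],
)
fig.show()
```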
A common data hub that allows the representation and comparison of data from numerous space missions. "The IMPEx portal offers tools for the visualization and analysis of datasets from different space missions. Furthermore, several computational model databases are feeding into the environment." As they say, with its massive 3D-visualization capabilities it offers the possibility of displaying spacecraft trajectories and planetary ephemerides, as well as scientific representations of observational and simulation datasets.
Really nice visualizations for rectifier (ReLU) neural nets, illustrating the effects of skip-connections, depth, width, etc. on the loss function curvature.
I wonder if this could be used for irregular adaptive grids?
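For anyone curious how plots like that are usually produced: roughly, you take the trained weights, pick two random directions in weight space, rescale them to match the weight magnitudes, and evaluate the loss on a 2D grid around the trained point. A minimal NumPy sketch below, with a toy ReLU net standing in for a trained deep network and a crude per-column rescaling standing in for the paper's filter normalization:

```python
# Minimal sketch: 2D slice of a ReLU net's loss surface along two random directions.
# Assumptions: toy data and random "trained" weights, not a real trained model.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a one-hidden-layer ReLU net.
X = rng.normal(size=(200, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
W1 = rng.normal(size=(5, 16)) * 0.3   # stand-in for trained weights
W2 = rng.normal(size=(16, 1)) * 0.3

def loss(W1, W2):
    h = np.maximum(X @ W1, 0.0)        # ReLU hidden layer
    pred = (h @ W2).ravel()
    return np.mean((pred - y) ** 2)

def random_direction(W):
    # Random direction rescaled column-wise to the weight norms
    # (a rough stand-in for filter normalization).
    d = rng.normal(size=W.shape)
    return d * (np.linalg.norm(W, axis=0) / (np.linalg.norm(d, axis=0) + 1e-12))

D1a, D1b = random_direction(W1), random_direction(W2)
D2a, D2b = random_direction(W1), random_direction(W2)

# Evaluate the loss on the slice theta + alpha*D1 + beta*D2.
alphas = np.linspace(-1, 1, 41)
surface = np.array([[loss(W1 + a * D1a + b * D2a, W2 + a * D1b + b * D2b)
                     for a in alphas] for b in alphas])
print(surface.shape)  # 41 x 41 grid, ready for a contour/surface plot
```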
From the description:
"One application of the diagram is the idea of Voronoi entropy - a mathematical tool for quantitative characterisation of the orderliness of points distributed on a surface - i.e. how visually 'ordered' the tessellation is. I found this idea particularly fascinating especially when thinking about the aperiodicity and the infinite structure of the Penrose tiling. In these visuals, the Voronoi diagram is created using the vertices of the Penrose as its seed points. This creates a new type of Penrose Tiling, clearly different from the classical Penrose, however still exhibiting the fivefold structure of the original, while 'defects' begin to appear at the peripheries."
Cool project from a few years ago! I checked for recent updates, and they seem to have adapted the concept to structural health monitoring and visual vibrometry to estimate material properties and wear.
Prezi is a cloud-based presentation software that opens up a new world between whiteboards and slides. The zoomable canvas makes it fun to explore ideas and the connections between them. The result: visually captivating presentations that lead your audience down a path of discovery.
AutoDraw is a new kind of drawing tool. It pairs machine learning with drawings from talented artists to help everyone create anything visual, fast. There's nothing to download. Nothing to pay for. And it works anywhere: smartphone, tablet, laptop, desktop, etc.
AutoDraw's suggestion tool uses the same technology used in QuickDraw to guess what you're trying to draw. Right now, it can guess hundreds of drawings, and we look forward to adding more over time. If you are interested in creating drawings for others to use with AutoDraw, contact us here.
We hope AutoDraw will help make drawing and creating a little more accessible and fun for everyone.
Great introduction to the Bayesian view of the workings of the brain, a view that has been successful in explaining many psychological phenomena, visual illusions, etc.
One of the possible criticisms of this view is that it neatly separates perception and action.
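To get a feel for the core idea, here is a toy sketch (numbers invented) of perception as Bayesian inference: a prior expectation fused with a noisy measurement, each weighted by its reliability, which is the standard account behind many cue-combination experiments:

```python
# Toy Gaussian cue combination: posterior = prior x likelihood.
# Assumption: made-up numbers, just to show the reliability-weighted average.
prior_mean, prior_var = 0.0, 4.0      # expectation about the stimulus
obs_mean, obs_var = 2.0, 1.0          # noisy sensory measurement

posterior_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
posterior_mean = posterior_var * (prior_mean / prior_var + obs_mean / obs_var)

print(posterior_mean, posterior_var)  # 1.6, 0.8 -> percept pulled toward the prior
```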
Mantis shrimp seem to have 12 types of photo-receptive sensors - but this does not really improve their ability to discriminate between colors. The speculation is that they serve as a form of pre-processing for visual information: the brain does not need to decode full color information from just a few channels, which would allow for a smaller brain.
I guess technologically the two extremes of light detection would be RGB cameras which are like our eyes and offer good spatial resolution, and spectrometers which have a large amount of color channels but at the cost of spatial resolution. It seems the mantis shrimp uses something that is somewhere between RGB cameras and spectrometers. Could there be a use for this in space?
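Just to make the trade-off concrete, a toy sketch (all numbers invented): sampling one incoming spectrum with a few broad channels versus many narrower ones. With 12 channels each reading is already close to a spectral bin, so less "decoding" is needed downstream:

```python
# Toy comparison: few broad vs. many narrow spectral response channels.
# Assumption: Gaussian response curves and an invented input spectrum.
import numpy as np

wavelengths = np.linspace(400, 700, 301)              # nm, visible range
spectrum = np.exp(-((wavelengths - 550) / 40) ** 2)   # some incoming light

def channel_readings(n_channels, bandwidth):
    centers = np.linspace(420, 680, n_channels)
    responses = np.exp(-((wavelengths[None, :] - centers[:, None]) / bandwidth) ** 2)
    return responses @ spectrum                        # one number per channel

print(channel_readings(3, 80).round(1))    # RGB-like: broad, heavily overlapping
print(channel_readings(12, 15).round(1))   # shrimp-like: near-direct spectral bins
```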
> RGB cameras which are like our eyes
...apart from the fact that the spectral response of the eyes is completely different from "RGB" cameras (http://en.wikipedia.org/wiki/File:Cones_SMJ2_E.svg)
... and that the eyes have 4 types of light-sensitive cells, not three (http://en.wikipedia.org/wiki/File:Cone-response.svg)
... and that, unlike cameras, the human eye is precise only in a very narrow centre region (http://en.wikipedia.org/wiki/Fovea)
...hmm, apart from relying on tri-stimulus colour perception it seems human eyes are in fact completely different from "RGB cameras" :-)
OK sorry for picking on this - that's just the colour science geek in me :-)
Now seriously, on one hand the article abstract sounds very interesting, but on the other the statement "Why use 12 color channels when three or four are sufficient for fine color discrimination?" reveals so much ignorance of the very basics of colour science that I'm completely puzzled - in the end, it's a Science article, so it should be reasonably scientifically sound, right?
Pity I can't access the full text... the interesting thing is that more channels mean more information and therefore should require *more* power to process - which is exactly the opposite of their theory (as far as I can tell from the abstract...). So the key is to understand *what* information about light these mantises are collecting and why - definitely it's not "colour" in the sense of human perceptual experience.
But in any case - yes, spectrometry has its uses in space :-)
"An elegant combination of electronics and elastic materials has been used to construct a small visual sensor that closely resembles an insect's eye. The device paves the way for autonomous navigation of tiny aerial vehicles."