Here are 12 of our favourite images of 2011, in no particular order. These range from the beautiful and historical to pictures that show how science affects the world we live in
Physicists have devised a way to take pictures using light that has not interacted with the object being photographed. This form of imaging uses pairs of photons, twins that are 'entangled' in such a way that the quantum state of one is inextricably linked to the other.
The picture was made with a Canon 5D Mark II and a 400 mm lens. It consists of 1,665 full-frame pictures of 21.4 megapixels each, recorded by a photo robot over 172 minutes.
"With a resolution of 297,500 x 87,500 pixels (26 gigapixels), the picture is the largest in the world (as of December 2009)."
Daring statement... I'm not quite sure, but I'd guess microscopic images used in medicine can easily reach terapixels...
What a waste of pixels anyway... couldn't they have found a slightly more interesting city?
Yes, it seems so. Most of it, however, seems directed toward understanding the effect rather than toward applications. But I'm still convinced that we could find many very interesting applications!!!
a few references from ADS:
1. Brida, G.; Chekhova, M. V.; Fornaro, G. A.; Genovese, M.; Lopaeva, E. D.; Berchera, I. Ruo (06/2011), "Systematic analysis of signal-to-noise ratio in bipartite ghost imaging with classical and quantum light", 2011PhRvA..83f3807B
2. Liu, Ying-Chuan; Kuang, Le-Man (05/2011), "Theoretical scheme of thermal-light many-ghost imaging by Nth-order intensity correlation", 2011PhRvA..83e3808L
3. Dixon, P. Ben; Howland, Gregory A.; Chan, Kam Wai Clifford; O'Sullivan-Hale, Colin; Rodenburg, Brandon; Hardy, Nicholas D.; Shapiro, Jeffrey H.; Simon, D. S.; Sergienko, A. V.; Boyd, R. W.; Howell, John C. (05/2011), "Quantum ghost imaging through turbulence", 2011PhRvA..83e1803D
4. Ohuchi, H.; Kondo, Y. (03/2011), "Complete erasing of ghost images caused by deeply trapped electrons on computed radiography plates", 2011SPIE.7961E.160O
5. Meyers, Ronald E.; Deacon, Keith S.; Shih, Yanhua (03/2011), "Turbulence-free ghost imaging", 2011ApPhL..98k1115M
6. Gan, Shu; Zhang, Su-Heng; Zhao, Ting; Xiong, Jun; Zhang, Xiangdong; Wang, Kaige (03/2011), "Cloaking of a phase object in ghost imaging", 2011ApPhL..98k1102G
7. Yang, Hao; Zhao, Baosheng; Qiu ... (02/2011), 2011RScI...82b3110Y
I love the comments:
"damn i almost thought this was about Mercury messenger... an OSX messenger app..."
"I'm just glad we have an atmosphere"
"The US is in the biggest economic crisis since the Great Depression... and we're spending all this money... for this? What a waste. Get rid of NASA - it will save us trillions! "
sic :-(
...Covering about a third of the sky, the new image contains 10 times as many objects as the Palomar Survey, or about half a billion. The higher resolution scan is a goldmine for astronomers and is expected to lead to discoveries "for decades to come"...
Wow, that's pretty amazing! OK, the pictures are not great (mainly due to skin surface, baggy eyes, zits, I guess), but considering it's only from DNA it is pretty close already. That will help crime scene investigations greatly, whether positively or negatively.
Well actually, they did something like that: they searched for common DNA patterns in people that had similar facial features. With a large enough dataset, that already provided 24 DNA tracers that could be used reliably for prediction. Imagine if you had even more data available, who needs a model then... just let the NN do it :)
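As a minimal, purely illustrative sketch of that last idea, assuming a (hypothetical) table of genotypes for the 24 tracers and facial landmark coordinates as the regression target; the data below are random placeholders, so the fit is meaningless until real data are plugged in:

    # Hypothetical sketch: regress facial landmark coordinates from SNP genotypes.
    # Shapes and data are invented; with random inputs the score is ~0 by design.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_people, n_snps, n_landmarks = 1000, 24, 30            # 24 tracers, as above

    X = rng.integers(0, 3, size=(n_people, n_snps)).astype(float)  # allele counts 0/1/2
    y = rng.normal(size=(n_people, 2 * n_landmarks))               # (x, y) per landmark

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))     # near zero for random data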
Demand for drones is exploding!
Dupin wants to aggregate aerial imagery from around the globe at Dronestagr.am.
In the near future we could experience something close to Google Maps, made with aerial pictures.
Remember that drones have both a top-view camera and a front-view camera, which gives more possibilities in terms of what you can do with the collected data. With such a huge database and a little bit of 3D geometry we could get, e.g., a 3D map of the world... I guess Google can already derive something like that from their Street View images; however, Street View obviously covers a relatively small part of the globe and cannot access places that a UAV can.
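As a toy illustration of the "3D geometry" part, here is a sketch of triangulating a single ground point seen in two overlapping drone images; the camera intrinsics and poses are invented, and a real photogrammetry pipeline would do this for millions of matched feature points:

    # Toy triangulation: recover one 3D point from its pixel coordinates in two views.
    # Intrinsics and poses are invented purely for illustration.
    import numpy as np

    K = np.array([[1000., 0., 640.],
                  [0., 1000., 360.],
                  [0.,    0.,   1.]])                 # shared camera intrinsics

    # Camera 1 at the origin, camera 2 translated 10 m along x (overlapping views).
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-10.], [0.], [0.]])])

    X_true = np.array([2., 1., 50., 1.])              # ground point, homogeneous coords

    def project(P, X):
        x = P @ X
        return x / x[2]                               # normalise to pixel coordinates

    x1, x2 = project(P1, X_true), project(P2, X_true)

    # Linear triangulation (DLT): stack the reprojection constraints and take the
    # null vector of A via SVD.
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    X_est = np.linalg.svd(A)[2][-1]
    print(X_est[:3] / X_est[3])                       # ~ [2, 1, 50], the original point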
True, but funding is allocated to climate modelling 'science' on the basis of political decisions, not solid and boring scientific truisms such as 'all models are wrong'.
The reason so many people got trained in this area in recent years is that resources were allocated to climate science on the basis of the dramatic picture painted by some scientists, when it was indeed convenient for them to be dramatic.
I see your point, and I agree that funding was also promoted through the energy players and their political influence. That is a coincident parallel interest, irrelevant to the fact that the question remains vital: how do we affect the climate, and how does it respond? It is a huge, complex system to analyse, responding on various time scales that can obscure the trend.
What if we drew a conceptual parallel with the L'Aquila case: is the scientific method guilty, or the interpretation of uncertainty in terms of societal mobilization? Should we leave the humanitarian aspect outside any scientific activity?
I do not think there is anyone arguing that the question is not interesting and complex.
The debate, instead, addresses the predictive value of the models produced so far. Are they good enough to be used outside of the scientific process aimed at improving them? Or should one wait for "the scientific method" to bring forth substantial improvements to the current understanding and only then start using its results? One can take both standpoints, but some recent developments will push many towards the second approach.
For all your statistics needs: the "Plastic Debris in the World's Oceans" report (UNEP): www.unep.org/regionalseas/marinelitter/.../docs/plastic_ocean_report.pdf
"Densities of plastic debris (Moore et al. 2001). Using nets to collect debris, the abundance of floating plastic averaged 334,271 pieces/km2"
More worrying maybe is (http://www.ncbi.nlm.nih.gov/pubmed/22610295)
"Our oceans eventually serve as a sink for these small plastic particles ("UV degraded surface plastic") and in one estimate, it is thought that 200,000 microplastics per km(2) of the ocean's surface commonly exist."
Here we go: we might not need liquid water on Mars after all to get some nice flowering plants there! ... and terraform? :-)
Thirsty plants can extract water from the crystalline structure of gypsum, a rock-forming mineral found in soil on Earth and Mars.
Some plants grow on gypsum outcrops and remain active even during dry summer months, despite having shallow roots that cannot reach the water table. Sara Palacio of the Pyrenean Institute of Ecology in Jaca, Spain, and her colleagues compared the isotopic composition of sap from one such plant, called Helianthemum squamatum (pictured), with gypsum crystallization water and water found free in the soil. The team found that up to 90% of the plant's summer water supply came from gypsum.
The study has implications for the search for life in extreme environments on this planet and others.
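The "up to 90%" figure presumably comes from a two-endmember isotope mixing calculation; a minimal sketch with invented delta values (not the paper's measurements) would look like this:

    # Two-endmember mixing: what fraction of sap water comes from gypsum
    # crystallization water vs free soil water? The delta values below are
    # invented purely for illustration; the study used measured isotope ratios.
    def gypsum_fraction(delta_sap, delta_gypsum, delta_soil):
        """Fraction of plant water drawn from gypsum crystallization water."""
        return (delta_sap - delta_soil) / (delta_gypsum - delta_soil)

    # Hypothetical deuterium (delta 2H, permil) values:
    delta_soil = -90.0      # free soil water
    delta_gypsum = -60.0    # gypsum crystallization water
    delta_sap = -63.0       # plant xylem water in summer

    print(f"{gypsum_fraction(delta_sap, delta_gypsum, delta_soil):.0%} from gypsum")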
Very interesting indeed. Attention should be paid to the form of calcium sulfate that is found on Mars: if it is hydrated (gypsum, CaSO4.2H2O) it works, but if it is dehydrated there is no water for the roots to take up.
The Curiosity rover is trying to find out, but there is uncertainty in recognising the presence of hydrogen in the mineral:
Copying:
"(...)
3.2 Hydration state of calcium sulfates
Calcium sulfates occur as a non-hydrated phase (anhydrite, CaSO4) or as one of two hydrated phases (bassanite, CaSO4.1/2H2O, which can contain a somewhat variable water content, and gypsum, CaSO4.2H2O). ChemCam identifies the presence of hydrogen at 656 nm, as already found in soils and dust [Meslin et al., 2013] and within fluvial conglomerates [Williams et al., 2013]. However, the quantification of H is strongly affected by matrix effects [Schröder et al., 2013], i.e. effects including major or even minor element chemistry, optical and mechanical properties, that can result in variations of emission lines unrelated to actual quantitative variations of the element in question in the sample. Due to these effects, discriminating between bassanite and gypsum is difficult. (...)"
Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View."
Magical deep learning! Buzzword attack!
Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other weapons you don't want look like in satellite pictures, and automatically scan the world for them (basically replacing intelligence analysts).
In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613
In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one could train one of those image-recognition algorithms to do it automatically.
Or maybe it's a bit easier to count larger things, like elephants (also a thing).
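A sketch of how such an automatic count could be wired up, assuming a hypothetical pre-trained tile classifier (with a scikit-learn-style predict()) and the satellite scene loaded as a NumPy array; the tile size and the classifier itself are guesses, not anything from the paper:

    # Sketch: count seals by sliding a tile classifier over a satellite image.
    # `classifier` is hypothetical: any pre-trained model with a scikit-learn-style
    # predict() that returns 1 for "seal" and 0 for "background".
    import numpy as np

    def count_seals(image: np.ndarray, classifier, tile: int = 32, stride: int = 32) -> int:
        """Count the tiles that the classifier labels as containing a seal."""
        count = 0
        for r in range(0, image.shape[0] - tile + 1, stride):
            for c in range(0, image.shape[1] - tile + 1, stride):
                patch = image[r:r + tile, c:c + tile].reshape(1, -1)
                count += int(classifier.predict(patch)[0])
        return count

In practice a detection network with overlapping windows (or per-pixel segmentation) would be more robust than disjoint tiles, but this makes the idea of replacing the manual count explicit.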
At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, there was a big trend of CUDA GPU vs FPGA for hardware-accelerated image processing. Most of it orbited around discussing which was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands about the solution they were working on (CUDA GPUs).
I gathered that NVIDIA GPUs best suit applications that do not depend on custom hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored 'harsh' programming languages. I don't know what the level of radiation hardness of NVIDIA's GPUs is...
FPGAs are therefore indeed the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy and other types of telescopes).
I think that for a specific purpose such as the one you mentioned, this FPGA vs GPU question should be assessed first before going further.
You're forgetting power usage. GPUs need 1000 hamster wheels worth of power while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it.
Looking for seals offline in high res images is indeed definitely a GPU task.... for now.
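As a hedged sketch of why that kind of offline search maps so well to GPUs: with CuPy as a drop-in NumPy replacement, the same FFT-based template matching code runs on either device (the image and template below are random placeholders for real satellite data):

    # FFT-based template matching, identical code on CPU (NumPy) and GPU (CuPy).
    import numpy as np
    import cupy as cp   # drop-in GPU replacement for most of the NumPy API

    def match(xp, image, template):
        """Cross-correlate template with image via FFT, using array module xp."""
        f_img = xp.fft.rfft2(image)
        f_tmp = xp.fft.rfft2(template, s=image.shape)
        return xp.fft.irfft2(f_img * xp.conj(f_tmp), s=image.shape)

    image = np.random.rand(4096, 4096).astype(np.float32)      # placeholder scene
    template = np.random.rand(32, 32).astype(np.float32)       # placeholder "seal"

    scores_cpu = match(np, image, template)                          # runs on the CPU
    scores_gpu = match(cp, cp.asarray(image), cp.asarray(template))  # runs on the GPU

    peak_cpu = np.unravel_index(scores_cpu.argmax(), scores_cpu.shape)
    peak_gpu = np.unravel_index(int(scores_gpu.argmax()), scores_gpu.shape)
    print(peak_cpu, peak_gpu)   # the two peak locations should agree

The point is that the heavy lifting (large FFTs and elementwise products) is exactly the kind of data-parallel work a GPU accelerates, without having to redesign anything for an FPGA.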