New data show that the balance between the airborne and the absorbed fraction of CO2 has stayed approximately constant since 1850, despite emissions of CO2 having risen from about 2 billion tons a year in 1850 to 35 billion tons a year now.
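To make concrete what a roughly constant split means (using an assumed illustrative airborne fraction of ~45%, which is not a number given in the piece): the absolute amounts scale directly with emissions even though the ratio does not change.

```python
# Illustrative arithmetic only; the ~45% airborne fraction is an assumed
# round number, not a figure from the article.
airborne_fraction = 0.45
for year, emissions_gt in [(1850, 2), ("today", 35)]:
    airborne = airborne_fraction * emissions_gt
    absorbed = emissions_gt - airborne
    print(f"{year}: ~{airborne:.1f} Gt CO2/yr stays airborne, "
          f"~{absorbed:.1f} Gt CO2/yr is absorbed by land and ocean")
```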
The work is based on the use of a "continuous flow" microreactor to produce nanoparticle inks that make solar cells by printing. In this process, simulated sunlight is focused on the solar microreactor to rapidly heat it, while allowing precise control of temperature to aid the quality of the finished product. The light in these experiments was produced artificially, but the process could be done with direct sunlight, and at a fraction of the cost of current approaches.
The novel part here is that it can be scaled down to the CubeSat platform. I then wondered: could we place several such CubeSats in a 'decaying orbit' around the Sun? A fractionated setup would give spatial and temporal information; even a simple Langmuir probe setup can yield density, temperature, velocity, ion energy distribution, plasma potential, and so on. Of course they would be lost relatively quickly, but more could be ejected from a mother ship orbiting at a safer distance.
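As a rough sketch (my own illustration, not from the post), this is how electron temperature and density would come out of a single probe's I-V sweep, assuming a Maxwellian plasma; the probe area and any sweep data are arbitrary placeholders, not tied to a real instrument:

```python
import numpy as np

e, k_B, m_e = 1.602e-19, 1.381e-23, 9.109e-31
A_probe = 1e-4  # probe collection area [m^2] (assumed placeholder)

def electron_temperature(V, I_e):
    """Fit T_e [eV] from the exponential electron-retardation region,
    where ln(I_e) is linear in V with slope 1 / T_e[eV]."""
    slope, _ = np.polyfit(V, np.log(I_e), 1)
    return 1.0 / slope

def electron_density(I_esat, T_e_eV):
    """n_e from the electron saturation current:
    I_esat = e * n_e * A * sqrt(k_B * T_e / (2 * pi * m_e))."""
    v_th = np.sqrt(e * T_e_eV / (2.0 * np.pi * m_e))
    return I_esat / (e * A_probe * v_th)
```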
DARPA for several years has been working on a program dubbed System F6 that seeks to prove that a cluster of small spacecraft can perform the mission of a large spacecraft by communicating wirelessly with one another in space.
DARPA plans to launch three dedicated System F6 spacecraft to either low Earth orbit or geostationary orbit in mid-2013 to 2014.
DARPA will entertain proposals from all qualified sources, be they government, commercial, national or international, the posting said.
Responses to the request for information are due May 17.
Yes, it seems like it. Most of it, however, seems directed toward understanding this effect rather than toward applications. But I'm still convinced that we could find many very interesting applications!
a few references from ADS:
1. Brida, G.; Chekhova, M. V.; Fornaro, G. A.; Genovese, M.; Lopaeva, E. D.; Berchera, I. Ruo, "Systematic analysis of signal-to-noise ratio in bipartite ghost imaging with classical and quantum light", Phys. Rev. A, 06/2011 (ADS: 2011PhRvA..83f3807B)
2. Liu, Ying-Chuan; Kuang, Le-Man, "Theoretical scheme of thermal-light many-ghost imaging by Nth-order intensity correlation", Phys. Rev. A, 05/2011 (ADS: 2011PhRvA..83e3808L)
3. Dixon, P. Ben; Howland, Gregory A.; Chan, Kam Wai Clifford; O'Sullivan-Hale, Colin; Rodenburg, Brandon; Hardy, Nicholas D.; Shapiro, Jeffrey H.; Simon, D. S.; Sergienko, A. V.; Boyd, R. W.; Howell, John C., "Quantum ghost imaging through turbulence", Phys. Rev. A, 05/2011 (ADS: 2011PhRvA..83e1803D)
4. Ohuchi, H.; Kondo, Y., "Complete erasing of ghost images caused by deeply trapped electrons on computed radiography plates", Proc. SPIE, 03/2011 (ADS: 2011SPIE.7961E.160O)
5. Meyers, Ronald E.; Deacon, Keith S.; Shih, Yanhua, "Turbulence-free ghost imaging", Appl. Phys. Lett., 03/2011 (ADS: 2011ApPhL..98k1115M)
6. Gan, Shu; Zhang, Su-Heng; Zhao, Ting; Xiong, Jun; Zhang, Xiangdong; Wang, Kaige, "Cloaking of a phase object in ghost imaging", Appl. Phys. Lett., 03/2011 (ADS: 2011ApPhL..98k1102G)
7. Yang, Hao; Zhao, Baosheng; Qiu, … (entry truncated), Rev. Sci. Instrum., 02/2011 (ADS: 2011RScI...82b3110Y)
I especially like: "The program will also create a 'developer's kit' of open hardware and software specifications to make it easier for new components to integrate into such fractionated systems."
Joris: wanna take the lead on having a closer look at this? I'd definitely like to be part of it and am happy to contribute, possibly also Juxi? Is a first assessment by Christmas realistic?
I think it is a very interesting approach.
If you google "darpa F6", you will see that a lot seems to be ongoing. So, should we do something about it before having the conclusions of the DARPA study?
Wait and see is never a good approach in these cases... The first step has to be to understand what they are up to, then to think about our own ideas, approaches, and alternatives, and then to see what we can specifically do on it within the team.
Researchers in the US have developed a new kind of organic solar cell that converts a small but significant fraction of the sunlight that falls onto it into electricity, while still allowing most of the visible part of that light to pass through. Thanks to this transparency, the team says that the cell could be mounted onto windows in buildings or cars in order to tap a currently under-exploited source of energy.
Positioning via accumulated accelerometer data, using cold trapped atoms as the sensing element. Current systems are not very reliable for submarines, which cannot use GPS underwater.
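As a toy illustration of why sensor quality matters here, a sketch of the dead-reckoning step (accumulating accelerometer samples into velocity and position); the bias value and sample rate are made-up numbers, not from the article:

```python
import numpy as np

def dead_reckon(accel, dt, v0=0.0, x0=0.0):
    """Integrate accelerometer samples twice (trapezoidal rule) to get
    velocity and position along one axis."""
    accel = np.asarray(accel, dtype=float)
    v = v0 + np.cumsum((accel[:-1] + accel[1:]) / 2.0) * dt
    x = x0 + np.cumsum((v[:-1] + v[1:]) / 2.0) * dt
    return v, x

# An uncorrected bias of just 0.01 m/s^2, sampled at 10 Hz for 10 minutes,
# already dead-reckons to roughly 1.8 km of position error (0.5 * a * t^2),
# which is why navigation-grade accelerometers matter so much underwater.
v, x = dead_reckon(np.full(6000, 0.01), dt=0.1)
print(f"final position error: {x[-1]:.0f} m")
```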
To create the supersensitive quantum accelerometers, Stansfield's team was inspired by the Nobel-prizewinning discovery that lasers can trap and cool a cloud of atoms placed in a vacuum to a fraction of a degree above absolute zero. Once chilled, the atoms achieve a quantum state that is easily perturbed by an outside force - and another laser beam can then be used to track them. This looks out for any changes caused by a perturbation, which are then used to calculate the size of the outside force.
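For a rough sense of the sensitivity such cold-atom sensors aim at, the standard atom-interferometer relation Δφ = k_eff · a · T² can be inverted to get acceleration from the measured phase; the wavelength, interrogation time and phase resolution below are generic textbook-style values I've assumed, not figures from Stansfield's device:

```python
import numpy as np

# Assumed, illustrative parameters (not from the article):
wavelength = 780e-9                     # Rb D2 line [m]
k_eff = 2 * (2 * np.pi / wavelength)    # two-photon effective wavevector [rad/m]
T = 0.1                                 # interrogation time between pulses [s]
phase_resolution = 1e-3                 # resolvable interferometer phase [rad]

def acceleration_from_phase(delta_phi):
    """Invert delta_phi = k_eff * a * T^2 to recover the acceleration."""
    return delta_phi / (k_eff * T**2)

a_min = acceleration_from_phase(phase_resolution)
print(f"smallest resolvable acceleration ~ {a_min:.1e} m/s^2 "
      f"({a_min / 9.81:.1e} g)")  # of order 1e-9 g for these numbers
```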
Just getting a particle up to near the speed of light isn't good enough for today's physics. To properly unravel the fundamentals of the universe, particles have to be smashed together with enormous force. And two Stanford researchers have just devised a laser-based method that imparts ten times the power of traditional methods at a fraction of the cost.
This has been around for over a year. The current trend in deep learning is "deeper is better", but a consequence is that, for a given network depth, we can only feasibly evaluate a tiny fraction of the "search space" of NN architectures. The current approach to choosing a network architecture is to iteratively add more layers/units and keep whichever architecture gives an increase in accuracy on some held-out data set, i.e. for each trial we have a pair {NN architecture, accuracy}. Clearly, this process can be automated by using the accuracy as a 'signal' to a learning algorithm. The novelty in this work is that they use reinforcement learning with a recurrent neural network controller trained by a policy gradient, a gradient-based method; previously, evolutionary algorithms would typically have been used.
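For intuition, here is a minimal, self-contained sketch of that loop: a softmax policy (standing in for the RNN controller) samples architecture choices, the resulting accuracy is used as the reward, and the policy is updated with REINFORCE. The search space, the train_and_evaluate stand-in and all numbers are made up for illustration and are not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

DEPTH_CHOICES = [2, 4, 8]        # layers (illustrative search space)
WIDTH_CHOICES = [32, 64, 128]    # units per layer

# One softmax distribution per decision (a stand-in for the RNN controller).
logits = {"depth": np.zeros(len(DEPTH_CHOICES)),
          "width": np.zeros(len(WIDTH_CHOICES))}

def sample(name, choices):
    """Sample one architecture decision from the current policy."""
    p = np.exp(logits[name]); p /= p.sum()
    idx = rng.choice(len(choices), p=p)
    return idx, choices[idx], p

def train_and_evaluate(depth, width):
    """Hypothetical stand-in for training a child network and returning its
    held-out accuracy; here it just pretends mid-sized nets do best."""
    return 1.0 - 0.01 * abs(depth - 4) - 0.001 * abs(width - 64) \
           + rng.normal(0, 0.01)

baseline, lr = 0.0, 0.5
for step in range(200):
    (di, depth, pd) = sample("depth", DEPTH_CHOICES)
    (wi, width, pw) = sample("width", WIDTH_CHOICES)
    acc = train_and_evaluate(depth, width)    # accuracy acts as the RL reward
    baseline = 0.9 * baseline + 0.1 * acc     # moving baseline for variance reduction
    adv = acc - baseline
    # REINFORCE update: grad of log pi(choice) is one_hot(choice) - p
    logits["depth"] += lr * adv * (np.eye(len(pd))[di] - pd)
    logits["width"] += lr * adv * (np.eye(len(pw))[wi] - pw)

print("preferred depth:", DEPTH_CHOICES[int(np.argmax(logits["depth"]))],
      "preferred width:", WIDTH_CHOICES[int(np.argmax(logits["width"]))])
```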
In summary, yes, the results are impressive - BUT this was only possible because they had access to Google's resources. An evolutionary approach would probably end up with the same architecture - it would just take longer. This is part of a broader research area in deep learning called 'meta-learning' which seeks to automate all aspects of neural network training.