Big Data brings new opportunities to modern society and new challenges to data scientists. On the one hand, Big Data holds great promise for discovering subtle population patterns and heterogeneities that are not detectable with small-scale data. On the other hand, the massive sample size and high dimensionality of Big Data introduce unique computational and statistical challenges, including scalability and storage bottlenecks, noise accumulation, spurious correlation, incidental endogeneity and measurement errors. These challenges are distinctive and require new computational and statistical paradigms. This paper gives an overview of the salient features of Big Data and how these features drive changes of paradigm in statistical and computational methods as well as computing architectures.
An exhaustive overview of all possible advanced rocket concepts, e.g.:
"As an example, consider a photon rocket with its launching mass, say, 1000 ton moving with a constant acceleration a =0.1 g=0.98 m/s2. The flux of photons with E γ=0.5 MeV needed to produce this acceleration is ~1027/s, which corresponds to the efflux power of 1014 W and the rate of annihilation events N'a~5×1026 s−1 [47]. This annihilation rate in ambiplasma l -l ann corresponds to the value of current ~108 A and linear density N ~2×1018 m−1 thus any hope for non-relativistic relative velocity of electrons and positrons in ambiplasma is groundless."
And even if it did work, one of the major issues is going to be heat dispersal:
"For example, if the temperature of radiator is chosen T=1500 K, the emitting area should be not less than 1000 m2 for Pb=1 GW, not less than 1 km2 for Pb=1 TW, and ~100 km2 for Pb=100 TW, assuming ε=0.5 and δ=0.2. Lower temperature would require even larger radiator area to maintain the outer temperature of the engine section stable for a given thermal power of the reactor."
We were also discussing a while ago a propulsion system using the relativistic fragments from nuclear fission. That would also produce an extremely high specific impulse (Isp > 100,000 s) with fairly high thrust (see the rough estimate below).
Never really got any traction though.
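The claimed specific impulse is plausible from first principles: a typical light fission fragment carries on the order of 100 MeV of kinetic energy at a mass of roughly 95 u, i.e. it leaves at a few percent of c. A quick sketch with illustrative numbers (mine, not from a specific reference):

```python
# Rough specific-impulse estimate for a fission-fragment exhaust.
# Illustrative values (not from a specific reference): a light fragment of
# ~95 u carrying ~100 MeV of kinetic energy.
import math

MEV = 1.602e-13     # J per MeV
AMU = 1.66054e-27   # kg
G0 = 9.80665        # standard gravity, m/s^2

kinetic_energy = 100.0 * MEV   # J
fragment_mass = 95.0 * AMU     # kg

exhaust_velocity = math.sqrt(2.0 * kinetic_energy / fragment_mass)  # ~1.4e7 m/s (~5% of c)
isp = exhaust_velocity / G0                                         # ~1.5e6 s

print(f"exhaust velocity ~ {exhaust_velocity:.2e} m/s")
print(f"Isp              ~ {isp:.2e} s")
```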
I absolutely do not see the point of a photon rocket. Certainly, the high-energy-releasing nuclear processes (annihilation, fusion, ...) should rather be used to heat some fluid to a plasma state and accelerate it via a magnetic nozzle. This would surely work as a door-opener to our solar system... and, by the way, it would minimize the heat-disposal problem if regenerative cooling is used.
The problem is not achieving a high energy density - we can already do that with nuclear fission. The question, however, is how to confine or harness this power with relatively high efficiency, low waste heat and a not-too-crazy specific mass. I see magnetic confinement as a possibility, yet it is still decades away and also an all-or-nothing method, as we cannot easily scale it up from a test experiment to a full-scale system. It might be possible to extract power from such a plasma, but definitely well below breakeven, so an additional power supply is needed. The fission fragments circumvent these issues with a more brute-force approach, wasting a lot of energy for sure, but in the end probably providing more Isp and thrust.
Sure. However, the annihilation-based photon rocket concept combines almost all of the relevant drawbacks if we are talking about solar-system scales, making itself obsolete... it is just an academic test case.
Cornell applied physicists have demonstrated an unprecedented method of controlling electron spins using extremely high-frequency sound waves - new insight into the study of electron spin. Crazy idea, but: no further need for complicated techniques for quantum encryption of sound signals?
"One method (...) involves counting the atoms in two silicon-28 spheres that each weigh the same as the reference kilogram."
Sounds like a lengthy task, but someone must keep those physics PhD students busy, I guess...
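"Counting" the atoms is of course done indirectly (via the lattice spacing and the sphere's volume and mass), but the target number is easy to estimate: a sphere weighing one kilogram contains roughly N_A·m/M atoms of silicon-28. A quick sketch of the arithmetic:

```python
# Rough number of atoms in a 1 kg silicon-28 sphere: N = N_A * m / M.
AVOGADRO = 6.02214076e23     # atoms per mole
MOLAR_MASS_SI28 = 27.977     # g/mol for silicon-28

mass_g = 1000.0              # each sphere weighs about the same as the kilogram
atoms = AVOGADRO * mass_g / MOLAR_MASS_SI28
print(f"~{atoms:.2e} atoms")  # roughly 2.2e25 atoms to be "counted"
```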
By attaching a diamond crystal to an AFM tip, researchers at New York City University managed to measure heat flows at the atomic level in resistors.
The method relies on a defect in the diamond's carbon lattice: two neighbouring lattice sites are missing their carbon atoms, and one of the two is occupied by a nitrogen atom. The energy state of this nitrogen-vacancy centre is temperature dependent and can be read out spectroscopically.
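In practice the spectroscopic read-out amounts to tracking a small shift of the NV centre's spin resonance with temperature. A toy sketch of that conversion; the slope used below (about -74 kHz per kelvin for the zero-field splitting near room temperature) is a commonly cited literature value and an assumption on my part, not a number from the article:

```python
# Toy conversion from a measured shift of the NV zero-field splitting to a
# temperature change. The slope is an illustrative literature value (roughly
# -74 kHz/K near room temperature), not taken from the article above.
D_ROOM_TEMP_HZ = 2.87e9      # NV zero-field splitting at ~300 K, Hz
SLOPE_HZ_PER_K = -74.0e3     # approximate temperature coefficient, Hz/K

def temperature_change(measured_splitting_hz):
    """Temperature change inferred from the measured resonance frequency."""
    return (measured_splitting_hz - D_ROOM_TEMP_HZ) / SLOPE_HZ_PER_K

# Example: a resonance measured 1.5 MHz below the room-temperature value
print(f"dT ~ {temperature_change(2.8685e9):+.1f} K")  # ~ +20 K warmer
```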
Nature paper showing a new photo-bioelectrochemical cell - a photon-driven biocatalytic fuel cell that generates electrical power from solar energy.
This paper introduces a novel deflection approach based on nuclear explosions: the nuclear cycler. The idea is to combine the effectiveness of nuclear explosions with the controllability and redundancy offered by slow-push methods within an incremental deflection strategy. The paper presents an extended model for single nuclear stand-off explosions in the proximity of elongated ellipsoidal asteroids, and a family of natural formation orbits that allows the spacecraft to deploy multiple bombs while being shielded by the asteroid during the detonation.
New method for geoengineering?
New insight into methanotrophs, bacteria that can oxidise methane, may help us develop an array of biotechnological applications that exploit methane and protect our environment from this potent greenhouse gas. Publishing in Nature, scientists led by Newcastle University have provided new understanding of how methanotrophs are able to use large quantities of copper for methane oxidation.
Prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity (a reward for reaching unfamiliar states). It learns some games without any extrinsic reward!
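The core trick is easy to sketch: train a forward model to predict the next observation given the current observation and action, and pay the agent the prediction error as an intrinsic reward, so transitions the model already predicts well become boring. A minimal sketch below; the toy environment and linear forward model are mine, and the actual paper predicts in a learned feature space rather than on raw observations:

```python
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM, N_ACTIONS = 8, 4

# Toy linear forward model: predicts next observation from (observation, one-hot action).
W = rng.normal(scale=0.1, size=(OBS_DIM + N_ACTIONS, OBS_DIM))
LEARNING_RATE = 0.05

def toy_env_step(obs, action):
    """Stand-in environment with fixed, action-dependent dynamics."""
    return np.tanh(obs + 0.1 * action - 0.2)

def intrinsic_reward_and_update(obs, action, next_obs):
    """Curiosity bonus = forward-model prediction error; then improve the model."""
    global W
    x = np.concatenate([obs, np.eye(N_ACTIONS)[action]])
    error = next_obs - x @ W
    reward = 0.5 * float(error @ error)       # large for surprising transitions
    W += LEARNING_RATE * np.outer(x, error)   # one SGD step on the squared error
    return reward

obs = rng.normal(size=OBS_DIM)
for step in range(201):
    action = rng.integers(N_ACTIONS)          # placeholder for the learned policy
    next_obs = toy_env_step(obs, action)
    r_int = intrinsic_reward_and_update(obs, action, next_obs)
    if step % 50 == 0:
        print(f"step {step:3d}  intrinsic reward {r_int:.4f}")  # shrinks as the model learns
    obs = next_obs
```

In the full method this intrinsic reward is simply added to (or substitutes for) the game score when training the policy.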
Not read this article but on a related note: Curiosity and various metrics for it have been explored for some time in robotics (outside of RL) as a framework for exploring (partially) unfamiliar environments. I came across some papers on this topic applied to UAVs when prep'ing for a PhD app. This one (http://www.cim.mcgill.ca/~yogesh/publications/crv2014.pdf) comes to mind - which used a topic modelling approach.
:) the idea of using this method for debris removal is actually an ACT one, from Claudio Bombardelli (ACT RF in MAD: https://en.wikipedia.org/wiki/Ion-beam_shepherd). This is just a technological device to implement it so that the on-board system is simplified (i.e. instead of two engines, you get away with one and a weird nozzle)
Marcus, you cannot align it to get rid of two pieces of debris, as you need to keep the spacecraft close to the debris - this is a long-duration action. One of the two would drift away (you can only follow one!)
lol, while ESA is sending around memos and its managers are spending time talking about validation and verification of AI methods ... US / DARPA is already 5-6 years ahead.
Hopefully the ACT can contribute to this with our DA based approach ....
"In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image."
This has been around for over a year. The current trend in deep learning is "deeper is better". But a consequence of this is that, for a given network depth, we can only feasibly evaluate a tiny fraction of the "search space" of NN architectures. The current approach to choosing a network architecture is to iteratively add more layers/units, keeping the architecture that gives an increase in accuracy on some held-out data set, i.e. we have the following information: {NN, accuracy}. Clearly, this process can be automated by using the accuracy as a 'signal' to a learning algorithm. The novelty in this work is that they use reinforcement learning with a recurrent neural network controller trained by a policy gradient - a gradient-based method. Previously, evolutionary algorithms would typically be used.
In summary, yes, the results are impressive - BUT this was only possible because they had access to Google's resources. An evolutionary approach would probably end up with the same architecture - it would just take longer. This is part of a broader research area in deep learning called 'meta-learning' which seeks to automate all aspects of neural network training.
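The controller-plus-policy-gradient loop described above boils down to: sample architecture decisions, train and evaluate the resulting child network, and use the validation accuracy as the reward in a REINFORCE update of the controller. A heavily simplified sketch; the tiny categorical controller and the fake "accuracy" function standing in for training a child network are my own toy substitutes (the paper uses an RNN controller and real training runs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy search space: number of layers and units per layer.
LAYER_CHOICES = [2, 4, 8]
UNIT_CHOICES = [32, 64, 128]

# "Controller": independent categorical distributions over the two decisions
# (the paper uses an RNN that emits one decision per step).
logits = {"layers": np.zeros(len(LAYER_CHOICES)),
          "units": np.zeros(len(UNIT_CHOICES))}
LEARNING_RATE = 0.5
baseline = 0.0   # moving average of the reward, for variance reduction

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fake_accuracy(n_layers, n_units):
    """Stand-in for 'train the child network and measure validation accuracy'."""
    return 0.9 - 0.02 * abs(n_layers - 4) - 0.0005 * abs(n_units - 64) + rng.normal(0, 0.01)

for step in range(300):
    # 1. Sample an architecture from the controller.
    p_layers, p_units = softmax(logits["layers"]), softmax(logits["units"])
    i = rng.choice(len(LAYER_CHOICES), p=p_layers)
    j = rng.choice(len(UNIT_CHOICES), p=p_units)

    # 2. "Train" the sampled network and read off the reward.
    reward = fake_accuracy(LAYER_CHOICES[i], UNIT_CHOICES[j])
    baseline = 0.9 * baseline + 0.1 * reward

    # 3. REINFORCE: grad of log-prob w.r.t. logits is (one-hot - probabilities).
    advantage = reward - baseline
    grad_layers = -p_layers
    grad_layers[i] += 1.0
    grad_units = -p_units
    grad_units[j] += 1.0
    logits["layers"] += LEARNING_RATE * advantage * grad_layers
    logits["units"] += LEARNING_RATE * advantage * grad_units

print("preferred number of layers:", LAYER_CHOICES[int(np.argmax(logits["layers"]))])
print("preferred units per layer :", UNIT_CHOICES[int(np.argmax(logits["units"]))])
```

The baseline subtraction is the standard variance-reduction trick; without it the policy gradient is extremely noisy, which is one reason these runs needed so much compute in the first place.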