
Luís F. Simões

NASA Goddard to Auction off Patents for Automated Software Code Generation - 0 views

  • The technology was originally developed to generate control code for spacecraft swarms, but it is broadly applicable to any commercial application where rule-based systems development is used.
  •  
    This is related to the "Verified Software" item in NewScientist's list of ideas that will change science. At the link below you'll find the text of the patents being auctioned: http://icapoceantomo.com/item-for-sale/exclusive-license-related-improved-methodology-formally-developing-control-systems :) Patent #7,627,538 ("Swarm autonomic agents with self-destruct capability") makes for quite an interesting read: "This invention relates generally to artificial intelligence and, more particularly, to architecture for collective interactions between autonomous entities." "In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy." "In yet another aspect, an autonomous nanotechnology swarm may comprise a plurality of workers composed of self-similar autonomic components that are arranged to perform individual tasks in furtherance of a desired objective." "In still yet another aspect, a process to construct an environment to satisfy increasingly demanding external requirements may include instantiating an embryonic evolvable neural interface and evolving the embryonic evolvable neural interface towards complex complete connectivity." "In some embodiments, NBF 500 also includes genetic algorithms (GA) 504 at each interface between autonomic components. The GAs 504 may modify the intra-ENI 202 to satisfy requirements of the SALs 502 during learning, task execution or impairment of other subsystems."
pacome delva

TeamParis-SynthEthics - 5 views

  •  
    This is an interesting report from a sociology student who worked with a group of scientists on a synthetic biology project for the iGEM competition (http://2009.igem.org/Main_Page). This is what happens when you mix hard and soft sciences. For this project they won the special prize for "Best Human Practices Advance". You can read the part on self or exploded governance (p. 34). When reading parts of this report, I thought it could be good to have an intern or a YGT in the human sciences to see if we can raise interesting questions about ethics for the space sector. There are many questions, I'm sure, about governance, the legitimacy of spending millions to go to space, etc...
Luís F. Simões

Boeing probes international market for human spacecraft - 1 views

  • The aerospace powerhouse is designing and testing systems for its CST-100 space capsule, a craft the company says could begin flying astronauts to low Earth orbit by 2015. It will launch on existing rockets to lessen development risk and costs.
  • "The spacecraft that we're designing is rocket-agnostic. It would be possible to sell this like a commercial airplane to countries who perhaps have a launch vehicle who would like to launch it in their own country."
  •  
    ...and hitting the news on the same day: A Rocket Built from U.S. and European Parts.

    "A new rocket that would combine parts from NASA's canceled Ares I rocket as well as the Ariane 5, a well-proven European satellite launcher, could provide a low-cost option for taking crew and cargo to the space station. The rocket proposal was announced this week by ATK, an aerospace and defense company that manufactures the solid rocket motors for NASA's space shuttles, and Astrium, the European company that makes the Ariane 5. They say the rocket, called Liberty, would be ready for flight by 2015."

    "Other commercial companies, including Boeing and Orbital Sciences Corporation, are looking to use low-end versions of the Atlas V to carry the capsules they are building. Liberty could carry any capsule at a cost less than that of the Atlas V, according to ATK."

    Look! Competition! :)
darioizzo2

Optimised spatial planning to meet long term urban sustainability objectives - ScienceD... - 3 views

  •  
    For the ACT architects .... Can we do the same for the Moon Village? We brainstorm some simplified mathematical objectives for growing the settlement (taking inputs from the modular growth, resources, terrain suitability, etc.), we define some simple rules for growth, and we optimize ..... easy peasy (I am serious). A toy version of what I mean is sketched at the end of this thread.
  •  
    I agree, with most of those parameters that would actually be really cool. But doesn't it get very messy once the economy becomes a large factor?
  •  
    We can start by studying the ideal case, or also add some economic constraints on the settlement layout ...
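A minimal sketch of the greedy-growth idea above (all objectives, weights, grid size and the resource location are made-up placeholders, not taken from the article): modules are added to a grid one at a time, each new cell chosen to maximize a weighted sum of terrain suitability, adjacency to existing modules and proximity to a resource site.

    import math, random

    GRID = 20
    random.seed(1)
    # hypothetical terrain suitability map and resource (e.g. ice) location
    suitability = [[random.random() for _ in range(GRID)] for _ in range(GRID)]
    resource = (15, 15)

    def score(cell, settlement, w=(1.0, 0.5, 0.3)):
        """Toy objective: terrain quality + adjacency bonus - distance-to-resource penalty."""
        x, y = cell
        adjacency = sum(1 for (a, b) in settlement if abs(a - x) + abs(b - y) == 1)
        dist = math.hypot(x - resource[0], y - resource[1])
        return w[0] * suitability[x][y] + w[1] * adjacency - w[2] * dist / GRID

    settlement = {(10, 10)}   # seed module
    for _ in range(30):       # grow 30 modules, one greedy step at a time
        candidates = {(x + dx, y + dy)
                      for (x, y) in settlement
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < GRID and 0 <= y + dy < GRID} - settlement
        settlement.add(max(candidates, key=lambda c: score(c, settlement)))

    print(sorted(settlement))

A real study would swap the greedy step for a proper optimizer and the toy terms for physical models, as in the article above.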
santecarloni

Getting to the froth of the matter - physicsworld.com - 1 views

  •  
    Whether it is the frothy milk on your cappuccino, the soapy suds in your bath or the large-scale structure of the universe, foams have intrigued physicists for many years. Now, for the first time in a lab, an international group of scientists has made the Weaire-Phelan foam - which physicists believe is the lowest-energy structure for a foam formed of equal-volume bubbles.
  •  
    Regarding "which physicists believe is the lowest-energy structure for a foam formed of equal-volume bubbles": does this mean that a non-regular foam could have an even lower-energy structure?
Athanasia Nikolaou

Nature Paper: Rivers and streams release more CO2 than previously believed - 6 views

  •  
    Another underestimated source of CO2 is turbulent water. "The stronger the turbulences at the water's surface, the more CO2 is released into the atmosphere. The combination of maps and data revealed that, while the CO2 emissions from lakes and reservoirs are lower than assumed, those from rivers and streams are three times as high as previously believed." Altogether the emitted CO2 equates to roughly one-fifth of the emissions caused by humans. Yet more stuff to model...
  • ...10 more comments...
  •  
    This could also be a mechanism to counter human CO2 emissions ... the more we emit, the less turbulent the rivers and streams, the less CO2 is emitted there ... makes sense?
  •  
    I guess there is a natural equilibrium there. Once the climate warms up enough for all rivers and streams to evaporate, they will no longer contribute CO2 - which stops their contribution to global warming. So the problem is also the solution (as always).
  •  
    "The source of inland water CO2 is still not known with certainty and new studies are needed to research the mechanisms controlling CO2 evasion globally." It is another source of CO2 this one, and the turbulence in the rivers is independent of our emissions in CO2 and just facilitates the process of releasing CO2 waters. Dario, if I understood correct you have in mind a finite quantity of CO2 that the atmosphere can accomodate, and to my knowledge this does not happen, so I cannot find a relevant feedback there. Johannes, H2O is a powerful greenhouse gas :-)
  •  
    Nasia, I think you did not get my point (a joke, really, that Johannes continued) .... by emitting more CO2 we warm up the planet, thus drying up rivers and lakes, which will in turn emit less CO2 :) No finite quantity of CO2 in the atmosphere is needed to close this loop ... ... as for the H2O, it could just go into non-turbulent waters rather than staying in the atmosphere ...
  •  
    Really awkward joke explanation: I got Johannes' joke, but maybe you did not get mine: by warming up the planet to get rid of the rivers and their problems, the water of the rivers will be accommodated in the atmosphere, and will therefore add water's greenhouse effect.
  •  
    From my previous post: "... as for the H2O, it could just go into non-turbulent waters rather than staying in the atmosphere ..."
  •  
    I guess the emphasis is on "could"... ;-) Also, everybody knows that rain is cold - so more water in the atmosphere makes the climate colder.
  •  
    Do you have the Nature paper as well? It looks like very nice, meticulous, typically German research, lasting over 10 years, with painstakingly many researchers from all over the world involved .... and while important, the total is still only 20% of human emissions ... so a variation in it does not seem to change the overall picture.
  •  
    Here is the Nature paper: http://www.nature.com/nature/journal/v503/n7476/full/nature12760.html I appreciate Johannes' and Dario's jokes, since climate is the common ground on which all of us can have an opinion, drawing credentials from experiencing the weather. But in the same way, if I try to make jokes about materials science or A.I., I take a high risk of failing(!) :-S Water is a greenhouse gas; rain rather releases latent heat to the environment in order to form. Johannes, nice trolling effort ;-) Between this and the next jokes to come, I would suggest taking a look here, provided you have 10 minutes: how/where rain forms http://www.scribd.com/doc/58033704/Tephigrams-for-Dummies
  •  
    omg
  •  
    Nasia, I thought about your statement carefully - and I cannot agree with you. Water is not a greenhouse gas. It is instead a liquid. Also, I can't believe you keep feeding the troll! :-P

    But on a more topical note: I think it is an over-simplification to call water a greenhouse gas - water is one of the most important mechanisms in the way Earth handles heat input from the sun. The latent heat that you mention actually cools Earth: solar energy that would otherwise heat Earth's surface is ABSORBED as latent heat by water, which consequently evaporates - the same water condenses into rain drops at high altitudes and releases this stored heat. In effect, the water cycle is a mechanism of heat transport from low altitude to high altitude, where the chance of infrared radiation escaping into space is much higher due to the much thinner layer of atmosphere above (including the smaller abundance of greenhouse gases).

    Also, as I know you are well aware, the cloud cover that results from water condensation in the troposphere dramatically increases albedo, which has a cooling effect on climate. Furthermore, the heat capacity of wet air ("humid heat") is much larger than that of dry air - so any advective heat transfer due to air currents is more efficient in wet air, transporting heat from warm areas to a natural heat sink, e.g. the polar regions.

    Of course there are also climate-heating effects of water, like the absorption of IR radiation. But I stand by my statement (as defended in the above) that rain cools the atmosphere. (For the orders of magnitude involved, see the estimate at the end of this thread.)

    Oh, and also some nice reading material on the complexities related to climate feedback due to sea surface temperature: http://journals.ametsoc.org/doi/abs/10.1175/1520-0442(1993)006%3C2049%3ALSEOTR%3E2.0.CO%3B2
  •  
    I enjoy trolling conversations when there is a gain for both sides at the end :-). I had to check some of the facts in order to explain myself properly. The IPCC report lists the greenhouse gases here, and water vapour is included: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html

    Honestly, I read only the abstract of the article you posted, which is a very interesting hypothesis on the mechanism regulating sea surface temperature, but it is very localized to the tropics (vivid convection, storms), a region in which I have very little expertise and which is difficult to study because it has non-hydrostatic dynamics. The only thing I can comment on there is that the authors assume constant relative humidity for the bottom layer, supplied by the oceanic surface, which limits the application of the concept to other regions of Earth.

    Also, we may be confusing the greenhouse gases themselves with the radiative forcing of each greenhouse gas: I see your point about the latent heat trapped in the water vapour, and I agree, but the effect of the water is that it traps, even as latent heat, an amount of longwave radiation that would otherwise escape back to space. That is its identity as a greenhouse gas. Here is an image showing the absorption bands in the atmosphere and how important water is, without vain authority-based arguments that miss the explanation in the end: http://www.solarchords.com/uploaded/82/87-33833-450015_44absorbspec.gif (from http://www.solarchords.com/agw-science/4/greenhouse--1-radiation/33784/)
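For the orders of magnitude being debated above (a back-of-envelope estimate, not from the thread): global mean evaporation is roughly one metre of water per year, so the latent heat flux leaving Earth's surface is about

    \Phi_{\mathrm{lat}} \approx \frac{\rho \, h \, L_v}{t_{\mathrm{yr}}}
    = \frac{10^{3}\,\mathrm{kg\,m^{-3}} \cdot 1\,\mathrm{m} \cdot 2.5\times 10^{6}\,\mathrm{J\,kg^{-1}}}{3.15\times 10^{7}\,\mathrm{s}}
    \approx 80\,\mathrm{W\,m^{-2}},

the same order as water vapour's contribution to the greenhouse effect (of order 10^2 W/m^2). Both the "rain cools" and the "water vapour warms" sides of the argument thus concern leading-order terms of Earth's energy budget, which is why neither can simply be dismissed.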
LeopoldS

Peter Higgs: I wouldn't be productive enough for today's academic system | Science | Th... - 1 views

  •  
    What an interesting personality ... very sympathetic.

    Peter Higgs, the British physicist who gave his name to the Higgs boson, believes no university would employ him in today's academic system because he would not be considered "productive" enough.

    The emeritus professor at Edinburgh University, who says he has never sent an email, browsed the internet or even made a mobile phone call, published fewer than 10 papers after his groundbreaking work, which identified the mechanism by which subatomic material acquires mass, was published in 1964.

    He doubts a similar breakthrough could be achieved in today's academic culture, because of the expectations on academics to collaborate and keep churning out papers. He said: "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964."

    Speaking to the Guardian en route to Stockholm to receive the 2013 Nobel prize for science, Higgs, 84, said he would almost certainly have been sacked had he not been nominated for the Nobel in 1980.

    Edinburgh University's authorities then took the view, he later learned, that he "might get a Nobel prize - and if he doesn't we can always get rid of him".

    Higgs said he became "an embarrassment to the department when they did research assessment exercises". A message would go around the department saying: "Please give a list of your recent publications." Higgs said: "I would send back a statement: 'None.' "

    By the time he retired in 1996, he was uncomfortable with the new academic culture. "After I retired it was quite a long time before I went back to my department. I thought I was well out of it. It wasn't my way of doing things any more. Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough."

    Higgs revealed that his career had also been jeopardised by his disagreements in the 1960s and 7
  •  
    interesting one - Luzi will like it :-)
Thijs Versloot

Dolphin inspired radar #biomimicry - 2 views

  •  
  • The device, like a dolphin, sends out two pulses in quick succession, allowing a targeted search for semiconductor devices and cancelling any background "noise".
  • ...1 more comment...
  •  
    And it sends out two pulses of opposite polarity, in succession, such that a semiconductor target turns the negative pulse into a positive one, amplifying the returning signal. Very interesting. Maybe we can combine different frequencies to identify a single variable in Earth observation. We already use more than one frequency, but each for identifying a separate variable. (The polarity trick is sketched at the end of this thread.)
  •  
    Could it be used to measure ocean acidification? I found a study that links sound wave propagation with ocean acidity. Maybe we are even able to do such measurements from space? "Their paper, "Unanticipated consequences of ocean acidification: A noisier ocean at lower pH," published last week in the journal Geophysical Research Letters, found that fossil fuels are turning up the ocean's volume. Since the beginning of the Industrial Revolution, the overall pH of the world's oceans has dropped by about 0.1 units, with more of the changes concentrated closer to the poles. The authors found that sound absorption has decreased by 15 percent in parts of the North Atlantic and by 10 percent throughout the Atlantic and Pacific"
  •  
    The last time I asked an oceanographer about using acoustic waves, she said the method is still a bit problematic for drawing conclusions from its data, but we were referring to measuring ocean circulation. It may be more conclusive for pH measurements, though. The truth is that there is a whole underwater network of pulse emitters/receivers covering the North Atlantic basin, remnant infrastructure from spying activities in WW2 and the Cold War, that remains unexploited. We should look more into this idea.
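A minimal sketch of the opposite-polarity trick described above (toy waveforms and a toy diode model, not the actual radar processing): the echoes of a +/- pulse pair cancel for any linear scatterer, while a rectifying semiconductor junction clips the negative pulse and leaves a residue.

    import numpy as np

    def echo_linear(pulse):
        return 0.5 * pulse           # linear scatterer: same shape, just attenuated

    def echo_semiconductor(pulse):
        return np.maximum(pulse, 0)  # diode-like junction: clips the negative half

    t = np.linspace(0.0, 1.0, 200)
    p = np.sin(2 * np.pi * 5 * t)    # transmitted pulse

    for target in (echo_linear, echo_semiconductor):
        residue = target(p) + target(-p)  # sum the echoes of the two opposite-polarity pulses
        print(target.__name__, "residue energy:", float(np.sum(residue ** 2)))
    # linear echoes cancel exactly; the semiconductor leaves |p|, flagging its presence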
jmlloren

Exotic matter : Insight : Nature - 5 views

shared by jmlloren on 03 Aug 10
LeopoldS liked it
  •  
    Trends in materials and condensed matter. Check out the topological insulators. Amazing field.
  • ...12 more comments...
  •  
    Apparently very interesting; will it survive the hype? Relevant work describing mirror charges of topological insulators and the classical boundary conditions was done by Ismo and Ari. But the two communities don't know each other, and so they are never cited. Also a way to produce new things...
  •  
    Thanks for noticing! Indeed, I had no idea that Ari (I don't know Ismo) was involved in the field. Was it before Kane's proposal or more recently? What I mostly like is that semiconductors are good candidates for 3D TIs; however, I got lost in the quantum field jargon. Yesterday, I got a headache trying to follow the Majorana fermions, the merons, skyrmions, axions, and so on. Luzi, are all these things familiar to you?
  •  
    Ismo Lindell described in the early 90's the mirror charge of what is now called a topological insulator. He says that similar results were obtained already at the beginning of the 20th century... Ismo Lindell and Ari Sihvola in recent years discussed engineering aspects of PEMCs (perfect electromagnetic conductors), which are more or less classical analogues of topological insulators. Fundamental aspects of PEMCs have been well known in high-energy physics for a long time; recent works are mainly due to Friedrich Hehl and Yuri Obukhov. All these works are purely classical, so there is no charge quantisation, no considerations of electron spin, etc. About Majorana fermions: yes, I spent several years of research on that topic. Axions: a topological state, of course, trivial :-) Also merons and skyrmions are topological states, but I'm less familiar with them.
  •  
    "Non-Abelian systems1, 2 contain composite particles that are neither fermions nor bosons and have a quantum statistics that is far richer than that offered by the fermion-boson dichotomy. The presence of such quasiparticles manifests itself in two remarkable ways. First, it leads to a degeneracy of the ground state that is not based on simple symmetry considerations and is robust against perturbations and interactions with the environment. Second, an interchange of two quasiparticles does not merely multiply the wavefunction by a sign, as is the case for fermions and bosons. Rather, it takes the system from one ground state to another. If a series of interchanges is made, the final state of the system will depend on the order in which these interchanges are being carried out, in sharp contrast to what happens when similar operations are performed on identical fermions or bosons." wow, this paper by Stern reads really weired ... any of you ever looked into this?
  •  
    C'mon Leopold, it's as trivial as the topological states, AKA axions! Regarding the question, not me!
  •  
    Just looked up the Wikipedia entry on axions .... at least they have some creativity in naming: "In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion. In some models, the saxion is the dilaton. They are all bundled up in a chiral superfield. The axino has been predicted to be the lightest supersymmetric particle in such a model.[24] In part due to this property, it is considered a candidate for the composition of dark matter.[25]"
  •  
    Thanks, Leopold. Sorry, Luzi, for being ironic about the triviality of the axions. Now Leo has confirmed to me that it is indeed a trivial matter. I have problems with models where EVERYTHING is involved.
  •  
    Well, that's the theory of everything, isn't it?? Seriously: I don't think that theoretically there is a lot of new stuff here. Topological aspects of (non-Abelian) theories became extremely popular in the context of string theory. The reason is very simple: topological theories are much simpler than "normal" ones, and since string theory is anyway far too complicated to be solved, people just consider purely topological theories, then claim that this has something to do with the real world, which of course is plainly wrong. So what I think is new about these topological insulators are the claims that one can actually fabricate a material which more or less accurately mimics a topological theory, and that these materials are of practical use. Still, they are a little bit the poor man's version of the topological theories fundamental physicists like to look at, since electrodynamics is an Abelian theory.
  •  
    I have the feeling, not the knowledge, that you are right. However, I think that the implications of these light quantum field effects are great. Being able to sustain two spin-polarized currents is a technological breakthrough.
  •  
    Not sure how much I can contribute to your apparently educated debate here, but if I remember well from my work for my master's, these non-Abelian theories were all but "simple", as Luzi puts it ... and from a different perspective: to me, the fact that such non-Abelian systems can be described so nicely indicates that they should in one way or another also make an appearance in Nature (I would be very surprised if not) - though this is of course no argument that makes string theory any better or closer to what Luzi called reality ....
  •  
    Well, electrodynamics remains an Abelian theory. From the theoretical point of view this is less interesting than non-Abelian ones, since in 4D the fibre bundle of a U(1) theory is trivial (great buzz words, eh!). But in topological insulators the point of view is slightly different, since one always has the insulator (topological theory), its surroundings (propagating theory) and, most importantly, the interface between the two. This is a new situation that people from field and string theory were not really interested in.
  •  
    Guys... how would you explain this to your grandmothers?
  •  
    *you* tried *your* best .... ??
nikolas smyrlakis

mentored by the Advanced Concepts Team for Google Summer of Code 2010 - 4 views

  •  
    You probably already know; I post it for the Twitter account and for your comments.
  • ...4 more comments...
  •  
    Once again one of these initiatives that came up from a situation and that would never have been possible with a top-down approach .... fantastic! And as Dario said: we are apparently where NASA still has to go with this :-)
  •  
    Actually, NASA Ames already did that within the NASA Open Source Agreement in 2008, for a V&V software tool!
  •  
    Indeed ... you are right .... interesting project, btw - they started in 1999, were in 2005 the first NASA project on SourceForge, and won several awards .... then there is this entry on why they did not participate last year: "05/01/09: Skipping this years Google Summer-of-Code - many of you have asked why we are not participating in this years Summer of Code. The answer is that both John and Peter were too busy with other assignments to set this up in time. We will be back in 2010. At least we were able to compensate with a limited number of NASA internships to continue some of last years projects." .... but I could not find them in this year's selected list - any clue?
  •  
    But in any case, according to the Apple guru, Java is a dying technology, so their project might be as well ...
  •  
    They participate under the name "The Java Pathfinder Team" (http://babelfish.arc.nasa.gov/trac/jpf/wiki/events/soc2010). It is actually a very useful project for both education and industry (Airbus created a consortium on model-checking software, and there is a lot of research on it). As far as I know, TAS had some plans of using Java onboard spacecraft 2 years ago. Not sure the industry is really sensitive to Jobs' opinions ;) particularly if there is no better alternative!
Juxi Leitner

Japan Plans a Moon Base by 2020, Built by Robots for Robots | Popular Science - 1 views

  • Those initial surveyor bots will pave the way for the construction of the unmanned moon base near the lunar south pole, which the robots will construct for themselves.
  • Even if Japan falls short of its 2020 deadline, the advances in robotics technology that could fall out of this little project could be as exciting as the moon base itself.
  •  
    More on these Japanese moon base plans...
LeopoldS

Miniaturized power modules for aircraft bodies - 0 views

  •  
    Probably not practical for launchers nor for spacecraft, but maybe for suborbital planes? Nice idea anyway ...
LeopoldS

Global Innovation Commons - 4 views

  •  
    nice initiative!
  • ...6 more comments...
  •  
    Any viral licence is a bad licence...
  •  
    I'm pretty confident I'm about to open a can of worms, but mind explaining why? :)
  •  
    I am less worried about the can of worms ... actually eager to open it ... so why????
  •  
    Well, the topic GPL vs other open-source licenses (e.g., BSD, MIT, etc.) is as old as the internet and it has provided material for long and glorious flame wars. The executive summary is that the GPL license (the one used by Linux) is a license which imposes some restrictions on the way you are allowed to (re)use the code. Specifically, if you re-use or modify GPL code and re-distribute it, you are required to make it available again under the GPL license. It is called "viral" because once you use a bit of GPL code, you are required to make the whole application GPL - so in this sense GPL code replicates like a virus.

    On the other side of the spectrum, there are the so-called BSD-like licenses, which have more relaxed requirements. Usually, the only obligation they impose is to acknowledge somewhere (e.g., in a README file) that you have used some BSD code and who wrote it (this is called an "attribution clause"), but they do not require you to re-distribute the whole application under the same license.

    GPL critics usually claim that the license is not really "free" because it does not allow you to do whatever you want with the code without restrictions. GPL proponents claim that the requirements imposed by the GPL are necessary to safeguard the freedom of the code, in order to avoid people being able to re-use GPL code without giving anything back to the community (which the BSD license allows: early versions of Microsoft Windows, for instance, had their networking code basically copy-pasted from BSD-licensed versions of Unix).

    In my opinion (and this point is often brought up in the debates) the division pro/against GPL somehow mirrors the division between anti/pro anarchism. Anarchists claim that the only way to be really free is the absence of laws, while non-anarchists maintain that the only practical way to be free is to have laws (which by definition limit certain freedoms). So you can see how the topic can quickly become inflammatory :) GPL at the current time is used by aro
  •  
    Whoa, the comment got cut off. Anyway, I was just saying that at the present time the GPL license is used by around 65% of open-source projects, including the Linux kernel, KDE, Samba, GCC, and all the GNU utils. The topic is much deeper than this brief summary, so if you are interested in it, Leopold, we can discuss it at length in another place.
  •  
    Thanks for the record-long comment - I am sure this is the longest ever made on an ACT Diigo post! On the topic, I would rather lean towards the GPL license (which I also advocated for the Marek viewer programme we put on SourceForge, btw), mainly because I don't trust that open source by nature delivers a better product and thus will prevail, but I still would like it to succeed, which I am not sure it would if there were mainly BSD-like licenses around. ... but clearly, this is an outsider talking :-)
  •  
    Btw: did not know about the anarchist penchant of Marek :-)
  •  
    Well, not going into the discussion about GPL/BSD, the viral license in this particular case in my view simply undermines the "clean and clear" motivations of the initiative's authors - why should *they* be credited for using something they have no rights to? And I don't like viral licences because they prevent using things released under such a licence by all those people who want to release their stuff under a different licence, thus limiting the usefulness of the stuff released under that licence :) BSD is not a perfect license either; it also has major flaws. And I'm not an anarchist, lol
anonymous

Nasa validates 'impossible' space drive (Wired UK) - 3 views

  •  
    NASA validates the EmDrive (http://emdrive.com/) technology for converting electrical energy into thrust. (from the website: "Thrust is produced by the amplification of the radiation pressure of an electromagnetic wave propagated through a resonant waveguide assembly.")
  • ...3 more comments...
  •  
    I would be very, very skeptical of these results and am actually ready to take bets that they are victims of something other than "new physics" ... e.g. some measurement error.
  •  
    Assuming that this system is feasible, and taking the results of the Chinese team (thrust of 720 mN, http://www.wired.co.uk/news/archive/2013-02/06/emdrive-and-cold-fusion), I wonder whether this would allow for some actual trajectory maneuvers (and to what degree). If so, can we simulate some possible trajectories, e.g. compare the current solutions to this one? For example, Shawyer (the original author) claims that this system would be capable of stabilizing the ISS without the need for refueling. Another article on the same topic: http://www.theverge.com/2014/8/1/5959637/nasa-cannae-drive-tests-have-promising-results
  •  
    To be exact, the Chinese reported 720 mN and the Americans found ~50 microN. The first one I simply do not believe, and the second one seems more credible, yet it has to be said that measuring such low thrust levels on a thrust stand is very difficult and prone to measurement errors. @Krzys, the thrust level of 720 mN is within the same range as other electric propulsion systems which are considered - and even used in some cases - for station keeping, also for the ISS actually (for which there are also ideas to use a high-power system delivering several Newtons of thrust). Then, on the idea itself: I do not rule out that an interaction between the EM waves and the 'vacuum' could be possible, however if this were true then it surely would be detectable in any particle accelerator, as it would produce background events/noise. The energy densities involved and the conversion to thrust via some form of interaction with the vacuum surely could not provide thrusts in the range reported by the Chinese, nor the Americans. The laws of momentum conservation would still need to apply (see the bound sketched at the end of this thread). Finally, 'quantum vacuum virtual plasma'.. really?
  •  
    I have to join the skeptics on this one ...
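For scale (a back-of-envelope bound, not from the article): if momentum conservation holds and nothing but radiation leaves the device, a closed cavity converting electrical power P into thrust cannot beat the photon-rocket limit F = P/c. A quick check at kW-class powers (the exact input powers here are assumptions):

    # Photon-thrust upper bound F = P/c for a device that only radiates power away
    c = 3.0e8                 # speed of light, m/s
    for P in (1.0e3, 1.0e4):  # assumed kW-class input powers, W
        print(f"P = {P:.0e} W -> F_max = {P / c * 1e6:.1f} microN")
    # -> 3.3 microN at 1 kW, 33 microN at 10 kW

The reported 720 mN at kW-level power would exceed this bound by roughly five orders of magnitude, which is the scale of the "new physics" being claimed.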
Thijs Versloot

Relativistic rocket: Dream and reality - 3 views

  •  
    An exhaustive overview of all possible advanced rocket concepts, e.g.:

    "As an example, consider a photon rocket with a launch mass of, say, 1000 t, moving with a constant acceleration a = 0.1 g = 0.98 m/s^2. The flux of photons with E_gamma = 0.5 MeV needed to produce this acceleration is ~10^27/s, which corresponds to an efflux power of 10^14 W and a rate of annihilation events N'_a ~ 5x10^26 s^-1 [47]. This annihilation rate in the ambiplasma corresponds to a current of ~10^8 A and a linear density N ~ 2x10^18 m^-1; thus any hope for a non-relativistic relative velocity of electrons and positrons in the ambiplasma is groundless."

    And also, even if it would work, one of the major issues is going to be heat dispersal:

    "For example, if the temperature of the radiator is chosen as T = 1500 K, the emitting area should be not less than 1000 m^2 for P_b = 1 GW, not less than 1 km^2 for P_b = 1 TW, and ~100 km^2 for P_b = 100 TW, assuming epsilon = 0.5 and delta = 0.2. A lower temperature would require an even larger radiator area to maintain the outer temperature of the engine section stable for a given thermal power of the reactor."

    (The first set of figures is checked in the sketch at the end of this thread.)
  • ...2 more comments...
  •  
    We were also discussing a while ago a propulsion system using the relativistic fragments from nuclear fission. That would also produce an extremely high Isp (>100,000 s) with a fairly high thrust. It never really got any traction though.
  •  
    I absolutely do not see the point of a photon rocket. Certainly, the high-energy-releasing nuclear processes (annihilation, fusion, ...) should rather be used to heat up some fluid to the plasma state and accelerate it via a magnetic nozzle. This would surely work as a door-opener to our solar system... and, by the way, it minimizes the heat-disposal problem if regenerative cooling is used.
  •  
    The problem is not achieving a high energy density - that we can already do with nuclear fission; the question is how to confine or harness this power with relatively high efficiency, low waste heat, and a not-too-crazy specific mass. I see magnetic confinement as a possibility, yet still decades away, and also an all-or-nothing method, as we cannot easily scale it up from a test experiment to a full-scale system. It might be possible to extract power from such a plasma, but definitely well below breakeven, so an additional power supply is needed. The fission fragments circumvent these issues by a more brute-force approach, thereby wasting a lot of energy for sure, but in the end probably providing more Isp and thrust.
  •  
    Sure. However, the annihilation-based photon rocket concept combines almost all of the relevant drawbacks if we speak about solar-system scales, making itself obsolete... it is just an academic test case.
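The photon-rocket figures quoted at the top of this thread are easy to sanity-check with F = P/c for a perfectly collimated photon exhaust (the only inputs are the paper's 1000 t, 0.1 g and 0.5 MeV):

    # Sanity check of the quoted photon rocket numbers
    m = 1.0e6                    # launch mass, kg (1000 t)
    a = 0.98                     # acceleration, m/s^2 (0.1 g)
    c = 3.0e8                    # speed of light, m/s
    E_gamma = 0.5e6 * 1.602e-19  # photon energy, J (0.5 MeV)

    F = m * a                    # required thrust
    P = F * c                    # efflux power for a photon exhaust
    flux = P / E_gamma           # photons per second

    print(f"F = {F:.1e} N, P = {P:.1e} W, flux = {flux:.1e} /s")
    # -> F ~ 1e6 N, P ~ 3e14 W, flux ~ 4e27 /s

This agrees to within factors of a few with the quoted ~10^14 W and ~10^27/s (and, at two photons per annihilation, with the quoted N'_a ~ 5x10^26 s^-1).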
jcunha

'Superman memory crystal' that could store 360TB of data forever | ExtremeTech - 0 views

  •  
    A new, so-called 5D data storage that could potentially survive for billions of years. The technique uses nanostructured glass that can record digital data in five dimensions using femtosecond laser writing.
  • ...2 more comments...
  •  
    Very scarce scientific info available... I'm very curious to see a bit more in the future. From https://spie.org/PWL/conferencedetails/laser-micro-nanoprocessing I made a back-of-envelope calculation: for 20 nm spacing, each laser spot in the 5D encoding stores 3 bits (it seemed to me), written in 3 planes; to obtain the claimed 360 TB disk one needs very roughly 6000 mm^2, which does not agree with the dimensions shown in the video. Only with a larger number of planes (an order of magnitude more) could it work. Also, at current commercial densities NAND flash and HDDs allow for 1000 Gb/in^2. This means 360 TB could hypothetically fit in 1800 mm^2.
  •  
    I had the same issue with the numbers when I saw the announcement a few days back (https://www.southampton.ac.uk/news/2016/02/5d-data-storage-update.page). It doesn't seem to add up. Plus, the examples they show are super-low amounts of data (the Bible probably fits on a few 1.44 MB floppy disks). As for the comparison with NAND and HDD, I think the main argument for their crystal is that it is supposedly more durable. HDDs are chronically bad at long-term storage, and NAND, as far as I know, needs to be refreshed frequently.
  •  
    Yes, Alex, indeed, durability is the point I think they highlight and focus on (besides the fact that the abstract says something about the extrapolated decay time being comparable to the age of the Universe...). Indeed, memories face problems with retention time. Most disks retain their information for up to 10 years. When enterprises want to store data for longer than this, they use... yeah, magnetic tapes :-). Check an interesting article about the magnetic tape market revival here: http://www.information-age.com/technology/data-centre-and-it-infrastructure/123458854/rise-fall-and-re-rise-magnetic-tape I compared for fun, to have an idea of what we are talking about. I am also very curious to see the writing and reading times of this new memory :)
  •  
    But how can glass store the information so long? Glass is not even solid?!
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the Blue Brain project, this is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    Nice article; Luzi would agree as well, I assume. One aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence, AND IF we'll forever be limited to the Turing-machine type of computation (which is what the "Not Computable" in the article refers to), AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not on how to go beyond Turing machines, but rather on how to program them well enough ...... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as to say the mainstream is going in the wrong direction. But even if that indeed were the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (And research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all, they have been proven to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops. 2) What worries me most: I didn't get the retail-store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep; that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in a finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question of whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but it will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea; it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to - stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) That is the goal :)
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which is not a biological brain) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a small, tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see, in these "artificial brain" proposals, to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain; it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
Luís F. Simões

ARKYD: A Space Telescope for Everyone, by Planetary Resources - Kickstarter - 0 views

  •  
    Space-related Kickstarters are moving from CubeSats to space telescopes. This funding campaign was launched today and will last for 32 days. They are asking for 1M USD.
  •  
    "Since the formation of Planetary Resources, our primary goal has been to build technology enabling us to prospect and mine asteroids. We've spent the last year making great leaps in the development of these technologies." - Damn we need to get in touch with these people..!
jaihobah

Antimatter Starship Scheme Coming to Kickstarter - 1 views

  •  
    "Hbar Technologies plans a Kickstarter effort to raise US $200,000 for the next phase design of an antimatter-propelled spaceship. The two scientists behind this design effort are a veteran Fermilab particle accelerator scientist and a former Los Alamos National Laboratory physicist and founding director of the U.S. Center for Space Nuclear Research. They originally developed it for NASA at the turn of the millennium."
LeopoldS

Strong evidence for d-electron spin transport at room temperature - 2 views

  •  
    WOW! Great non-local signals, at room temperature!!! Spin transistor on the way, finally!? (Of course, electric-field gate control is fundamental.) See more about the "quest" for the spin transistor here: http://spectrum.ieee.org/semiconductors/processors/the-quest-for-the-spin-transistor