
Advanced Concepts Team / Group items


Thijs Versloot

A Groundbreaking Idea About Why Life Exists - 1 views

  •  
    Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life. The simulation results made me think of Jojo's attempts to make a self-assembling space structure. It seems he may have been on the right track, just not thinking big enough.
  •  
    :-P Thanks Thijs... I do not agree with the premise of the article that a possible correlation between energy dissipation in living systems and their fitness means that one is the cause of the other - it may just be that both go hand in hand because of the nature of the world we live in. Maybe there is such a drive for pre-biotic systems (like crystals and amino acids), but once life as we know it exists (i.e., heredity + mutation) it is hard to see the need for an amendment of Darwin's principles. The following just misses the essence of Darwin: "If England's approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that 'the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve.'" Darwin's principle in its simplest expression just says that if a genome is more effective at reproducing, it is more likely to dominate the next generation. The beauty of it is that there is NO need for a steering mechanism (like maximizing energy dissipation): any random set of mutations will still lead to an increase in reproductive effectiveness. BTW: what does "better at dissipating energy" even mean? If I run around all the time I will have more babies? Most species that prove to be very successful end up being very good at conserving energy: trees, turtles, worms. Even the complexity of an organism is not a recipe for evolutionary success: jellyfish have been successful for hundreds of millions of years while polar bears seem to be on the way out.
LeopoldS

Plant sciences: Plants drink mineral water : Nature : Nature Publishing Group - 1 views

  •  
    Here we go: we might not need liquid water on Mars after all to get some nice flowering plants there! ... and terraform? :-) Thirsty plants can extract water from the crystalline structure of gypsum, a rock-forming mineral found in soil on Earth and Mars.

    Some plants grow on gypsum outcrops and remain active even during dry summer months, despite having shallow roots that cannot reach the water table. Sara Palacio of the Pyrenean Institute of Ecology in Jaca, Spain, and her colleagues compared the isotopic composition of sap from one such plant, called Helianthemum squamatum (pictured), with gypsum crystallization water and water found free in the soil. The team found that up to 90% of the plant's summer water supply came from gypsum.

    The study has implications for the search for life in extreme environments on this planet and others.

    Nature Commun 5, 4660 (2014)
  •  
    Very interesting indeed. Attention must be paid to the form of calcium sulfate found on Mars: if it is hydrated (gypsum, CaSO4.2H2O) this works, but if it is the anhydrous form there is no water for the roots to take up. The Curiosity rover is trying to find out, but has trouble recognising the presence of hydrogen in the mineral (a rough estimate of how much water each phase actually carries is sketched after this comment). Quoting: "(...) 3.2 Hydration state of calcium sulfates Calcium sulfates occur as a non-hydrated phase (anhydrite, CaSO4) or as one of two hydrated phases (bassanite, CaSO4.1/2H2O, which can contain a somewhat variable water content, and gypsum, CaSO4.2H2O). ChemCam identifies the presence of hydrogen at 656 nm, as already found in soils and dust [Meslin et al., 2013] and within fluvial conglomerates [Williams et al., 2013]. However, the quantification of H is strongly affected by matrix effects [Schröder et al., 2013], i.e. effects including major or even minor element chemistry, optical and mechanical properties, that can result in variations of emission lines unrelated to actual quantitative variations of the element in question in the sample. Due to these effects, discriminating between bassanite and gypsum is difficult. (...)"
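
As a back-of-the-envelope check on the point above, here is a minimal sketch (plain Python, using standard atomic masses) of how much crystallization water each calcium sulfate phase actually carries. None of the numbers come from the Nature Communications paper or from ChemCam data.

    # Mass fraction of crystallization water in the three calcium sulfate phases
    # mentioned in the ChemCam quote. Standard atomic masses in g/mol; purely
    # illustrative, not data from the paper.
    M_CA, M_S, M_O, M_H = 40.08, 32.06, 16.00, 1.008
    M_CASO4 = M_CA + M_S + 4 * M_O      # anhydrite formula unit, ~136.1 g/mol
    M_H2O = 2 * M_H + M_O               # ~18.0 g/mol

    phases = {
        "anhydrite (CaSO4)": 0.0,         # waters of crystallization per formula unit
        "bassanite (CaSO4.1/2H2O)": 0.5,
        "gypsum (CaSO4.2H2O)": 2.0,
    }
    for name, n in phases.items():
        water_fraction = n * M_H2O / (M_CASO4 + n * M_H2O)
        print(f"{name}: {100 * water_fraction:.1f}% water by mass")
    # Gypsum comes out at roughly 21% water by mass - the reservoir the
    # shallow-rooted plants are apparently tapping into.
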
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles or other unwanted weapons look like on satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one can train one of those image recognition algorithms to do it automatically (a rough sketch of what that could look like follows this thread). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year there was a big trend of CUDA GPUs vs FPGAs for hardware-accelerated image processing. Most of it orbited around discussing which was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands about the solution they were working on (CUDA GPUs). I gathered that NVIDIA GPUs suit best those applications that do not rely on specialised hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored, 'harsh' programming languages. I don't know what the level of rad hardness of NVIDIA's GPUs is... FPGAs are therefore the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy and other types of telescopes). I think that for a specific purpose such as the one you mentioned, this FPGA vs GPU trade-off should be assessed first before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels worth of power while FPGAs can run on a potato. Since space applications are highly power limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion of how to make FPGA hardware acceleration solutions easier to use for the 'layman' is starting btw http://reconfigurablecomputing4themasses.net/.
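
Picking up the seal-counting idea from the thread above, here is a minimal sketch of how such a patch classifier could be set up in PyTorch. The folder layout, patch size, class names and network choice are assumptions for illustration only; nothing here comes from the PLOS ONE study or from Flickr's Magic View.

    # Minimal sketch (hypothetical data layout): train a small CNN to classify
    # satellite image patches as "seal" / "no_seal", then count detections.
    # Assumes a folder structure  patches/train/{seal,no_seal}/*.png
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, transforms, models

    tfm = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),  # panchromatic imagery is single-band
        transforms.Resize((64, 64)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("patches/train", transform=tfm)
    loader = DataLoader(train_set, batch_size=32, shuffle=True)

    model = models.resnet18(weights=None)          # small backbone, trained from scratch (torchvision >= 0.13 API)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: seal / no_seal

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(5):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()

    # Counting on a full scene would then be a sliding window over the image plus
    # non-maximum suppression, summing the patches classified as "seal".

Whether such a network then runs on a ground-based GPU or an on-board FPGA is exactly the trade-off discussed in the comments above.
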
Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research one year ago; we dropped it as no one was really considering it ... but in space low CPU power consumption is crucial!! Maybe we should look back into this?
  • ...6 more comments...
  •  
    Q1: For the time being, what are computers mainly used for on board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not there yet, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer to this may not be obvious, a potential study could address this very issue. For instance one can consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies on the exact route or +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is, and what can be achieved using it (e.g. the result is probabilistic but falls within a defined range of precision), etc. Did they build a complete chip or at least a sub-circuit, or still only logic gates...
  •  
    Satellites use old CPUs also because, with the trend towards higher power, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation also), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at the level of (say) 10^-6 (a toy example of how precision propagates into error is sketched after this thread). All in all you are right: a first study should assess for which applications this would be useful at all, and at what precision / power levels.
  •  
    The interest of this could be a high fault tolerance for some math operations, which would have the effect of simplifying the job of coders! I don't think this is a good idea regarding power consumption for the CPU (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance a LEON SPARC (e.g. used on some platforms for ESA) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings, considering a usual 10-15 kW spacecraft.
  •  
    What about speed then? Seven times faster could allow some real-time navigation at higher speed (e.g. the velocity of terminal guidance for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?) Another issue is the radiation tolerance of the technology ... if the PCMOS chips are more tolerant to radiation they could more easily get space-qualified.
  •  
    I don't remember what the speed factor is, but I guess this might do it! Although, I remember when using an IMU that you cannot get the data above a given rate (e.g. 20 Hz even though the ADC samples the sensor at a slightly faster rate), so somehow it is not just the CPU that must be re-thought. When I say qualification I also imply the "hardened" phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does this technology need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel code floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit together with a traditional CPU, but then again you have more implementation overhead creeping in. Finally, recent processors by Intel (e.g., the Atom) and especially ARM boast really low power-consumption levels, while at the same time offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
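
To make the precision discussion above a bit more concrete, here is a toy sketch comparing float32 and float64 in a crude orbit propagation. The integrator, step size and orbit are arbitrary illustrations; real on-board navigation code is of course structured very differently, so this only shows the order of magnitude of precision-induced error.

    # Propagate one circular low-Earth orbit with a simple Euler integrator in
    # float64 and float32 and compare the resulting positions. Illustrative only.
    import numpy as np

    MU = 3.986004418e14          # Earth's gravitational parameter [m^3/s^2]
    R0 = 7.0e6                   # orbit radius [m]
    V0 = np.sqrt(MU / R0)        # circular orbital speed [m/s]
    PERIOD = 2 * np.pi * np.sqrt(R0**3 / MU)

    def propagate(dtype, dt=1.0):
        r = np.array([R0, 0.0], dtype=dtype)
        v = np.array([0.0, V0], dtype=dtype)
        for _ in range(int(PERIOD / dt)):
            a = -MU * r / np.linalg.norm(r)**3     # point-mass gravity
            v = (v + a * dt).astype(dtype)         # cast back to emulate storage precision
            r = (r + v * dt).astype(dtype)
        return r

    r64 = propagate(np.float64)
    r32 = propagate(np.float32)
    diff = np.linalg.norm(r64 - r32)
    print("float32 vs float64 position difference: %.1f m" % diff)
    print("relative difference: %.1e" % (diff / R0))
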
LeopoldS

Ministry of Science and Technology of the People's Republic of China - 0 views

  •  
    University Alliance for Low Carbon Energy: Tsinghua University, the University of Cambridge, and the Massachusetts Institute of Technology formed an alliance on November 15, 2009 to advocate low-carbon energy and climate change adaptation. The alliance will mainly work on six major areas: clean coal technology and CCS, home-building energy efficiency, industrial energy efficiency and sustainable transport, biomass energy and other renewable energy, advanced nuclear energy, intelligent power grids, and energy policies/planning. A steering panel made up of senior experts from the three universities (two from each) will be established to review, evaluate, and endorse the goals, projects, fundraising activities, and collaborations under the alliance. With its headquarters on the campus of Tsinghua University and branch offices at the other two universities, the alliance will be chaired by a scientist selected from Tsinghua University. According to a briefing, the alliance will need a budget of USD 3-5 million, mainly from donations from government, industry, and all walks of life. In this context, the R&D findings derived from the alliance are expected to find their applications in improving people's lives.
Luís F. Simões

Seminar: You and Your Research, Dr. Richard W. Hamming (March 7, 1986) - 10 views

  • This talk centered on Hamming's observations and research on the question "Why do so few scientists make significant contributions and so many are forgotten in the long run?" From his more than forty years of experience, thirty of which were at Bell Laboratories, he has made a number of direct observations, asked very pointed questions of scientists about what, how, and why they did things, studied the lives of great scientists and great contributions, and has done introspection and studied theories of creativity. The talk is about what he has learned in terms of the properties of the individual scientists, their abilities, traits, working habits, attitudes, and philosophy.
  •  
    Here's the link related to one of the lunch time discussions. I recommend it to every single one of you. I promise it will be worth your time. If you're lazy, you have a summary here (good stuff also in the references, have a look at them):      Erren TC, Cullen P, Erren M, Bourne PE (2007) Ten Simple Rules for Doing Your Best Research, According to Hamming. PLoS Comput Biol 3(10): e213.
  • ...3 more comments...
  •  
    I'm also pretty sure that the ones who are remembered are not the ones who tried to be... so why all these rules !? I think it's bullshit...
  •  
    The seminar is not a manual on how to achieve fame, but rather an analysis on how others were able to perform very significant work. The two things are in some cases related, but the seminar's focus is on the second.
  •  
    Then read a good book on the life of Copernicus; it's the anti-manual of Hamming... he breaks all the rules!
  •  
    honestly I think that some of these rules actually make sense ... but I am always curious to get a good book recommendation (which book on Copernicus would you recommend?). btw Pacome: we are in Paris ... in case you have some time ...
  •  
    I warmly recommend this book, a bit old but fascinating: The Sleepwalkers by Arthur Koestler. It shows that progress in science is not straight and does not obey any rule... It is not as rational as most people seem to believe today. http://www.amazon.com/Sleepwalkers-History-Changing-Universe-Compass/dp/0140192468/ref=sr_1_1?ie=UTF8&qid=1294835558&sr=8-1 Otherwise yes, I have some time! My phone number: 0699428926. We live around Denfert-Rochereau and Montparnasse. We could go for a beer this evening?
jmlloren

Experimental verification of the feasibility of a quantum channel between space and Earth - 0 views

  •  
    Extending quantum communication to space environments would enable us to perform fundamental experiments on quantum physics as well as applications of quantum information at planetary and interplanetary scales. Here, we report on the first experimental study of the conditions for the implementation of the single-photon exchange between a satellite and an Earth-based station. We built an experiment that mimics a single photon source on a satellite, exploiting the telescope at the Matera Laser Ranging Observatory of the Italian Space Agency to detect the transmitted photons. Weak laser pulses, emitted by the ground-based station, are directed toward a satellite equipped with cube-corner retroreflectors. These reflect a small portion of the pulse, with an average of less-than-one photon per pulse directed to our receiver, as required for faint-pulse quantum communication. We were able to detect returns from satellite Ajisai, a low-Earth orbit geodetic satellite, whose orbit has a perigee height of 1485 km.
  •  
    hello Jose! Interesting: it was proposed to do the same with the ISS as part of the ACES experiment. I don't remember the paper but I can look it up if you're interested. (A back-of-the-envelope check of the "less than one photon per pulse" condition from the abstract follows below.)
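
For a weak coherent (laser) pulse the photon number is Poisson-distributed, so the mean photon number directed at the receiver sets the fraction of pulses carrying exactly one photon versus two or more. The mean value mu = 0.1 used in this minimal sketch is just an illustrative choice, not a number taken from the paper.

    # Photon-number statistics of a faint coherent pulse with mean photon number mu.
    from math import exp, factorial

    def poisson(n, mu):
        return exp(-mu) * mu**n / factorial(n)

    mu = 0.1
    p0 = poisson(0, mu)
    p1 = poisson(1, mu)
    p_multi = 1.0 - p0 - p1
    print(f"P(0 photons)   = {p0:.4f}")      # ~0.9048: most pulses are empty
    print(f"P(1 photon)    = {p1:.4f}")      # ~0.0905: useful single-photon events
    print(f"P(>=2 photons) = {p_multi:.4f}") # ~0.0047: multi-photon fraction kept small
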
LeopoldS

Finding the Source of the Pioneer Anomaly - IEEE Spectrum - 0 views

  •  
    The article came out some time ago of course and was posted here, though the story is still well written. If you are too lazy to read the rather long article, here is the summary explanation: the team found that a good half of the force came from heat from the RTGs, which bounced off the back of the spacecraft antenna. The other half came from electrical heat from circuitry in the heart of the spacecraft. Most of that heat was radiated through louvers at the back of the probes, which weren't as well insulated as the rest of their bodies, further contributing to the deceleration. (A quick order-of-magnitude check of how much heat is needed is sketched below.)
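
A minimal order-of-magnitude sketch of the explanation summarized above. The spacecraft mass and RTG thermal output below are rough textbook values from memory, not figures taken from the IEEE Spectrum article.

    # How much anisotropically emitted heat is needed to produce the Pioneer anomaly?
    C = 2.998e8            # speed of light [m/s]
    A_PIONEER = 8.74e-10   # anomalous acceleration [m/s^2]
    MASS = 258.0           # approximate Pioneer 10 mass [kg]
    RTG_THERMAL = 2.5e3    # approximate total RTG thermal output [W]

    force = MASS * A_PIONEER          # required force [N]
    directed_power = force * C        # photon power that must be radiated preferentially forward [W]

    print(f"required force:            {force:.2e} N")
    print(f"equivalent directed power: {directed_power:.0f} W")
    print(f"fraction of RTG heat:      {100 * directed_power / RTG_THERMAL:.1f} %")
    # A few tens of watts of asymmetrically radiated heat (a few percent of the RTG
    # output) is enough, which is why the thermal recoil explanation works out.
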
Tom Gheysens

Biomimicr-E: Nature-Inspired Energy Systems | AAAS - 4 views

  •  
    some biomimicry used in energy systems... maybe it sparks some ideas
  •  
    not much new that has not been shared here before ... BUT: we have done relatively little on any of them. For good reasons?? Don't know - maybe time to look into some of these again more closely.

    Energy Efficiency: Termite mounds inspired regulated airflow for temperature control of large structures, preventing wasteful air conditioning and saving 10% energy.[1] Whale fin shapes informed the design of new-age wind turbine blades, with bumps/tubercles reducing drag by 30% and boosting power by 20%.[2][3][4] Stingray motion has motivated studies on this type of low-effort flapping glide, which takes advantage of the leading edge vortex, for new-age underwater robots and submarines.[5][6] Studies of microstructures found on shark skin that decrease drag and prevent accumulation of algae, barnacles, and mussels attached to their body have led to "anti-biofouling" technologies meant to address the 15% of marine vessel fuel use due to drag.[7][8][9][10]

    Energy Generation: Passive heliotropism exhibited by sunflowers has inspired research on a liquid crystalline elastomer and carbon nanotube system that improves the efficiency of solar panels by 10%, without using GPS and actively repositioning panels to track the sun.[11][12][13] Mimicking the fluid dynamics principles utilized by schools of fish could help to optimize the arrangement of individual wind turbines in wind farms.[14] The nanoscale anti-reflection structures found on certain butterfly wings have led to a model to effectively harness solar energy.[15][16][17]

    Energy Storage: Inspired by the sunlight-to-energy conversion in plants, researchers are utilizing a protein in spinach to create a sort of photovoltaic cell that generates hydrogen from water (i.e. hydrogen fuel cell).[18][19] Utilizing a property of genetically-engineered viruses, specifically their ability to recognize and bind to certain materials (carbon nanotubes in this case), researchers have developed virus-based "scaffolds" that
johannessimon81

Mathematicians Predict the Future With Data From the Past - 6 views

  •  
    Asimov's Foundation meets ACT's Tipping Point Prediction?
  • ...2 more comments...
  •  
    Good luck to them!!
  •  
    "Mathematicians Predict the Future With Data From the Past". GREAT! And physicists probably predict the past with data from the future?!? "scientists and mathematicians analyze history in the hopes of finding patterns they can then use to predict the future". Big deal! That's what any scientist does anyway... "cliodynamics"!? Give me a break!
  •  
    still, some interesting thoughts in there (the predator-prey feedback loop mentioned below is sketched after this thread) ...

    Then you have the 50-year cycles of violence. Turchin describes these as the building up and then the release of pressure. Each time, social inequality creeps up over the decades, then reaches a breaking point. Reforms are made, but over time, those reforms are reversed, leading back to a state of increasing social inequality. The graph above shows how regular these spikes are - though there's one missing in the early 19th century, which Turchin attributes to the relative prosperity that characterized the time. He also notes that the severity of the spikes can vary depending on how governments respond to the problem. Turchin says that the United States was in a pre-revolutionary state in the 1910s, but there was a steep drop-off in violence after the 1920s because of the progressive era. The governing class made decisions to rein in corporations and allowed workers to air grievances. These policies reduced the pressure, he says, and prevented revolution. The United Kingdom was also able to avoid revolution through reforms in the 19th century, according to Turchin. But the most common way for these things to resolve themselves is through violence.

    Turchin takes pains to emphasize that the cycles are not the result of iron-clad rules of history, but of feedback loops - just like in ecology. "In a predator-prey cycle, such as mice and weasels or hares and lynx, the reason why populations go through periodic booms and busts has nothing to do with any external clocks," he writes. "As mice become abundant, weasels breed like crazy and multiply. Then they eat down most of the mice and starve to death themselves, at which point the few surviving mice begin breeding like crazy and the cycle repeats."

    There are competing theories as well. A group of researchers at the New England Complex Systems Institute - who practice a discipline called econophysics - have built their own model of political violence and
  •  
    It's not the scientific activity described in the article that is uninteresting, on the contrary! But the way it is described is just a bad joke. Once again the results themselves are seemingly not sexy enough, and thus something is sold as the big revolution, though it's just the application of the oldest scientific principles in a slightly different way than used before.
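
A minimal sketch of the predator-prey feedback loop Turchin appeals to in the excerpt above: the classic Lotka-Volterra equations produce periodic booms and busts without any external clock. All parameter values are arbitrary illustrations, not fitted to any historical or ecological data.

    # Euler integration of the Lotka-Volterra predator-prey model.
    import numpy as np

    def lotka_volterra(prey0=40.0, pred0=9.0, a=0.1, b=0.02, c=0.3, d=0.01,
                       dt=0.01, steps=20000):
        prey, pred = prey0, pred0
        history = []
        for _ in range(steps):
            dprey = (a * prey - b * prey * pred) * dt   # prey grow, get eaten
            dpred = (d * prey * pred - c * pred) * dt   # predators feed, then starve
            prey, pred = prey + dprey, pred + dpred
            history.append((prey, pred))
        return np.array(history)

    traj = lotka_volterra()
    print("prey oscillates between %.1f and %.1f" % (traj[:, 0].min(), traj[:, 0].max()))
    print("predators oscillate between %.1f and %.1f" % (traj[:, 1].min(), traj[:, 1].max()))
    # The booms and busts emerge purely from the internal feedback between the two
    # populations - the point Turchin borrows for his violence cycles.
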
Luís F. Simões

NASA Goddard to Auction off Patents for Automated Software Code Generation - 0 views

  • The technology was originally developed to handle coding of control code for spacecraft swarms, but it is broadly applicable to any commercial application where rule-based systems development is used.
  •  
    This is related to the "Verified Software" item in NewScientist's list of ideas that will change science. At the link below you'll find the text of the patents being auctioned: http://icapoceantomo.com/item-for-sale/exclusive-license-related-improved-methodology-formally-developing-control-systems :) Patent #7,627,538 ("Swarm autonomic agents with self-destruct capability") makes for quite an interesting read: "This invention relates generally to artificial intelligence and, more particularly, to architecture for collective interactions between autonomous entities." "In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy." "In yet another aspect, an autonomous nanotechnology swarm may comprise a plurality of workers composed of self-similar autonomic components that are arranged to perform individual tasks in furtherance of a desired objective." "In still yet another aspect, a process to construct an environment to satisfy increasingly demanding external requirements may include instantiating an embryonic evolvable neural interface and evolving the embryonic evolvable neural interface towards complex complete connectivity." "In some embodiments, NBF 500 also includes genetic algorithms (GA) 504 at each interface between autonomic components. The GAs 504 may modify the intra-ENI 202 to satisfy requirements of the SALs 502 during learning, task execution or impairment of other subsystems."
johannessimon81

Bacteria grow electric wire in their natural environment - 1 views

  •  
    Bacterial wires explain enigmatic electric currents in the seabed: each one of these 'cable bacteria' contains a bundle of insulated wires that conduct an electric current from one end to the other. Electricity and seawater are usually a bad mix.
  •  
    WOW!!!! don't want to even imagine what we do to these with the trawling fishing boats that sweep through sea beds with large masses .... "Our experiments showed that the electric connections in the seabed must be solid structures built by bacteria," says PhD student Christian Pfeffer, Aarhus University. He could interrupt the electric currents by pulling a thin wire horizontally through the seafloor. Just as when an excavator cuts our electric cables. In microscopes, scientists found a hitherto unknown type of long, multi-cellular bacteria that was always present when scientists measured the electric currents. "The incredible idea that these bacteria should be electric cables really fell into place when, inside the bacteria, we saw wire-like strings enclosed by a membrane," says Nils Risgaard-Petersen, Aarhus University. Kilometers of living cables: the bacterium is one hundred times thinner than a hair and the whole bacterium functions as an electric cable with a number of insulated wires within it. Quite similar to the electric cables we know from our daily lives. "Such unique insulated biological wires seem simple but with incredible complexity at nanoscale," says PhD student Jie Song, Aarhus University, who used nanotools to map the electrical properties of the cable bacteria. In an undisturbed seabed more than tens of thousands of kilometers of cable bacteria live under a single square meter of seabed. The ability to conduct an electric current gives cable bacteria such large benefits that it conquers much of the energy from decomposition processes in the seabed. Unlike all other known forms of life, cable bacteria maintain an efficient combustion down in the oxygen-free part of the seabed. It only requires that one end of the individual reaches the oxygen which the seawater provides to the top millimeters of the seabed. The combustion is a transfer of the electrons of the food to oxygen which the bacterial inner wires manage over centimeter-long distances. However, s
Thijs Versloot

Graphene coated silicon super-capacitors for energy storage - 1 views

  •  
    Recharges in seconds and efficiently stores power for weeks between charges. An added bonus is the cheap and abundant components needed. One of the applications they foresee is to attach such a super-capacitor to the back of solar panels to store power and discharge it during the night.
  •  
    very nice indeed - is this already at a stage where we should have a closer look at it? What do you think? With experience in growing carbon nanostructures, Pint's group decided to try to coat the porous silicon surface with carbon. "We had no idea what would happen," said Pint. "Typically, researchers grow graphene from silicon-carbide materials at temperatures in excess of 1400 degrees Celsius. But at lower temperatures - 600 to 700 degrees Celsius - we certainly didn't expect graphene-like material growth." When the researchers pulled the porous silicon out of the furnace, they found that it had turned from orange to purple or black. When they inspected it under a powerful scanning electron microscope they found that it looked nearly identical to the original material but it was coated by a layer of graphene a few nanometers thick. When the researchers tested the coated material they found that it had chemically stabilized the silicon surface. When they used it to make supercapacitors, they found that the graphene coating improved energy densities by over two orders of magnitude compared to those made from uncoated porous silicon, and significantly better than commercial supercapacitors. [Figure caption: Transmission electron microscope image of the surface of porous silicon coated with graphene. The coating consists of a thin layer of 5-10 layers of graphene which filled pores with diameters less than 2-3 nanometers and so did not alter the nanoscale architecture of the underlying silicon. (Cary Pint / Vanderbilt)] The graphene layer acts as an atomically thin protective coating. Pint and his group argue that this approach isn't limited to graphene. "The ability to engineer surfaces with atomically thin layers of materials combined with the control achieved in designing porous materials opens opportunities for a number of different applications beyond energy storage," he said.
Joris _

What the strange persistence of rockets can teach us about innovation. - 5 views

  •  
    If I could write, this is exactly what I would write about rockets, GO, and so on... :) "we are decadent and tired. But none of the bright young up-and-coming economies seem to be interested in anything besides aping what the United States and the USSR did years ago. We may, in other words, need to look beyond strictly U.S.-centric explanations for such failures of imagination and initiative. ... Those are places we need to go if we are not to end up as the Ottoman Empire of the 21st century, and yet in spite of all of the lip service that is paid to innovation in such areas, it frequently seems as though we are trapped in a collective stasis." "But those who do concern themselves with the formal regulation of "technology" might wish to worry less about possible negative effects of innovation and more about the damage being done to our environment and our prosperity by the mid-20th-century technologies that no sane and responsible person would propose today, but in which we remain trapped by mysterious and ineffable forces."
  • ...4 more comments...
  •  
    Very interesting, though I'm amused how the author tends to (subconsciously?) shift the blame to non-US dictators :-) The suggestion that in the absence of the Cold War the US might have abandoned the HB and ICBM programmes is ridiculous.
  •  
    Interesting, this was written by Neal Stephenson ( http://en.wikipedia.org/wiki/Neal_Stephenson#Works ). Great article indeed. The videos of the event from which this arose might be equally interesting: Here Be Dragons: Governing a Technologically Uncertain Future http://newamerica.net/events/2011/here_be_dragons "To employ a commonly used metaphor, our current proficiency in rocket-building is the result of a hill-climbing approach; we started at one place on the technological landscape - which must be considered a random pick, given that it was chosen for dubious reasons by a maniac - and climbed the hill from there, looking for small steps that could be taken to increase the size and efficiency of the device."
  •  
    You know Luis, when I read this quote, I couldn't help thinking about GO, which would be kind of ironic considering the context but not far from what happens in the field :p
  •  
    Fantastic!!!
  •  
    Would have been nice if it were historically more accurate and less polemic / superficial
  •  
    mmmh... the wheel is also an old invention... there is an idea behind it, but this article is not very deep, and I really don't think the problem is with innovation and a lack of creative young people!!! Look at what is done in the financial sector...
LeopoldS

Global Innovation Commons - 4 views

  •  
    nice initiative!
  • ...6 more comments...
  •  
    Any viral licence is a bad license...
  •  
    I'm pretty confident I'm about to open a can of worms, but mind explaining why? :)
  •  
    I am less worried about the can of worms ... actually eager to open it ... so why????
  •  
    Well, the topic GPL vs other open-source licenses (e.g., BSD, MIT, etc.) is as old as the internet and it has provided material for long and glorious flame wars. The executive summary is that the GPL license (the one used by Linux) is a license which imposes some restrictions on the way you are allowed to (re)use the code. Specifically, if you re-use or modify GPL code and re-distribute it, you are required to make it available again under the GPL license. It is called "viral" because once you use a bit of GPL code, you are required to make the whole application GPL - so in this sense GPL code replicates like a virus. On the other side of the spectrum, there are the so-called BSD-like licenses which have more relaxed requirements. Usually, the only obligation they impose is to acknowledge somewhere (e.g., in a README file) that you have used some BSD code and who wrote it (this is called an "attribution clause"), but they do not require you to re-distribute the whole application under the same license. GPL critics usually claim that the license is not really "free" because it does not allow you to do whatever you want with the code without restrictions. GPL proponents claim that the requirements imposed by the GPL are necessary to safeguard the freedom of the code, in order to avoid being able to re-use GPL code without giving anything back to the community (which the BSD license allows: early versions of Microsoft Windows, for instance, had the networking code basically copy-pasted from BSD-licensed versions of Unix). In my opinion (and this point is often brought up in the debates) the division pro/against GPL mirrors somehow the division between anti/pro anarchism. Anarchists claim that the only way to be really free is the absence of laws, while non-anarchists maintain that the only practical way to be free is to have laws (which by definition limit certain freedoms). So you can see how the topic can quickly become inflammatory :) GPL at the current time is used by aro
  •  
    whoa, the comment got cut off. Anyway, I was just saying that at the present time the GPL license is used by around 65% of open source projects, including the Linux kernel, KDE, Samba, GCC, all the GNU utils, etc. The topic is much deeper than this brief summary, so if you are interested in it, Leopold, we can discuss it at length in another place.
  •  
    Thanks for the record-long comment - I am sure this is the longest ever made on an ACT Diigo post! On the topic, I would rather lean towards the GPL license (which I also advocated for the Marek viewer programme we put on SourceForge btw), mainly because I don't trust that open source by nature delivers a better product and thus will prevail, but I still would like it to succeed, which I am not sure it would if there were mainly BSD-like licenses around. ... but clearly, this is an outsider talking :-)
  •  
    btw: did not know the anarchist penchant of Marek :-)
  •  
    Well, not going into the discussion about GPL/BSD, the viral license in this particular case in my view simply undermines the "clean and clear" motivations of the initiative's authors - why should *they* be credited for using something they have no rights to? And I don't like viral licences because they prevent all those people who want to release their stuff under a different licence from using things released under this licence, thus limiting the usefulness of the stuff released under that licence :) BSD is not a perfect license either, it also has major flaws. And I'm not an anarchist, lol
Ma Ru

Dark Matter or Black Hole Propulsion? - 1 views

  •  
    Anyone out there still doing propulsion stuff? Two more papers just waiting to get busted... http://arxiv.org/abs/0908.1429v1 http://arxiv.org/abs/0908.1803
  • ...5 more comments...
  •  
    What an awful bunch of complete nonsense!!! But I don't think anybody wants to hear MY opinion on this...
  •  
    wow, is this serious at all...!?
  •  
    Are you joking?? The BH drive proposes a BH with a lifetime of about a year, just 10^7 tons, peanuts!! Then you have to produce it, better not on Earth, so you do this in space, with a laser that produces an equivalent of 10^9 tons highly focussed, even more peanuts!! Reasonable losses in the production process (probably 99,999%) are not yet taken into account. Engineering problems... :-) The DM drive is even better, they want to collect DM and compress it in a propulsion chamber. Very easy to collect and compress a gas of particles that traverse the Earth without any interaction. Perhaps if the walls of the chamber are made of artificial BHs?? Who knows??
  •  
    WRONG!!! we are all just WAITING for your opinion on this ....!!!
  •  
    well, yes my remark was ironic... I'm surprised they made a magazine article on these concepts...! But the press is always waiting for the sensational. They do not even wait for the work to be peer-reviewed now to make an article on it! This is one of the bad sides of arXiv in my opinion. It's like a journalist making an article with a copy-paste from Wikipedia! Anyway, this is of course complete bullsh..., and I would have laughed if I had read this in a sci-fi book... but in a "serious" article I'm crying... For the DM I do not agree with your remark Luzi. It's not dark energy they want to use. The DM is baryonic, it's dark just because it's cold so we don't see it by usual means. If you believe in the standard model of cosmology, then the DM should be somewhere around the galaxies. But it's of course not uniformly distributed, so a DM engine would work (if at all...) only in the periphery of galaxies. It's already impossible to get there...
  •  
    One reply to Pacome, though the discussion already exceeds by far the relevance of the topic. Baryonic DM is strictly limited by cosmology, if one believes in these models, of course. Anyway, even though most DM is cold, we are constantly bombarded by some DM particles that come together with cosmic radiation, solar wind etc. etc. If DM easily interacted with normal matter, we would have found it long ago. In the paper they consider DM as neutralinos, which are neither baryonic nor strongly or electromagnetically interacting.
  •  
    well then I agree, how the fu.. do they want to collect them!!!
santecarloni

Ergodic theorem passes the test - physicsworld.com - 0 views

  •  
    For more than a century scientists have relied on the "ergodic theorem" to explain diffusive processes such as the movement of molecules in a liquid. However, they had not been able to confirm experimentally a central tenet of the theorem - that the average of repeated measurements of the random motion of an individual molecule is the same as the random motion of the entire ensemble of those molecules. Now, however, researchers in Germany have measured both parameters in the same system - making them the first to confirm experimentally that the ergodic theorem applies to diffusion.
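
A toy numerical check of the statement above, using a plain 1-D random walk as a stand-in for a diffusing molecule. This is illustrative only and does not model the actual experimental system; for ordinary diffusion the time-averaged and ensemble-averaged mean squared displacements should agree, which is exactly the ergodic property being tested.

    import numpy as np

    rng = np.random.default_rng(0)
    n_particles, n_steps, lag = 2000, 10000, 100

    # Ensemble average: mean squared displacement of many walkers after `lag` steps.
    steps = rng.choice([-1.0, 1.0], size=(n_particles, lag))
    ensemble_msd = np.mean(np.sum(steps, axis=1) ** 2)

    # Time average: squared displacement over a sliding window along ONE long trajectory.
    traj = np.cumsum(rng.choice([-1.0, 1.0], size=n_steps))
    disp = traj[lag:] - traj[:-lag]
    time_msd = np.mean(disp ** 2)

    print(f"ensemble-averaged MSD at lag {lag}: {ensemble_msd:.1f}")
    print(f"time-averaged MSD at lag {lag}:     {time_msd:.1f}")
    # Both come out close to `lag` (= 100) for this ergodic random walk.
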
Alexander Wittig

Scientists discover hidden galaxies behind the Milky Way - 1 views

  •  
    Hundreds of hidden nearby galaxies have been studied for the first time, shedding light on a mysterious gravitational anomaly dubbed the Great Attractor. Despite being just 250 million light years from Earth - very close in astronomical terms - the new galaxies had been hidden from view until now by our own galaxy, the Milky Way. Using CSIRO's Parkes radio telescope equipped with an innovative receiver, an international team of scientists were able to see through the stars and dust of the Milky Way, into a previously unexplored region of space. The discovery may help to explain the Great Attractor region, which appears to be drawing the Milky Way and hundreds of thousands of other galaxies towards it with a gravitational force equivalent to a million billion Suns. Lead author Professor Lister Staveley-Smith, from The University of Western Australia node of the International Centre for Radio Astronomy Research (ICRAR), said the team found 883 galaxies, a third of which had never been seen before. "The Milky Way is very beautiful of course and it's very interesting to study our own galaxy but it completely blocks out the view of the more distant galaxies behind it," he said. Professor Staveley-Smith said scientists have been trying to get to the bottom of the mysterious Great Attractor since major deviations from universal expansion were first discovered in the 1970s and 1980s. "We don't actually understand what's causing this gravitational acceleration on the Milky Way or where it's coming from," he said. "We know that in this region there are a few very large collections of galaxies we call clusters or superclusters, and our whole Milky Way is moving towards them at more than two million kilometres per hour." The research identified several new structures that could help to explain the movement of the Milky Way, including three galaxy concentrations (named NW1, NW2 and NW3) and two new clusters (named CW1 and CW2).
Alexander Wittig

On the extraordinary strength of Prince Rupert's drops - 1 views

  •  
    Prince Rupert's drops (PRDs), also known as Batavian tears, have been in existence since the early 17th century. They are made of a silicate glass of a high thermal expansion coefficient and have the shape of a tadpole. Typically, the diameter of the head of a PRD is in the range of 5-15 mm and that of the tail is 0.5 to 3.0 mm. PRDs have exceptional strength properties: the head of a PRD can withstand impact with a small hammer, or compression between tungsten carbide platens to high loads of ∼15 000 N, but the tail can be broken with just finger pressure leading to catastrophic disintegration of the PRD. We show here that the high strength of a PRD comes from large surface compressive stresses in the range of 400-700 MPa, determined using techniques of integrated photoelasticity. The surface compressive stresses can suppress Hertzian cone cracking during impact with a small hammer or compression between platens. Finally, it is argued that when the compressive force on a PRD is very high, plasticity in the PRD occurs, which leads to its eventual destruction with increasing load.
LeopoldS

Schumpeter: More than just a game | The Economist - 3 views

  •  
    remember the discussion I tried to trigger in the team a few weeks ago ...
  • ...5 more comments...
  •  
    main quote I take from the article: "gamification is really a cover for cynically exploiting human psychology for profit"
  •  
    I would say that it applies to management in general :-)
  •  
    which is exactly why it will never work .... and surprisingly "managers" fail to understand this very simple fact.
  •  
    ... "gamification is really a cover for cynically exploiting human psychology for profit" --> "Why Are Half a Million People Poking This Giant Cube?" http://www.wired.com/gamelife/2012/11/curiosity/
  •  
    I think the "essence" of the game is its uselessness... workers need exactly the inverse, to find a meaning in what they do !
  •  
    I love the linked article provided by Johannes! It expresses very elegantly why I still fail to understand even extremely smart and busy people who, in my view, apparently waste their time playing computer games - but I recognise that there is something in games that we apparently need / gives us something we cherish .... "In fact, half a million players so far have registered to help destroy the 64 billion tiny blocks that compose that one gigantic cube, all working in tandem toward a singular goal: discovering the secret that Curiosity's creator says awaits one lucky player inside. That's right: After millions of man-hours of work, only one player will ever see the center of the cube. Curiosity is the first release from 22Cans, an independent game studio founded earlier this year by Peter Molyneux, a longtime game designer known for ambitious projects like Populous, Black & White and Fable. [Players can carve important messages (or shameless self-promotion) onto the face of the cube as they whittle it to nothing. Image: Wired] Molyneux is equally famous for his tendency to overpromise and under-deliver on his games. In 2008, he said that his upcoming game would be "such a significant scientific achievement that it will be on the cover of Wired." That game turned out to be Milo & Kate, a Kinect tech demo that went nowhere and was canceled. Following this, Molyneux left Microsoft to go indie and form 22Cans. Not held back by the past, the Molyneux hype train is going full speed ahead with Curiosity, which the studio grandiosely promises will be merely the first of 22 similar "experiments." Somehow, it is wildly popular. The biggest challenge facing players of Curiosity isn't how to blast through the 2,000 layers of the cube, but rather successfully connecting to 22Cans' servers. So many players are attempting to log in that the server cannot handle it. Some players go for utter efficiency, tapping rapidly to rack up combo multipliers and get more
  •  
    why are video games so different from collecting stamps or spotting birds or planes? One could say they are all just hobbies.