Advanced Concepts Team - Group items tagged "utilization"

Tom Gheysens

Biomimicr-E: Nature-Inspired Energy Systems | AAAS - 4 views

  •  
    some biomimicry used in energy systems... maybe it sparks some ideas
  •  
    not much new that has not been shared here before ... BUT: we have done relatively little on any of them. For good reasons?? Don't know - maybe time to look into some of these again more closely.
    Energy Efficiency: Termite mounds inspired regulated airflow for temperature control of large structures, preventing wasteful air conditioning and saving 10% energy.[1] Whale fin shapes informed the design of new-age wind turbine blades, with bumps/tubercles reducing drag by 30% and boosting power by 20%.[2][3][4] Stingray motion has motivated studies of this type of low-effort flapping glide, which takes advantage of the leading edge vortex, for new-age underwater robots and submarines.[5][6] Studies of microstructures found on shark skin that decrease drag and prevent the accumulation of algae, barnacles, and mussels on their bodies have led to "anti-biofouling" technologies meant to address the 15% of marine vessel fuel use due to drag.[7][8][9][10]
    Energy Generation: Passive heliotropism exhibited by sunflowers has inspired research on a liquid crystalline elastomer and carbon nanotube system that improves the efficiency of solar panels by 10%, without using GPS and actively repositioning panels to track the sun.[11][12][13] Mimicking the fluid dynamics principles utilized by schools of fish could help to optimize the arrangement of individual wind turbines in wind farms.[14] The nanoscale anti-reflection structures found on certain butterfly wings have led to a model to effectively harness solar energy.[15][16][17]
    Energy Storage: Inspired by the sunlight-to-energy conversion in plants, researchers are utilizing a protein in spinach to create a sort of photovoltaic cell that generates hydrogen from water (i.e. a hydrogen fuel cell).[18][19] Utilizing a property of genetically engineered viruses, specifically their ability to recognize and bind to certain materials (carbon nanotubes in this case), researchers have developed virus-based "scaffolds" that
Alexander Wittig

Proof of the Riemann Hypothesis utilizing the theory of Alternative Facts - 0 views

  •  
    An excellent science coffee topic! This is a true breakthrough in pure mathematics with plentiful applications in the lesser sciences (such as theoretical physics). People tell me quantum gravity is already practically solved by this. Conway's powerful theory of Alternative Facts can render many difficult problems tractable. Here we demonstrate the power of AF to prove the Riemann Hypothesis, one of the most important unsolved problems in mathematics. We further suggest applications of AF to other challenging unsolved problems such as the zero-equals-one conjecture (which is also true) and the side-counting problem of the circle.
LeopoldS

Global Innovation Commons - 4 views

  •  
    nice initiative!
  • ...6 more comments...
  •  
    Any viral licence is a bad license...
  •  
    I'm pretty confident I'm about to open a can of worms, but mind explaining why? :)
  •  
    I am less worried about the can of worms ... actually eager to open it ... so why????
  •  
    Well, the topic GPL vs other open-source licenses (e.g., BSD, MIT, etc.) is as old as the internet and it has provided material for long and glorious flame wars. The executive summary is that the GPL license (the one used by Linux) is a license which imposes some restrictions on the way you are allowed to (re)use the code. Specifically, if you re-use or modify GPL code and re-distribute it, you are required to make it available again under the GPL license. It is called "viral" because once you use a bit of GPL code, you are required to make the whole application GPL - so in this sense GPL code replicates like a virus.
    On the other side of the spectrum, there are the so-called BSD-like licenses which have more relaxed requirements. Usually, the only obligation they impose is to acknowledge somewhere (e.g., in a README file) that you have used some BSD code and who wrote it (this is called an "attribution clause"), but they do not require you to re-distribute the whole application under the same license.
    GPL critics usually claim that the license is not really "free" because it does not allow you to do whatever you want with the code without restrictions. GPL proponents claim that the requirements imposed by the GPL are necessary to safeguard the freedom of the code, in order to prevent people from re-using GPL code without giving anything back to the community (which the BSD license allows: early versions of Microsoft Windows, for instance, had the networking code basically copy-pasted from BSD-licensed versions of Unix).
    In my opinion (and this point is often brought up in the debates) the division pro/against GPL somehow mirrors the division between anti/pro anarchism. Anarchists claim that the only way to be really free is the absence of laws, while non-anarchists maintain that the only practical way to be free is to have laws (which by definition limit certain freedoms). So you can see how the topic can quickly become inflammatory :) GPL at the current time is used by aro
  •  
    whoa, the comment got cut off. Anyway, I was just saying that at the present time the GPL license is used by around 65% of open source projects, including the Linux kernel, KDE, Samba, GCC, all the GNU utils, etc. The topic is much deeper than this brief summary, so if you are interested in it, Leopold, we can discuss it at length in another place.
  •  
    Thanks for the record-long comment - am sure that this is the longest ever made to an ACT Diigo post! On the topic, I would rather lean toward the GPL license (which I also advocated for the Marek viewer programme we put on SourceForge btw), mainly because I don't trust that open source by nature delivers a better product and thus will prevail, but I would still like it to succeed, which I am not sure it would if there were mainly BSD-like licenses around. ... but clearly, this is an outsider talking :-)
  •  
    btw: did not know the anarchist penchant of Marek :-)
  •  
    Well, not going into the discussion about GPL/BSD, the viral license in this particular case in my view simply undermines the "clean and clear" motivations of the initiative's authors - why should *they* be credited for using something they have no rights to? And I don't like viral licences because they prevent people who want to release their own work under a different licence from using anything released under the viral one, thus limiting the usefulness of the stuff released under that licence :) BSD is not a perfect license either, it also has major flaws. And I'm not an anarchist, lol
Aurelie Heritier

'Sixth sense' really exists, scientists say - 1 views

  •  
    New research reveals that humans utilize a part of the brain that is organized topographically to determine, for example, the number of jelly beans in a bowl or the number of cookies in a jar.
Thijs Versloot

Nanophononic metamaterials to boost thermoelectric performance - 0 views

  •  
    Thermoelectric materials can see their performance radically improved via the utilization of an array of 'nanoscale pillars,' according to new research from the University of Colorado Boulder. These tiny pillars, built directly onto the thermoelectric material, will reduce the heat flow through the material by a factor of two while not affecting the electrical flow.
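A quick back-of-the-envelope on why halving heat flow matters: thermoelectric performance is usually summarized by the dimensionless figure of merit ZT = S²σT/κ, so cutting the thermal conductivity κ in half while leaving the electrical properties untouched roughly doubles ZT. The material numbers below are generic placeholders, not values from the Boulder study.

```python
# ZT = S^2 * sigma * T / kappa; halving kappa at fixed S and sigma doubles ZT.
# All material values are hypothetical placeholders for illustration only.

def zt(seebeck_v_per_k, sigma_s_per_m, kappa_w_per_mk, temperature_k):
    """Dimensionless thermoelectric figure of merit."""
    return seebeck_v_per_k**2 * sigma_s_per_m * temperature_k / kappa_w_per_mk

S = 200e-6      # Seebeck coefficient, V/K (typical order for Bi2Te3-like materials)
sigma = 1.0e5   # electrical conductivity, S/m
kappa = 1.5     # thermal conductivity, W/(m K)
T = 300.0       # temperature, K

baseline = zt(S, sigma, kappa, T)
with_pillars = zt(S, sigma, kappa / 2.0, T)   # heat flow reduced by a factor of two
print(f"ZT baseline:     {baseline:.2f}")
print(f"ZT with pillars: {with_pillars:.2f}  (ratio {with_pillars / baseline:.1f}x)")
```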
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, the rest doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. These are reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as any other alternative assumption you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and the level of detail in which you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do is for a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to a random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
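Going back to the Beer-style "behavioural categorisation" agents mentioned earlier in this thread: below is a minimal toy sketch of the idea. In Beer's experiments the controller is an evolved continuous-time neural network; here it is a single hand-tuned sensorimotor mapping, so this is only an illustration of the setup, not a reproduction of those results. The point it shows is that the agent never computes an explicit "circle/diamond" label, yet it reliably ends up underneath narrow objects and away from wide ones - the "recognition" is visible only in its behaviour.

```python
# Toy catch/avoid agent: a falling object is sensed through crude horizontal
# "ray sensors"; a single continuous mapping from rays to motion makes the
# agent approach narrow objects and retreat from wide ones, with no explicit
# classification step anywhere in the loop.
import random

RAY_OFFSETS = [-9, -6, -3, 0, 3, 6, 9]   # sensor positions relative to the agent

def run_trial(object_width, steps=200, fall_speed=0.5):
    agent_x = 50.0
    obj_x, obj_y = random.uniform(20.0, 80.0), 100.0
    for _ in range(steps):
        obj_y -= fall_speed
        rel = obj_x - agent_x
        n_hit = sum(1 for d in RAY_OFFSETS if abs(rel - d) < object_width / 2.0)
        gain = 2.0 - n_hit                       # few rays hit -> approach, many -> retreat
        agent_x += gain * max(-1.0, min(1.0, rel / 10.0))
        if obj_y <= 0.0:
            break
    return abs(obj_x - agent_x)                  # behavioural "answer": final separation

random.seed(0)
circles  = [run_trial(object_width=3.0)  for _ in range(50)]
diamonds = [run_trial(object_width=10.0) for _ in range(50)]
print("mean final distance, circles :", round(sum(circles) / len(circles), 1))
print("mean final distance, diamonds:", round(sum(diamonds) / len(diamonds), 1))
```

The two mean distances come out clearly separated (near zero for circles, around ten units for diamonds), which is the sense in which "recognition" here is a behavioral property rather than an internal representation.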
Beniamino Abis

Structure and Anonymity of the Bitcoin Transaction Graph - 1 views

shared by Beniamino Abis on 26 Sep 13
  •  
    Bitcoin utilizes a peer-to-peer network to issue anonymous payment transactions between different users. Dynamical effects have been found, some of which increase anonymity while others decrease it. Most importantly, several parameters of the Bitcoin transaction graph seem to have become stationary over the last 12-18 months. The implications are discussed.
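A sketch of the kind of measurement behind the "parameters become stationary" claim: build a directed payment graph per time window and track a few summary statistics across windows. The snapshot below is a random stand-in, not real blockchain data; parsing actual blocks into (sender, receiver) edges is left out.

```python
# Track per-window graph statistics; stationarity would show up as these
# numbers settling to roughly constant values across consecutive windows.
import random
import networkx as nx

def monthly_snapshot(n_addresses=2000, n_payments=6000, seed=0):
    """Stand-in for one month of parsed transactions: a random directed graph."""
    rng = random.Random(seed)
    g = nx.DiGraph()
    for _ in range(n_payments):
        sender, receiver = rng.randrange(n_addresses), rng.randrange(n_addresses)
        if sender != receiver:
            g.add_edge(sender, receiver)
    return g

def summary(g):
    out_degrees = [d for _, d in g.out_degree()]
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "mean_out_degree": round(sum(out_degrees) / len(out_degrees), 2),
        "clustering": round(nx.average_clustering(g.to_undirected()), 4),
    }

for month in range(3):
    print(month, summary(monthly_snapshot(seed=month)))
```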
Thijs Versloot

Communicate through the plasma sheath during re-entry - 1 views

  •  
    In order to overcome the communication blackout problem suffered by hypersonic vehicles, a matching approach has been proposed for the first time in this paper. It utilizes a double-positive (DPS) material layer surrounding a hypersonic vehicle antenna to match with the plasma sheath enclosing the vehicle. Or in simpler terms: basically one provides an antenna layer acting as a capacitor which, in combination with the plasma sheath (an inductor), forms an electrical circuit that becomes transparent to long-wavelength radiation (the communication signal). The reason is that fluctuations are balanced by the paired system, preventing absorption/reflection of the incoming radiation. Elegant solution, but it will only work for long-wavelength communication, plus I am not sure whether the antenna needs active control (as the plasma sheath conditions change during the re-entry phase).
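Some very rough numbers behind the blackout picture described above. The electron density and link frequency are assumed re-entry-like values, and the lumped LC resonance at the end is only the schematic "capacitor plus inductor" intuition from the comment, not the actual matching design of the paper.

```python
# Blackout occurs when the link frequency is below the sheath's plasma
# frequency; the lumped-element intuition is that a capacitive (DPS) layer in
# series with the inductive sheath is transparent near the LC resonance.
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
QE   = 1.602e-19   # electron charge, C
ME   = 9.109e-31   # electron mass, kg

def plasma_frequency_hz(n_e_per_m3):
    """Electron plasma frequency: f_p = (1/2pi) * sqrt(n_e e^2 / (eps0 m_e))."""
    return math.sqrt(n_e_per_m3 * QE**2 / (EPS0 * ME)) / (2.0 * math.pi)

n_e = 1.0e18       # assumed sheath electron density, m^-3
f_link = 2.3e9     # assumed S-band telemetry link, Hz

f_p = plasma_frequency_hz(n_e)
print(f"plasma frequency: {f_p / 1e9:.1f} GHz")
print(f"link frequency  : {f_link / 1e9:.1f} GHz ->",
      "blackout (f < f_p)" if f_link < f_p else "propagates")

L, C = 5e-9, 1e-12  # hypothetical effective inductance (H) and capacitance (F)
print(f"LC resonance    : {1.0 / (2.0 * math.pi * math.sqrt(L * C)) / 1e9:.1f} GHz")
```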
Joris _

SPACE.com -- Bigelow Aerospace Soars with Private Space Station Deals - 0 views

  • A private space company offering room on inflatable space habitats for research has found a robust international market
  • A question that continues to float through the halls of NASA and the Congress: Is there a commercial market for utilizing space?
Giusi Schiavone

cost-utility analysis of abolishing the law of gravity - 3 views

  •  
    Crazy and funny
ESA ACT

Utilization of Photon Orbital Angular Momentum in the Low-Frequency Radio Domain - 0 views

  •  
    We show numerically that vector antenna arrays can generate radio beams that exhibit spin and orbital angular momentum characteristics similar to those of helical Laguerre-Gauss laser beams in paraxial optics. For low frequencies (<~1 GHz), digital techni
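The standard recipe behind abstracts like this one is to feed the N elements of a circular array with a phase that advances by 2πl/N from element to element, so the radiated field carries orbital angular momentum of topological charge l. The sketch below only generates that geometric phase assignment; it is not the full vector-antenna simulation from the paper, and the array size and radius are arbitrary.

```python
# Positions and complex feed coefficients for a circular array radiating an
# OAM beam of topological charge l: the feed phase of element k is l * phi_k.
import cmath
import math

def element_feeds(n_elements, l, radius_m):
    feeds = []
    for k in range(n_elements):
        phi = 2.0 * math.pi * k / n_elements            # element azimuth
        position = (radius_m * math.cos(phi), radius_m * math.sin(phi))
        feeds.append((position, cmath.exp(1j * l * phi)))  # phase advances by l*phi
    return feeds

for (x, y), w in element_feeds(n_elements=8, l=1, radius_m=0.5):
    print(f"element at ({x:+.2f}, {y:+.2f}) m -> feed phase {cmath.phase(w):+.2f} rad")
```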
dejanpetkow

Torsional Carbon Nanotube Artificial Muscles - 0 views

  • Actuator materials producing rotation are rare and demonstrated rotations are small, though rotary systems like electric motors, pumps, turbines and compressors are widely needed and utilized. Present motors can be rather complex and, therefore, difficult to miniaturize. We show that a short electrolyte-filled twist spun carbon nanotube yarn, which is much thinner than a human hair, functions as a torsional artificial muscle in a simple three-electrode electrochemical system, providing a reversible 15,000° rotation and 590 revolutions/minute. A hydrostatic actuation mechanism, like for nature’s muscular hydrostats, explains the simultaneous occurrence of lengthwise contraction and torsional rotation during the yarn volume increase caused by electrochemical double-layer charge injection. Use of a torsional yarn muscle as a mixer for a fluidic chip is demonstrated.
  •  
    I have no access to the PDF, but the abstract sounds interesting.
Thijs Versloot

Charging batteries with latent heat #MIT - 0 views

  •  
    These features lead to a high heat-to-electricity energy conversion efficiency of 5.7% when cycled between 10 and 60 °C, opening a promising way to utilize low-grade heat.
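For context, a quick sanity check on the 5.7% number against the thermodynamic ceiling: the Carnot limit for a cycle between 10 °C and 60 °C is 1 - T_cold/T_hot in kelvin, so the reported device reaches a sizeable fraction of the theoretical maximum for such low-grade heat.

```python
# Carnot limit for a 10-60 degC cycle versus the reported 5.7% efficiency.
T_COLD = 10.0 + 273.15   # K
T_HOT  = 60.0 + 273.15   # K

carnot = 1.0 - T_COLD / T_HOT
reported = 0.057
print(f"Carnot limit (10-60 degC): {carnot:.1%}")
print(f"Reported efficiency      : {reported:.1%} ({reported / carnot:.0%} of Carnot)")
```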
Thijs Versloot

Is Westeros orbiting a binary star system? #ArXiv - 5 views

shared by Thijs Versloot on 21 May 14
Nicholas Lan liked it
  •  
    To right that appalling wrong, here we attempt to explain the apparently erratic seasonal changes in the world of G.R.R.M. A natural explanation for such phenomena is the unique behavior of a circumbinary planet. Thus, by speculating that the planet under scrutiny is orbiting a pair of stars, we utilize the power of numerical three-body dynamics to predict that, unfortunately, it is not possible to predict either the length, or the severity of any coming winter.
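Tongue-in-cheek but runnable: a minimal planar three-body integration of a planet around a stellar binary, in the spirit of the paper's "numerical three-body dynamics". The masses, the 0.2 AU binary separation, and the 1 AU planet orbit are made-up round numbers, not the parameters used by the authors.

```python
# Integrate two stars plus a light planet under Newtonian gravity and report
# how far the planet wanders from the binary's barycentre over 20 years.
import numpy as np
from scipy.integrate import solve_ivp

G = 4.0 * np.pi**2                  # gravitational constant in AU^3 / (Msun yr^2)
M = np.array([0.6, 0.4, 3.0e-6])    # two stars and an Earth-mass planet, in Msun

def rhs(t, s):
    pos, vel = s[:6].reshape(3, 2), s[6:].reshape(3, 2)
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = pos[j] - pos[i]
                acc[i] += G * M[j] * d / np.linalg.norm(d) ** 3
    return np.concatenate([vel.ravel(), acc.ravel()])

# Binary on a 0.2 AU circular orbit about its centre of mass; planet at 1 AU.
a_bin, m_tot = 0.2, M[0] + M[1]
v_rel = np.sqrt(G * m_tot / a_bin)
pos0 = np.array([[-a_bin * M[1] / m_tot, 0.0],
                 [ a_bin * M[0] / m_tot, 0.0],
                 [ 1.0,                  0.0]])
vel0 = np.array([[0.0, -v_rel * M[1] / m_tot],
                 [0.0,  v_rel * M[0] / m_tot],
                 [0.0,  np.sqrt(G * m_tot / 1.0)]])

sol = solve_ivp(rhs, (0.0, 20.0), np.concatenate([pos0.ravel(), vel0.ravel()]),
                rtol=1e-9, atol=1e-12)
bary = (M[0] * sol.y[0:2, :] + M[1] * sol.y[2:4, :]) / m_tot   # stellar barycentre
r_planet = np.linalg.norm(sol.y[4:6, :] - bary, axis=0)
print(f"planet distance from the binary barycentre over 20 yr: "
      f"min {r_planet.min():.2f} AU, max {r_planet.max():.2f} AU")
```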
Thijs Versloot

Resource availability towards a self-sufficient Mars Colony - 0 views

  •  
    Regarding our discussion on resource self-sufficiency of a Mars colony: would it ever be possible (from a resource perspective, that is)? A NASA report on availability of resources. A self-sufficiency trade study described in Boston (1996) identifies the mission duration at which the development of local life support resources becomes advantageous. Within 30 days, without recycling, or with the equivalent leakage, it becomes advantageous to derive oxygen from local resources. The time constants for water and food are about 6 months and 3 years, respectively.
  •  
    I guess it depends on the number of astronauts that have to be supported ... 3 years for food looks like a lot
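A rough reconstruction of the break-even logic in the trade study quoted above: shipping a consumable costs launch mass proportional to (daily use × crew × mission days), while producing it locally costs a roughly fixed plant mass, so local production wins once the mission is long enough. The plant masses below are hypothetical placeholders chosen to land near the quoted time constants, not the Boston (1996) numbers; the per-person consumption rates are generic life-support figures.

```python
# Break-even duration = plant mass / (per-person daily use * crew size).
CREW = 6

# consumable: (per-person daily use in kg, assumed local-production plant mass in kg)
CONSUMABLES = {
    "oxygen": (0.84, 150.0),
    "water":  (3.5, 4000.0),
    "food":   (1.8, 12000.0),
}

for name, (kg_per_person_day, plant_mass_kg) in CONSUMABLES.items():
    breakeven_days = plant_mass_kg / (kg_per_person_day * CREW)
    print(f"{name:6s}: local production pays off after ~{breakeven_days:,.0f} days")
```

With these placeholder numbers the break-even points come out at roughly 30 days, 6 months, and 3 years, matching the time constants cited in the comment, and they scale inversely with crew size, which is exactly the dependence the reply points at.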
LeopoldS

energy utilities paying attorneys to help prevent regulation - 2 views

  •  
    where real power sits ...
Nina Nadine Ridder

Material could harvest sunlight by day, release heat on demand hours or days later - 5 views

  •  
    Imagine if your clothing could, on demand, release just enough heat to keep you warm and cozy, allowing you to dial back on your thermostat settings and stay comfortable in a cooler room. Or, picture a car windshield that stores the sun's energy and then releases it as a burst of heat to melt away a layer of ice.
  •  
    interesting indeed: Such chemically-based storage materials, known as solar thermal fuels (STF), have been developed before, including in previous work by Grossman and his team. But those earlier efforts "had limited utility in solid-state applications" because they were designed to be used in liquid solutions and not capable of making durable solid-state films, Zhitomirsky says. The new approach is the first based on a solid-state material, in this case a polymer, and the first based on inexpensive materials and widespread manufacturing technology. Read more at: http://phys.org/news/2016-01-material-harvest-sunlight-day-demand.html#jCp
jcunha

Exploring gambles reveals foundational difficulty behind economic theory - 3 views

  •  
    In the wake of the financial crisis, many started questioning different aspects of the economic formalism. Here a mathematical "dynamic" alternative to economic utility theory (which has also been the target of some famous recent attacks, see prospect theory) is developed and applied to the St. Petersburg coin tossing paradox. A good read at http://scitation.aip.org/content/aip/journal/chaos/26/2/10.1063/1.4940236
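The standard ensemble-versus-time-average illustration behind this "dynamic" approach (not the paper's own St. Petersburg treatment): a gamble that gains 50% on heads and loses 40% on tails has a positive expected value per round, yet almost every individual who keeps playing goes broke, because the time-average growth factor sqrt(1.5 × 0.6) is below 1.

```python
# Compare the ensemble (expected-value) view with the time-average (typical
# player) view for a multiplicative coin-toss gamble.
import random
import statistics

def play(rounds, rng):
    wealth = 1.0
    for _ in range(rounds):
        wealth *= 1.5 if rng.random() < 0.5 else 0.6
    return wealth

rng = random.Random(0)
outcomes = [play(rounds=100, rng=rng) for _ in range(10_000)]

print(f"expected value per round  : {0.5 * 1.5 + 0.5 * 0.6:.3f}  (> 1, looks attractive)")
print(f"time-average growth factor: {(1.5 * 0.6) ** 0.5:.3f}  (< 1, typical player loses)")
print(f"ensemble mean after 100 rounds : {statistics.mean(outcomes):10.2f}")
print(f"median player after 100 rounds : {statistics.median(outcomes):10.6f}")
```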