
Advanced Concepts Team: Group items tagged Sensing


ESA ACT

Biomimetic tactile sensing - 0 views

shared by ESA ACT on 24 Apr 09
  •  
    The vibrissae of rats (and other animals) are examined and their sensing principles transferred to robotic devices for orientation and tactile exploration.
ESA ACT

Robotic Bugs -- Robot That Senses Its Way With Flexible Antenna - 0 views

  •  
    Tobias - sounds somehow relevant... cockroaches, navigation, swarm intelligence...
ESA ACT

TED: MIT Students Turn Internet Into a Sixth Human Sense -- Video | Epicenter from Wire... - 0 views

  •  
    amazing
ESA ACT

Voltage sensing membrane proteins. - 0 views

  •  
    Molecules that monitor voltage - of any use?
ESA ACT

Extension of Human Senses - A NASA Division! - 0 views

  •  
    Neuro-Engineering, Computational Sciences Division, NASA Ames Research Center
LeopoldS

Global Innovation Commons - 4 views

  •  
    nice initiative!
  •  
    Any viral licence is a bad license...
  •  
    I'm pretty confident I'm about to open a can of worms, but mind explaining why? :)
  •  
    I am less worried about the can of worms ... actually eager to open it ... so why????
  •  
    Well, the topic of GPL vs other open-source licenses (e.g., BSD, MIT, etc.) is as old as the internet and has provided material for long and glorious flame wars. The executive summary is that the GPL license (the one used by Linux) is a license which imposes some restrictions on the way you are allowed to (re)use the code. Specifically, if you re-use or modify GPL code and re-distribute it, you are required to make it available again under the GPL license. It is called "viral" because once you use a bit of GPL code, you are required to make the whole application GPL - so in this sense GPL code replicates like a virus. On the other side of the spectrum, there are the so-called BSD-like licenses, which have more relaxed requirements. Usually, the only obligation they impose is to acknowledge somewhere (e.g., in a README file) that you have used some BSD code and who wrote it (this is called an "attribution clause"), but they do not require you to re-distribute the whole application under the same license. GPL critics usually claim that the license is not really "free" because it does not allow you to do whatever you want with the code without restrictions. GPL proponents claim that the requirements imposed by the GPL are necessary to safeguard the freedom of the code, in order to avoid people re-using GPL code without giving anything back to the community (which the BSD license allows: early versions of Microsoft Windows, for instance, had the networking code basically copy-pasted from BSD-licensed versions of Unix). In my opinion (and this point is often brought up in the debates) the division pro/against GPL somehow mirrors the division between anti/pro anarchism. Anarchists claim that the only way to be really free is the absence of laws, while non-anarchists maintain that the only practical way to be free is to have laws (which by definition limit certain freedoms). So you can see how the topic can quickly become inflammatory :) GPL at the current time is used by aro
  •  
    whoa, the comment got cut off. Anyway, I was just saying that at the present time the GPL license is used by around 65% of open source projects, including the Linux kernel, KDE, Samba, GCC, all the GNU utils, etc. The topic is much deeper than this brief summary, so if you are interested in it, Leopold, we can discuss it at length in another place.
  •  
    Thanks for the record-long comment - I am sure this is the longest ever made on an ACT Diigo post! On the topic, I would rather lean towards the GPL license (which I also advocated for the Marek viewer programme we put on SourceForge, btw), mainly because I don't trust that open source by nature delivers a better product and thus will prevail, but I would still like it to succeed, which I am not sure it would if there were mainly BSD-like licenses around. ... but clearly, this is an outsider talking :-)
  •  
    btw: did not know the anarchist penchant of Marek :-)
  •  
    Well, without going into the general GPL/BSD discussion, the viral license in this particular case in my view simply undermines the "clean and clear" motivations of the initiative's authors - why should *they* be credited for using something they have no rights to? And I don't like viral licences because they prevent everyone who wants to release their own work under a different licence from using things released under the viral one, thus limiting the usefulness of the stuff released under that licence :) BSD is not a perfect license either, it also has major flaws. And I'm not an anarchist, lol
Christophe Praz

Scientific method: Defend the integrity of physics - 2 views

  •  
    Interesting article about theories in theoretical physics vs. experimental verification. Can we state that a theory can be so good that its existence supplants the need for data and testing? If a theory is proved to be untestable experimentally, can we still say that it is a scientific theory? (not in my opinion)
  •  
    There is an interesting view from Feynman that it does not make sense to describe something whose consequences we cannot measure. So a theory that is so removed from experiment that it cannot be backed by it is pointless and of no consequence. It is a bit like the question "if a tree falls in the forest and nobody is there to hear it, does it make a sound?". We would typically extrapolate to say that it does make a sound. But actually nobody knows - you would have to take some kind of measurement. But even more fundamentally, it does not make any difference! For all intents and purposes there is no point in forcing a prediction that you cannot measure and that therefore need not reflect any event in your world.
  •  
    "Mathematics is the model of the universe, not the other way round" - M. R.
jcunha

Wireless 10 kW power transmission - 1 views

  •  
    Mitsubishi Heavy Industries said Friday that it has succeeded in transmitting 10 kW of power over 500 m. An announcement that comes just after JAXA scientists reported one more breakthrough in the quest for Space Solar Power Systems (http://phys.org/news/2015-03-japan-space-scientists-wireless-energy.html). One step closer to power generation from space!
  •  
    From the press release (https://www.mhi-global.com/news/story/1503121879.html): "10 kilowatts (kW) of power was sent from a transmitting unit by microwave. The reception of power was confirmed at a receiver unit located at a distance of 500 meters (m) away by the illumination of LED lights, using part of power transmitted". So 10 kW of transmission to light a few efficient LED lights??? In a 2011 report (https://www.mhi-global.com/company/technology/review/pdf/e484/e484017.pdf), MHI estimated this would generate the same electricity output as a 400-megawatt thermal plant - or enough to serve more than 150,000 homes during peak hours. The price? The same as publicly supplied power, according to its calculations. There are no results to back these claims, however. The main work they do now is focused on beam steering control. I guess the real application in mind is more targeted to terrestrial uses, e.g. wireless highway charging (http://www.bbc.com/future/story/20120312-wireless-highway-to-charge-cars). With the distances so much shorter, leading to much smaller antennas and rectennas, this makes much more sense to me to develop.
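A rough back-of-the-envelope sketch of why distance dominates the antenna-sizing argument above: for efficient microwave power beaming, the far-field Friis relation ties the product of transmitter and receiver apertures to (wavelength x distance) squared. The 5.8 GHz frequency and 90% target efficiency below are illustrative assumptions only, not MHI's or JAXA's actual link-budget numbers.

```python
# Rough, idealized illustration (not MHI's actual link budget): how the
# aperture product needed for efficient microwave power beaming scales
# with distance. Uses the far-field Friis relation
#   P_r / P_t = A_t * A_r / (lambda^2 * d^2)
# which is only indicative near unity efficiency, but shows the d^2 scaling.

FREQ_HZ = 5.8e9                      # assumed ISM-band beaming frequency
WAVELENGTH_M = 3e8 / FREQ_HZ         # ~5.2 cm

def aperture_product_for_efficiency(distance_m: float, efficiency: float = 0.9) -> float:
    """Required product A_t * A_r (in m^4) for a target link efficiency."""
    return efficiency * (WAVELENGTH_M * distance_m) ** 2

for label, d in [("highway charging (~10 m)", 10.0),
                 ("MHI ground demo (500 m)", 500.0),
                 ("GEO to ground (36,000 km)", 3.6e7)]:
    prod = aperture_product_for_efficiency(d)
    side = prod ** 0.25              # side length if both apertures were equal squares
    print(f"{label}: A_t*A_r = {prod:.3g} m^4 (equal square apertures ~{side:.1f} m per side)")
```

At 500 m this estimate gives apertures of a few metres on each side, while a GEO-to-ground link pushes the same estimate to kilometre scale, which is the distance scaling the comment above alludes to.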
benjaminroussel

Magnet Finge.rs - 2 views

  •  
    Bio-hacking/cyborg: implanting rare-earth magnets into your fingers to sense magnetic fields. At the same time quite awesome and quite extreme.
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission funded the project, Blue Brain is a rather big hit. Btw Nicolelis is a rather credited neuroscientist.
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not working on how to go beyond Turing machines, but rather on how to program them well enough ... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most: I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is, the article is just too short and at a too popular level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in a finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain; it will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor (see the numerical sketch after this thread). In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine as long as any brain gets started :) that is the goal :)
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system would still work? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
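A minimal numerical sketch of the sensitivity-to-initial-conditions point raised in the thread above (a generic chaos illustration using the classic Lorenz system, not a model of any brain): two trajectories that start one part in a billion apart become macroscopically different within a few dozen time units, which is why solving the initial value problem exactly matters so much for prediction.

```python
# Two Lorenz trajectories with initial conditions differing by 1e-9 diverge
# until their separation saturates at the size of the attractor.

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def step_rk4(state, dt):
    """One fourth-order Runge-Kutta step of the Lorenz system."""
    k1 = lorenz(state)
    k2 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = lorenz([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = lorenz([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

a = [1.0, 1.0, 1.0]
b = [1.0, 1.0, 1.0 + 1e-9]          # perturbed copy, one part in a billion off
dt, steps = 0.01, 5000               # integrate for 50 time units
for i in range(1, steps + 1):
    a, b = step_rk4(a, dt), step_rk4(b, dt)
    if i % 1000 == 0:
        dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {i * dt:5.1f}  separation = {dist:.3e}")
```

The printed separation grows roughly exponentially before saturating at the size of the attractor, which is the "lost in randomness after a short simulation time" behaviour described in the comments above.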
Nicholas Lan

The Future… One Hundred Years Ago - 13 views

  •  
    one of these again. french illustrations from 1910 of life in the year 2000. some pleasingly close. a lot of flying and robots. some inexplicable (bunch of people staring at a horse). some bmi.
  •  
    I like them again and again ....
  •  
    what would be today's equivalents?
  •  
    Ha! The one about the horse is that "in 100 years there will be people who've never seen a live horse in their lives" :-) Actually it's more than true now, with children asking my mother, who works in a school, "so, do those kangaroos really exist?" Children are fed so much realistic BS on TV (dinosaur parks etc.) that they can hardly tell the difference between fiction and reality. If you already have offspring: have they seen, say, a live cow or chicken yet? (This is most probably a reference to the quote: "Horse is as everyone can see")
  •  
    >what would be today's equivalents? Hmmm... what about technology forecasts?
  •  
    ah. that makes sense. what about the one where they're having dinner then?
  •  
    No idea... another one I don't get is the one with the waiter presenting some small black-and-white thing to the white-haired guy in a chair.
  •  
    love the clockwork orange one
jaihobah

Google's AI Wizard Unveils a New Twist on Neural Networks - 2 views

  •  
    "Hinton's new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton's capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits." Links to papers: https://arxiv.org/abs/1710.09829 https://openreview.net/forum?id=HJWLfGWRb&noteId=HJWLfGWRb
  •  
    impressive!
  •  
    Seems a very impressive guy: "Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. "Everyone has been waiting for it and looking for the next great leap from Geoff," says Kyunghyun Cho, a professor"
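For the curious, a minimal sketch of the "squashing" nonlinearity described in the first linked paper (Sabour, Frosst and Hinton 2017, arXiv:1710.09829): each capsule outputs a vector whose orientation encodes pose-like properties and whose length, squashed into [0, 1), encodes the probability that the entity it detects is present. The dynamic routing between capsules is omitted here, and the array shapes are illustrative only.

```python
import numpy as np

def squash(s: np.ndarray, axis: int = -1, eps: float = 1e-8) -> np.ndarray:
    """v = (|s|^2 / (1 + |s|^2)) * (s / |s|), applied along `axis`.

    The small eps term is only for numerical stability near zero-length vectors.
    """
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

# Example: a batch of 3 capsules, each with an 8-dimensional output vector.
capsules = np.random.randn(3, 8)
out = squash(capsules)
print(np.linalg.norm(out, axis=-1))   # all lengths fall strictly below 1
```

Encoding presence as vector length while keeping orientation free for pose is the "inbuilt sense of geometry" the article attributes to Hinton's intuition.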
jaihobah

DARPA Advanced Plant Technologies project - 2 views

  •  
    " The goal of the APT program is to control and direct plant physiology to detect chemical, biological, radiological, and/or nuclear threats, as well as electromagnetic signals. " Now that is an advanced concept...
  •  
    and look at this exceptional insight: "plants are easily deployed, self-powering, and ubiquitous in the environment, and the combination of these native abilities with specifically engineered sense-and-report traits will produce sensors occupying new and unique operational spaces" :-)
darioizzo2

Physics - Locating Objects with Quantum Radar - 1 views

shared by darioizzo2 on 29 May 20
  •  
    Of interest for debris monitoring and SSA? It has been suggested in the Kelvins discussions...
  •  
    This is something that I think would really make sense to look into more closely, also checking what ESA might have already done on it.
eblazquez

Bloomberg - The Only Crypto Story You Need - 0 views

  •  
    We're obviously not crypto-bros in the team, but this lengthy article is nonetheless very much worth a read to try to make sense of all this madness. Written by an expert in economics and finance, it doesn't pull any punches!