
Advanced Concepts Team / Group items tagged: generator

Lionel Jacques

Cubesat to run applications - 0 views

  •  
    the ArduSat (Arduino - satellite) will be the first open platform allowing the general public to design and run their own space-based applications, games and experiments, steer the onboard cameras to take pictures on-demand, and even broadcast personalized messages back to Earth.
Athanasia Nikolaou

Nanoparticles Augment Plant Functions - 0 views

  •  
    "What's new: Choi said that although some researchers have used natural photosynthetic units to enhance the light-harvesting abilities of nanomaterials, this is the first time anyone has used nanomaterials to enhance the function of photosynthetic units."
johannessimon81

Nasa-funded study: industrial civilisation headed for 'irreversible collapse'? - 4 views

  •  
    Sounds relevant. Does ESA need to have a position on this question?
  •  
    This was on Slashdot now, with a link to the paper. It's quite an interesting study actually. "The scenarios most closely reflecting the reality of our world today are found in the third group of experiments (see section 5.3), where we introduced economic stratification. Under such conditions, we find that collapse is difficult to avoid."
  •  
    Interesting, but is it new? In general, I would say that history has shown us that it is inevitable that civilisations get replaced by new concepts (much is published about this, read e.g. Fog of War by Jona Lendering on the struggles between civilisations in ancient history, which had remarkably similar issues to today's, yet on a different scale of course). "While some members of society might raise the alarm that the system is moving towards an impending collapse and therefore advocate structural changes to society in order to avoid it, Elites and their supporters, who opposed making these changes, could point to the long sustainable trajectory 'so far' in support of doing nothing." I guess this is bang on: the ones that can change the system do not benefit from doing so, hence enrichment, depletion and short-term gain remain and might even accelerate to compensate for the loss in the rest of the system.
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept, formed out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, an absolute statement such as "all information gets processed" is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you could make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture can you see, how abstract are the concepts and ideas you can form), another is the breadth of the intelligence (how well can you actually generalize, how encompassing those concepts are, and in what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Thijs Versloot

Long-range chemical sensors using new high power continuum lasers - 0 views

  •  
    Short-range chemical analysis methods exist already, but using new high-power lasers one could extend the operating range to, e.g., aircraft distances.
  •  
    Isabelle?
  •  
    The optical setup is very simple and lightweight: a compact semi-conductor DFB laser source and an all optical fiber system for amplification and supercontinuum generation. Interesting for space applications!
Marcus Maertens

Gadget Genius - nanotechnology breakthrough is big deal for electronics : The Universit... - 2 views

  •  
    Quote: "This is exactly what we are pursuing - self-assembling materials that organize at smaller sizes, say, less than 20 or even 10 nanometers"
  •  
    Directed Self-Assembly (DSA) is one of the competitors for next-generation 'lithography', together with direct-write via electron beam and the more traditional extreme UV (EUV) lithography. Although there are huge benefits to using DSA, the technology does have some drawbacks when it comes to line-edge roughness. It seems, however, particularly good for the repetitive structures that are used in memory chips. As long as EUV is struggling to get working, DSA definitely has a fighting chance to enter the market one day.
Thijs Versloot

Power hiking, single footstep powering 600 #LEDS - 1 views

  •  
    nice indeed! " Triggered by commonly available ambient mechanical energy such as human footfalls, a NG with size smaller than a human palm can generate maximum short-circuit current of 2 mA, delivering instantaneous power output of 1.2 W to external load. The power output corresponds to an area power density of 313 W/m2 and a volume power density of 54 268 W/m3 at an open-circuit voltage of 1200 V. An energy conversion efficiency of 14.9% has been achieved. The power was capable of instantaneously lighting up as many as 600 multicolor commercial LED bulbs. The record high power output for the NG is attributed to optimized structure, proper materials selection and nanoscale surface modification. This work demonstrated the practicability of using NG to harvest large-scale mechanical energy, such as footsteps, rolling wheels, wind power, and ocean waves." (a quick consistency check of the quoted figures is sketched below)
  •  
    You should be able to put it also in your shoes such that you may be able to power some gadgets. Thinking about it, I have seen many kids already running around with brightly lit sneakers!
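
A quick back-of-the-envelope check of the figures quoted above, as a minimal Python sketch. The area, volume and thickness computed here are only what the quoted power densities imply; they are not stated in the article.

```python
# Back-of-the-envelope check of the triboelectric nanogenerator (NG) figures
# quoted above. Input numbers are taken from the quote; the implied area,
# volume and thickness are derived here purely for illustration.

V_oc = 1200.0      # open-circuit voltage [V]
I_sc = 2e-3        # maximum short-circuit current [A]
P_inst = 1.2       # reported instantaneous power to the external load [W]
p_area = 313.0     # reported area power density [W/m^2]
p_vol = 54268.0    # reported volume power density [W/m^3]

# Naive upper bound V_oc * I_sc; the reported 1.2 W sits below it.
print("V_oc * I_sc              = %.2f W" % (V_oc * I_sc))
print("reported / (V_oc * I_sc) = %.2f" % (P_inst / (V_oc * I_sc)))

# Device size implied by the power densities -- should be palm-sized.
area = P_inst / p_area          # ~3.8e-3 m^2  (~38 cm^2)
volume = P_inst / p_vol         # ~2.2e-5 m^3  (~22 cm^3)
print("implied area      = %.1f cm^2" % (area * 1e4))
print("implied volume    = %.1f cm^3" % (volume * 1e6))
print("implied thickness = %.1f mm" % (volume / area * 1e3))
```
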
Thijs Versloot

Telescope to track space junk using youth radio station - 0 views

  •  
    Team leader Professor Steven Tingay, Director of the MWA at Curtin University and Chief Investigator in the Australian Research Council Centre for All-sky Astrophysics (CAASTRO) said the MWA will be able to detect the space junk by listening in to the radio signals generated by stations including popular youth network Triple J.
Tom Gheysens

Scientists discover double meaning in genetic code - 4 views

  •  
    Does this have implications for AI algorithms??
  • ...1 more comment...
  •  
    Somehow, the mere fact does not surprise me. I always assumed that the genetic information is encoded in multiple overlapping layers. I do not see how this can be transferred exactly to genetic algorithms, but a good encoding is important for them, and I guess that you could produce interesting effects by "overencoding" of parameters, apart from being more space-efficient.
  •  
    I was actually thinking exactly about this question during my bike ride this morning. I am surprised that some codons would need to have a double meaning though, because there is already a surplus of codons to translate into just 20-22 amino acids (depending on the organism). So there should be about 44 codons left over to prevent translation errors and, in addition, regulate gene expression (the simple codon arithmetic is sketched below). If - as the article suggests - a single codon can take a dual role, does it do so in different situations (needing some other regulator to discern those)? Or does it just perform two functions that always need to happen simultaneously? I tried to learn more from the underlying paper: https://www.sciencemag.org/content/342/6164/1367.full.pdf All I got from that was a headache. :-\
  •  
    Probably both. Likely a consequence of energy preservation during translation: if you can do the same thing with fewer genes, you save on the effort required to reproduce. Also, I suspect it has something to do with modularity. It makes sense that the gene regulating "foot" cells also triggers the genes that generate "toe" cells, for example. No point in having an extra if-statement.
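
The codon arithmetic referred to above, as a minimal Python sketch. The count of 20 amino acids is the standard figure; the "about 44 left" is simply the difference with the 64 possible codons.

```python
# Codon-redundancy arithmetic: 4 bases in triplets give 4**3 = 64 codons,
# while they only need to specify roughly 20-22 amino acids (plus stop
# signals), so most codons are "redundant" and could in principle carry a
# second layer of information.
from itertools import product

bases = "ACGU"
codons = ["".join(c) for c in product(bases, repeat=3)]
n_codons = len(codons)        # 64
n_amino_acids = 20            # 20-22 depending on the organism
print("total codons          :", n_codons)
print("amino acids to encode :", n_amino_acids)
print("surplus codons        :", n_codons - n_amino_acids)   # ~44, as in the comment
```
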
Thijs Versloot

Graphene #nantennas for power transfer and communication between tiny devices - 0 views

  •  
    Known technically as a surface plasmon polariton (SPP) wave, the effect will allow the nano-antennas to operate at the low end of the terahertz frequency range, between 0.1 and 10 terahertz, instead of at 150 terahertz. "With this antenna, we can cut the frequency by two orders of magnitude and cut the power needs by four orders of magnitude," said Jornet. "Using this antenna, we believe the energy-harvesting techniques developed by Dr. Wang would give us enough power to create a communications link between nanomachines." As always, graphene seems to be the answer to everything, but steady progress is being made, although one first needs to find an easy method of generating high-quality graphene layers (btw, that is also one of the reasons to do the supercapacitor study...). The frequency/power scaling implied by the quote is sketched below.
  •  
    Well plasmonics is also the solution to everything it seems...
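
A trivial sketch of the scaling implied by the quoted numbers: cutting the frequency by two orders of magnitude while cutting the power needs by four implies power scaling roughly with the square of the frequency. The specific frequencies below are assumed values chosen within the ranges mentioned in the quote, not a device model.

```python
import math

# The quote says: two orders of magnitude lower frequency gives four orders
# of magnitude lower power needs, i.e. power ~ f^2. This only reproduces
# that implied relation with assumed example frequencies.
f_photonic = 150e12   # conventional nano-antenna operating frequency [Hz]
f_spp = 1.5e12        # SPP-based graphene antenna, low-terahertz range [Hz]

freq_ratio = f_photonic / f_spp     # 100x lower frequency
power_ratio = freq_ratio ** 2       # implied power reduction

print("frequency reduction : %gx (%d orders of magnitude)"
      % (freq_ratio, round(math.log10(freq_ratio))))
print("implied power saving: %gx (%d orders of magnitude)"
      % (power_ratio, round(math.log10(power_ratio))))
```
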
Thijs Versloot

NASA set to debut online software catalog April 10 - 1 views

  •  
    The catalog, a master list organized into 15 categories, is intended for industry, academia, other government agencies, and the general public. It covers technology topics ranging from project management systems, design tools, data handling and image processing to solutions for life support functions, aeronautics, structural analysis, and robotic and autonomous systems. NASA said the codes represent NASA's best solutions to an array of complex mission requirements. McMillan reported that "Within a few weeks of publishing the list, NASA says, it will also offer a searchable database of projects, and then, by next year, it will host the actual software code in its own online repository, a kind of GitHub for astronauts."
Christophe Praz

Can You Slow Down a Day Using Angular Momentum? - 4 views

  •  
    "Could you do this? Could a spinning human slow down the Earth? Theoretically, yes." Let's all put our ice skates on and spin to enjoy a longer daytime !
  •  
    Actually the length of a day fluctuates naturally. Some effects are periodic (e.g. due to seasons) while others accumulate to a general lengthening of the day (like the influence of tides): http://en.wikipedia.org/wiki/Fluctuations_in_the_length_of_day
  •  
    Is it not more efficient to just all start running eastward? We could have a new "Jump Day" frenzy :)
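
The rough conservation-of-angular-momentum estimate referred to above, as a minimal Python sketch. The person is modelled as a 70 kg solid cylinder of 0.2 m radius spinning at 2 revolutions per second; all person-related numbers are assumptions for illustration only.

```python
import math

# Earth
M_E, R_E = 5.97e24, 6.371e6           # mass [kg], mean radius [m]
I_E = 0.33 * M_E * R_E**2             # moment of inertia [kg m^2]
omega_E = 2 * math.pi / 86164.0       # sidereal rotation rate [rad/s]

# One spinning person (assumed: 70 kg cylinder, 0.2 m radius, 2 rev/s)
I_p = 0.5 * 70.0 * 0.2**2
omega_p = 2 * math.pi * 2.0
L_p = I_p * omega_p                   # person's spin angular momentum [kg m^2/s]

# Earth must absorb -L_p to keep the total angular momentum constant
d_omega_E = L_p / I_E
day = 2 * math.pi / omega_E
d_day = day * d_omega_E / omega_E     # change in the length of the day [s]

print("Earth's moment of inertia : %.2e kg m^2" % I_E)
print("change in rotation rate   : %.2e rad/s" % d_omega_E)
print("change in length of day   : %.2e s" % d_day)   # ~1e-28 s: "theoretically, yes"
```
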
Nicholas Lan

Ancient Egyptians transported pyramid stones over wet sand - 1 views

  •  
    A large pile of sand accumulates in front of the sledge when it is pulled over dry sand (left); on wet sand (right) this does not happen.
Thijs Versloot

New Quantum Theory to explain flow of time - 2 views

  •  
    Basically quantum entanglement, or more accurately the dispersal and expansion of mixed quantum states, results in an apparent flow of time. Quantum information leaks out and the result is the move from a pure state (hot coffee) to a mixed state (cooled down) in which equilibrium is reached. Theoretically it is possible to get back to a pure state (the coffee spontaneously heating up), but this statistical unlikelihood gives the appearance of irreversibility and hence a flow of time. I think an interesting question is then: how much useful work can you extract from this system? (http://arxiv.org/abs/1302.2811) For macroscopic thermodynamic systems it should lead to the Carnot cycle (a quick Carnot/exergy estimate for the coffee example is sketched below), but on smaller scales it might be possible to formulate a more general expression. Anybody interested to look into it? Anna, Jo? :)
  •  
    What you propose is called Maxwell's demon: http://en.wikipedia.org/wiki/Maxwell%27s_demon Unfortunately (or maybe fortunately) thermodynamics is VERY robust. I guess if you really only want to harness AND USE the energy in a microscopic system you might have some chance of beating Carnot. But any way of transferring harvested energy to a macroscopic system seems to be limited by it (AFAIK).
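
The macroscopic Carnot/exergy estimate referred to above, as a minimal Python sketch for the coffee-cup example: how much work could at most be extracted as the coffee equilibrates with the room. The cup size and temperatures are assumptions for illustration.

```python
import math

# Assumed coffee cup: 0.25 kg of water at ~90 C cooling to a ~20 C room.
m, c = 0.25, 4186.0            # mass [kg], specific heat of water [J/(kg K)]
T_hot, T_room = 363.0, 293.0   # temperatures [K]

# Total heat released as the coffee cools to room temperature
Q = m * c * (T_hot - T_room)

# Maximum work extractable from a finite hot reservoir cooling to T_room
# (exergy): W = m c [ (T_hot - T_room) - T_room * ln(T_hot / T_room) ]
W_max = m * c * ((T_hot - T_room) - T_room * math.log(T_hot / T_room))

print("heat released        : %.1f kJ" % (Q / 1e3))
print("max extractable work : %.1f kJ (%.0f%% of the heat)"
      % (W_max / 1e3, 100 * W_max / Q))
```
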
Thijs Versloot

Vibration-free cooling systems for sensors - 1 views

  •  
    The system is based on two liquids which are adsorbed together. As the sensor generates heat, the liquids desorb and the pressure builds up; the fluid can then move to an expansion vessel which is held at a cooler temperature, where the liquids adsorb together again. This technique requires no mechanical compression and there is less vibration, leading to less wear and tear on components. It is being developed in a joint collaboration between UTwente and Dutch Space.
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you, F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion of why in the scientific world these open models are having so many difficulties is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of changes. There is also the fact that the barrier of entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things are mixed up together: open-science journals vs traditional journals, the conservatism of the science community compared with programmers (to me one of the reasons for this might be the average age of the two groups, which is probably more than 10 years apart ...), and then emailing vs other collaboration tools .... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame and your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to be, and should be, a team project to which everybody in the team contributes in his/her way, versus how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. But hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... though in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it because of their own personal attitude or purely by chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work, where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris's words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for 15-member team work, and that is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc about the others. Unfortunately(?), this could lead to situations, where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And this is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I do efficient collaborations with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and they might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only brains and not the people worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of the "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that now that it is mathematicians who are doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for the BOINC-type projects, which are very different because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, and whose names go on the paper?). They chose to credit the project, and not the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealised view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of those journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that i should visit you guys at estec... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Ma Ru

Probabilistic fluidic modular construction - 17 views

Looks cool... in simulation. And even there it seems to work terribly slow (note how much they have to speed it up).

Thijs Versloot

China team takes on tech challenge of supercavitation - 1 views

  •  
    "A Soviet supercavitation torpedo called Shkval was able to reach a speed of 370km/h or more - much faster than any other conventional torpedoes," he said. However, The SCMP highlighted two problems in supercavitation technology. First, the submerged vessel needed to be launched at high speeds, approaching 100km/h, to generate and maintain the air bubble. Secondly, it is difficult if not impossible to steer the vessel using conventional mechanisms, which are inside the bubble, without direct contact with water. As a result, its application has been limited to unmanned vessels, fired in a straight line.
  •  
    can't you just selectively inject the gas so that you control in which direction the bubble forms?
Tom Gheysens

Direct brain-to-brain communication demonstrated in human subjects -- ScienceDaily - 2 views

  •  
    In a first-of-its-kind study, an international team of neuroscientists and robotics engineers has demonstrated the viability of direct brain-to-brain communication in humans.
  •  
    Was just about to post it... :) It seems that after transferring the EEG signals of one person, converting them to bits and stimulating some brain activity using transcranial magnetic stimulation (TMS), the receiving person actually sees 'flashes of light' in their peripheral vision. So it's using your sense of vision to get the information across. Would it not be better to try to see if you can generate some kind of signal in the part of the brain that is connected to hearing? Or is that me thinking too naively?
  •  
    "transferring the EEG signals of one person, converting it to bits and stimulating some brain activity using magnetic stimulation (TMS)" How is this "direct"?
Thijs Versloot

Magnetic bubble may give space probes a soft landing - 4 views

  •  
    I have also been looking into this idea for some time, and it seems NASA is already ahead, awarding two contracts to investigate magnetoshell aerocapture. This could be interesting for probes that want to enter, e.g., the Martian atmosphere at relatively high velocity, or for multiple re-entry spacecraft at Earth. The idea of the experiment: the satellite will carry a copper coil, powered by a lithium-ion battery, that generates a magnetic field around the probe. As it descends, the spacecraft will eject a small amount of plasma. This gets trapped in the magnetic field, creating a protective bubble that stops air molecules colliding with the craft and producing heat.
  •  
    A few years back Mimmo has worked on this, rather from the theory side if I remember well ...
  •  
    The power requirements for such a thing must be HUGE!