
Home/ Advanced Concepts Team/ Group items tagged system


Joris _

Is It Time To Revamp Systems Engineering? | AVIATION WEEK - 1 views

  • They both believe the systems engineering processes that have served the aerospace and defense community since pre-Apollo days are no longer adequate for the large and complex systems ­industry is now developing.
  •  
    1) it has to actively work and produce the result you intended; 2) the design must be robust; 3) it should be efficient; 4) it should minimize unintended consequences. "But we have to establish a formal, mathematically precise mechanism to measure complexity and adaptability . . . [where] adaptability means the system elements have sufficient margin, and can serve multiple purposes." "We need to break the paradigm of long cycles from design to product" Some interesting questions...
  • ...1 more comment...
  •  
    indeed ... already hotly debated in CDF ... any suggestions in addition to what we already contributed to this (e.g. system level optimisation)
  •  
    what is the outcome of the CDF study? I actually think that optimisation is not at all the key point. As this news item stresses, it is robustness (points 2 and 4). This is something we should think about ...
  •  
    SYSTEM OF SYSTEMS, SYSTEM OF SYSTEMS!!! :-D
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded it, the Blue Brain project is a rather big hit. Btw, Nicolelis is a rather credible neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not on how to go beyond Turing machines, but rather on how to program them well enough ... and that's not bringing us anywhere near the singularity
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most is that I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis's point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but it will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to, stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
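The chaos point can be made concrete with a small sketch (illustrative only; the equations are the standard Lorenz system, everything else here is my own choice): two trajectories started from initial conditions differing by one part in a billion quickly become completely uncorrelated.

```python
# Illustrative sketch only (names and parameters are my own, not from the
# thread): sensitivity to initial conditions in the Lorenz system.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One forward-Euler step of the Lorenz equations (crude but sufficient
    # to show exponential divergence of nearby trajectories).
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # same state, perturbed by one part in a billion

for _ in range(3000):  # integrate both trajectories for 30 time units
    a = lorenz_step(a)
    b = lorenz_step(b)

# Euclidean distance between the two trajectories after the integration:
separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)  # many orders of magnitude larger than the initial 1e-9
```

This is why predicting one particular brain would fail, while the statistics of "a" brain might still be recoverable.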
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting the result to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting inputs from our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain." In Artificial Life, the whole environment + bodies & brains is simulated. You also have the whole embodied cognition movement that basically advocates just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (My take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system would actually still work? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
LeopoldS

Peter Higgs: I wouldn't be productive enough for today's academic system | Science | Th... - 1 views

  •  
    what an interesting personality ... very sympathetic. Peter Higgs, the British physicist who gave his name to the Higgs boson, believes no university would employ him in today's academic system because he would not be considered "productive" enough.

    The emeritus professor at Edinburgh University, who says he has never sent an email, browsed the internet or even made a mobile phone call, published fewer than 10 papers after his groundbreaking work, which identified the mechanism by which subatomic material acquires mass, was published in 1964.

    He doubts a similar breakthrough could be achieved in today's academic culture, because of the expectations on academics to collaborate and keep churning out papers. He said: "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964."

    Speaking to the Guardian en route to Stockholm to receive the 2013 Nobel prize for science, Higgs, 84, said he would almost certainly have been sacked had he not been nominated for the Nobel in 1980.

    Edinburgh University's authorities then took the view, he later learned, that he "might get a Nobel prize - and if he doesn't we can always get rid of him".

    Higgs said he became "an embarrassment to the department when they did research assessment exercises". A message would go around the department saying: "Please give a list of your recent publications." Higgs said: "I would send back a statement: 'None.' "

    By the time he retired in 1996, he was uncomfortable with the new academic culture. "After I retired it was quite a long time before I went back to my department. I thought I was well out of it. It wasn't my way of doing things any more. Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough."

    Higgs revealed that his career had also been jeopardised by his disagreements in the 1960s and 7
  •  
    interesting one - Luzi will like it :-)
Tom Gheysens

Biomimicr-E: Nature-Inspired Energy Systems | AAAS - 4 views

  •  
    some biomimicry used in energy systems... maybe it sparks some ideas
  •  
    not much new that has not been shared here before ... BUT: we have done relatively little on any of them. For good reasons?? Don't know - maybe time to look into some of these again more closely.

    Energy Efficiency:
    - Termite mounds inspired regulated airflow for temperature control of large structures, preventing wasteful air conditioning and saving 10% energy.[1]
    - Whale fin shapes informed the design of new-age wind turbine blades, with bumps/tubercles reducing drag by 30% and boosting power by 20%.[2][3][4]
    - Stingray motion has motivated studies on this type of low-effort flapping glide, which takes advantage of the leading edge vortex, for new-age underwater robots and submarines.[5][6]
    - Studies of microstructures found on shark skin that decrease drag and prevent accumulation of algae, barnacles, and mussels have led to "anti-biofouling" technologies meant to address the 15% of marine vessel fuel use due to drag.[7][8][9][10]

    Energy Generation:
    - Passive heliotropism exhibited by sunflowers has inspired research on a liquid crystalline elastomer and carbon nanotube system that improves the efficiency of solar panels by 10%, without using GPS and actively repositioning panels to track the sun.[11][12][13]
    - Mimicking the fluid dynamics principles utilized by schools of fish could help to optimize the arrangement of individual wind turbines in wind farms.[14]
    - The nanoscale anti-reflection structures found on certain butterfly wings have led to a model to effectively harness solar energy.[15][16][17]

    Energy Storage:
    - Inspired by the sunlight-to-energy conversion in plants, researchers are utilizing a protein in spinach to create a sort of photovoltaic cell that generates hydrogen from water (i.e. a hydrogen fuel cell).[18][19]
    - Utilizing a property of genetically-engineered viruses, specifically their ability to recognize and bind to certain materials (carbon nanotubes in this case), researchers have developed virus-based "scaffolds" that
Luís F. Simões

NASA Goddard to Auction off Patents for Automated Software Code Generation - 0 views

  • The technology was originally developed to generate control code for spacecraft swarms, but it is broadly applicable to any commercial application where rule-based systems development is used.
  •  
    This is related to the "Verified Software" item in NewScientist's list of ideas that will change science. At the link below you'll find the text of the patents being auctioned: http://icapoceantomo.com/item-for-sale/exclusive-license-related-improved-methodology-formally-developing-control-systems :) Patent #7,627,538 ("Swarm autonomic agents with self-destruct capability") makes for quite an interesting read: "This invention relates generally to artificial intelligence and, more particularly, to architecture for collective interactions between autonomous entities." "In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy." "In yet another aspect, an autonomous nanotechnology swarm may comprise a plurality of workers composed of self-similar autonomic components that are arranged to perform individual tasks in furtherance of a desired objective." "In still yet another aspect, a process to construct an environment to satisfy increasingly demanding external requirements may include instantiating an embryonic evolvable neural interface and evolving the embryonic evolvable neural interface towards complex complete connectivity." "In some embodiments, NBF 500 also includes genetic algorithms (GA) 504 at each interface between autonomic components. The GAs 504 may modify the intra-ENI 202 to satisfy requirements of the SALs 502 during learning, task execution or impairment of other subsystems."
santecarloni

Quantum Biology and the Puzzle of Coherence - Technology Review - 4 views

  •  
    Quantum processes shouldn't survive in hot, wet biological systems and yet a growing body of evidence suggests they do. Now physicists think they know how
  • ...2 more comments...
  •  
    Tobias, José and myself considered an ACT project in quantum biomimetics, but it never led anywhere. Perhaps the field is sexy enough now...
  •  
    "Considered" is the right word ... you unfortunately never got past the "considering" step :-)
  •  
    Yes, because our bosses forced us to write strategic reports on "system of systems" :-)
  •  
    Oh these terrible ignorant slave masters .... Would love to see your "reports on system of systems" :-)
Luís F. Simões

Lockheed Martin buys first D-Wave quantum computing system - 1 views

  • D-Wave develops computing systems that leverage the physics of quantum mechanics in order to address problems that are hard for traditional methods to solve in a cost-effective amount of time. Examples of such problems include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing and bioinformatics.
  •  
    According to the company's wikipedia page, the computer costs $ 10 million. Can we then declare Quantum Computing has officially arrived?! quotes from elsewhere in the site: "first commercial quantum computing system on the market"; "our current superconducting 128-qubit processor chip is housed inside a cryogenics system within a 10 square meter shielded room" Link to the company's scientific publications. Interestingly, this company seems to have been running a BOINC project, AQUA@home, to "predict the performance of superconducting adiabatic quantum computers on a variety of hard problems arising in fields ranging from materials science to machine learning. AQUA@home uses Internet-connected computers to help design and analyze quantum computing algorithms, using Quantum Monte Carlo techniques". List of papers coming out of it.
jcunha

HBP Neuromorphic Computing Platform Guidebook (WIP) - 0 views

  •  
    "The Neuromorphic Computing Platform allows neuroscientists and engineers to perform experiments with configurable neuromorphic computing systems. The platform provides two complementary, large-scale neuromorphic systems built in custom hardware at locations in Heidelberg, Germany (the "BrainScaleS" system, also known as the "physical model" or PM system) and Manchester, United Kingdom (the "SpiNNaker" system, also known as the "many core" or MC system)."
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late 80s / early 90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. But that data doesn't get lost: basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands - reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you could make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have) and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment and is ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (i.e. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
nikolas smyrlakis

ACM award concerning the Complexity of Interactions in Markets, Social Networks, and On... - 0 views

  •  
    In "The Complexity of Nash Equilibria," Daskalakis suggests that the Nash equilibrium may not be an accurate prediction of behavior in all situations. His research emphasizes the need for new, computationally meaningful methods for modeling strategic behavior in complex systems such as those encountered in financial markets, online systems, and social networks.
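By contrast with the general hardness Daskalakis studies, tiny games are easy: a 2x2 zero-sum game without a pure-strategy saddle point has a closed-form mixed equilibrium. A minimal sketch (the function and example games are my own illustration, not from the award citation):

```python
def mixed_equilibrium_2x2(a11, a12, a21, a22):
    # Row player's equilibrium probability of playing row 1 in a 2x2
    # zero-sum game with row payoffs [[a11, a12], [a21, a22]], assuming
    # no pure-strategy saddle point (so the denominator is nonzero).
    return (a22 - a21) / (a11 - a12 - a21 + a22)

# Matching pennies: the unique equilibrium is to randomise 50/50.
print(mixed_equilibrium_2x2(1, -1, -1, 1))  # 0.5
```

The striking result is that nothing remotely this simple exists for general games: computing a Nash equilibrium there is PPAD-complete.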
jmlloren

Open Journal Systems | Public Knowledge Project - 3 views

shared by jmlloren on 24 Nov 09
pacome delva liked it
  •  
    Open Journal Systems (OJS) is a journal management and publishing system that has been developed by the Public Knowledge Project through its federally funded efforts to expand and improve access to research.
  •  
    seems nice, but it would be a lot of work to implement and we already have something operational... It would add the search module and articles in HTML (what about compatibility with LaTeX...?). For now I think we should focus on the next issue of Acta Futura!
santecarloni

Was a giant planet ejected from our solar system? - physicsworld.com - 0 views

  •  
    A fifth giant planet was kicked out of the early solar system, according to computer simulations by a US-based planetary scientist. The sacrifice of this gas giant paved the way for the stable configuration of planets seen today, says David Nesvorný, who believes that the expulsion prevented Jupiter from migrating inwards and scattering the Earth and its fellow inner planets.
Thijs Versloot

New Quantum Theory to explain flow of time - 2 views

  •  
    Basically quantum entanglement, or more accurately the dispersal and expansion of mixed quantum states, results in an apparent flow of time. Quantum information leaks out and the result is the move from a pure state (hot coffee) to a mixed state (cooled down) in which equilibrium is reached. Theoretically it is possible to get back to a pure state (coffee spontaneously heating up), but this statistical unlikelihood gives the appearance of irreversibility and hence a flow of time. I think an interesting question is then: how much useful work can you extract from this system? (http://arxiv.org/abs/1302.2811) For macroscopic thermodynamic systems it should lead to the Carnot cycle, but on smaller scales it might be possible to formulate a more general expression. Anybody interested to look into it? Anna, Jo? :)
  •  
    What you propose is called Maxwell's demon: http://en.wikipedia.org/wiki/Maxwell%27s_demon Unfortunately (or maybe fortunately) thermodynamics is VERY robust. I guess if you really only want to harness AND USE the energy in a microscopic system you might have some chance of beating Carnot. But any way of transferring harvested energy to a macroscopic system seems to be limited by it (AFAIK).
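For reference, the macroscopic bound invoked here is the Carnot limit: an engine extracting work $W$ from heat $Q_h$ drawn from a hot reservoir at temperature $T_h$, rejecting the rest to a cold reservoir at $T_c$, has efficiency

```latex
\eta = \frac{W}{Q_h} \;\le\; \eta_\mathrm{Carnot} = 1 - \frac{T_c}{T_h}
```

so even a scheme that harvests fluctuations in a microscopic system runs into this same bound as soon as the harvested energy is transferred to a macroscopic reservoir.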
jmlloren

Exotic matter : Insight : Nature - 5 views

shared by jmlloren on 03 Aug 10
LeopoldS liked it
  •  
    Trends in materials and condensed matter. Check out the topological insulators. amazing field.
  • ...12 more comments...
  •  
    Apparently very interesting, but will it survive the short hype? Relevant work describing mirror charges of topological insulators and the classical boundary conditions was done by Ismo and Ari. But the two communities don't know each other and so they are never cited. Also a way to produce new things...
  •  
    Thanks for noticing! Indeed, I had no idea that Ari (don't know Ismo) was involved in the field. Was it before Kane's proposal or more recently? What I mostly like is that semiconductors are good candidates for 3D TI, however I got lost in the quantum field jargon. Yesterday, I got a headache trying to follow the Majorana fermions, the merons, skyrmions, axions, and so on. Luzi, are all these things familiar to you?
  •  
    Ismo Lindell described in the early 90's the mirror charge of what is now called topological insulator. He says that similar results were obtained already at the beginning of the 20th century... Ismo Lindell and Ari Sihvola in the recent years discussed engineering aspects of PEMCs (perfect electro-magnetic conductors,) which are more or less classical analogues of topological insulators. Fundamental aspects of PEMCs have been well known in high-energy physics for a long time; recent works are mainly due to Friedrich Hehl and Yuri Obukhov. All these works are purely classical, so there is no charge quantisation, no considerations of electron spin etc. About Majorana fermions: yes, I spent several years of research on that topic. Axions: a topological state, of course, trivial :-) Also merons and skyrmions are topological states, but I'm less familiar with them.
  •  
    "Non-Abelian systems contain composite particles that are neither fermions nor bosons and have a quantum statistics that is far richer than that offered by the fermion-boson dichotomy. The presence of such quasiparticles manifests itself in two remarkable ways. First, it leads to a degeneracy of the ground state that is not based on simple symmetry considerations and is robust against perturbations and interactions with the environment. Second, an interchange of two quasiparticles does not merely multiply the wavefunction by a sign, as is the case for fermions and bosons. Rather, it takes the system from one ground state to another. If a series of interchanges is made, the final state of the system will depend on the order in which these interchanges are being carried out, in sharp contrast to what happens when similar operations are performed on identical fermions or bosons." wow, this paper by Stern reads really weird ... any of you ever looked into this?
  •  
    C'mon Leopold, it's as trivial as the topological states, AKA axions! Regarding the question, not me!
  •  
    just looked up the wikipedia entry on axions .... at least they have some creativity in names giving: "In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion. In some models, the saxion is the dilaton. They are all bundled up in a chiral superfield. The axino has been predicted to be the lightest supersymmetric particle in such a model.[24] In part due to this property, it is considered a candidate for the composition of dark matter.[25]"
  •  
    Thank's Leopold. Sorry Luzi for being ironic concerning the triviality of the axions. Now, Leo confirmed me that indeed is a trivial matter. I have problems with models where EVERYTHING is involved.
  •  
    Well, that's the theory of everything, isn't it?? Seriously: I don't think that theoretically there is a lot of new stuff here. Topological aspects of (non-Abelian) theories became extremely popular in the context of string theory. The reason is very simple: topological theories are much simpler than "normal" and since string theory anyway is far too complicated to be solved, people just consider purely topological theories, then claiming that this has something to do with the real world, which of course is plainly wrong. So what I think is new about these topological insulators are the claims that one can actually fabricate a material which more or less accurately mimics a topological theory and that these materials are of practical use. Still, they are a little bit the poor man's version of the topological theories fundamental physicists like to look at since electrdynamics is an Abelian theory.
  •  
    I have the feeling, not the knowledge, that you are right. However, I think that the implications of this light quantum field effects are great. The fact of being able to sustain two currents polarized in spin is a technological breakthrough.
  •  
    not sure how much I can contribute to your apparently educated debate here but if I remember well from my work for the master, these non-Abelian theories were all but "simple" as Luzi puts it ... and from a different perspective: to me the whole thing of being able to describe such non-Abelian systems nicely indicates that they should in one way or another also have some appearance in Nature (would be very surprised if not) - though this is of course no argument that makes string theory any better or closer to what Luzi called reality ....
  •  
    Well, electrodynamics remains an Abelian theory. From the theoretical point of view this is less interesting than non-Abelian ones, since in 4D the fibre bundle of a U(1) theory is trivial (great buzz words, eh!) But in topological insulators the point of view is slightly different since one always has the insulator (topological theory), its surrounding (propagating theory) and most importantly the interface between the two. This is a new situation that people from field and string theory were not really interested in.
  •  
    guys... how would you explain this to your grandmothers?
  •  
    *you* tried *your* best .... ??
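The order-dependence Stern describes in the quoted passage can be made concrete with a toy numerical sketch: each exchange of quasiparticles applies a unitary to the degenerate ground-state subspace, and for non-Abelian statistics those unitaries need not commute. The two matrices below are the Ising-anyon braid generators (up to overall phase), an illustrative assumption chosen for concreteness, not something taken from the article:

```python
import numpy as np

swap_12 = np.diag([1, 1j])                             # exchange quasiparticles 1 and 2
swap_23 = np.array([[1, -1j], [-1j, 1]]) / np.sqrt(2)  # exchange quasiparticles 2 and 3

state = np.array([1.0, 0.0], dtype=complex)  # a state in the 2D degenerate ground-state subspace

order_a = swap_23 @ swap_12 @ state  # exchange (1,2) first, then (2,3)
order_b = swap_12 @ swap_23 @ state  # same two exchanges, opposite order

print(np.allclose(order_a, order_b))  # False: the final state depends on the order
```

For fermions or bosons each exchange would only multiply the wavefunction by ±1, so the two orders would give the same state; here they land in genuinely different ground states.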
Juxi Leitner

Networked Networks Are Prone to Epic Failure | Wired Science | Wired.com - 1 views

  • The interconnections fueled a cascading effect, with the failures coursing back and forth. A damaged node in the first network would pull down nodes in the second, which crashed nodes in the first, which brought down more in the second, and so on. And when they looked at data from a 2003 Italian power blackout, in which the electrical grid was linked to the computer network that controlled it, the patterns matched their models’ math.
  •  
    that would be an interesting "Systems of Systems" study for once ...
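The back-and-forth cascade described above can be sketched as a toy simulation in the spirit of the coupled-network model the article reports on (Buldyrev et al.): nodes of networks A and B are paired one-to-one, and a node survives only if its partner survives and it still sits in the giant component of its own network. The network sizes and random topology below are arbitrary choices for illustration:

```python
import random
from collections import deque

def giant_component(nodes, edges):
    """Return the largest connected component among `nodes`."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        if u in adj and v in adj:
            adj[u].add(v)
            adj[v].add(u)
    best, seen = set(), set()
    for start in nodes:
        if start in seen:
            continue
        comp, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in comp:
                    comp.add(w)
                    queue.append(w)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    return best

def cascade(n, edges_a, edges_b, initial_failures):
    """Alternate failures between the two networks until nothing more dies."""
    alive = set(range(n)) - set(initial_failures)
    while True:
        alive_a = giant_component(alive, edges_a)    # prune network A
        alive_b = giant_component(alive_a, edges_b)  # partners share node ids; prune network B
        if alive_b == alive:
            return alive
        alive = alive_b

random.seed(1)
n = 200
edges_a = [(random.randrange(n), random.randrange(n)) for _ in range(400)]
edges_b = [(random.randrange(n), random.randrange(n)) for _ in range(400)]
survivors = cascade(n, edges_a, edges_b, initial_failures=range(20))
print(f"{len(survivors)} of {n} nodes survive the cascade")
```

Killing 10% of the nodes typically takes down far more than 10% of the system, which is exactly the amplification seen in the Italian blackout data.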
Luís F. Simões

New algorithm offers ability to influence systems such as living cells or social networks - 3 views

  • a new computational model that can analyze any type of complex network -- biological, social or electronic -- and reveal the critical points that can be used to control the entire system.
  • Slotine and his colleagues applied traditional control theory to these recent advances, devising a new model for controlling complex, self-assembling networks.
  • Yang-Yu Liu, Jean-Jacques Slotine, Albert-László Barabási. Controllability of complex networks. Nature, 2011; 473 (7346): 167 DOI: 10.1038/nature10011
  •  
    Sounds too super to be true, no?
  • ...3 more comments...
  •  
    cover story in the May 12 issue of Nature
  •  
    For each, they calculated the percentage of points that need to be controlled in order to gain control of the entire system.
  •  
    > Sounds too super to be true, no? Yeah, how else may it sound, being a combination of hi-quality (I assume) research targeted at attracting funding, raised to the power of Science Daily's pop-pseudo-scientific journalists' bu****it? Original article starts with a cool sentence too: > The ultimate proof of our understanding of natural or technological systems is reflected in our ability to control them. ...a good starting point for a never-ending philosophers' debate... Now seriously, because of a big name behind the study, I'm very curious to read the original article. Although I expect the conclusion to be that in practical cases (i.e. the cases of "networks" you *would like to* "control"), you need to control all nodes or something equally impractical...
  •  
    then I am looking forward to reading your conclusions here after you will have actually read the paper
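For reference, the core result of the Liu-Slotine-Barabási paper is that the minimum number of driver nodes equals N minus the size of a maximum matching of the directed network (with a floor of one driver). That computation can be sketched in a few lines with a plain augmenting-path matching; the example graphs are made up for illustration:

```python
def max_matching(n, edges):
    """Maximum bipartite matching between out-copies and in-copies of the nodes."""
    succ = {u: [] for u in range(n)}
    for u, v in edges:
        succ[u].append(v)
    match_in = {}  # in-copy -> out-copy currently matched to it

    def augment(u, visited):
        for v in succ[u]:
            if v in visited:
                continue
            visited.add(v)
            if v not in match_in or augment(match_in[v], visited):
                match_in[v] = u
                return True
        return False

    return sum(augment(u, set()) for u in range(n))

def driver_nodes(n, edges):
    """Minimum number of nodes that must be directly controlled."""
    return max(1, n - max_matching(n, edges))

path = [(0, 1), (1, 2), (2, 3)]  # a chain: driving the head controls everything
star = [(0, 1), (0, 2), (0, 3)]  # hub-and-spokes: the hub can steer only one leaf at a time
print(driver_nodes(4, path), driver_nodes(4, star))  # 1 and 3
```

This already hints at the paper's sobering conclusion: for sparse, inhomogeneous networks the matching is small, so the fraction of nodes you must control can be large, much as the skeptical comment above suspects.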
nikolas smyrlakis

PARC (Palo Alto Research Center) - 0 views

  •  
    An interesting research centre in California! Focus areas: Business Services; Electronic Materials, Devices, & Systems; Information & Communication Technologies; Biomedical Systems; Cleantech
  • ...1 more comment...
  •  
    and some very ACT-like interesting internships / ideas they have:
    Automatic summarization of related documents http://www.parc.com/job/43/automatic-summarization-of-related-documents.html (remember Kev's idea?)
    Bayesian diagnosis http://www.parc.com/job/34/bayesian-diagnosis---summer.html
    Autonomous robotics (UAVs, UGVs) http://www.parc.com/job/36/autonomous-robotics---summer.html
  •  
    XEROX PARC was definitely heavily involved in computer development: eg. mouse, GUI, ethernet, OO programming, all came out of PARC, and all that without focusing on computers but printers...
  •  
    aaah it's the XEROX centre, didn't know. Yep they made the mouse and then handed it over nicely to Apple after IBM thought it was useless
LeopoldS

Operation Socialist: How GCHQ Spies Hacked Belgium's Largest Telco - 4 views

  •  
    interesting story with many juicy details on how they proceed ... (similarly interesting nickname for the "operation" chosen by our british friends) "The spies used the IP addresses they had associated with the engineers as search terms to sift through their surveillance troves, and were quickly able to find what they needed to confirm the employees' identities and target them individually with malware. The confirmation came in the form of Google, Yahoo, and LinkedIn "cookies," tiny unique files that are automatically placed on computers to identify and sometimes track people browsing the Internet, often for advertising purposes. GCHQ maintains a huge repository named MUTANT BROTH that stores billions of these intercepted cookies, which it uses to correlate with IP addresses to determine the identity of a person. GCHQ refers to cookies internally as "target detection identifiers." Top-secret GCHQ documents name three male Belgacom engineers who were identified as targets to attack. The Intercept has confirmed the identities of the men, and contacted each of them prior to the publication of this story; all three declined comment and requested that their identities not be disclosed. GCHQ monitored the browsing habits of the engineers, and geared up to enter the most important and sensitive phase of the secret operation. The agency planned to perform a so-called "Quantum Insert" attack, which involves redirecting people targeted for surveillance to a malicious website that infects their computers with malware at a lightning pace. In this case, the documents indicate that GCHQ set up a malicious page that looked like LinkedIn to trick the Belgacom engineers. (The NSA also uses Quantum Inserts to target people, as The Intercept has previously reported.) A GCHQ document reviewing operations conducted between January and March 2011 noted that the hack on Belgacom was successful, and stated that the agency had obtained access to the company's
  •  
    I knew I wasn't using TOR often enough...
  •  
    Cool! It seems that after all it is best to restrict employees' internet access only to work-critical areas... @Paul TOR works on network level, so it would not help here much as cookies (application level) were exploited.
anonymous

Nasa validates 'impossible' space drive (Wired UK) - 3 views

  •  
    NASA validates the EmDrive (http://emdrive.com/) technology for converting electrical energy into thrust. (from the website: "Thrust is produced by the amplification of the radiation pressure of an electromagnetic wave propagated through a resonant waveguide assembly.")
  • ...3 more comments...
  •  
    I would be very very skeptical about these results and am actually ready to take bets that they are victims of something other than "new physics" ... e.g. some measurement error
  •  
    Assuming that this system is feasible, and taking the results of the Chinese team (thrust of 720 mN, http://www.wired.co.uk/news/archive/2013-02/06/emdrive-and-cold-fusion), I wonder whether this would allow for some actual trajectory maneuvers (and to which degree). If so, can we simulate some possible trajectories, e.g. compare the current solutions to this one? For example, Shawyer (original author) claims that this system would be capable of stabilizing the ISS without need for refueling. Other article on the same topic: http://www.theverge.com/2014/8/1/5959637/nasa-cannae-drive-tests-have-promising-results
  •  
    To be exact, the Chinese reported 720 mN and the Americans found ~50 µN. The first one I simply do not believe and the second one seems more credible, yet it has to be said that measuring such low thrust levels on a thrust stand is very difficult and prone to measurement errors. @Krzys, the thrust level of 720 mN is within the same range as other electric propulsion systems which are considered - and even used in some cases - for station keeping, also for the ISS actually (for which there are also ideas to use a high power system delivering several newtons of thrust). Then on the idea, I do not rule out that an interaction between the EM waves and 'vacuum' could be possible, however if this were true then it surely would be detectable in any particle accelerator, as it would produce background events/noise. The energy densities involved and the conversion to thrust via some form of interaction with the vacuum surely could not provide thrusts in the range reported by the Chinese, nor the Americans. The laws of momentum conservation would still need to apply. Finally, 'quantum vacuum virtual plasma'.. really?
  •  
    I have to join the skeptics on this one ...
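A quick momentum-conservation sanity check supporting the skepticism above: the best a closed cavity could do without new physics is emit a perfectly collimated photon beam, giving thrust F = P/c. The kilowatt power level below is an assumed figure for illustration, not one quoted in the thread:

```python
C = 299_792_458.0  # speed of light, m/s

def photon_thrust(power_w):
    """Thrust of an ideal photon rocket: F = P / c."""
    return power_w / C

# Even 1 kW of perfectly collimated photons yields only a few micronewtons,
# roughly five orders of magnitude below the reported 720 mN.
print(f"{photon_thrust(1000.0) * 1e6:.2f} uN")
```

So any thrust claim far above the microN-per-kW scale implies either reaction mass leaving the device or a violation of momentum conservation.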
LeopoldS

NIAC 2014 Phase I Selections | NASA - 4 views

  •  
    12 new NIAC 1 studies - many topics familiar to us ... please have a look at those closest to your expertise to see if there is anything new/worth investigating (and in general to be knowledgeable on them since we will get questions sooner or later on them)
    Principal Investigator Proposal Title Organization City, State, Zip Code
    Atchison, Justin Swarm Flyby Gravimetry Johns Hopkins University Baltimore, MD 21218-2680
    Boland, Eugene Mars Ecopoiesis Test Bed Techshot, Inc. Greenville, IN 47124-9515
    Cash, Webster The Aragoscope: Ultra-High Resolution Optics at Low Cost University of Colorado Boulder, CO 80309-0389
    Chen, Bin 3D Photocatalytic Air Processor for Dramatic Reduction of Life Support Mass & Complexity NASA ARC Moffett Field, CA 94035-0000
    Hoyt, Robert WRANGLER: Capture and De-Spin of Asteroids and Space Debris Tethers Unlimited Bothell, WA 98011-8808
    Matthies, Larry Titan Aerial Daughtercraft NASA JPL Pasadena, CA 91109-8001
    Miller, Timothy Using the Hottest Particles in the Universe to Probe Icy Solar System Worlds Johns Hopkins University Laurel, MD 20723-6005
    Nosanov, Jeffrey PERISCOPE: PERIapsis Subsurface Cave OPtical Explorer NASA JPL Pasadena, CA 91109-8001
    Oleson, Steven Titan Submarine: Exploring the Depths of Kraken NASA GRC Cleveland, OH 44135-3127
    Ono, Masahiro Comet Hitchhiker: Harvesting Kinetic Energy from Small Bodies to Enable Fast and Low-Cost Deep Space Exploration NASA JPL Pasadena, CA 91109-8001
    Streetman, Brett Exploration Architecture with Quantum Inertial Gravimetry and In Situ ChipSat Sensors Draper Laboratory Cambridge, MA 02139-3539
    Wiegmann, Bruce Heliopause Electrostatic Rapid Transit System (HERTS) NASA MSFC Huntsville, AL 35812-0000
  •  
    Eh, the swarm flyby gravimetry is very similar to the "measuring gravitational fields" project I proposed in the brewery