
Advanced Concepts Team / Group items tagged: situational


Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather well-credited neuroscientist
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence, AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to), AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather how to program them well enough ...... and that's not bringing us anywhere near the singularity
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed were the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most is that I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but this will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine as long as some brain gets started :) that is the goal :)
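    (A minimal sketch of the sensitivity to initial conditions discussed above, using the standard Lorenz system purely as an illustrative stand-in for any chaotic dynamics; the parameter values sigma=10, rho=28, beta=8/3 and the helper function are the usual textbook choices, assumed here only for the example.)

        import numpy as np

        def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # One explicit Euler step of the standard Lorenz system.
            x, y, z = state
            return state + dt * np.array([sigma * (y - x),
                                          x * (rho - z) - y,
                                          x * y - beta * z])

        a = np.array([1.0, 1.0, 1.0])       # one "initial condition"
        b = a + np.array([1e-9, 0.0, 0.0])  # an immeasurably small perturbation

        for _ in range(40000):              # integrate for 40 time units
            a, b = lorenz_step(a), lorenz_step(b)

        print(np.linalg.norm(a - b))        # of the order of the attractor size, not 1e-9

    The two trajectories become macroscopically different even though the model itself is perfectly computable, which is exactly the computable-but-not-predictable distinction made above.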
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which is not a biological brain) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10% of them the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly and in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system would actually still work? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
jaihobah

The Cure For Fear | New Republic - 2 views

  •  
    A long read but very interesting and well written.
  •  
    PS: Does this quote from the article not sound a lot like Inception? 'In any given situation, the brain will retrieve old memories to inform an organism's behavior. If the memory is relevant to the situation, the organism can act on the information; if it is not relevant, then the organism can learn from the situation and create a new memory. With reconsolidation, researchers argued, there seemed to be a brief window in between the retrieval of an old memory and the creation of a new memory in which the old memory is vulnerable to manipulation.'
LeopoldS

Wired and Shrewd, Young Egyptians Guide Revolt - NYTimes.com - 0 views

  •  
    nice account of how some tech savviness helps in these situations
nikolas smyrlakis

ACM award concerning the Complexity of Interactions in Markets, Social Networks, and On... - 0 views

  •  
    "The Complexity of Nash Equilibria,", It also suggests that the Nash equilibrium may not be an accurate prediction of behavior in all situations. Daskalakis's research emphasizes the need for new, computationally meaningful methods for modeling strategic behavior in complex systems such as those encountered in financial markets, online systems, and social networks.
ESA ACT

Google Smart Meter App Not Ready for Finals | Wired Science from Wired.com - 0 views

  •  
    interesting that Google is getting involved in this. The article describes the situation well though: lots of ideas and existing technology, but no framework or solid products yet
ESA ACT

Twibright Optar - 0 views

  •  
    Optar stands for OPTical ARchiver. It's a codec for encoding data on paper. A proposal for fighting digital obsolescence: digital obsolescence is a situation where a digital resource is no longer readable because the physical media, the reader require
LeopoldS

Extraneous factors in judicial decisions - 1 views

  •  
    astonishing! whenever you go to a judge, make sure that he has a full belly ... I am sure we can apply these findings to other situations as well: exams etc
Thijs Versloot

The big data brain drain - 3 views

  •  
    Echoing this, in 2009 Google researchers Alon Halevy, Peter Norvig, and Fernando Pereira penned an article under the title The Unreasonable Effectiveness of Data. In it, they describe the surprising insight that given enough data, the choice of mathematical model often stops being as important - that particularly for their task of automated language translation, "simple models and a lot of data trump more elaborate models based on less data." If we make the leap and assume that this insight can be at least partially extended to fields beyond natural language processing, what we can expect is a situation in which domain knowledge is increasingly trumped by "mere" data-mining skills. I would argue that this prediction has already begun to pan out: in a wide array of academic fields, the ability to effectively process data is superseding other more classical modes of research.
Tom Gheysens

Scientists discover double meaning in genetic code - 4 views

  •  
    Does this have implications for AI algorithms??
  • ...1 more comment...
  •  
    Somehow, the mere fact does not surprise me. I always assumed that genetic information is encoded in multiple overlapping layers. I do not see how this can be transferred exactly to genetic algorithms, but a good encoding is important for them, and I guess that you could produce interesting effects by "overencoding" of parameters, apart from being more space-efficient.
  •  
    I was actually thinking exactly about this question during my bike ride this morning. I am surprised that some codons would need to have a double meaning though, because there is already a surplus of codons to translate into just 20-22 amino acids (depending on the organism). So there should be about 44 codons left to prevent translation errors and, in addition, regulate gene expression. If - as the article suggests - a single codon can take a dual role, does it do so in different situations (needing some other regulator to discern between them)? Or does it just perform two functions that always need to happen simultaneously? I tried to learn more from the underlying paper: https://www.sciencemag.org/content/342/6164/1367.full.pdf All I got from that was a headache. :-\
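    (A back-of-the-envelope version of the codon counting above, as a sketch; the figures - 4 bases, 64 triplets, 3 stop codons, 20 standard amino acids - are standard textbook values, and the variable names are just for this illustration.)

        bases = 4
        codons = bases ** 3                   # 64 possible triplets
        stop_codons = 3                       # UAA, UAG, UGA
        sense_codons = codons - stop_codons   # 61 codons that code for amino acids
        amino_acids = 20                      # 22 in some organisms

        # Redundancy available for error tolerance, regulation, or "overencoding":
        print(codons, sense_codons, sense_codons - amino_acids)   # 64 61 41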
  •  
    Probably both. Likely a consequence of energy preservation during translation. If you can do the same thing with fewer genes you save on the effort required to reproduce. Also I suspect it has something to do with modularity. It makes sense that the genes regulating "foot" cells also trigger the genes that generate "toe" cells, for example. No point in having an extra if statement.
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum, .... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work non-existent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion of why in the scientific world these open models are having so many difficulties is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of changes. There is also the fact that the barrier of entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open science journals vs traditional journals, conservatism of the science community compared to programmers (to me one of the reasons for this might be the average age of both groups, which is probably more than 10 years apart ...) and then the use of email vs other collaboration tools .... .... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize" - what does it mean indeed? In the case of Linux, what makes the project work are the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikis and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikis, forums ... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to be and should be a team project to which everybody in the team contributes in his / her way, or how much we should let smaller, flexible teams within the team form and progress, more following a bottom-up initiative than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and others waiting. Sometimes even, some people don't do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team work, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc. about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And this is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only the brains and not the people worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of the "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people), it is something that has been done countless times before. Do you think that now that it is mathematicians who are doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant of a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for the Boinc-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, whose names are on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work has first been done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that i should visit you guys at estec... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Thijs Versloot

Time 'Emerges' from #Quantum Entanglement #arXiv - 1 views

  •  
    Time is an emergent phenomenon that is a side effect of quantum entanglement, say physicists. And they have the first experimental results to prove it
  • ...5 more comments...
  •  
    I always feel like people make too big a deal out of entanglement. In my opinion it is just a combination of a conserved quantity and an initial lack of knowledge. Imagine that I had a machine that always creates one blue and one red ping-pong ball at the same time (|b > and |r > respectively). The machine now puts both balls into identical packages (so I cannot observe them) and one of the packages is sent to Tokyo. I did not know which ball was sent to Tokyo and which stayed with me - they are in a superposition (|br >+|rb >), meaning that either the blue ball is with me and the red one in Tokyo or vice versa - they are entangled. So far no magic has happened. Now I call my friend in Tokyo who got the ball: "What color was the ball you received in that package?" He replies: "The ball that I got was blue. Why did you send me a ball in the first place?" Now, the fact that he told me makes the superposition wavefunction collapse (yes, that is what the Copenhagen interpretation would tell us). As a result I know without opening my box that it contains a red ball. But this is really because there is an underlying conservation law and because now I know the other state. I don't see how just by looking at the conserved quantity I am in a timeless state outside of the 'universe' - this is just one way of interpreting it. By the way, the wave function for my box with the undetermined ball does not collapse when the other ball is observed by my friend in Tokyo. Only when he tells me does the wavefunction collapse - he did not even know that I had a complementary ball. On the other hand, if he knew about the way the experiment was conducted then he would have known that I had to have a red ball - the wavefunction collapses as soon as he observes his ball. For him it is determined that my ball must be red. For me however the superposition is intact until he tells me. ;-)
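    (For reference, the two-ball state of the analogy above written in standard bra-ket notation, as a sketch; |b> and |r> are the blue and red ball states from the comment, and the subscripts merely label where each package ends up.)

        % Entangled state of the two packages before anyone opens them (normalised):
        \[
          |\psi\rangle = \tfrac{1}{\sqrt{2}}\left( |b\rangle_{\mathrm{here}}\,|r\rangle_{\mathrm{Tokyo}}
                       + |r\rangle_{\mathrm{here}}\,|b\rangle_{\mathrm{Tokyo}} \right)
        \]
        % Observing "blue" in Tokyo projects the state onto |r>_here |b>_Tokyo,
        % so the ball kept here is red with certainty.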
  •  
    Sorry, Johannes, you have just developed a simple hidden-parameters theory, and it has been experimentally proven that these don't work. Entangled states are neither the blue nor the red ball; they are really bluered (or redblue) until the point the measurement is done.
  •  
    Hm, to me this looks like a bad joke... The "emergent time" concept used is still the old proposal by Page and Wootters, where time emerges from something fundamentally unobservable (the wave function of the Universe). That's as good as claiming that time emerges from God. If I understand correctly, the paper now deals with the situation where a finite system is taken as a "Mini-Universe" and the experimentalist in the lab can play "God of the Mini-Universe". This works, of course, but it doesn't really tell us anything about emergent time, does it?
  •  
    Actually, it has not been proven conclusively that hidden variable theories don't work - although this is the opinion of most physicists these days. But a non-local hidden variable would still be allowed - I don't see why that could not be equivalent to a conserved quantity within the system. As far as the two balls go, it is fine to say they are undetermined instead of saying they are in a bluered or redblue state - for all intents and purposes it does not affect us (because if it did, the wavefunction would have collapsed), so we can't say anything about it in the first place.
  •  
    Non-local hidden variables may work, but in my opinion they don't add anything to the picture. The (at least to non-physicists) counterintuitive fact that there cannot be a variable that determines ab initio the color of the ball going to Tokyo will remain (in your example this may not even be true since the example is too simple...).
  •  
    I guess I tentatively agree with you on both points. In the end there might anyway be surprisingly little overlap between the way that we describe what nature does and HOW it does it... :-D
  •  
    Congratulations! 100% agree.
Nina Nadine Ridder

Robots collaborate to deliver meds, supplies, and even drinks - 2 views

  •  
    At the recent Robotics Science and Systems (RSS) conference, a CSAIL team presented a new system of three robots that can work together to deliver items quickly, accurately and, perhaps most importantly, in unpredictable environments. The team says its models could extend to a variety of other applications, including hospitals, disaster situations, and even restaurants and bars.
Loretta Latronico Poulain

Agent-based computer models could anticipate future economic crisis - 1 views

  •  
    "The Illinois Commerce Commission wanted to make sure that if they deregulated the power market, individual producers of electricity would not be able to manipulate the market during times of high demand by withholding capacity or charging excessive rates. The Argonne model found that during certain times of heavy load such a situation could emerge, which led to the recommendation that independent monitors maintain some oversight of the power market." Interesting this study on power grids !
jmlloren

Exotic matter : Insight : Nature - 5 views

shared by jmlloren on 03 Aug 10
LeopoldS liked it
  •  
    Trends in materials and condensed matter. Check out the topological insulators. amazing field.
  • ...12 more comments...
  •  
    Apparently very interesting; will it survive the short hype? Relevant work describing mirror charges of topological insulators and the classical boundary conditions was done by Ismo and Ari. But the two communities don't know each other and so they are never cited. Also a way to produce new things...
  •  
    Thanks for noticing! Indeed, I had no idea that Ari (I don't know Ismo) was involved in the field. Was it before Kane's proposal or more recently? What I mostly like is that semiconductors are good candidates for 3D TIs; however, I got lost in the quantum field jargon. Yesterday, I got a headache trying to follow the Majorana fermions, the merons, skyrmions, axions, and so on. Luzi, are all these things familiar to you?
  •  
    Ismo Lindell described in the early 90's the mirror charge of what is now called a topological insulator. He says that similar results were obtained already at the beginning of the 20th century... Ismo Lindell and Ari Sihvola in recent years discussed engineering aspects of PEMCs (perfect electro-magnetic conductors), which are more or less classical analogues of topological insulators. Fundamental aspects of PEMCs have been well known in high-energy physics for a long time; recent works are mainly due to Friedrich Hehl and Yuri Obukhov. All these works are purely classical, so there is no charge quantisation, no consideration of electron spin etc. About Majorana fermions: yes, I spent several years of research on that topic. Axions: a topological state, of course, trivial :-) Also merons and skyrmions are topological states, but I'm less familiar with them.
  •  
    "Non-Abelian systems1, 2 contain composite particles that are neither fermions nor bosons and have a quantum statistics that is far richer than that offered by the fermion-boson dichotomy. The presence of such quasiparticles manifests itself in two remarkable ways. First, it leads to a degeneracy of the ground state that is not based on simple symmetry considerations and is robust against perturbations and interactions with the environment. Second, an interchange of two quasiparticles does not merely multiply the wavefunction by a sign, as is the case for fermions and bosons. Rather, it takes the system from one ground state to another. If a series of interchanges is made, the final state of the system will depend on the order in which these interchanges are being carried out, in sharp contrast to what happens when similar operations are performed on identical fermions or bosons." wow, this paper by Stern reads really weired ... any of you ever looked into this?
  •  
    C'mon Leopold, it's as trivial as the topological states, AKA axions! Regarding the question, not me!
  •  
    just looked up the wikipedia entry on axions .... at least they have some creativity in naming: "In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion. In some models, the saxion is the dilaton. They are all bundled up in a chiral superfield. The axino has been predicted to be the lightest supersymmetric particle in such a model.[24] In part due to this property, it is considered a candidate for the composition of dark matter.[25]"
  •  
    Thanks, Leopold. Sorry Luzi for being ironic about the triviality of the axions. Now Leo has confirmed to me that it is indeed a trivial matter. I have problems with models where EVERYTHING is involved.
  •  
    Well, that's the theory of everything, isn't it?? Seriously: I don't think that theoretically there is a lot of new stuff here. Topological aspects of (non-Abelian) theories became extremely popular in the context of string theory. The reason is very simple: topological theories are much simpler than "normal" ones, and since string theory anyway is far too complicated to be solved, people just consider purely topological theories, then claiming that this has something to do with the real world, which of course is plainly wrong. So what I think is new about these topological insulators are the claims that one can actually fabricate a material which more or less accurately mimics a topological theory and that these materials are of practical use. Still, they are a little bit the poor man's version of the topological theories fundamental physicists like to look at, since electrodynamics is an Abelian theory.
  •  
    I have the feeling, not the knowledge, that you are right. However, I think that the implications of these light quantum field effects are great. The fact of being able to sustain two spin-polarized currents is a technological breakthrough.
  •  
    not sure how much I can contribute to your apparently educated debate here, but if I remember well from my master's work, these non-Abelian theories were anything but "simple", as Luzi puts it ... and from a different perspective: to me the whole thing of being able to describe such non-Abelian systems nicely indicates that they should in one way or another also appear in Nature (I would be very surprised if not) - though this is of course no argument that makes string theory any better or closer to what Luzi called reality ....
  •  
    Well, electrodynamics remains an Abelian theory. From the theoretical point of view this is less interesting than non-Abelian ones, since in 4D the fibre bundle of a U(1) theory is trivial (great buzz words, eh!) But in topological insulators the point of view is slightly different, since one always has the insulator (topological theory), its surroundings (propagating theory) and, most importantly, the interface between the two. This is a new situation that people from field and string theory were not really interested in.
  •  
    guys... how would you explain this to your grandmothers?
  •  
    *you* tried *your* best .... ??
Juxi Leitner

The Associated Press: Launch set for US satellite to monitor space junk - 0 views

  • It's designed to give the Air Force its first full-time, space-based surveillance of satellites and debris in Earth's orbit. It monitors them for possible collisions.
Juxi Leitner

Slashdot Science Story | Calculating Environmental Damage From Space Tourism Rockets - 3 views

  •  
    Cynthia - please have a look ... can we check the OoM?
  •  
    Yeeesss :) So the "non-commercial" rockets do not emit soot? And how many "non-commercial" launches per year are there in comparison to the commercial ones? Finally commercial space-flight seems more realisable than ever, and the "non-commercial" guys will do everything to prevent a situation in which they have to compete on an open market... Coming years should be very interesting...
Isabelle DB

Evidence for a Collective Intelligence Factor in the Performance of Human Groups - 2 views

  •  
    What do you think of this one ?
  • ...2 more comments...
  •  
    Great! Women perhaps are not more intelligent as individuals, but now at least they have more collective intelligence... Interesting research topic, though, but I doubt that any of these results can be generalized to real-life situations.
  •  
    Maybe by passing on the message we can ensure some men understand it would be in their interest to have (more) women in their teams? No problem at the ACT - maybe this is why it works so well? :-))
  •  
    Well, that's perhaps the reason why meetings were always so f... boring while I was at the ACT :D
  •  
    Lots more resources on collective intelligence: http://cci.mit.edu/
pacome delva

Radiocarbon Daters Tune Up Their Time Machine - 2 views

  •  
    funny how a curve can change (pre)History!
  •  
    Reminds me of this: http://xkcd.com/687/ :)
  •  
    xkcd must be the new Calvin and Hobbes, for which Luzi usually has an example for any given situation
nikolas smyrlakis

mentored by the Advanced Concepts Team for Google Summer of Code 2010 - 4 views

  •  
    you probably already know, I post it for the twitter account and for your comments
  • ...4 more comments...
  •  
    once again one of these initiatives that came up from a situation and that would never have been possible with a top-down approach .... fantastic! and as Dario said: we are apparently where NASA still has to go with this :-)
  •  
    Actually, NASA Ames did that already within the NASA Open Source Agreement in 2008, for V&V software!
  •  
    indeed ... you are right .... interesting project btw - they started in 1999, were in 2005 the first NASA project on SourceForge, and won several awards .... then there is this entry on why they did not participate last year: "05/01/09: Skipping this year's Google Summer-of-Code - many of you have asked why we are not participating in this year's Summer of Code. The answer is that both John and Peter were too busy with other assignments to set this up in time. We will be back in 2010. At least we were able to compensate with a limited number of NASA internships to continue some of last year's projects." .... but I could not find them in this year's selected list - any clue?
  •  
    but in any case, according to the Apple guru, Java is a dying technology, so their project might be as well ...
  •  
    They participate under the name "The Java Pathfinder Team" (http://babelfish.arc.nasa.gov/trac/jpf/wiki/events/soc2010). It is actually a very useful project for both education and industry (Airbus created a consortium on model-checking software, and there is a lot of research on it). As far as I know, TAS had some plans of using Java on board spacecraft 2 years ago. Not sure the industry is really sensitive to Jobs' opinions ;) particularly if there is no better alternative!
Joris _

Hawking: Aliens are out there, likely to be Bad News * The Register - 3 views

  •  
    I think it's time to quote "Calvin & Hobbes" (yes, I'm still the guy who knows a C&H for any situation in life) "Sometimes I think the surest sign that intelligent life exists elsewhere in the universe is that none of it has tried to contact us." One should sue this idiot for racism against extraterrestrials!!