
Home / Advanced Concepts Team / Group items


tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only a few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum ... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion of why in the scientific world these open models are having so many difficulties is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of changes. There is also the fact that the barrier of entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open science journals vs traditional journals, conservatism of science community wrt programmers (to me one of the reasons for this might be the average age of both groups, which is probably more than 10 years apart ...) and then using emailing wrt other collaboration tools .... .... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a means of coordination. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame and your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to and should be a team project to which everybody in the team contributes in his/her way, or how much we should let smaller, flexible teams within the team form and progress, more following a bottom-up initiative than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work, where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes even, some people don't do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team work, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc about the others. Unfortunately(?), this could lead to situations, where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And this is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I do efficient collaborations with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and they might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only brains and not the people worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up with the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for the BOINC-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, and whose names go on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much of a problem with giving credit. It is not the first time a group of researchers has collectively taken credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible, there's no way to fake contributions or to give undue credit, and one could cite a group paper without problems in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that I should visit you guys at ESTEC... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea, therefore, is that it is the body that helps to deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly get integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture can you see, how abstract are the concepts and ideas you can form), another is the breadth of the intelligence (how well can you actually generalize, how encompassing those concepts are, and what is the level of detail in which you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do is for a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Luís F. Simões

Shell energy scenarios to 2050 - 6 views

  •  
    just in case you were feeling happy and optimistic
  • ...7 more comments...
  •  
    An energy scenario published by an oil company? Allow me to be sceptical...
  •  
    Indeed, Shell has been an energy company, not just oil, for some time now ... The two scenarios are, in their approach, dependent on the economic and political situation, which is right now impossible to forecast. The reference to Kyoto is surprising, almost outdated! But overall, I find it rather optimistic at some stages, and probably the timeline (p37-39) is unlikely given recent events.
  •  
    the report was published in 2008, which explains the reference to Kyoto, as the follow-up to it was much more uncertain at that point. The Blueprint scenario is indeed optimistic, but also quite unlikely I'd say. I don't see humanity suddenly becoming so wise and coordinated. Sadly, I see something closer to the Scramble scenario as much more likely to occur.
  •  
    not an oil company??? please have a look at the percentage of their revenues coming from oil and gas and then compare this with all their other energy activities together and you will see very quickly that it is only window dressing ... they are an oil and gas company ... and nothing more
  •  
    not JUST oil. From a description: "Shell is a global group of energy and petrochemical companies." Of course revenues coming from oil are the biggest, the investment turnover on other energy sources is small for now. Knowing that most of their revenues is from an expendable source, to guarantee their future, they invest elsewhere. They have invested >1b$ in renewable energy, including biofuels. They had the largest wind power business among so-called "oil" companies. Oil only defines what they do "best". As a comparison, some time ago, Apple were selling only computers and now they sell phones. But I would not say Apple is just a phone company.
  •  
    window dressing only ... e.g. net cash from operating activities (pre-tax) in 2008: $70 billion; net income in 2008: $26 billion; revenues in 2008: $88 billion. Their investments and revenues in renewables don't even show up in their annual financial reports, since probably they are under the heading of "marketing", which is already $1.7 billion ... This is what they report on their investments: Capital investment, portfolio actions and business development. Capital investment in 2009 was $24 billion. This represents a 26% decrease from 2008, which included over $8 billion in acquisitions, primarily relating to Duvernay Oil Corp. Capital investment included exploration expenditure of $4.5 billion (2008: $11.0 billion). In Abu Dhabi, Shell signed an agreement with Abu Dhabi National Oil Company to extend the GASCO joint venture for a further 20 years. In Australia, Shell and its partners took the final investment decision (FID) for the Gorgon LNG project (Shell share 25%). Gorgon will supply global gas markets to at least 2050, with a capacity of 15 million tonnes (100% basis) of LNG per year and a major carbon capture and storage scheme. Shell has announced a front-end engineering and design study for a floating LNG (FLNG) project, with the potential to deploy these facilities at the Prelude offshore gas discovery in Australia (Shell share 100%). In Australia, Shell confirmed that it has accepted Woodside Petroleum Ltd.'s entitlement offer of new shares at a total cost of $0.8 billion, maintaining its 34.27% share in the company; $0.4 billion was paid in 2009 with the remainder paid in 2010. In Bolivia and Brazil, Shell sold its share in a gas pipeline and in a thermoelectric power plant and its related assets for a total of around $100 million. In Canada, the Government of Alberta and the national government jointly announced their intent to contribute $0.8 billion of funding towards the Quest carbon capture and sequestration project. Quest, which is at the f
  •  
    thanks for the info :) They still have their 50% share in the wind farm in Noordzee (you can see it from ESTEC on a clear day). Look for Shell International Renewables, other subsidiaries and joint-ventures. I guess, the report is about the oil branch. http://sustainabilityreport.shell.com/2009/servicepages/downloads/files/all_shell_sr09.pdf http://www.noordzeewind.nl/
  •  
    no - it's about Shell globally - all of Shell ... these participations are just peanuts. Please read the intro of the CEO in the pdf you linked to: he does not even mention renewables! Their entire sustainability strategy is about oil and gas - just making it (look) nicer and environmentally friendlier
  •  
    Fair enough; for me even peanuts are worthwhile, and I am not able to judge. Not all big-profit companies, like Shell, are evil :( Look in the pdf at what is in the upstream and downstream you mentioned above. Non-Shell sources, for examples and more objectivity: http://www.nuon.com/company/Innovative-projects/noordzeewind.jsp http://www.e-energymarket.com/news/single-news/article/ferrari-tops-bahrain-gp-using-shell-biofuel.html thanks.
santecarloni

[1101.6015] Radio beam vorticity and orbital angular momentum - 1 views

  • It has been known for a century that electromagnetic fields can transport not only energy and linear momentum but also angular momentum. However, it was not until twenty years ago, with the discovery in laser optics of experimental techniques for the generation, detection and manipulation of photons in well-defined, pure orbital angular momentum (OAM) states, that twisted light and its pertinent optical vorticity and phase singularities began to come into widespread use in science and technology. We have now shown experimentally how OAM and vorticity can be readily imparted onto radio beams. Our results extend those of earlier experiments on angular momentum and vorticity in radio in that we used a single antenna and reflector to directly generate twisted radio beams and verified that their topological properties agree with theoretical predictions. This opens the possibility to work with photon OAM at frequencies low enough to allow the use of antennas and digital signal processing, thus enabling software controlled experimentation also with first-order quantities, and not only second (and higher) order quantities as in optics-type experiments. Since the OAM state space is infinite, our findings provide new tools for achieving high efficiency in radio communications and radar technology.
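For reference (standard optics notation, not taken from the paper itself), the "infinite OAM state space" the abstract mentions comes from the helical phase structure of such beams:

```latex
% A beam carrying orbital angular momentum has an azimuthal phase factor
% with an integer winding number \ell (the "topological charge"):
\[
  E(r, \phi, z) \;\propto\; A(r, z)\, e^{i \ell \phi},
  \qquad \ell \in \mathbb{Z},
\]
% and each photon in such a beam carries orbital angular momentum
% $L_z = \ell \hbar$ about the beam axis.
```

Since \ell can be any integer, the orthogonal OAM states form a countably infinite set, which is what suggests extra multiplexing channels for radio communication beyond polarization.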
  •  
    and how can we use this?
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains is not my thing).. I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not on how to go beyond Turing machines, but rather on how to program them well enough ...... and that's not bringing us anywhere near the singularity
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most, I didn't get the retail store pun at the end of article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain; this will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine as long as some brain gets started :) that is the goal :)
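The sensitivity to initial conditions being discussed is easy to see numerically. A minimal sketch (classic Lorenz parameters sigma=10, rho=28, beta=8/3, plain Euler integration; purely illustrative, not a brain model):

```python
# Two Lorenz trajectories starting a tiny distance apart diverge
# exponentially: a "perfect" model can still be unpredictable in practice.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz equations."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def distance(a, b):
    """Euclidean distance between two states."""
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-9)  # perturb z by one part in a billion

early = None
for step in range(3000):  # 30 time units at dt = 0.01
    a, b = lorenz_step(a), lorenz_step(b)
    if step == 500:
        early = distance(a, b)  # still tiny after 5 time units
late = distance(a, b)

print(early, late)
```

By the end of the run the 1e-9 perturbation has grown by many orders of magnitude, toward the size of the attractor itself, so long-term prediction of one particular trajectory is hopeless even with the exact equations in hand.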
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10% of them, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a small tiny functional unit by adding neurons and complexity slowly and in a way that in a stable way increases the "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting inputs of our sensors to the world and steering the body making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world" creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You also have the whole embodied cognition movement that basically advocates just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system would actually still work? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
LeopoldS

Operation Socialist: How GCHQ Spies Hacked Belgium's Largest Telco - 4 views

  •  
    interesting story with many juicy details on how they proceed ... (similarly interesting nickname for the "operation" chosen by our british friends) "The spies used the IP addresses they had associated with the engineers as search terms to sift through their surveillance troves, and were quickly able to find what they needed to confirm the employees' identities and target them individually with malware. The confirmation came in the form of Google, Yahoo, and LinkedIn "cookies," tiny unique files that are automatically placed on computers to identify and sometimes track people browsing the Internet, often for advertising purposes. GCHQ maintains a huge repository named MUTANT BROTH that stores billions of these intercepted cookies, which it uses to correlate with IP addresses to determine the identity of a person. GCHQ refers to cookies internally as "target detection identifiers." Top-secret GCHQ documents name three male Belgacom engineers who were identified as targets to attack. The Intercept has confirmed the identities of the men, and contacted each of them prior to the publication of this story; all three declined comment and requested that their identities not be disclosed. GCHQ monitored the browsing habits of the engineers, and geared up to enter the most important and sensitive phase of the secret operation. The agency planned to perform a so-called "Quantum Insert" attack, which involves redirecting people targeted for surveillance to a malicious website that infects their computers with malware at a lightning pace. In this case, the documents indicate that GCHQ set up a malicious page that looked like LinkedIn to trick the Belgacom engineers. (The NSA also uses Quantum Inserts to target people, as The Intercept has previously reported.) A GCHQ document reviewing operations conducted between January and March 2011 noted that the hack on Belgacom was successful, and stated that the agency had obtained access to the company's
  •  
    I knew I wasn't using TOR often enough...
  •  
    Cool! It seems that after all it is best to restrict employees' internet access to work-critical areas only... @Paul: Tor works at the network level, so it would not have helped much here, since what was exploited were cookies, at the application level.
Luís F. Simões

Bitcoin P2P Currency: The Most Dangerous Project We've Ever Seen - 10 views

  • After months of research and discovery, we’ve learned the following: 1. Bitcoin is a technologically sound project. 2. Bitcoin is unstoppable without end-user prosecution. 3. Bitcoin is the most dangerous open-source project ever created. 4. Bitcoin may be the most dangerous technological project since the internet itself. 5. Bitcoin is a political statement by technotarians (technological libertarians).* 6. Bitcoins will change the world unless governments ban them with harsh penalties.
  • The benefits of a currency like this: a) Your coins can’t be frozen (like a Paypal account can be) b) Your coins can’t be tracked c) Your coins can’t be taxed d) Transaction costs are extremely low (sorry credit card companies)
  • An individual with the name -- or perhaps handle -- of Satoshi Nakamoto first wrote about bitcoins in a paper called Bitcoin: A Peer-to-Peer Electronic Cash System.
  • ...1 more annotation...
  • * We made this term up to describe the “good people” of the internet who believe in the fundamental rights of individuals to be free, have free speech, fight hypocrisy and stand behind logic, technology and science over religion, political structure and tradition. These are the people who build and support things like Wikileaks, Anonymous, Linux and Wikipedia. They think that people can, and should, govern themselves. They are against external forms of control such as DRM, laws that are bought and sold by lobbyists, and religions like Scientology. They include splinter groups that enforce these ideals in the form of hacktivism, such as the takedown of the Sony Playstation Network after Sony tried to prosecute a hacker for unlocking its console.
  •  
    Sounds good!
  • ...9 more comments...
  •  
    wow it's frightening! it's the dream of every anarchist, of every drug, arms or human dealer! the world made into a global fiscal paradise... the idea is clever, however it will not replace real money because 1 - no one will build a fortune on bitcoin if a technological breakthrough can ruin them 2 - governments have never allowed parallel money to flourish on their territory, so it will be almost impossible to exchange bitcoins for euros or dollars
  •  
    interesting stuff. anyone read Cryptonomicon by Neal Stephenson? similar theme.
  •  
    :) yes. One of the comments on reddit was precisely drawing the parallels with Neal Stephenson's Snowcrash / Diamond Age / Cryptonomicon. Interesting stuff indeed. It has a lot of potential for misuse, but also opens up new possibilities. We've discussed recently how emerging technologies will drive social change. Whether it's the likes of NSA / CIA who will benefit the most from the Twitters, Facebooks and so on, by gaining greater power for control, or whether individuals are being empowered to at least an identical degree. We saw last year VISA / PayPal censoring WikiLeaks... Well, here's a way for any individual to support such an organization, in a fully anonymous and uncontrollable way...
  •  
    One of my colleagues has made a nice, short write-up about BitCoin: http://www.pds.ewi.tudelft.nl/~victor/bitcoin.html
  •  
    very nice analysis indeed - thanks Tamas for sharing it!
  •  
    mmm I'm not an expert but it seemed to me that, even if these criticisms are true, there is one fundamental difference between the money you exchange on internet via your bank, and bitcoins. The first one is virtual money and the second one aims at being real, physical, money, even if digital, in the same way as banknotes, coins, or gold.
  •  
    An algorithm wanna-be central bank issuing untraceable tax free money between internet users? not more likely than the end of the world supposed to take place tomorrow, in my opinion. Algorithms don't usually assault women though !:P
  •  
    well, most money is anyway just virtual and only based on expectations and trust ... (see e.g. http://en.wikipedia.org/wiki/Money_supply) and thus if people trust that this "money" has some value in the sense that they can get something of value to them in exchange, then not much more is needed it seems to me ...
  •  
    @Leopold: ok, let's use the right words then. Bitcoin aims at being a currency ("physical objects generally accepted as a medium of exchange", from Wikipedia), which is different from a "demand deposit". In the article proposed by Tamas he compares what cannot be compared (currencies, demand deposits and their means of exchange). The interesting question is whether one can create a digital currency which is too difficult to counterfeit. As far as I know, there is no existing digital currency except these bitcoins (and maybe the currencies from games such as Second Life and others, but those are of limited use in the real world).
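    On the question of a digital currency that is "too difficult to counterfeit": Bitcoin's answer is proof of work. A minimal sketch of the idea follows; the difficulty value here is toy-sized for illustration, far below anything the real network uses, and the block data is made up:

```python
# Bitcoin-style proof of work, minimal sketch: finding a nonce whose
# SHA-256 hash starts with `difficulty` zero hex digits is costly to
# compute but instant to verify -- the asymmetry that makes forging
# the ledger expensive.
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce such that sha256(block_data + nonce)
    starts with `difficulty` zero hex digits."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    """Check a claimed proof of work in a single hash evaluation."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("some transactions", difficulty=4)
print(verify("some transactions", nonce))  # True
print(verify("tampered data", nonce))      # almost certainly False
```

    The real network raises the difficulty so that the whole world's miners together find a valid nonce only every ten minutes or so, which is what makes rewriting history prohibitively expensive.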
  •  
    well, of course money is trust, and even more so loans and credit, and even more so stock and bond markets. It all represents trust and expectations. However, since the first banks and the first loans 500 years ago, and even as bonds and currencies bring down whole countries (Greece lately) and are mainly controlled by large financial centres and (central) banks, the banks have always been on the winning side no matter what, and that isn't going to change easily. So if you are talking about these new currencies, it would be a new era, not just a new currency. So should Greece convert its debt to bitcoins ;P ?
  •  
    well, from 1936 to 1993 the central bank of France was owned by the state and was supposed to serve the general interest...
LeopoldS

Helix Nebula - Helix Nebula Vision - 0 views

  •  
    The partnership brings together leading IT providers and three of Europe's leading research centres, CERN, EMBL and ESA in order to provide computing capacity and services that elastically meet big science's growing demand for computing power.

    Helix Nebula provides an unprecedented opportunity for the global cloud services industry to work closely on the Large Hadron Collider through the large-scale, international ATLAS experiment, as well as with the molecular biology and Earth observation communities. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two-year pilot phase, during which procurement processes and governance issues for the public/private partnership will be addressed.

    This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.

    Initially based on the needs of European big-science, Helix Nebula ultimately paves the way for a Cloud Computing platform that offers a unique resource to governments, businesses and citizens.
  •  
    "Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed." And here I was thinking cloud computing was old news 3 years ago :)
Francesco Biscani

STLport: An Interview with A. Stepanov - 2 views

  • Generic programming is a programming method that is based in finding the most abstract representations of efficient algorithms.
  • I spent several months programming in Java.
  • for the first time in my life programming in a new language did not bring me new insights
  • ...2 more annotations...
  • it has no intellectual value whatsoever
  • Java is clearly an example of a money oriented programming (MOP).
  •  
    One of the authors of the STL (C++'s Standard Template Library) explains generic programming and slams Java.
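    Stepanov's "most abstract representation of an efficient algorithm" is easier to see in code than in prose. Its natural home is C++ templates and iterator concepts, but the core idea can be sketched even in a dynamic language: write the algorithm once, against the weakest assumptions under which it stays correct. A Python sketch of the idea (not Stepanov's own formulation):

```python
# Generic programming in miniature: this fold only assumes the elements
# support the supplied binary operation, so the same code works for
# numbers, strings, matrices, ... -- the algorithm is written once,
# against the weakest usable interface.
from typing import Callable, Iterable, TypeVar

T = TypeVar("T")

def accumulate(items: Iterable[T], op: Callable[[T, T], T], initial: T) -> T:
    """Fold `items` with `op`, starting from `initial`."""
    result = initial
    for item in items:
        result = op(result, item)
    return result

print(accumulate([1, 2, 3, 4], lambda a, b: a + b, 0))      # 10
print(accumulate(["a", "b", "c"], lambda a, b: a + b, ""))  # abc
print(accumulate([2, 3, 4], lambda a, b: a * b, 1))         # 24
```

    In the STL the same separation of algorithm from representation is achieved with templates, which resolve the abstraction at compile time instead of paying a runtime dispatch cost.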
  • ...6 more comments...
  •  
    "Java is clearly an example of a money oriented programming (MOP)." Exactly. And for the industry it's the money that matters. Whatever mathematicians think about it.
  •  
    It is actually a good thing that it is "MOP" (even though I do not agree with this term): that is what makes it interoperable, light and easy to learn. There is no point in writing fancy code if it does not bring anything to the end-user, but only gives geeks incomprehensible things to discuss in forums. Anyway, I am pretty sure we can find a Java guy slamming C++ ;)
  •  
    Personally, I never understood what the point of Java is, given that: 1) I do not know of any developer (maybe Marek?) who uses it for intellectual pleasure/curiosity/fun, whatever, given the possibility of choice - this to me speaks more loudly about the objective qualities of the language than any industrial-corporate marketing bullshit (for the record, I argue that Python is more interoperable, lighter and easier to learn than Java - which is why, e.g., Google is using it heavily); 2) I have used software developed in Java maybe a total of 5 times on any computer/laptop I have owned over 15 years. I cannot name one single Java project that I find necessary or even useful; for my usage of computers, Java could disappear overnight without my even noticing. Then of course one can argue as much as one wants about the "industry choosing Java", to which I would counter-argue with examples of industry doing stupid things and making absurd choices. But I suppose it would be a kind of pointless discussion, so I'll just stop here :)
  •  
    "At Google, python is one of the 3 "official languages" alongside C++ and Java". Java runs everywhere (the byte code itself); that is, I think, the only reason it became famous. Python, I guess, would be heavier if it were to run in your web browser! I think every language has its pros and cons, but I agree Java is not the answer to everything... Java is used in MATLAB, some web applications, mobile phone apps, ... I would be a bit in trouble if it were to disappear today :(
  •  
    I personally do not believe in interoperability :)
  •  
    Well, I bet you'd notice an overnight disappearance of Java, because half of the internet would vanish... J2EE technologies are just omnipresent there... I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank's online services are written in Java. Certainly not in PHP+MySQL :) Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you to develop robust, reliable, modular, well-integrated etc. software with few errors. And the costs? Well, using Java technologies you can set up enterprise-quality web application servers and get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... For many years now, the central issue in software development has not been implementing algorithms, it's been building applications. And that's where Java outperforms many other technologies. A final remark, because I may be mistakenly taken for an apostle of Java or something... I love the idea of generic programming, C++ is my favourite programming language (and I used to read Stroustrup before sleep), and at leisure time I write programs in Python... But if I were to start a software development company, then, apart from some very niche applications like computer games, it most probably would use Java as its main technology.
  •  
    "I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank online services are written in Java. Certainly not in PHP+MySQL :)" Doing it in C++ would be awesomely crazy, I agree :) But as I see it there are lots of huge websites that operate on PHP, see for instance Facebook. As for the banks and the enterprise market, as a general rule I tend to take with a grain of salt whatever spin comes out of them; in the end, behind every corporate IT decision there is a little smurf just trying to survive and have his back covered :) As they used to say in the old times, "No one ever got fired for buying IBM". "Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you develop robustly, reliably, error-prone, modular, well integrated etc... software. And the costs? Well, using java technologies you can set-up enterprise-quality web application servers, get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... Since many years now, the central issue in software development is not implementing algorithms, it's building applications. And that's where Java outperforms many other technologies." Apart from the IDE considerations (on which I cannot comment, since I'm not an IDE user myself), I do not see how Java beats the competition in this regard (again, Python and the huge software ecosystem surrounding it). My impression is that Java's success is mostly due to Sun pushing it like there is no tomorrow and bundling it with their hardware business.
  •  
    OK, I think there is a bit of everything, right and wrong, but you have to acknowledge that Python is not always the simplest. For info, Facebook uses Java (if you upload a picture, for instance), and PHP is very limited. So definitely, in companies, engineers like you and me select the language; it is not a marketing or political thing. And in the case of fb, they came to the conclusion that PHP and Java don't each do everything, but complement each other. As you say, Python has many things around it, but it might be too much for simple applications. Otherwise, I would seriously be interested in a study of how to implement a Python-like system on board spacecraft and what the advantages are over mixing C, Ada and Java.
Athanasia Nikolaou

Nature Paper: Rivers and streams release more CO2 than previously believed - 6 views

  •  
    Another underestimated source of CO2 is turbulent water. "The stronger the turbulences at the water's surface, the more CO2 is released into the atmosphere. The combination of maps and data revealed that, while the CO2 emissions from lakes and reservoirs are lower than assumed, those from rivers and streams are three times as high as previously believed." Altogether, the emitted CO2 equates to roughly one-fifth of the emissions caused by humans. Yet more stuff to model...
  • ...10 more comments...
  •  
    This could also be a mechanism to counter human CO2 emissions ... the more we emit, the less turbulent the rivers and streams, the less CO2 is emitted there ... makes sense?
  •  
    I guess there is a natural equilibrium there. Once the climate warms up enough for all rivers and streams to evaporate they will not contribute CO2 anymore - which stops their contribution to global warming. So the problem is also the solution (as always).
  •  
    "The source of inland water CO2 is still not known with certainty and new studies are needed to research the mechanisms controlling CO2 evasion globally." This is another source of CO2, and the turbulence in the rivers is independent of our CO2 emissions; it just facilitates the process of releasing CO2 from the waters. Dario, if I understood correctly, you have in mind a finite quantity of CO2 that the atmosphere can accommodate; to my knowledge this is not the case, so I cannot find a relevant feedback there. Johannes, H2O is a powerful greenhouse gas :-)
  •  
    Nasia I think you did not get my point (a joke, really, that Johannes continued) .... by emitting more CO2 we warm up the planet thus drying up rivers and lakes which will, in turn emit less CO2 :) No finite quantity of CO2 in the atmosphere is needed to close this loop ... ... as for the H2O it could just go into non turbulent waters rather than staying into the atmosphere ...
  •  
    Really awkward joke explanation: I got Johannes' joke, but maybe you did not get mine: by warming up the planet to get rid of the rivers and their problems, the water of the rivers will be accommodated in the atmosphere and will, therefore, act as a greenhouse gas.
  •  
    from my previous post: "... as for the H2O it could just go into non turbulent waters rather than staying into the atmosphere ..."
  •  
    I guess the emphasis is on "could"... ;-) Also, everybody knows that rain is cold - so more water in the atmosphere makes the climate colder.
  •  
    do you have the nature paper also? looks like very nice, meticulous, typically German research, lasting over 10 years, with painstakingly many researchers from all over the world involved .... and while important, the total is still only 20% of human emissions ... so a variation in it does not seem to change the overall picture
  •  
    here is the nature paper: http://www.nature.com/nature/journal/v503/n7476/full/nature12760.html I appreciate Johannes' and Dario's jokes, since climate is common ground on which all of us can have an opinion, drawing authority from our experience of the weather. But if I tried to make jokes about materials science or A.I., I would run a high risk of failing(!) :-S Water is a greenhouse gas; rain rather releases latent heat to the environment in order to be formed. Johannes, nice trolling effort ;-) Between this joke and the next ones to come, I would suggest stopping to take a look here, provided you have 10 minutes: how/where rain forms http://www.scribd.com/doc/58033704/Tephigrams-for-Dummies
  •  
    omg
  •  
    Nasia, I thought about your statement carefully - and I cannot agree with you. Water is not a greenhouse gas. It is instead a liquid. Also, I can't believe you keep feeding the troll! :-P But on a more topical note: I think it is an over-simplification to call water a greenhouse gas - water is one of the most important mechanisms in the way Earth handles heat input from the sun. The latent heat that you mention actually cools Earth: solar energy that would otherwise heat Earth's surface is ABSORBED as latent heat by water which consequently evaporates - the same water condenses into rain drops at high altitudes and releases this stored heat. In effect the water cycle is a mechanism of heat transport from low altitude to high altitude where the chance of infrared radiation escaping into space is much higher due to the much thinner layer of atmosphere above (including the smaller abundance of greenhouse gasses). Also, as I know you are well aware, the cloud cover that results from water condensation in the troposphere dramatically increases albedo which has a cooling effect on climate. Furthermore the heat capacity of wet air ("humid heat") is much larger than that of dry air - so any advective heat transfer due to air currents is more efficient in wet air - transporting heat from warm areas to a natural heat sink e.g. polar regions. Of course there are also climate heating effects of water like the absorption of IR radiation. But I stand by my statement (as defended in the above) that rain cools the atmosphere. Oh and also some nice reading material on the complexities related to climate feedback due to sea surface temperature: http://journals.ametsoc.org/doi/abs/10.1175/1520-0442(1993)006%3C2049%3ALSEOTR%3E2.0.CO%3B2
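    The latent-heat transport argument can be put to a quick back-of-envelope number: how much surface heat does evaporation carry aloft per square metre? The ~1 m/yr global-mean evaporation rate used below is a rough textbook figure, my assumption here, not a value from the discussion above:

```python
# Back-of-envelope estimate of the global-mean latent heat flux from
# the surface: energy carried upward by evaporating water.
L_V = 2.26e6              # latent heat of vaporization of water, J/kg
RHO_WATER = 1000.0        # density of liquid water, kg/m^3
EVAP_RATE = 1.0           # assumed global-mean evaporation, m/yr
SECONDS_PER_YEAR = 3.156e7

mass_flux = RHO_WATER * EVAP_RATE / SECONDS_PER_YEAR  # kg / (m^2 s)
latent_heat_flux = L_V * mass_flux                    # W / m^2
print(round(latent_heat_flux, 1))  # 71.6 -- same order as the observed ~80 W/m^2
```

    That this flux is comparable to the ~100 W/m^2 scale of the surface radiation budget is exactly why the water cycle matters so much as a heat-transport mechanism, independently of water vapour's role as an absorber.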
  •  
    I enjoy trolling conversations when there is a gain for both sides at the end :-). I had to check some of the facts in order to explain myself properly. The IPCC report lists the greenhouse gases here, and water vapour is included: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html Honestly, I read only the abstract of the article you posted, which is a very interesting hypothesis on the mechanism regulating sea surface temperature, but it is very localized to the tropics (vivid convection, storms), a region in which I have very little expertise and which is difficult to study because it has non-hydrostatic dynamics. The only thing I can comment on there is that the authors assume constant relative humidity for the bottom layer, supplied by the oceanic surface, which limits applying the concept to other regions of the Earth. Also, during the conversation we may be confusing a greenhouse gas with the Radiative Forcing of each greenhouse gas: I see your point about the latent heat trapped in the water vapour, and I agree, but the effect of the water is that it traps, even as latent heat, an amount of longwave radiation (LR) that would otherwise escape back to space. That is its identity as a greenhouse gas; and here is an image showing the absorption bands in the atmosphere and how important water is, without vain authority-based arguments that miss the explanation in the end: http://www.google.nl/imgres?imgurl=http://www.solarchords.com/uploaded/82/87-33833-450015_44absorbspec.gif&imgrefurl=http://www.solarchords.com/agw-science/4/greenhouse--1-radiation/33784/&h=468&w=458&sz=28&tbnid=x2NtfKh5OPM7lM:&tbnh=98&tbnw=96&zoom=1&usg=__KldteWbV19nVPbbsC4jsOgzCK6E=&docid=cMRZ9f22jbtYPM&sa=X&ei=SwynUq2TMqiS0QXVq4C4Aw&ved=0CDkQ9QEwAw
LeopoldS

Sex differences in the structural connectome of the human brain - 0 views

  •  
    it seems that there are indications that we are differently wired .... Sex differences in human behavior show adaptive complementarity: Males have better motor and spatial abilities, whereas females have superior memory and social cognition skills. Studies also show sex differences in human brains but do not explain this complementarity. In this work, we modeled the structural connectome using diffusion tensor imaging in a sample of 949 youths (aged 8-22 y, 428 males and 521 females) and discovered unique sex differences in brain connectivity during the course of development. Connection-wise statistical analysis, as well as analysis of regional and global network measures, presented a comprehensive description of network characteristics. In all supratentorial regions, males had greater within-hemispheric connectivity, as well as enhanced modularity and transitivity, whereas between-hemispheric connectivity and cross-module participation predominated in females. However, this effect was reversed in the cerebellar connections. Analysis of these changes developmentally demonstrated differences in trajectory between males and females mainly in adolescence and in adulthood. Overall, the results suggest that male brains are structured to facilitate connectivity between perception and coordinated action, whereas female brains are designed to facilitate communication between analytical and intuitive processing modes.
  •  
    I like this abstract: sex, sex, sex, sex, SEX, SEX, SEX, SEX...!!! I wonder if the "sex differences" are related to gender-specific differences...
LeopoldS

An optical lattice clock with accuracy and stability at the 10-18 level : Nature : Natu... - 0 views

  •  
    Progress in atomic, optical and quantum science [1, 2] has led to rapid improvements in atomic clocks. At the same time, atomic clock research has helped to advance the frontiers of science, affecting both fundamental and applied research. The ability to control quantum states of individual atoms and photons is central to quantum information science and precision measurement, and optical clocks based on single ions have achieved the lowest systematic uncertainty of any frequency standard [3, 4, 5]. Although many-atom lattice clocks have shown advantages in measurement precision over trapped-ion clocks [6, 7], their accuracy has remained 16 times worse [8, 9, 10]. Here we demonstrate a many-atom system that achieves an accuracy of 6.4 × 10^−18, which is not only better than a single-ion-based clock, but also reduces the required measurement time by two orders of magnitude. By systematically evaluating all known sources of uncertainty, including in situ monitoring of the blackbody radiation environment, we improve the accuracy of optical lattice clocks by a factor of 22. This single clock has simultaneously achieved the best known performance in the key characteristics necessary for consideration as a primary standard: stability and accuracy. More stable and accurate atomic clocks will benefit a wide range of fields, such as the realization and distribution of SI units [11], the search for time variation of fundamental constants [12], clock-based geodesy [13] and other precision tests of the fundamental laws of nature. This work also connects to the development of quantum sensors and many-body quantum state engineering [14] (such as spin squeezing) to advance measurement precision beyond the standard quantum limit.
LeopoldS

Ministry of Science and Technology of the People's Republic of China - 0 views

  •  
    University Alliance for Low Carbon Energy   Three universities, Tsinghua University, the University of Cambridge, and the Massachusetts Institute of Technology, formed an alliance on November 15, 2009 to advocate low carbon energy and climate change adaptation. The alliance will mainly work on six major areas: clean coal technology and CCS, homebuilding energy efficiency, industrial energy efficiency and sustainable transport, biomass and other renewable energy, advanced nuclear energy, intelligent power grids, and energy policies/planning. A steering panel made up of senior experts from the three universities (two from each) will be established to review, evaluate, and endorse the goals, projects, fund-raising activities, and collaborations under the alliance. With headquarters on the campus of Tsinghua University and branch offices at the other two universities, the alliance will be chaired by a scientist selected from Tsinghua University.   According to a briefing, the alliance will need a budget of USD 3-5 million, mainly from donations from government, industry, and all walks of life. The R&D findings derived from the alliance are expected to find applications in improving people's lives.
Luzi Bergamin

First circuit breaker for high voltage direct current - 2 views

  •  
    Doesn't really sound sexy, but this is of utmost importance for next generation grids for renewable energy.
  •  
    I agree on the significance indeed - a small boost also for my favourite Desertec project ... though their language is a bit too "grandiose": "ABB has successfully designed and developed a hybrid DC breaker after years of research, functional testing and simulation in the R&D laboratories. This breaker is a breakthrough that solves a technical challenge that has been unresolved for over a hundred years and was perhaps one of the main influencers in the 'war of currents' outcome. The 'hybrid' breaker combines mechanical and power-electronics switching that enables it to interrupt power flows equivalent to the output of a nuclear power station within 5 milliseconds - that's as fast as a honey bee's wing flap, and more than 30 times faster than an Olympic 100-meter medalist's reaction to the starter's gun! But it's not just about speed. The challenge was to do it 'ultra-fast' with minimal operational losses, and this has been achieved by combining advanced ultrafast mechanical actuators with our in-house semiconductor IGBT valve technologies or power electronics (watch video: Hybrid HVDC Breaker - How does it work). In terms of significance, this breaker is a 'game changer'. It removes a significant stumbling block in the development of HVDC transmission grids, where planning can start now. These grids will enable interconnection and load balancing between HVDC power superhighways integrating renewables and transporting bulk power across long distances with minimal losses. DC grids will enable sharing of resources like lines and converter stations, providing reliability and redundancy in a power network in an economically viable manner with minimal losses. ABB's new hybrid HVDC breaker, in simple terms, will enable the transmission system to maintain power flow even if there is a fault on one of the lines. This is a major achievement for the global R&D team in ABB who have worked for years on the challeng
LeopoldS

David Miranda, schedule 7 and the danger that all reporters now face | Alan Rusbridger ... - 0 views

  •  
    During one of these meetings I asked directly whether the government would move to close down the Guardian's reporting through a legal route - by going to court to force the surrender of the material on which we were working. The official confirmed that, in the absence of handover or destruction, this was indeed the government's intention. Prior restraint, near impossible in the US, was now explicitly and imminently on the table in the UK. But my experience over WikiLeaks - the thumb drive and the first amendment - had already prepared me for this moment. I explained to the man from Whitehall about the nature of international collaborations and the way in which, these days, media organisations could take advantage of the most permissive legal environments. Bluntly, we did not have to do our reporting from London. Already most of the NSA stories were being reported and edited out of New York. And had it occurred to him that Greenwald lived in Brazil?

    The man was unmoved. And so one of the more bizarre moments in the Guardian's long history occurred - with two GCHQ security experts overseeing the destruction of hard drives in the Guardian's basement just to make sure there was nothing in the mangled bits of metal which could possibly be of any interest to passing Chinese agents. "We can call off the black helicopters," joked one as we swept up the remains of a MacBook Pro.

    Whitehall was satisfied, but it felt like a peculiarly pointless piece of symbolism that understood nothing about the digital age. We will continue to do patient, painstaking reporting on the Snowden documents, we just won't do it in London. The seizure of Miranda's laptop, phones, hard drives and camera will similarly have no effect on Greenwald's work.

    The state that is building such a formidable apparatus of surveillance will do its best to prevent journalists from reporting on it. Most journalists can see that. But I wonder how many have truly understood
  •  
    Sarah Harrison is a lawyer who has been staying with Snowden in Hong Kong and Moscow. She is a UK citizen and her family is there. After the Miranda case, in which the partner of the reporter was detained at the airport, can Sarah return home safely? Will her family be pressured by the secret services? http://www.bbc.co.uk/news/world-latin-america-23759834
jmlloren

Exotic matter : Insight : Nature - 5 views

shared by jmlloren on 03 Aug 10 - Cached
LeopoldS liked it
  •  
    Trends in materials and condensed matter. Check out the topological insulators. amazing field.
  • ...12 more comments...
  •  
    Apparently very interesting; will it survive the short hype? Relevant work describing mirror charges of topological insulators and the classical boundary conditions was done by Ismo and Ari. But the two communities don't know each other, so they are never cited. Also a way to produce new things...
  •  
    Thanks for noticing! Indeed, I had no idea that Ari (I don't know Ismo) was involved in the field. Was it before Kane's proposal or more recently? What I mostly like is that semiconductors are good candidates for 3D TIs; however, I got lost in the quantum-field jargon. Yesterday I got a headache trying to follow the Majorana fermions, the merons, skyrmions, axions, and so on. Luzi, are all these things familiar to you?
  •  
    Ismo Lindell described in the early 90's the mirror charge of what is now called a topological insulator. He says that similar results were obtained already at the beginning of the 20th century... Ismo Lindell and Ari Sihvola have in recent years discussed engineering aspects of PEMCs (perfect electromagnetic conductors), which are more or less classical analogues of topological insulators. Fundamental aspects of PEMCs have been well known in high-energy physics for a long time; recent works are mainly due to Friedrich Hehl and Yuri Obukhov. All these works are purely classical, so there is no charge quantisation, no consideration of electron spin, etc. About Majorana fermions: yes, I spent several years of research on that topic. Axions: a topological state, of course, trivial :-) Merons and skyrmions are also topological states, but I'm less familiar with them.
  •  
    "Non-Abelian systems [1,2] contain composite particles that are neither fermions nor bosons and have a quantum statistics that is far richer than that offered by the fermion-boson dichotomy. The presence of such quasiparticles manifests itself in two remarkable ways. First, it leads to a degeneracy of the ground state that is not based on simple symmetry considerations and is robust against perturbations and interactions with the environment. Second, an interchange of two quasiparticles does not merely multiply the wavefunction by a sign, as is the case for fermions and bosons. Rather, it takes the system from one ground state to another. If a series of interchanges is made, the final state of the system will depend on the order in which these interchanges are being carried out, in sharp contrast to what happens when similar operations are performed on identical fermions or bosons." Wow, this paper by Stern reads really weird ... have any of you ever looked into this?
  •  
    C'mon Leopold, it's as trivial as the topological states, AKA axions! Regarding the question, not me!
  •  
    Just looked up the Wikipedia entry on axions ... at least they show some creativity in naming: "In supersymmetric theories the axion has both a scalar and a fermionic superpartner. The fermionic superpartner of the axion is called the axino, the scalar superpartner is called the saxion. In some models, the saxion is the dilaton. They are all bundled up in a chiral superfield. The axino has been predicted to be the lightest supersymmetric particle in such a model.[24] In part due to this property, it is considered a candidate for the composition of dark matter.[25]"
  •  
    Thanks, Leopold. Sorry, Luzi, for being ironic about the triviality of the axions. Now Leo has confirmed to me that it is indeed a trivial matter. I have problems with models where EVERYTHING is involved.
  •  
    Well, that's the theory of everything, isn't it? Seriously: I don't think that theoretically there is a lot of new stuff here. Topological aspects of (non-Abelian) theories became extremely popular in the context of string theory. The reason is very simple: topological theories are much simpler than "normal" ones, and since string theory is far too complicated to be solved anyway, people just consider purely topological theories, then claim that this has something to do with the real world, which of course is plainly wrong. So what I think is new about these topological insulators are the claims that one can actually fabricate a material which more or less accurately mimics a topological theory, and that these materials are of practical use. Still, they are a little bit the poor man's version of the topological theories fundamental physicists like to look at, since electrodynamics is an Abelian theory.
  •  
    I have the feeling, not the knowledge, that you are right. However, I think that the implications of this light quantum field effects are great. The fact of being able to sustain two currents polarized in spin is a technological breakthrough.
  •  
    Not sure how much I can contribute to your apparently educated debate here, but if I remember well from my master's work, these non-Abelian theories were anything but "simple", as Luzi puts it ... And from a different perspective: to me, the fact that such non-Abelian systems can be described so nicely indicates that they should, in one way or another, also appear in Nature (I would be very surprised if not). Though this is of course no argument that makes string theory any better or closer to what Luzi called reality ....
  •  
    Well, electrodynamics remains an Abelian theory. From the theoretical point of view this is less interesting than non-Abelian ones, since in 4D the fibre bundle of a U(1) theory is trivial (great buzz words, eh!) But in topological insulators the point of view is slightly different since one always has the insulator (topological theory), its surrounding (propagating theory) and most importantly the interface between the two. This is a new situation that people from field and string theory were not really interested in.
  •  
    Guys... how would you explain this to your grandmothers?
  •  
    *you* tried *your* best .... ??
LeopoldS

Peter Higgs: I wouldn't be productive enough for today's academic system | Science | Th... - 1 views

  •  
    What an interesting personality ... very sympathetic. Peter Higgs, the British physicist who gave his name to the Higgs boson, believes no university would employ him in today's academic system because he would not be considered "productive" enough.

    The emeritus professor at Edinburgh University, who says he has never sent an email, browsed the internet or even made a mobile phone call, published fewer than 10 papers after his groundbreaking work, which identified the mechanism by which subatomic material acquires mass, was published in 1964.

    He doubts a similar breakthrough could be achieved in today's academic culture, because of the expectations on academics to collaborate and keep churning out papers. He said: "It's difficult to imagine how I would ever have enough peace and quiet in the present sort of climate to do what I did in 1964."

    Speaking to the Guardian en route to Stockholm to receive the 2013 Nobel prize for science, Higgs, 84, said he would almost certainly have been sacked had he not been nominated for the Nobel in 1980.

    Edinburgh University's authorities then took the view, he later learned, that he "might get a Nobel prize - and if he doesn't we can always get rid of him".

    Higgs said he became "an embarrassment to the department when they did research assessment exercises". A message would go around the department saying: "Please give a list of your recent publications." Higgs said: "I would send back a statement: 'None.' "

    By the time he retired in 1996, he was uncomfortable with the new academic culture. "After I retired it was quite a long time before I went back to my department. I thought I was well out of it. It wasn't my way of doing things any more. Today I wouldn't get an academic job. It's as simple as that. I don't think I would be regarded as productive enough."

    Higgs revealed that his career had also been jeopardised by his disagreements in the 1960s and 7
  •  
    interesting one - Luzi will like it :-)
LeopoldS

Plant sciences: Plants drink mineral water : Nature : Nature Publishing Group - 1 views

  •  
    Here we go: we might not need liquid water on Mars after all to get some nice flowering plants there! ... and terraform? :-) Thirsty plants can extract water from the crystalline structure of gypsum, a rock-forming mineral found in soil on Earth and Mars.

    Some plants grow on gypsum outcrops and remain active even during dry summer months, despite having shallow roots that cannot reach the water table. Sara Palacio of the Pyrenean Institute of Ecology in Jaca, Spain, and her colleagues compared the isotopic composition of sap from one such plant, called Helianthemum squamatum (pictured), with gypsum crystallization water and water found free in the soil. The team found that up to 90% of the plant's summer water supply came from gypsum.

    The study has implications for the search for life in extreme environments on this planet and others.

    Nature Commun 5, 4660 (2014)
  •  
    Very interesting indeed. Attention must be paid to the form of calcium sulfate found on Mars. If it is hydrated (gypsum, CaSO4·2H2O) it works, but if it is dehydrated there is no water for the roots to take in. The Curiosity rover tries to find out, but has uncertainty in recognising the presence of hydrogen in the mineral. Quoting: "(...) 3.2 Hydration state of calcium sulfates Calcium sulfates occur as a non-hydrated phase (anhydrite, CaSO4) or as one of two hydrated phases (bassanite, CaSO4.1/2H2O, which can contain a somewhat variable water content, and gypsum, CaSO4.2H2O). ChemCam identifies the presence of hydrogen at 656 nm, as already found in soils and dust [Meslin et al., 2013] and within fluvial conglomerates [Williams et al., 2013]. However, the quantification of H is strongly affected by matrix effects [Schröder et al., 2013], i.e. effects including major or even minor element chemistry, optical and mechanical properties, that can result in variations of emission lines unrelated to actual quantitative variations of the element in question in the sample. Due to these effects, discriminating between bassanite and gypsum is difficult. (...)"
Francesco Biscani

What Should We Teach New Software Developers? Why? | January 2010 | Communications of t... - 3 views

shared by Francesco Biscani on 15 Jan 10 - Cached
Dario Izzo liked it
  • Industry wants to rely on tried-and-true tools and techniques, but is also addicted to dreams of "silver bullets," "transformative breakthroughs," "killer apps," and so forth.
  • This leads to immense conservatism in the choice of basic tools (such as programming languages and operating systems) and a desire for monocultures (to minimize training and deployment costs).
  • The idea of software development as an assembly line manned by semi-skilled interchangeable workers is fundamentally flawed and wasteful.
  •  
    Nice opinion piece by the creator of C++ Bjarne Stroustrup. Substitute "industry" with "science" and many considerations still apply :)
  •  
    "for many, "programming" has become a strange combination of unprincipled hacking and invoking other people's libraries (with only the vaguest idea of what's going on). The notions of "maintenance" and "code quality" are typically forgotten or poorly understood." ... seen so many of those students :( And regarding "My suggestion is to define a structure of CS education based on a core plus specializations and application areas": I am not saying the Austrian university system is good, but e.g. the CS degrees in Vienna are done like this. There is a core which is the same for everybody for 4-5 semesters, and then you specialise in e.g. software engineering or computational management and so forth, and then after 2 semesters you specialise again into one of, I think, 7 or 8 master's degrees ... It does not make it easy for industry to hire people, as I have noticed; they sometimes really have no clue what the difference is between Software Engineering and Computational Intelligence, at least in HR :/
Joris _

What the strange persistence of rockets can teach us about innovation. - 5 views

  •  
    If I could write, this is exactly what I would write about rockets, GO, and so on... :) "we are decadent and tired. But none of the bright young up-and-coming economies seem to be interested in anything besides aping what the United States and the USSR did years ago. We may, in other words, need to look beyond strictly U.S.-centric explanations for such failures of imagination and initiative. ... Those are places we need to go if we are not to end up as the Ottoman Empire of the 21st century, and yet in spite of all of the lip service that is paid to innovation in such areas, it frequently seems as though we are trapped in a collective stasis." "But those who do concern themselves with the formal regulation of "technology" might wish to worry less about possible negative effects of innovation and more about the damage being done to our environment and our prosperity by the mid-20th-century technologies that no sane and responsible person would propose today, but in which we remain trapped by mysterious and ineffable forces."
  • ...4 more comments...
  •  
    Very interesting, though I'm amused how the author tends to (subconsciously?) shift the blame to non-US dictators :-) The suggestion that, in the absence of the Cold War, the US might have abandoned the H-bomb and ICBM programmes is ridiculous.
  •  
    Interesting, this was written by Neal Stephenson ( http://en.wikipedia.org/wiki/Neal_Stephenson#Works ). Great article indeed. The videos of the event from which this arose might be equally interesting: Here Be Dragons: Governing a Technologically Uncertain Future http://newamerica.net/events/2011/here_be_dragons "To employ a commonly used metaphor, our current proficiency in rocket-building is the result of a hill-climbing approach; we started at one place on the technological landscape-which must be considered a random pick, given that it was chosen for dubious reasons by a maniac-and climbed the hill from there, looking for small steps that could be taken to increase the size and efficiency of the device."
  •  
    You know Luis, when I read this quote, I couldn't help thinking about GO, which would be kind of ironic considering the context, but not far from what happens in the field :p
  •  
    Fantastic!!!
  •  
    Would have been nice if it were historically more accurate and less polemic / superficial
  •  
    Mmmh... the wheel is also an old invention... there is an idea behind it, but this article is not very deep, and I really don't think the problem is innovation and a lack of creative young people!!! Look at what is done in the financial sector...