
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only a few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum, .... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion of why in the scientific world these open models are having so many difficulties is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of changes. There is also the fact that the barrier of entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open science journals vs traditional journals, conservatism of science community wrt programmers (to me one of the reasons for this might be the average age of both groups, which is probably more than 10 years apart ...) and then using emailing wrt other collaboration tools .... .... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame and your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question on the ACT team work is one that is coming back continuously, and so far it has always boiled down to the question of how much there needs and should be a team project to which everybody in the team contributes in his / her way, or how much we should let smaller, flexible teams within the team form and progress, more following a bottom-up initiative than imposing one from top-down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team work, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc. about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And this is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only brains, and not the people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it is the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that now that it is mathematicians who are doing it, it becomes somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant of a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework where this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for the Boinc-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, and whose names go on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising for me that this type of work has first been done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that i should visit you guys at estec... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the commission funded it, the project Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains is not my thing).. I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather how to program them well enough ...... and that's not bringing us anywhere near the singularity
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed were the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most, I didn't get the retail store pun at the end of article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain; this will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
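The sensitivity to initial conditions invoked above can be made concrete with a minimal sketch (my own illustration, not from the thread): two Lorenz-system trajectories started a billionth apart, integrated with simple forward Euler at standard parameter values, end up macroscopically separated. Knowing the equations exactly does not buy long-term prediction of one particular trajectory.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (standard parameters)."""
    x, y, z = state
    return (
        x + dt * sigma * (y - x),
        y + dt * (x * (rho - z) - y),
        z + dt * (x * y - beta * z),
    )

def separation(steps):
    """Euclidean distance between two trajectories started 1e-9 apart."""
    a = (1.0, 1.0, 1.0)
    b = (1.0, 1.0, 1.0 + 1e-9)
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

# The 1e-9 initial difference grows exponentially until it saturates
# at roughly the diameter of the attractor.
print(separation(100))   # after 1 time unit: still tiny
print(separation(3000))  # after 30 time units: macroscopic
```

Both "brains" stay on the same attractor, which is the point made above: the statistics are reproducible even when the individual trajectory is not.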
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a small tiny functional unit by adding neurons and complexity slowly and in a way that in a stable way increases the "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting inputs of our sensors to the world and steering the body making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world" creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life, the whole environment+bodies&brains is simulated. You also have the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it is valid from extremely simple creatures to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is, your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what, to a computer scientist, appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, the rest doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation of each step from its desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you could make. All I wanted to point out is that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture can you see, how abstract are the concepts and ideas you can form), another is the breadth of the intelligence (how well can you actually generalize, how encompassing those concepts are, and what is the level of detail in which you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do is for a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
santecarloni

Hydrogel electronics makes its debut - physicsworld.com - 0 views

  •  
    A new type of hydrogel could make for high-performance energy-storage electrodes and biosensors.
santecarloni

'Carpet' makes objects invisible to sound - physicsworld.com - 0 views

  •  
    Researchers in the US have made a "carpet cloak" that makes objects invisible to sound waves. The device is the first such cloak to work in air and could be used to improve the acoustics in concert halls or even to control unwanted noise.
andreiaries

Mars Exploration Rover Mission: Press Releases - 6 views

  •  
    "has a new capability to make its own choices about whether to make additional observations of rocks that it spots on arrival at a new location"
Christophe Praz

Scientific method: Defend the integrity of physics - 2 views

  •  
    Interesting article about theoretical physics theories vs. experimental verification. Can we state that a theory can be so good that its existence supplants the need for data and testing? If a theory is proved to be untestable experimentally, can we still say that it is a scientific theory? (not in my opinion)
  •  
    There is an interesting approach by Feynman: that it does not make sense to describe something of which we cannot measure the consequences. So a theory that is so removed from experiment that it cannot be backed by it is pointless and of no consequence. It is a bit like the statement "if a tree falls in the forest and nobody is there to hear it, does it make a sound?". We would typically extrapolate to say that it does make a sound. But actually nobody knows - you would have to take some kind of measurement. And even more fundamentally, it does not make any difference! For all intents and purposes, there is no point in forcing a prediction that you cannot measure and that therefore does not reflect an event in your world.
  •  
    "Mathematics is the model of the universe, not the other way round" - M. R.
Guido de Croon

Robotic insects make first controlled flight - 3 views

  •  
    The Robobee takes off without guide wires! It is still powered via a wire, and the control is done with the help of a VICON system and on an external computer, but this still is an amazing feat!
  •  
    The way they make this thing is just as impressive. The manufacturing technique is "pop-up book" folding, a method that has been developed by the same group and that allows two-dimensional monolithic MEMS structures to be easily assembled into a 3D structure. I actually put this as an item of the "Technology List 2020" on the wiki this morning.
  •  
    I agree, manufacturing is the amazing thing here ..... as soon as the power-consumption/density problem is solved these things will really take off :)
johannessimon81

Asteroid mining could lead to self-sustaining space stations - VIDEO!!! - 5 views

  •  
    Let's all start up some crazy space companies together: harvest hydrogen on Jupiter, trap black holes as unlimited energy supplies, use high temperatures close to the sun to bake bread! Apparently it is really easy to do just about anything and Deep Space Industries is really good at it. Plus: in their video they show Mars One concepts while referring to ESA and NASA.
  •  
    I really wonder what they wanna mine out there? Is there such a high demand for... rocks?! And do they really think they can collect fuel somewhere?
  •  
    Well, they want to avoid having to send resources into space and rather make it all in space. The first mission is just to find possible asteroids worth mining and bring some asteroid rocks to Earth for analysis. In 2020 they want to start mining for precious metals (e.g. nickel), water and such. They also want to put up a 3D printer in space that would extract, separate and/or fuse asteroidal resources together and then print the needed structures directly in space. And even though on Earth it's just rocks, in space a tonne of them has an estimated value of 1 million dollars (as opposed to 4000 USD on Earth). Although I like the idea, I would put DSI in the same basket as those Mars One nutters 'cause it's not gonna happen.
  •  
    I will get excited once they demonstrate they can put a random rock into their machine and out comes a bicycle (then the obvious next step is a space station).
  •  
    hmm, aside from the technological feasibility, their approach should still be taken as an example, and deserves a little support. By tackling such difficult problems, they will devise innovative stuff. Plus, even if this doomed-to-fail endeavour may still seem useless to you, it creates jobs and makes people think... that is already a positive! Final word: how is this different from what Planetary Resources plans to do? That too is founded by a bunch of so-called "nuts"... (http://www.planetaryresources.com/team/)! A little thought: "We must never be afraid to go too far, for success lies just beyond" - Proust
  •  
    I don't think that this proposal is very different from the one by Planetary Resources. My scepticism is rooted in the fact that - at least to my knowledge - fully autonomous mining technology has not even been demonstrated on Earth. I am sure that their proposition is in principle (technically) feasible but at the same time I do not believe that a privately funded company will find enough people to finance a multi-billion dollar R&D project that may or may not lead to an economically sensible outcome, i.e. generate profit (not income - you have to pay back the R&D cost first) within the next 25 years. And on that timescale anything can happen - for all we know we will all be slaves to the singularity by the time they start mining. I do think that people who tackle difficult problems deserve support - and lots of it. It seems however that up till now they have only tackled making a promotional video... About job creation (sorry for the sarcasm): if usefulness is not so important my proposal would be to give shovels to two people - person A digs a hole and person B fills up the same hole at the same time. The good thing about this is that you can increase the number of jobs created simply by handing out more shovels.
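The economics argued over in this thread boil down to avoided launch cost. A back-of-the-envelope sketch of where a "1 M$ per tonne in orbit" figure can come from - the launch cost per kg is my own assumed round number; only the 4000 USD/tonne and 1 M$/tonne figures come from the comments above:

```python
# Back-of-the-envelope: why a tonne of asteroid rock could be "worth" ~1 M$ in orbit.
# LAUNCH_COST_PER_KG is an assumed ballpark, not a figure quoted in the thread.

LAUNCH_COST_PER_KG = 1_000     # US$/kg to orbit (assumption)
VALUE_ON_EARTH = 4_000         # US$/tonne of raw rock (figure from the thread)
KG_PER_TONNE = 1_000

def value_in_orbit(value_on_earth_per_tonne, launch_cost_per_kg):
    """Avoided launch cost dominates the in-orbit value of bulk material."""
    return value_on_earth_per_tonne + launch_cost_per_kg * KG_PER_TONNE

print(value_in_orbit(VALUE_ON_EARTH, LAUNCH_COST_PER_KG))  # → 1004000
```

In other words, material already in orbit is valuable mostly because somebody did not have to pay to launch it - which is exactly DSI's pitch, and also exactly why the business case collapses if launch costs drop.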
Marcus Maertens

Making a mini Mona Lisa: Nanotechnique creates image on surface less than a third the h... - 2 views

  •  
    Making the Mona Lisa with ThermoChemical NanoLithography.
Athanasia Nikolaou

Nature Paper: Rivers and streams release more CO2 than previously believed - 6 views

  •  
    Another underestimated source of CO2 is turbulent waters. "The stronger the turbulences at the water's surface, the more CO2 is released into the atmosphere. The combination of maps and data revealed that, while the CO2 emissions from lakes and reservoirs are lower than assumed, those from rivers and streams are three times as high as previously believed." Altogether the emitted CO2 equates to roughly one-fifth of the emissions caused by humans. Yet more stuff to model...
  •  
    This could also be a mechanism to counter human CO2 emissions... the more we emit, the less turbulent rivers and streams become, and the less CO2 is emitted there... makes sense?
  •  
    I guess there is a natural equilibrium there. Once the climate warms up enough for all rivers and streams to evaporate they will not contribute CO2 anymore - which stops their contribution to global warming. So the problem is also the solution (as always).
  •  
    "The source of inland water CO2 is still not known with certainty and new studies are needed to research the mechanisms controlling CO2 evasion globally." This is another source of CO2, and the turbulence in the rivers is independent of our CO2 emissions - it just facilitates the release of CO2 from the waters. Dario, if I understood correctly, you have in mind a finite quantity of CO2 that the atmosphere can accommodate, and to my knowledge that is not the case, so I cannot find a relevant feedback there. Johannes, H2O is a powerful greenhouse gas :-)
  •  
    Nasia I think you did not get my point (a joke, really, that Johannes continued) .... by emitting more CO2 we warm up the planet thus drying up rivers and lakes which will, in turn emit less CO2 :) No finite quantity of CO2 in the atmosphere is needed to close this loop ... ... as for the H2O it could just go into non turbulent waters rather than staying into the atmosphere ...
  •  
    Really awkward joke explanation: I got Johannes's joke, but maybe you did not get mine: by warming up the planet to get rid of the rivers and their problems, the water of the rivers will be accommodated in the atmosphere - hence the greenhouse effect of water.
  •  
    from my previous post: "... as for the H2O it could just go into non turbulent waters rather than staying into the atmosphere ..."
  •  
    I guess the emphasis is on "could"... ;-) Also, everybody knows that rain is cold - so more water in the atmosphere makes the climate colder.
  •  
    do you have the Nature paper too? It looks like very nice, meticulous, typically German research, lasting over 10 years with painstakingly many researchers from all over the world involved... and while important, the total is still only 20% of human emissions... so a variation in it does not seem to change the overall picture
  •  
    here is the Nature paper: http://www.nature.com/nature/journal/v503/n7476/full/nature12760.html I appreciate Johannes' and Dario's jokes, since climate is the common ground on which all of us can have an opinion, drawing authority from experiencing the weather. But if I tried to make jokes about materials science or A.I., I would run a high risk of failing(!) :-S Water is a greenhouse gas; rain rather releases latent heat to the environment in order to form. Johannes, nice trolling effort ;-) Between this and the jokes to come, I would suggest taking a look here, provided you have 10 minutes, on how/where rain forms: http://www.scribd.com/doc/58033704/Tephigrams-for-Dummies
  •  
    omg
  •  
    Nasia, I thought about your statement carefully - and I cannot agree with you. Water is not a greenhouse gas. It is instead a liquid. Also, I can't believe you keep feeding the troll! :-P But on a more topical note: I think it is an over-simplification to call water a greenhouse gas - water is one of the most important mechanisms in the way Earth handles heat input from the sun. The latent heat that you mention actually cools Earth: solar energy that would otherwise heat Earth's surface is ABSORBED as latent heat by water which consequently evaporates - the same water condenses into rain drops at high altitudes and releases this stored heat. In effect the water cycle is a mechanism of heat transport from low altitude to high altitude where the chance of infrared radiation escaping into space is much higher due to the much thinner layer of atmosphere above (including the smaller abundance of greenhouse gasses). Also, as I know you are well aware, the cloud cover that results from water condensation in the troposphere dramatically increases albedo which has a cooling effect on climate. Furthermore the heat capacity of wet air ("humid heat") is much larger than that of dry air - so any advective heat transfer due to air currents is more efficient in wet air - transporting heat from warm areas to a natural heat sink e.g. polar regions. Of course there are also climate heating effects of water like the absorption of IR radiation. But I stand by my statement (as defended in the above) that rain cools the atmosphere. Oh and also some nice reading material on the complexities related to climate feedback due to sea surface temperature: http://journals.ametsoc.org/doi/abs/10.1175/1520-0442(1993)006%3C2049%3ALSEOTR%3E2.0.CO%3B2
  •  
    I enjoy trolling conversations when there is a gain for both sides at the end :-). I had to check up on some of the facts in order to explain myself properly. The IPCC report lists the greenhouse gases here, and water vapour is included: http://www.ipcc.ch/publications_and_data/ar4/wg1/en/faq-2-1.html Honestly, I read only the abstract of the article you posted, which presents a very interesting hypothesis on the mechanism regulating sea surface temperature, but it is very localized to the tropics (vivid convection, storms), a region in which I have very little expertise and which is difficult to study because of its non-hydrostatic dynamics. The only thing I can comment on there is that the authors assume constant relative humidity for the bottom layer, supplied by the oceanic surface, which limits the application of the concept to other regions of the Earth. Also, we may be confusing in this conversation the greenhouse gases themselves with the radiative forcing of each greenhouse gas: I see your point about the latent heat trapped in the water vapour, and I agree, but the effect of the water is that it traps, even as latent heat, an amount of LR that would otherwise escape back to space. That is its greenhouse-gas identity, and here is an image showing the absorption bands in the atmosphere and how important water is, without vain authority-based arguments that miss the explanation in the end: http://www.google.nl/imgres?imgurl=http://www.solarchords.com/uploaded/82/87-33833-450015_44absorbspec.gif&imgrefurl=http://www.solarchords.com/agw-science/4/greenhouse--1-radiation/33784/&h=468&w=458&sz=28&tbnid=x2NtfKh5OPM7lM:&tbnh=98&tbnw=96&zoom=1&usg=__KldteWbV19nVPbbsC4jsOgzCK6E=&docid=cMRZ9f22jbtYPM&sa=X&ei=SwynUq2TMqiS0QXVq4C4Aw&ved=0CDkQ9QEwAw
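Since the thread turned to latent heat: a one-line estimate of how much heat the water cycle actually carries off the surface, supporting the "rain cools the surface" side of the argument. All numbers are textbook ballparks of my own choosing, not figures from the Nature paper or the IPCC report:

```python
# Rough estimate of the latent heat flux pumped from the surface to altitude
# by evaporation/precipitation. Ballpark inputs, order-of-magnitude result.

L_VAP = 2.5e6            # J/kg, latent heat of vaporization near 15 degC
PRECIP = 1.0             # m/yr, global mean precipitation (~1000 kg/m^2/yr)
RHO_WATER = 1000.0       # kg/m^3
SECONDS_PER_YEAR = 3.156e7

mass_flux = PRECIP * RHO_WATER / SECONDS_PER_YEAR   # kg/m^2/s evaporated
latent_heat_flux = L_VAP * mass_flux                # W/m^2

print(round(latent_heat_flux, 1))  # ~79 W/m^2
```

That ~80 W/m² moved from the surface to the altitude where it is released is a substantial term in the surface energy budget - which is why both sides of the joke above have a kernel of truth: water vapour warms via absorption, while the water *cycle* cools the surface.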
Chritos Vezyri

New fabrication technique could provide breakthrough for solar energy systems - 3 views

  •  
    The principle behind this is the nantenna.
  •  
    this is fantastic!!!! I have been waiting for somebody to make this happen for years. The size of the gap is critical because it creates an ultra-fast tunnel junction between the rectenna's two electrodes, allowing a maximum transfer of electricity. The nanosized gap gives energized electrons on the rectenna just enough time to tunnel to the opposite electrode before their electrical current reverses and they try to go back. The triangular tip of the rectenna makes it hard for the electrons to reverse direction, thus capturing the energy and rectifying it to a unidirectional current. Impressively, the rectennas, because of their extremely small and fast tunnel diodes, are capable of converting solar radiation from the infrared region through the extremely fast and short wavelengths of visible light - something that has never been accomplished before. Silicon solar panels, by comparison, have a single band gap which, loosely speaking, allows the panel to convert electromagnetic radiation efficiently at only one small portion of the solar spectrum. The rectenna devices don't rely on a band gap and may be tuned to harvest light over the whole solar spectrum, creating maximum efficiency. Through atomic layer deposition, Willis has shown he is able to precisely coat the tip of the rectenna with layers of individual copper atoms until a gap of about 1.5 nanometers is achieved. The process is self-limiting and stops at a 1.5 nanometer separation.
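A quick order-of-magnitude check of why the tunnel diode has to be so extreme: a rectenna must rectify at the frequency of the light itself. This is generic physics, not numbers taken from the article:

```python
# The diode in an optical rectenna must switch faster than one optical cycle.

C = 3.0e8  # m/s, speed of light

def optical_frequency(wavelength_m):
    """Frequency of an electromagnetic wave of the given wavelength."""
    return C / wavelength_m

for label, wl in [("near-IR, 1500 nm", 1.5e-6), ("green light, 500 nm", 5.0e-7)]:
    f = optical_frequency(wl)
    print(f"{label}: {f:.1e} Hz, period {1 / f:.1e} s")
# Green light oscillates at ~6e14 Hz, i.e. a femtosecond-scale period -- hence
# the ~1.5 nm gap, which keeps the junction's RC time constant small enough.
```

Compare this with a 50/60 Hz mains rectifier: the optical case is about twelve orders of magnitude faster, which is why ordinary diodes are useless here.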
Dario Izzo

Optimal Control Problem in the CR3BP solved!!! - 7 views

  •  
    This guy solved a problem many people are trying to solve!!! The optimal control problem for the three body problem (restricted, circular) can be solved using continuation of the secondary gravity parameter and some clever adaptation of the boundary conditions!! His presentation was an eye opener ... making the work of many pretty useless now :)
  •  
    Riemann hypothesis should be next... Which paper on the linked website is this exactly?
  •  
    hmmm, last year at the AIAA conference in Toronto I presented a continuation approach to design a DRO (three-body problem). Nothing new here, unfortunately. I know the work of Caillau; although interesting, what is presented was solved 10 years ago by others. The interest of his work is not in the applications (CR3BP), but in the study of particular regularity conditions that unfortunately limit the problem's practical relevance. Look also at the work of Mingotti, Russell, Topputo and others for the (C)RTBP. SMART-1 inspired a bunch of researchers :)
  •  
    Topputo and some of the other 'inspired' researchers you mention are actually here at the conference, and they are all quite depressed :) Caillau really solves the problem: as a single-phase transfer, no tricks, no misconvergence, in general and using none of the usual cheats. What the others produced so far were only local solutions valid for the particular case considered. In any case I will give him your paper, so that he knows he is working on already-solved stuff :)
  •  
    Answer to Marek: the paper you may look at is: Discrete and differential homotopy in circular restricted three-body control
  •  
    Ah! With one single phase and a first-order method, then it is amazing (but it is still just the very particular CRTBP case). The trick, however, is the homotopy map he selected! Why this one? Any conjugate points? Did I misunderstand the title? I solved it in one phase with second-order methods for the less restrictive RTBP, or simply the 3-body problem... but as a strict answer to your title, the problem has been solved before. Nota: in "Russell, R. P., "Primer Vector Theory Applied to Global Low-Thrust Trade Studies," JGCD, Vol. 30, No. 2", he does solve the RTBP with a first-order method in one phase.
  •  
    I think what is interesting is not what he solved, but how he solved it. But are the means more important than the end... I dunno
  •  
    I also loved his method, and it looks to me far more general than the CRTBP. As for the title of this post, OK, maybe it is an exaggeration, as it suggests that no solution was ever given before; on the other hand, as Marek would say, "come on guys!!!!!"
  •  
    The generality has to be checked. Don't you think his choice of mapping is too specific? He doesn't really demonstrate that it works better than others. In addition, the minimum-time choice makes the problem very regular (I guess you've experienced that solving min time is much easier than max mass, optimality-wise). There is still a long way to go before maximum mass + RTBP; Topputo et al. should be reassured :p Did you give him my paper? He may find it interesting, since I mention the homotopy on mu, but for max mass :)
  •  
    Joris, that is the point I was excited about: at the conference HE DID present solutions to the maximum mass problem!! One phase, from LEO to an orbit around the Moon... amazing :) You will find his presentation online... (according to the organizers). I gave him the reference to your paper anyway, but no pdf, as you did not upload it to our web pages and I could not find it on the web. So I gave him some bibliography I had with me from the Russians, and from Russell, Petropoulos and Howell. As far as I know these are the only ones that can hope to compete with this guy!!
  •  
    for info only, my PhD, in one phase: http://pdf.aiaa.org/preview/CDReadyMAST08_1856/PV2008_7363.pdf I preferred Mars to the dead rock Moon though!
  •  
    If you send me the pdf I can give it to the guy .. the link you gave contains only the first page ... (I have no access till monday to the AIAA thingy)
  •  
    this is why I like this Diigo thingy so much more than delicious ...
  •  
    What do you mean by this comment, Leopold? ;-) Jokes apart: I am following the Diigo thingy with Google Reader (rss). Obviously, I am getting the new postings. But if someone later on adds a comment to a post, then I can miss it, because the rss doesn't get updated. Not that it's a big problem, but do you guys have a better solution for this? How are you following these comments? (I know that if you have commented an entry, then you get the later updates in email.) (For example, in google reader I can see only the first 5 comments in this entry.)
  •  
    I like when there are discussions evolving around entries
  •  
    and on your problem with the RSS, Tamas: it's the same for me, you get the comments only for entries that you have posted or commented on...
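Coming back to the continuation discussion above: the core idea of a homotopy/continuation method is to deform an easy problem into the hard one, reusing each solution as the warm start for the next. A toy scalar sketch of that idea - the functions are my own trivial stand-ins, vastly simpler than the CR3BP boundary-value problem Caillau solves:

```python
def newton(f, x0, tol=1e-12, max_iter=50):
    """Plain Newton iteration with a central-difference derivative."""
    x = x0
    for _ in range(max_iter):
        h = 1e-7
        dfdx = (f(x + h) - f(x - h)) / (2 * h)
        step = f(x) / dfdx
        x -= step
        if abs(step) < tol:
            break
    return x

def solve_by_continuation(easy, hard, x0, steps=20):
    """Deform the easy root-finding problem into the hard one in small steps."""
    x = newton(easy, x0)                  # solve the trivial problem first
    for k in range(1, steps + 1):
        lam = k / steps
        H = lambda x, lam=lam: (1 - lam) * easy(x) + lam * hard(x)
        x = newton(H, x)                  # previous solution seeds the next solve
    return x

easy = lambda x: x - 1.0                  # trivial problem, root at x = 1
hard = lambda x: x**3 - 2 * x - 5.0       # stand-in for the "hard" problem
print(solve_by_continuation(easy, hard, x0=0.0))  # → ~2.0945514815
```

Caillau's continuation is on the secondary's gravity parameter (and his papers discuss the differential version of the homotopy); the toy above only illustrates why warm-starting along the deformation path avoids the misconvergence that plagues a cold-started solve of the hard problem.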
Francesco Biscani

Apple's Mistake - 5 views

  •  
    Nice opinion piece.
  •  
    nice indeed .... especially like: "They make such great stuff, but they're such assholes. Do I really want to support this company? Should Apple care what people like me think? What difference does it make if they alienate a small minority of their users? There are a couple reasons they should care. One is that these users are the people they want as employees. If your company seems evil, the best programmers won't work for you. That hurt Microsoft a lot starting in the 90s. Programmers started to feel sheepish about working there. It seemed like selling out. When people from Microsoft were talking to other programmers and they mentioned where they worked, there were a lot of self-deprecating jokes about having gone over to the dark side. But the real problem for Microsoft wasn't the embarrassment of the people they hired. It was the people they never got. And you know who got them? Google and Apple. If Microsoft was the Empire, they were the Rebel Alliance. And it's largely because they got more of the best people that Google and Apple are doing so much better than Microsoft today. Why are programmers so fussy about their employers' morals? Partly because they can afford to be. The best programmers can work wherever they want. They don't have to work for a company they have qualms about. But the other reason programmers are fussy, I think, is that evil begets stupidity. An organization that wins by exercising power starts to lose the ability to win by doing better work. And it's not fun for a smart person to work in a place where the best ideas aren't the ones that win."
  •  
    Poor programmers can complain, but they will keep developing applications for the iPhone as long as their bosses tell them to... From my experience in mobile software development, I assure you it's not the pain of the programmer that dictates what is done, but the customer's demand. Even though this makes the quality of applications somewhat worse than it could be, clients won't complain, as they have no reference point. And things will stay as they are: Apple censoring the applications, clients paying for stuff that "sometimes just does not work" (it's normal, isn't it??), and programmers complaining, but obediently making iPhone apps...
Joris _

Spaceflight Now | Breaking News | ESA needs to 'tighten the belt' amid budget crisis - 2 views

  • ESA is freezing spending
  • France is planning to boost its funding by 12 percent
  • ESA selected Thales Alenia Space and OHB Technology to build the satellites, but the production contract is still bogged down by Germany's complaints about the distribution of MTG work between France and Germany
  •  
    not much news with regard to Dordain's talk in January, although just a thought: what if ESA tried to make money - as CNES does - rather than just spending it?
  •  
    to begin with, European industry (and probably governments) would complain that ESA was taking business away from industry... or any part that started to make money would be quickly spun off
  •  
    really bad interview in my view ... btw: how is CNES making money and how much?
  •  
    CNES is known to be a semi-autonomous agency in the sense that it can auto-finance parts of its activities. Besides the money coming from the state, money comes from the participation of CNES in private companies (e.g. Arianespace) and its own activities (e.g. SPOT among others...). It is about 400M€ per year (almost a class-M mission in Cosmic Vision). For the figures (in French): http://www.cnes.fr/automne_modules_files/standard/public/p4354_c050f7963b54a839a843723401bfddf2budget.pdf
nikolas smyrlakis

How siestas help memory: Sleepy heads | The Economist - 3 views

  •  
    How much more proof do we need.... and of course "It may be that those who have a tendency to wake up groggy are choosing not to siesta in the first place. Perhaps, though, as in so many things, it is practice that makes perfect."
  •  
    Come on guys, be innovative and make at least an Ariadna...
pacome delva

Making A Decision? Take Your Time - 3 views

  • Recent research out of Maastricht University School of Business and Economics shows that indeed delaying a choice can, in general, help us make better decisions.
Ma Ru

Sun For Everyun - 3 views

  •  
    Nice initiative, isn't it?
  •  
    some citizen science added to this would make it even better ... as it is it's rather, boh ...
  •  
    Well, true - making data public is one thing, and making others work through them for you is another... I love the new term "citizen science" though (and the explanation on the Wikipedia page of why they had to invent a new one, because "crowdsourcing" is soooooo - politically - wrong).
Luís F. Simões

The Space Age, as recorded on human written history - 4 views

  •  
    Google Books measurements of word frequencies in 15 million books (12% of all the books ever published). More about it in:
    - Google Opens Books to New Cultural Studies - John Bohannon, Science 2010-12-17
    - Slashdot: Google Books Makes a Word Cloud of Human History
    - http://ngrams.googlelabs.com/info
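Under the hood, the quantity the Ngram Viewer plots is a sliding-window frequency count over tokenized text. A toy version of that counting step - the sentence is my own, and real n-gram pipelines normalize far more carefully (casing, punctuation, OCR noise):

```python
from collections import Counter

def ngram_counts(text, n=1):
    """Count n-word sequences in a whitespace-tokenized text."""
    words = text.lower().split()
    grams = zip(*(words[i:] for i in range(n)))  # sliding windows of length n
    return Counter(" ".join(g) for g in grams)

corpus = "the space age began and the space age changed history"
counts = ngram_counts(corpus, n=2)
print(counts["the space"], counts["space age"])  # → 2 2
```

Dividing each count by the total number of n-grams in the year's books gives the per-year relative frequency the viewer displays.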
Luzi Bergamin

Prof. Markrams Hirnmaschine (Startseite, NZZ Online) - 2 views

  •  
    A critical view on Prof. Markram's Blue Brain project (in German).
  •  
    A critical view on Prof. Markram's Blue Brain project (in German).
  •  
    so critical that the comment needed to be posted twice :-) ?
  •  
    Yes, I know; I still don't know how to deal with this f.... Diigo Toolbar! Shame on me!!!
  •  
    It would be nice to read something about the modelling, but it appears that nothing has been published in detail. According to the article, the main approach is to model each(!) neuron, taking into account the spatial structure of the neurons' positions. Once this is achieved, they expect intelligent behaviour to emerge. And they need a (type of) supercomputer which does not exist yet.
  •  
    As far as I know it's sort of like "Let's construct an enormous dynamical system and see what happens"... i.e. a waste of taxpayers' money... Able to heal Alzheimer's... Yeah... Actually I was at the conference the author mentions (FET 2011) and I saw the presentations of all 6 flagship proposals. Following that I had a discussion with one of my colleagues about whether there is a limit to the amount of bullshit politicians are willing to buy from scientists. Will there be a point at which politicians, despite their total ignorance, realise that scientists simply don't deliver anything they promise? How long will we (scientists) be stuck in the vicious circle of having to promise more than our predecessors in order to get money? Will we face a situation where we'll be forced to revert to promises which are realistic? To be honest, none of the 6 presentations convinced me of their scientific merit (apart from the one on graphene, where I have absolutely no expertise to tell). Apparently a huge amount of money is about to be wasted.
  •  
    It's not just "Let's construct an enormous dynamical system and see what happens", it's worse! The simulation of cosmological evolution is/was also a little bit of this type, and still the results are very interesting and useful. Why? Neither the whole cosmos nor the human brain at the level of single neurons can be modelled on a computer; that would take aeons on a "yet-to-be-invented-extra-super-computer". Thus one has to make assumptions and simplifications. In cosmology we have working theories of gravitation, thermodynamics, electrodynamics etc. at hand; starting from these theories we can make reasonable assumptions and (more or less) justified simplifications. The result is valuable since it provides insight into a complex system under given, explicit and understood assumptions. Nothing similar seems to exist in neuroscience. There is no theory of the human brain, and apparently nobody has the slightest idea which simplifications can be made without harm. Of course, Mr. Markram remains completely unaffected by ''details'' like this. Finally, Marek, money is not wasted, we ''build networks of excellence'' and ''select the brightest of the brightest'' to make them study and work at our ''elite institutions'' :-). I vividly remember the internship of one of these "bestofthebest" from the Ivy League at the ACT...
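The scale problem is easy to make concrete. A crude back-of-the-envelope estimate of the compute needed just to update every synapse - all numbers are common textbook ballparks of my own choosing, not figures from the article or from the Blue Brain project:

```python
# Why "a supercomputer which does not exist yet": a rough lower bound on the
# arithmetic throughput needed to simulate every synaptic event in a human brain.

NEURONS = 8.6e10                 # neurons in a human brain (ballpark)
SYNAPSES_PER_NEURON = 1.0e4      # average synapse count (ballpark)
MEAN_FIRING_RATE = 10.0          # Hz, generous average over all neurons
FLOP_PER_SYNAPTIC_EVENT = 10.0   # assumed cost of one synaptic update

flops = NEURONS * SYNAPSES_PER_NEURON * MEAN_FIRING_RATE * FLOP_PER_SYNAPTIC_EVENT
print(f"{flops:.1e} FLOP/s")  # → 8.6e+16 FLOP/s
```

That is ~1e17 FLOP/s before adding any detailed compartmental neuron model - an order of magnitude beyond the petascale machines of the article's era, and it says nothing about the memory bandwidth needed to stream all those synaptic states.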