Advanced Concepts Team / Group items tagged: 10 rules

tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum. You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making teamwork nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open-source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are scaling just fine as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open-source projects are managed collaboratively by thousands of people, and many of them are in the multi-million-lines-of-code range. My personal opinion on why these open models are having so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an operating system or writing a new balancing-tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things are mixed up together - open-science journals vs traditional journals, the conservatism of the science community compared to programmers (to me, one reason for this might be the average age of the two groups, which probably differs by more than 10 years ...) and then emailing vs other collaboration tools ... will have to look at the paper more carefully now ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does that mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. Collaboration and teamwork are about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively thanks to wikis and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikis, forums ... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no teamwork. QED
  •  
    the question of ACT teamwork is one that keeps coming back, and so far it has always boiled down to the question of how much there needs to be a single team project to which everybody in the team contributes in his/her own way, versus how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams, with their own tools and mechanisms, active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and teamwork happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured teamwork, where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this cannot be defined as teamwork. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for 15-member teamwork, and that is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc. towards the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And it is happening to me now in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow it just does not work. Simply a lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, here is why it worked: only the brains, and not the people, worked together on a very specific problem. Plus, maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of the "development of new online tools" that prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...); there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given the variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen it happen many times on the Linux kernel mailing list that contributions deemed "technically superior" were disregarded and not considered for inclusion in the kernel because it was clear that their authors would not collaborate well with the rest of the community.
  •  
    hey, just caught up with the discussion. For me, what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as far as I read in the wiki (if an article is written, who writes it, and whose names go on the paper?). They chose to credit the project, and not the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealised view of this community, but it seems that there the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need (scientific) credit, and to have credit you need papers with your name on them... so, in my opinion, this model of research does not fit with the way research is governed. Anyway, we had a discussion on Ariadnet about how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much of a problem with giving credit. It is not the first time a group of researchers has collectively taken credit for a result under a group umbrella; e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible, there is no way to fake contributions or to give undue credit, and one could cite a group paper without problems in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; the jury will count the number of papers with you as first author, and then the other papers (at least in France)... and look at the impact factor of these journals. You would then have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that i should visit you guys at estec... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Luís F. Simões

Seminar: You and Your Research, Dr. Richard W. Hamming (March 7, 1986) - 10 views

  • This talk centered on Hamming's observations and research on the question "Why do so few scientists make significant contributions and so many are forgotten in the long run?" From his more than forty years of experience, thirty of which were at Bell Laboratories, he has made a number of direct observations, asked very pointed questions of scientists about what, how, and why they did things, studied the lives of great scientists and great contributions, and has done introspection and studied theories of creativity. The talk is about what he has learned in terms of the properties of the individual scientists, their abilities, traits, working habits, attitudes, and philosophy.
  •  
    Here's the link related to one of the lunch time discussions. I recommend it to every single one of you. I promise it will be worth your time. If you're lazy, you have a summary here (good stuff also in the references, have a look at them): Erren TC, Cullen P, Erren M, Bourne PE (2007) Ten Simple Rules for Doing Your Best Research, According to Hamming. PLoS Comput Biol 3(10): e213.
  • ...3 more comments...
  •  
    I'm also pretty sure that the ones who are remembered are not the ones who tried to be... so why all these rules!? I think it's bullshit...
  •  
    The seminar is not a manual on how to achieve fame, but rather an analysis on how others were able to perform very significant work. The two things are in some cases related, but the seminar's focus is on the second.
  •  
    Then read a good book on the life of Copernicus; it's the anti-manual of Hamming... he breaks all the rules!
  •  
    honestly, I think that some of these rules actually do make sense ... but I am always curious to get a good book recommendation (which book about Copernicus would you recommend?) btw Pacome: we are in Paris ... in case you have some time ...
  •  
    I warmly recommend this book, a bit old but fascinating: The Sleepwalkers by Arthur Koestler. It shows that progress in science is not straight and does not obey any rules... It is not as rational as most people seem to believe today. http://www.amazon.com/Sleepwalkers-History-Changing-Universe-Compass/dp/0140192468/ref=sr_1_1?ie=UTF8&qid=1294835558&sr=8-1 Otherwise, yes, I have some time! my phone number: 0699428926 We live around Denfert-Rochereau and Montparnasse. We could go for a beer this evening?
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and as psychoanalysts said 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather well-credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence, AND IF we'll forever be limited to the Turing-machine type of computation (which is what the "Not Computable" in the article refers to), AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough ... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as to say the mainstream is going in the wrong direction. But even if that were indeed the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
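For what it's worth, "undecidable" here has a precise classical meaning. A minimal sketch of Turing's diagonalization argument, in Python; `halts` is a hypothetical oracle named only for illustration, and the whole point is that it cannot exist:

        def halts(program, data):
            # Hypothetical oracle: would return True iff program(data) halts.
            # The argument below shows no such total function can exist.
            raise NotImplementedError("provably impossible in general")

        def paradox(program):
            # Loop forever exactly when the oracle claims program(program) halts.
            if halts(program, program):
                while True:
                    pass

        # Feeding paradox to itself is contradictory either way:
        # if halts(paradox, paradox) were True, paradox(paradox) would loop forever;
        # if it were False, paradox(paradox) would halt. Hence halts cannot exist.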
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage in polemics with him... oops 2) what worries me most: I didn't get the retail-store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep; that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question of whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but it will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea; it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to - stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
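For the record, the sensitivity being discussed is easy to demonstrate. A minimal sketch (standard Lorenz parameters, plain Euler integration, all values illustrative): two trajectories started one part in a billion apart become macroscopically different within a few tens of time units - computable, yet practically unpredictable.

        def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # One explicit-Euler step of the Lorenz system.
            x, y, z = state
            return (x + sigma * (y - x) * dt,
                    y + (x * (rho - z) - y) * dt,
                    z + (x * y - beta * z) * dt)

        a, b, dt = (1.0, 1.0, 1.0), (1.0 + 1e-9, 1.0, 1.0), 0.001
        for step in range(40001):
            if step % 10000 == 0:
                sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
                print(f"t = {step * dt:5.1f}   separation = {sep:.3e}")
            a, b = lorenz_step(a, dt), lorenz_step(b, dt)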
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10% of them, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment, so these boundary conditions have to be determined as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (which are impossible to determine) depend on your initial conditions (which are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense, provided one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting them to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, and in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
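A minimal sketch of that grow-only-when-it-helps idea: start from a single hidden unit and accept a mutation, or an extra unit, only when it improves fitness. The toy task (fitting a sine curve) and the hill-climbing loop are purely illustrative assumptions:

        import math
        import random

        random.seed(0)
        TARGET = [(x / 10.0, math.sin(x / 10.0)) for x in range(-30, 31)]

        def predict(units, x):
            # One hidden layer of tanh units; each unit is (w_in, bias, w_out).
            return sum(w_out * math.tanh(w_in * x + b) for w_in, b, w_out in units)

        def fitness(units):
            return -sum((predict(units, x) - y) ** 2 for x, y in TARGET)

        net = [(1.0, 0.0, 0.5)]          # start with a single tiny functional unit
        best = fitness(net)
        for step in range(20000):
            cand = [tuple(w + random.gauss(0, 0.05) for w in u) for u in net]
            if step % 1000 == 999:       # occasionally offer one extra neuron
                cand.append((random.gauss(0, 1.0), 0.0, 0.0))
            f = fitness(cand)
            if f > best:                 # keep complexity only if fitness improves
                net, best = cand, f
        print(len(net), "units, squared error:", round(-best, 4))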
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain; it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system would actually still work? I'm pretty sure it would not!
  •  
    @Luzi: That was exactly my point :-)
Ma Ru

Ten Simple Rules for Starting a Company (PLOSCB) - 2 views

  •  
    For those of you thinking of spin-offs.
Ma Ru

PLOS Computational Biology: Ten Simple Rules for Organizing an Unconference - 1 views

  •  
    For future reference... At the same time, a crowdsourced article: "We began the crowdsourcing by collecting a list of possible rules for the article via a git-controlled repository" SVN would be so 2000-ish...
LeopoldS

Ruling Out Multi-Order Interference in Quantum Mechanics -- Sinha et al. 329 (5990): 41... - 2 views

  •  
    quantum physics holds ....
Luís F. Simões

Inferring individual rules from collective behavior - 2 views

  •  
    "We fit data to zonal interaction models and characterize which individual interaction forces suffice to explain observed spatial patterns." You can get the paper from the first author's website: http://people.stfx.ca/rlukeman/research.htm
  •  
    PNAS? Didn't strike me as something very new though... We should refer to it in the roots study though: "Social organisms form striking aggregation patterns, displaying cohesion, polarization, and collective intelligence. Determining how they do so in nature is challenging; a plethora of simulation studies displaying life-like swarm behavior lack rigorous comparison with actual data because collecting field data of sufficient quality has been a bottleneck." For roots it is NO bottleneck :) Tobias was right :)
  •  
    Here they assume all relevant variables influencing behaviour are being observed, namely the relative positions and orientations of all ducks in the swarm. So they make movies of the swarm's movements, process them, and then fit the models to that data. For the roots, though we can observe the complete final structure, or even obtain time-lapse movies showing how that structure came to be, getting measurements of all relevant soil variables (nitrogen, phosphorus, ...) throughout the soil, and over time, would be extremely difficult. So I guess a replication of the kind of work they did, but for the roots, would be hard. Nice reference though.
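For readers unfamiliar with the term, a zonal interaction model of the kind fitted in the paper looks roughly like the sketch below: each individual repels from neighbours that are too close, aligns with those at mid-range, and is attracted to those farther out. The zone radii and the equal weighting are illustrative assumptions, not the paper's fitted values; fitting amounts to choosing the zone parameters that best reproduce the observed positions and headings frame by frame.

        import math

        # Illustrative zone radii (not the paper's fitted values).
        R_REPULSE, R_ALIGN, R_ATTRACT = 1.0, 3.0, 6.0

        def zonal_direction(me, heading, neighbours):
            # me: (x, y); heading: unit vector (hx, hy);
            # neighbours: list of ((x, y), (hx, hy)) tuples.
            fx = fy = 0.0
            for (nx, ny), (nhx, nhy) in neighbours:
                dx, dy = nx - me[0], ny - me[1]
                d = math.hypot(dx, dy)
                if d == 0.0:
                    continue
                if d < R_REPULSE:        # repulsion zone: move away
                    fx -= dx / d
                    fy -= dy / d
                elif d < R_ALIGN:        # alignment zone: match heading
                    fx += nhx
                    fy += nhy
                elif d < R_ATTRACT:      # attraction zone: move closer
                    fx += dx / d
                    fy += dy / d
            norm = math.hypot(fx, fy)
            return (fx / norm, fy / norm) if norm else heading

        # A neighbour 0.5 units away falls in the repulsion zone: move away.
        print(zonal_direction((0.0, 0.0), (1.0, 0.0), [((0.5, 0.0), (1.0, 0.0))]))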
LeopoldS

Meet The Man Who Paid A Record $335,000 For Virtual Property - Oliver Chiang - SelectSt... - 7 views

  •  
    does he also have to pay property tax?
  • ...4 more comments...
  •  
    "He says he made the purchase partly because he wants to be able to spend more time in the virtual world. Before, he was averaging 10 to 20 hours per week. He wants to be able to spend about 40 to 60 hours a week now, basically making running the virtual asteroid a full-time job. (He'll also be cutting back on the time he spends developing software in real life.)"
  •  
    From what I remember when I visited the developer/producer company's HQ, he wouldn't have to pay any taxes. If he has a virtual business he might have to pay them a license fee. If you want to start a virtual bank, you would need to buy a banking license. The money side is quite regulated in this environment, so that is probably why property prices can be quite high.
  •  
    I remember the study, but I had completely forgotten that it was with this company ... GSP rules :-)
  •  
    so how does the state get its money from this type of economy? where is the VAT in there?
  •  
    Last time I checked, the "state" was still losing money. But their main income is the sale of resources. Mostly new land, but I believe at some point they wanted to sell their initial planet too.
pacome delva

Special relativity passes key test - 2 views

  • Granot and colleagues studied the radiation from a gamma-ray burst – associated with a highly energetic explosion in a distant galaxy – that was spotted by NASA's Fermi Gamma-ray Space Telescope on 10 May this year. They analysed the radiation at different wavelengths to see whether there were any signs that photons with different energies arrived at Fermi's detectors at different times.
  • According to Granot, these results "strongly disfavour" quantum-gravity theories in which the speed of light varies linearly with photon energy, which might include some variations of string theory or loop quantum gravity. "I would not use the term 'rule out'," he says, "as most models do not have exact predictions for the energy scale associated with this violation of Lorentz invariance. However, our observational requirement that such an energy scale would be well above the Planck energy makes such models unnatural."
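The arithmetic behind the test is simple: if the speed of light varied linearly with photon energy, a photon of energy E travelling a distance D would arrive late by roughly Δt ≈ (E / E_QG) · D / c. A back-of-the-envelope sketch, with a Planck-scale E_QG and a photon energy and distance that are rough illustrative values for this kind of burst, not the paper's exact numbers:

        # Rough arrival lag if c varied linearly with photon energy:
        # delta_t ~ (E_photon / E_QG) * (D / c). All values are illustrative.

        E_QG_GEV = 1.22e19       # quantum-gravity scale, taken here at the Planck energy
        E_PHOTON_GEV = 30.0      # a high-energy gamma-ray photon
        DISTANCE_M = 7e25        # a few gigaparsecs, order of magnitude only
        C_M_PER_S = 3.0e8

        travel_time = DISTANCE_M / C_M_PER_S
        lag = (E_PHOTON_GEV / E_QG_GEV) * travel_time
        print(f"travel time ~ {travel_time:.2e} s, expected lag ~ {lag:.2f} s")
        # ~0.6 s for these numbers; since no energy-dependent lag of that size was
        # seen, the linear-variation scale E_QG must lie above the Planck energy.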
  •  
    essentially they made an experiment that does not prove or disprove anything - big deal - ... what is the scientific value of "strongly disfavour"??? I also like the sentence "most models do not have exact predictions for the energy scale associated with this violation of Lorentz invariance" ... but if this is true, WHAT IS THE POINT OF THE EXPERIMENT!!!! God, physics is in trouble ....
  •  
    hum, null-result experiments are not useless!!! there is always the hope of finding "something wrong", which would lead to a great discovery. As for the state of theoretical physics (the "no exact predictions" quote), I totally agree that physics is in trouble... That's what happens when physicists don't care about experiments anymore...! All you can do now is draw "nice" graphs with upper bounds on some parameters of an all-tunable weird theory!
annaheffernan

Plasmons excite hot carriers - 1 views

  •  
    The first complete theory of how plasmons produce "hot carriers" has been developed by researchers in the US. The new model could help make this process of producing carriers more efficient, which would be good news for enhancing solar-energy conversion in photovoltaic devices.
  •  
    I did not read the paper, but what is written further down in the article does not give much hope that this actually provides much more insight than what we had, nor that it could be used in any way to improve current PV cells soon: e.g. "To fully exploit these carriers for such applications, researchers need to understand the physical processes behind plasmon-induced hot-carrier generation. Nordlander's team has now developed a simple model that describes how plasmons produce hot carriers in spherical silver nanoparticles and nanoshells. The model describes the conduction electrons in the metal as free particles and then analyses how plasmons excite hot carriers using Fermi's golden rule - a way to calculate how a quantum system transitions from one state into another following a perturbation. The model allows the researchers to calculate how many hot carriers are produced as a function of the light frequency used to excite the metal, as well as the rate at which they are produced. The spectral profile obtained is, to all intents and purposes, the "plasmonic spectrum" of the material. Particle size and hot-carrier lifetimes: "Our analyses reveal that particle size and hot-carrier lifetimes are central for determining both the production rate and the energies of the hot carriers," says Nordlander. "Larger particles and shorter lifetimes produce more carriers with lower energies, and smaller particles produce fewer carriers, but with higher energies.""
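For reference, the "Fermi's golden rule" the article invokes is the standard first-order perturbation-theory result for the rate of transitions from an initial state |i> to final states |f>:

        \Gamma_{i \to f} = \frac{2\pi}{\hbar} \left| \langle f | H' | i \rangle \right|^2 \rho(E_f)

Here H' is the perturbation (in this setting, the decaying plasmon acting on the conduction electrons) and ρ(E_f) is the density of final states; summing such rates over electron-hole final states is what yields the hot-carrier production rate as a function of the exciting light frequency.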