
Advanced Concepts Team - Group items tagged "computer science"


Luís F. Simões

Lockheed Martin buys first D-Wave quantum computing system - 1 views

  • D-Wave develops computing systems that leverage the physics of quantum mechanics in order to address problems that are hard for traditional methods to solve in a cost-effective amount of time. Examples of such problems include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing and bioinformatics.
  •  
    According to the company's Wikipedia page, the computer costs $10 million. Can we then declare that Quantum Computing has officially arrived?! Quotes from elsewhere on the site: "first commercial quantum computing system on the market"; "our current superconducting 128-qubit processor chip is housed inside a cryogenics system within a 10 square meter shielded room". Link to the company's scientific publications. Interestingly, this company seems to have been running a BOINC project, AQUA@home, to "predict the performance of superconducting adiabatic quantum computers on a variety of hard problems arising in fields ranging from materials science to machine learning. AQUA@home uses Internet-connected computers to help design and analyze quantum computing algorithms, using Quantum Monte Carlo techniques". List of papers coming out of it.
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains are not my thing)... I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough ... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed were the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much of your research would you want to devote to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most: I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but this will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :) (see the sketch at the end of this thread)
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting them to make calculations for you. As in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, and in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals in how far they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment+bodies&brains is simulated. You have also the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains, than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platfrom onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models on more realistic settings, in which the models' outputs get to effect many other systems, and throuh them feed back into its future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system would actually still work? I'm pretty sure it would not!
  •  
    @Luzi: That was exactly my point :-)
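    An aside on the chaos point raised in this thread: a minimal numpy sketch (step size and parameters are illustrative, not from any project discussed here) of the Lorenz system Dario mentions, showing how two trajectories whose initial conditions differ by one part in a billion end up completely decorrelated, which is exactly the initial-value problem Luzi describes.

        import numpy as np

        def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            # one explicit-Euler step of the Lorenz equations
            x, y, z = state
            return state + dt * np.array([sigma * (y - x),
                                          x * (rho - z) - y,
                                          x * y - beta * z])

        a = np.array([1.0, 1.0, 1.0])
        b = a + np.array([1e-9, 0.0, 0.0])   # near-identical initial conditions
        for _ in range(40000):               # integrate ~40 time units
            a, b = lorenz_step(a), lorenz_step(b)
        print(np.linalg.norm(a - b))         # separation has grown to order 10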
LeopoldS

Helix Nebula - Helix Nebula Vision - 0 views

  •  
    The partnership brings together leading IT providers and three of Europe's leading research centres (CERN, EMBL and ESA) in order to provide computing capacity and services that elastically meet big science's growing demand for computing power.

    Helix Nebula provides an unprecedented opportunity for the global cloud services industry to work closely on the Large Hadron Collider through the large-scale, international ATLAS experiment, as well as with the molecular biology and earth observation communities. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two-year pilot phase, during which procurement processes and governance issues for the public/private partnership will be addressed.

    This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.

    Initially based on the needs of European big-science, Helix Nebula ultimately paves the way for a Cloud Computing platform that offers a unique resource to governments, businesses and citizens.
  •  
    "Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed." And here I was thinking cloud computing was old news 3 years ago :)
ESA ACT

Solve Puzzles for Science | Fold It! - 0 views

  •  
    You can use idle computers as extra computing power in a big run, or you can use idle personnel as extra computing power by making them play computer games:
Dario Izzo

If you're going to do good science, release the computer code too!!! - 3 views

  • Les Hatton, an international expert in software testing based at the Universities of Kent and Kingston, carried out an extensive analysis of several million lines of scientific code. He showed that the software had an unacceptably high level of detectable inconsistencies.
  •  
    haha. this guy won't make any new friends with this article! I kind of agree, but making your code public doesn't mean you are doing good science... and vice versa! He takes experimental physics as a counter-example, but even there some teams keep their little secrets on the details of the experiment, to keep a bit of a lead over other labs. Research is competitive in its current state, and I think only collaborations can overcome this fact.
  • ...1 more comment...
  •  
    well, sure, competitiveness is good, but for verification (and that should be possible for scientific experiments) the code should be public. It would be nice to have something like BibTeX for code libraries or the versions used (see the sketch at the end of this thread)... :) btw I fully agree that the code should go public; I had lots of trouble reproducing (reprogramming) some papers in the past ... grr
  •  
    My view is that the only proper way to do scientific communication is full transparency: methodologies, tests, codes, etc. Everything else should be unacceptable. This should hold both for publicly funded science (for which there is the additional moral requirement to give back to the public domain what was produced with taxpayers' money) and privately funded science (where the need to turn a profit should be of lesser importance than the proper application of the scientific method).
  •  
    Same battle we have been fighting for a few years now....
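    On the "BibTeX for code libraries or versions" wish above: no such standard exists yet, but as a stopgap one can at least archive the exact environment behind a result. A minimal Python sketch (the output filename is just an example):

        import importlib.metadata
        import json
        import platform
        import sys

        # record interpreter, OS and every installed package version
        env = {
            "python": sys.version,
            "platform": platform.platform(),
            "packages": {dist.metadata["Name"]: dist.version
                         for dist in importlib.metadata.distributions()},
        }
        with open("environment.json", "w") as f:
            json.dump(env, f, indent=2)  # archive this next to the paper's code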
Daniel Hennes

Discovery with Data: Leveraging Statistics with Computer Science to Transform Science ... - 3 views

  •  
    Responding to calls from the National Science Foundation (NSF) and White House Office of Science and Technology Policy (OSTP), a working group of the American Statistical Association has developed a whitepaper detailing how statisticians and computer scientists can contribute to administration research initiatives and priorities. The whitepaper includes a lot of topics central to machine learning and data mining, so please take a look.
  •  
    I guess Norvig is trumping Chomsky big time if this is the attitude of the NSF :)))
johannessimon81

Computing with RNA - 0 views

  •  
    After a discussion this morning on robust computing and possible implementations in biological systems, I found this really nice result (from 2008) on molecular RNA computers that get assembled within cells and perform simple functions. Of course, with different types of computers within the same cell, one could have each process the output of another, and more complex computations could be executed... Food for thought. :-)
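    A toy illustration of that composition idea (plain boolean functions, nothing to do with the actual biochemistry of the paper): if each in-cell "computer" implements one simple gate, wiring one's output into another's input already gives a universal building block.

        # each RNA "computer" modeled as a tiny boolean function
        def and_gate(a, b):
            return a and b      # e.g. output only if both input molecules present

        def not_gate(a):
            return not a        # e.g. repression of an output

        def cell_circuit(a, b):
            # composed circuit: NOT(A AND B), i.e. a NAND, which is universal
            return not_gate(and_gate(a, b))

        for a in (False, True):
            for b in (False, True):
                print(a, b, "->", cell_circuit(a, b))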
Francesco Biscani

Bacterial computers can crack mathematical problems | Science | guardian.co.uk - 0 views

  • Biologists have created a living computer from E. coli bacteria that can solve complex mathematical problems
  •  
    nice article ... though the colouring used seems a bit awkward to me ...
Kevin de Groote

Physics or Fashion? What Science Lovers Link to Most [Interactive]: Scientific American - 1 views

  •  
    Science aficionados have odd and surprising interests. By Mark Fischetti | November 16, 2011. People who are intrigued with physics are somewhat intrigued with computer science, too, but they are crazy about fashion. Who knew? Hilary Mason did.
nikolas smyrlakis

Nikolas Smyrlakis (CMSnik) on Twitter - 0 views

  •  
    the Computational Management Science twitter, follow me!
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" at which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly right") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. But that data doesn't get lost. Basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation of a step from its desired trajectory using the complicated system of nerve endings and motor commands. Reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as any other alternative assumption you could make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
jmlloren

Why starting from differential equations for computational physics? - 1 views

  •  
    "The computational methods currently used in physics are based on the discretization of differential equations. This is because the computer can only perform algebraic operations. The purpose of this paper is to critically review this practice, showing how to obtain a purely algebraic formulation of physical laws starting directly from experimental measurements."
Luís F. Simões

Our approach to replication in computational science - 2 views

  • So what did we do to make this paper extra super replicable? If you go to the paper Web site, you'll find:
  • p.s. I think I have to refer to this cancer results not reproducible paper somewhere. Done.
  •  
    good discussion on the replicability/reproducibility of scientific results (also a nice example of how to do it right... in bioinformatics at least)
Dario Izzo

NASA Brings Earth Science 'Big Data' to the Cloud with Amazon Web Services | NASA - 3 views

  •  
    NASA's answer to the big data hype
  •  
    "The service encompasses selected NASA satellite and global change data sets -- including temperature, precipitation, and forest cover -- and data processing tools from the NASA Earth Exchange (NEX)" Very good marketing move for just three types of selected data (MODIS, Landsat products) plus four model runs (past/projection) for the the four greenhouse gas emissions scenarios of the IPCC. It looks as if they are making data available to adress a targeted question (crowdsourcing of science, as Paul mentioned last time, this time climate evolution), not at all the "free scrolling of the user around the database" to pick up what he thinks useful, mode. There is already more rich libraries out there when it comes to climate (http://icdc.zmaw.de/) Maybe simpler approach is the way to go: make available the big data sets categorized by study topic (climate evolution, solar system science, galaxies etc.) and not by instrument or mission, which is more technical, so that the amateur user can identify his point of interest easily.
  •  
    They are taking a good leap forward with it, but it definitely requires a lot of post processing of the data. Actually it seems they downsample everything to workable chunks. But I guess the power is really in the availability of the data in combination with Amazon's cloud computing platform. Who knows what will come out of it if hundreds of people start interacting with it.
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you, F., you are quoting out of context! Sharing messages does not make a collaboration, nor does a forum... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million-lines-of-code range. My personal opinion on why these open models face so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open science journals vs traditional journals, conservatism of the science community compared to programmers (to me one of the reasons for this might be the average age of the two groups, which is probably more than 10 years apart ...) and then emailing vs other collaboration tools .... will have to look at the paper more carefully now ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does that mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively through the wiki and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikis, forums ... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to and should be a single team project to which everybody in the team contributes in his/her own way, versus how much we should let smaller, flexible teams within the team form and progress, following bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it through their own personal attitude or through pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team work, and that is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings (hate, anger, envy, sympathy, love, etc.) about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And it is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me, not only in scientific interest but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only the brains, and not the people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it is the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as far as I read in the wiki (if an article is written, who writes it, whose names go on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so, in my opinion, this model of research does not fit with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the current system works. Let's say you want a grant or a position; the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of the journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that I should visit you guys at ESTEC... :-)
  •  
    urgently!! btw: we will have the ACT Christmas dinner on the 9th in the evening ... are you coming?
LeopoldS

Demonstration of Blind Quantum Computing - 0 views

  •  
    Another Zeilinger article in Science
Luís F. Simões

Polynomial Time Code For 3-SAT Released, P==NP - Slashdot - 0 views

  • "Vladimir Romanov has released what he claims is a polynomial-time algorithm for solving 3-SAT. Because 3-SAT is NP-complete, this would imply that P==NP. While there's still good reason to be skeptical that this is, in fact, true, he's made source code available and appears decidedly more serious than most of the people attempting to prove that P==NP or P!=NP. Even though this is probably wrong, just based on the sheer number of prior failures, it seems more likely to lead to new discoveries than most. Note that there are already algorithms to solve 3-SAT, including one that runs in time (4/3)^n and succeeds with high probability. Incidentally, this wouldn't necessarily imply that encryption is worthless: it may still be too slow to be practical."
  •  
    here we go again...
  •  
    slashdot: "Russian computer scientist Vladimir Romanov has conceded that his previously published solution to the '3 SAT' problem of boolean algebra does not work."
tvinko

Computational Science - 1 views

  •  
    Stackexchange is a network of collaborative question and answer sites; the most well-known is the stackoverflow site for programming. This site focuses on Computation.
Francesco Biscani

STLport: An Interview with A. Stepanov - 2 views

  • Generic programming is a programming method that is based in finding the most abstract representations of efficient algorithms.
  • I spent several months programming in Java.
  • for the first time in my life programming in a new language did not bring me new insights
  • ...2 more annotations...
  • it has no intellectual value whatsoever
  • Java is clearly an example of a money oriented programming (MOP).
  •  
    One of the authors of the STL (C++'s Standard Template Library) explains generic programming and slams Java.
  • ...6 more comments...
  •  
    "Java is clearly an example of a money oriented programming (MOP)." Exactly. And for the industry it's the money that matters. Whatever mathematicians think about it.
  •  
    It is actually a good thing that it is "MOP" (even though I do not agree with the term): that is what makes it interoperable, light and easy to learn. There is no point in writing fancy code if it does not bring anything to the end-user and is only for geeks to discuss incomprehensible things in forums. Anyway, I am pretty sure we can find a Java guy slamming C++ ;)
  •  
    Personally, I never understood what the point of Java is, given that: 1) I do not know of any developer (maybe Marek?) who uses it for intellectual pleasure/curiosity/fun or whatever, given the possibility of choice - this to me speaks more loudly about the objective qualities of the language than any industrial-corporate marketing bullshit (for the record, I argue that Python is more interoperable, lighter and easier to learn than Java - which is why, e.g., Google is using it heavily); 2) I have used software developed in Java maybe a total of 5 times on any computer/laptop I have owned over 15 years. I cannot name one single Java project that I find necessary or even useful; for my usage of computers, Java could disappear overnight without my even noticing. Then of course one can argue as much as one wants about the "industry choosing Java", to which I would counterargue with examples of industry doing stupid things and making absurd choices. But I suppose it would be a kind of pointless discussion, so I'll just stop here :)
  •  
    "At Google, python is one of the 3 "official languages" alongside with C++ and Java". Java runs everywhere (the byte code itself) that is I think the only reason it became famous. Python, I guess, is more heavy if it were to run on your web browser! I think every language has its pros and cons, but I agree Java is not the answer to everything... Java is used in MATLAB, some web applications, mobile phones apps, ... I would be a bit in trouble if it were to disappear today :(
  •  
    I personally do not believe in interoperability :)
  •  
    Well, I bet you'd notice an overnight disappearance of Java, because half of the internet would vanish... J2EE technologies are just omnipresent there... I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank's online services are written in Java. Certainly not in PHP+MySQL :) Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you to develop robust, reliable, modular, well-integrated, etc. software. And the costs? Well, using Java technologies you can set up enterprise-quality web application servers and get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... For many years now, the central issue in software development has not been implementing algorithms, it's building applications. And that's where Java outperforms many other technologies. A final remark, because I may be mistakenly taken for an apostle of Java or something... I love the idea of generic programming, C++ is my favourite programming language (and I used to read Stroustrup before sleep), and in my leisure time I write programs in Python... But if I were to start a software development company, then, apart from some very niche applications like computer games, it would most probably use Java as its main technology.
  •  
    "I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank online services are written in Java. Certainly not in PHP+MySQL :)" Doing in C++ would be awesomely crazy, I agree :) But as I see it there are lots of huge websites that operate on PHP, see for instance Facebook. For the banks and the enterprise market, as a general rule I tend to take with a grain of salt whatever spin comes out from them; in the end behind every corporate IT decision there is a little smurf just trying to survive and have the back covered :) As they used to say in the old times, "No one ever got fired for buying IBM". "Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you develop robustly, reliably, error-prone, modular, well integrated etc... software. And the costs? Well, using java technologies you can set-up enterprise-quality web application servers, get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... Since many years now, the central issue in software development is not implementing algorithms, it's building applications. And that's where Java outperforms many other technologies." Apart from the IDE considerations (on which I cannot comment, since I'm not a IDE user myself), I do not see how Java beats the competition in this regard (again, Python and the huge software ecosystem surrounding it). My impression is that Java's success is mostly due to Sun pushing it like there is no tomorrow and bundling it with their hardware business.
  •  
    OK, I think there is a bit of everything, right and wrong, but you have to acknowledge that Python is not always the simplest. For info, Facebook uses Java (if you upload a picture, for instance), and PHP is very limited. So definitely, in companies, engineers like you and me select the language; it is not a marketing or political thing. And in the case of fb, they came to the conclusion that PHP and Java don't each do everything, but complement each other. As you say, Python has many things around it, but it might be too much for simple applications. Otherwise, I would seriously be interested in a study of how to implement a Python-like system on board spacecraft, and what the advantages are over mixing C, Ada and Java.