
Advanced Concepts Team / Group items tagged scientific


LeopoldS

Helix Nebula - Helix Nebula Vision - 0 views

  •  
    The partnership brings together leading IT providers and three of Europe's leading research centres (CERN, EMBL and ESA) in order to provide computing capacity and services that elastically meet big science's growing demand for computing power.

    Helix Nebula provides an unprecedented opportunity for the global cloud services industry to work closely with the Large Hadron Collider's large-scale, international ATLAS experiment, as well as with the molecular biology and earth observation communities. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two-year pilot phase, during which procurement processes and governance issues for the public/private partnership will be addressed.

    This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.

    Initially based on the needs of European big-science, Helix Nebula ultimately paves the way for a Cloud Computing platform that offers a unique resource to governments, businesses and citizens.
  •  
    "Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed." And here I was thinking cloud computing was old news 3 years ago :)
santecarloni

Scientific History and the Lessons for Today's Emerging Ideas - Technology Review - 1 views

  •  
    A better understanding of the scientific turkeys of the 19th century may provide a stark warning about the value of mainstream scientific thought today.
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only a few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing to scale just fine as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion on why these open models are having so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an operating system or writing a new balancing-tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open-science journals vs traditional journals, conservatism of the science community wrt programmers (to me one of the reasons for this might be the average age of the two groups, which probably differs by more than 10 years ...), and then emailing wrt other collaboration tools ... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. Collaboration and team work are about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively thanks to wikis and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikis, forums... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to be, and should be, a team project to which everybody in the team contributes in his/her way, versus how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one top-down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. But hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it... though in my view, it should be on a convince rather than oblige basis...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work, where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris's words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team work, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc about the others. Unfortunately(?), this could lead to situations, where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And this is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I do efficient collaborations with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and they might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only brains and not the people worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians who are doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that...
  •  
    hey, just caught up with the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, and whose names go on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work has first been done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of the journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that i should visit you guys at estec... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Dario Izzo

Climate scientists told to 'cover up' the fact that the Earth's temperature hasn't rise... - 5 views

  •  
    This is becoming a mess :)
  •  
    I would avoid reading climate science from political journals, for a less selective / dramatic picture :-) . Here is a good start: http://www.realclimate.org/ And an article on why climate understanding should be approached hierarchically (which is not the way it is done in the IPCC), a view with insight, from 8 years ago: http://www.princeton.edu/aos/people/graduate_students/hill/files/held2005.pdf
  •  
    True, but funding is allocated to climate modelling 'science' on the basis of political decisions, not solid and boring scientific truisms such as 'all models are wrong'. The reason so many people got trained in this area in the past years is that resources were allocated to climate science on the basis of the dramatic picture depicted by some scientists when it was indeed convenient for them to be dramatic.
  •  
    I see your point, and I agree that funding was also promoted through the energy players and their political influence. A coincident parallel interest, which is irrelevant to the fact that the question remains vital: how do we affect climate, and how does it respond? It is a hugely complex system to analyse, which responds on various time scales that could obscure the trend. What if we made a conceptual parallel with the L'Aquila case: is the scientific method guilty, or the interpretation of uncertainty in terms of societal mobilization? Should we leave the humanitarian aspect outside any scientific activity?
  •  
    I do not think there is anyone arguing that the question is not interesting and complex. The debate, instead, addresses the predictive value of the models produced so far. Are they good enough to be used outside of the scientific process aimed at improving them? Or should one wait for "the scientific method" to bring forth substantial improvements to the current understanding and only then start using its results? One can take both stand points, but some recent developments will bring many towards the second approach.
Kevin de Groote

Cell Beta Prototypes - 0 views

  •  
    Cell Press and Elsevier have launched a project called Article of the Future that is an ongoing collaboration with the scientific community to redefine how the scientific article is presented online....
  •  
    well - neither of the two examples they have given shows much imagination. I don't think that either of these will be better than just using the full-screen pdf, my preferred way after printing and reading on paper... btw, Kevin: are you still around? could we meet?
ESA ACT

STIX Fonts - General Information - 0 views

  •  
    First time I heard about this relevant project. In brief: the mission of the Scientific and Technical Information Exchange (STIX) font creation project is the preparation of a comprehensive set of fonts that serve the scientific and engineering community.
santecarloni

Physics anniversaries: How Professor Maxwell changed the world | The Economist - 1 views

  •  
    Maxwell remains the great unsung hero of human progress, the physicists' physicist whose name means little to those without a scientific bent. His life's work [....] is among the most enduring scientific legacies of all time, on a par with those of his more widely acclaimed peers, Isaac Newton and Albert Einstein.
jmlloren

HUBbub 2013 - 0 views

shared by jmlloren on 21 Aug 13
  •  
    HUBbub 2013 is the annual conference for researchers, educators, and IT professionals engaged in building and using cyberinfrastructure. Learn about the latest features in the HUBzero tool box and how they can be used to address the unique challenges of scientific pursuits.
  •  
    It is probably more interesting to check the parent site: hubzero.org: HUBzero ® is a powerful, open source software platform for creating dynamic web sites that support scientific research and educational activities.
Francesco Biscani

Kitware Blog - VTK: an example on how to fix the crisis of scientific software - 2 views

  •  
    A nice blog post about a recent Nature paper highlighting the problems of scientific software.
Dario Izzo

If you're going to do good science, release the computer code too!!! - 3 views

  • Les Hatton, an international expert in software testing resident in the Universities of Kent and Kingston, carried out an extensive analysis of several million lines of scientific code. He showed that the software had an unacceptably high level of detectable inconsistencies.
  •  
    haha. this guy won't make any new friends with this article! I kind of agree, but making your code public doesn't mean you are doing good science... and vice versa! He takes experimental physics as a counter-example, but even there, some teams keep their little secrets on the details of the experiment to have a bit of a head start on other labs. Research is competitive in its current state, and I think only collaborations can overcome this fact.
  •  
    well, sure, competitiveness is good, but for verification (and that should be the case for scientific experiments) the code should be public. It would be nice to have something like BibTeX for code libraries or the versions used... :) btw I fully agree that the code should go public; I had lots of trouble reproducing (reprogramming) some papers in the past ... grr
  •  
    My view is that the only proper way to do scientific communication is full transparency: methodologies, tests, codes, etc. Everything else should be unacceptable. This should hold both for publicly funded science (for which there is the additional moral requirement to give back to the public domain what was produced with taxpayers' money) and privately funded science (where the need to turn a profit should be of lesser importance than the proper application of the scientific method).
  •  
    Same battle we have been fighting for a few years now....
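    An earlier comment in this thread wishes for "something like BibTeX for code libraries or versions used". As a minimal sketch of that idea (the function name and the packages queried are illustrative assumptions, not an existing standard), one could record the exact library versions a computation relied on, for inclusion in a paper's supplementary material:

    ```python
    # Sketch: record the exact versions of the libraries a result depends
    # on, so the computational environment can be cited and reproduced.
    import platform
    from importlib.metadata import version, PackageNotFoundError

    def environment_record(packages):
        """Map each package name to its installed version, plus the
        Python interpreter version, for citation alongside a paper."""
        record = {"python": platform.python_version()}
        for name in packages:
            try:
                record[name] = version(name)
            except PackageNotFoundError:
                record[name] = "not installed"
        return record

    # Example: pin whatever the analysis imported.
    print(environment_record(["numpy", "scipy"]))
    ```

    Such a record answers only the "which versions" half of the problem; citing and crediting the code itself is a separate (and, as the thread notes, still open) question.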
Francesco Biscani

The Exploration of the Moon: Scientific American - 0 views

  •  
    This article originally appeared in the October 1969 issue of Scientific American. It really made me sad about the current state of human space exploration.
Christophe Praz

Scientific method: Defend the integrity of physics - 2 views

  •  
    Interesting article about theoretical physics theories vs. experimental verification. Can we state that a theory can be so good that its existence supplants the need for data and testing? If a theory is proved to be untestable experimentally, can we still say that it is a scientific theory? (not in my opinion)
  •  
    There is an interesting approach by Feynman: it does not make sense to describe something whose consequences we cannot measure. So a theory that is so removed from experiment that it cannot be backed by it is pointless and of no consequence. It is a bit like the statement "if a tree falls in the forest and nobody is there to hear it, does it make a sound?". We would typically extrapolate to say that it does make a sound. But actually nobody knows - you would have to take some kind of measurement. And even more fundamentally, it does not make any difference! For all intents and purposes there is no point in forcing a prediction that you cannot measure and that therefore does not reflect an event in your world.
  •  
    "Mathematics is the model of the universe, not the other way round" - M. R.
johannessimon81

Mathematicians Predict the Future With Data From the Past - 6 views

  •  
    Asimov's Foundation meets ACT's Tipping Point Prediction?
  •  
    Good luck to them!!
  •  
    "Mathematicians Predict the Future With Data From the Past". GREAT! And physicists probably predict the past with data from the future?!? "scientists and mathematicians analyze history in the hopes of finding patterns they can then use to predict the future". Big deal! That's what any scientist does anyway... "cliodynamics"!? Give me a break!
  •  
    still, some interesting thoughts in there ... "Then you have the 50-year cycles of violence. Turchin describes these as the building up and then the release of pressure. Each time, social inequality creeps up over the decades, then reaches a breaking point. Reforms are made, but over time, those reforms are reversed, leading back to a state of increasing social inequality. The graph above shows how regular these spikes are - though there's one missing in the early 19th century, which Turchin attributes to the relative prosperity that characterized the time. He also notes that the severity of the spikes can vary depending on how governments respond to the problem. Turchin says that the United States was in a pre-revolutionary state in the 1910s, but there was a steep drop-off in violence after the 1920s because of the progressive era. The governing class made decisions to reign in corporations and allowed workers to air grievances. These policies reduced the pressure, he says, and prevented revolution. The United Kingdom was also able to avoid revolution through reforms in the 19th century, according to Turchin. But the most common way for these things to resolve themselves is through violence. Turchin takes pains to emphasize that the cycles are not the result of iron-clad rules of history, but of feedback loops - just like in ecology. "In a predator-prey cycle, such as mice and weasels or hares and lynx, the reason why populations go through periodic booms and busts has nothing to do with any external clocks," he writes. "As mice become abundant, weasels breed like crazy and multiply. Then they eat down most of the mice and starve to death themselves, at which point the few surviving mice begin breeding like crazy and the cycle repeats." There are competing theories as well. A group of researchers at the New England Complex Systems Institute - who practice a discipline called econophysics - have built their own model of political violence and
  •  
    It's not the scientific activity described in the article that is uninteresting; on the contrary! But the way it is described is just a bad joke. Once again the results themselves are seemingly not sexy enough, and thus something is sold as the big revolution, though it's just the application of the oldest scientific principles in a slightly different way than before.
Joris _

18 Complicated Scientific Ideas Explained Simply - 8 views

  •  
    nice exercise for all scientists. I guess it is how you would explain number theory to your kids ;-)
  •  
    Lame. They use words longer than one syllable... see one of my posts below...
Luís F. Simões

Our approach to replication in computational science - 2 views

  • So what did we do to make this paper extra super replicable? If you go to the paper Web site, you'll find:
  • p.s. I think I have to refer to this cancer results not reproducible paper somewhere. Done.
  •  
    good discussion on the replicability/reproducibility of scientific results (also a nice example of how to do it right... in bioinformatics at least)
Thijs Versloot

Role of data visualization in the scientific community @britishlibrary - 1 views

  •  
    In a new exhibition titled Beautiful Science: Picturing Data, Inspiring Insight [bl.uk], the British Library pays homage to the important role data visualization plays in the scientific process. The exhibition can be visited from 20 February until 26 May 2014, and contains works ranging from John Snow's plotting of the 1854 London cholera infections on a map to colourful depictions of the Tree of Life.
Nicholas Lan

Kerbal Space Program | Media - 2 views

  •  
    what seems to be an impressively detailed space game
  •  
    Yeah... 2011 called and sends its greetings. However, there was quite an interesting piece of news about KSP recently... Perhaps it's been the ACT's small failure to spot this opportunity? Considering we wrote space mission games ourselves...
  •  
    This guy actually makes very detailed video tutorials about how to master the orbital dynamics in Kerbal. I think the level of detail (and sometimes realism) is quite impressive: https://www.youtube.com/channel/UCxzC4EngIsMrPmbm6Nxvb-A
  •  
    I will have to try this definitely, looks like a lot of fun.. I also saw some crazy 'Insane Rocket Division' videos.. :)
  •  
    @Marek: true, old news. But "opportunity"? For what? The games we write are always games with a scientific purpose (not training, not educational). Kerbal Space Program is cool, but it is a game just like Microsoft Flight Simulator (only less accurate). Having an ESA mission simulated in it is also cool, but is it what we should or could do? Even more, is it what we want to do? My personal opinion: No-No-No
  •  
    > The games we write are always games with a scientific purpose (not training not educational)
    I'd say investigating how to attract the crowd may be an important part of the "science of crowdsourcing". So, an obvious example would be comparing how many participants the original ACT space mission game attracted versus a variant implemented in Kerbal, and why. Easily done and easily publishable, I think. But that's just an obvious example I can give on the spot. I think there is more potential than that, so I would not dismiss the idea so definitively. But then, correct me if I'm wrong, social sciences are still not represented in the ACT... Perhaps an idea to revive during the upcoming retreat? ;-)
  •  
    it's on sale on steam til tomorrow by the way if anyone's interested
Juxi Leitner

Lotus Plant-Inspired Dust-Busting Shield To Protect Space Gear - 3 views

  • replicate to prevent dirt from accumulating on the surfaces of spacesuits, scientific instruments, robotic rovers, solar array panels and other hardware used to gather scientific data or carry out exploratory activities on other objects in the solar system
  • The team also is trying to partner with Northrop Grumman to add a biocide to the coating, which would kill bacteria that thrive and produce foul odors wherever people are confined to a small space for long periods, like the space station.
  •  
    We had some discussion about the Lotus-effect roughly two years ago. Zoe said that NASA surely worked on it. Well, she was right.
Tobias Seidl

ScienceDirect - Remote Sensing of Environment : Wombats detected from space - 1 views

  •  
    How useful space technology can be for real scientific problems.
Giusi Schiavone

The importance of stupidity in scientific research. - 10 views

  •  
    I suggest you this easy read (it is in a peer-reviewed scientific journal, IF = 6.14): 'We just don't know what we're doing!!!'
  •  
    as a start of a peer reviewed paper this is an interesting first paragraph: "I recently saw an old friend for the first time in many years. We had been Ph.D. students at the same time, both studying science, although in different areas. She later dropped out of graduate school, went to Harvard Law School and is now a senior lawyer for a major environmental organization. At some point, the conversation turned to why she had left graduate school. To my utter astonishment, she said it was because it made her feel stupid. After a couple of years of feeling stupid every day, she was ready to do something else."
  •  
    Hilarious! Mr Schwartz, who did a PhD at Stanford(!) and apparently is working as a postdoc now, has finally discovered what science is about!!! Quote: "That's when it hit me: nobody did. That's why it was a research problem." And he seems so excited about it! I think he should not only get published in a 6.14 journal, but also get the Nobel Prize immediately! Seriously, after reading something like this, how can one not have doubts about the educational system in the US?
  •  
    I tend to agree with you, but I think that you are too harsh - it's still only an "essay", and one of his points, that education at the postgraduate level should not be about indoctrinating what we already know, is valid...
  •  
    I think this quote by Richard Horton is relevant to the discussion: "We portray peer review to the public as a quasi-sacred process that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong." :P