Advanced Concepts Team / Group items tagged: methodology

Luís F. Simões

NASA Goddard to Auction off Patents for Automated Software Code Generation - 0 views

  • The technology was originally developed to generate control code for spacecraft swarms, but it is broadly applicable to any commercial application where rule-based systems development is used.
  •  
    This is related to the "Verified Software" item in NewScientist's list of ideas that will change science. At the link below you'll find the text of the patents being auctioned: http://icapoceantomo.com/item-for-sale/exclusive-license-related-improved-methodology-formally-developing-control-systems :)
    Patent #7,627,538 ("Swarm autonomic agents with self-destruct capability") makes for quite an interesting read:
    "This invention relates generally to artificial intelligence and, more particularly, to architecture for collective interactions between autonomous entities."
    "In some embodiments, an evolvable synthetic neural system is operably coupled to one or more evolvable synthetic neural systems in a hierarchy."
    "In yet another aspect, an autonomous nanotechnology swarm may comprise a plurality of workers composed of self-similar autonomic components that are arranged to perform individual tasks in furtherance of a desired objective."
    "In still yet another aspect, a process to construct an environment to satisfy increasingly demanding external requirements may include instantiating an embryonic evolvable neural interface and evolving the embryonic evolvable neural interface towards complex complete connectivity."
    "In some embodiments, NBF 500 also includes genetic algorithms (GA) 504 at each interface between autonomic components. The GAs 504 may modify the intra-ENI 202 to satisfy requirements of the SALs 502 during learning, task execution or impairment of other subsystems."
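    To make the quoted claims a bit more concrete: the core mechanism is a genetic algorithm tuning the parameters of a small neural interface between components until an external requirement is met. Below is a toy sketch of that loop in Python, entirely invented for illustration (the requirement, the one-unit "interface", and all numbers are hypothetical; this is not the patented NASA system):

        import math
        import random

        # Hypothetical external requirement the evolved interface must
        # satisfy: reproduce a logical OR on its two inputs.
        INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
        TARGET = [0.0, 1.0, 1.0, 1.0]

        def respond(weights, x):
            # minimal "neural interface": one sigmoid unit with a bias term
            s = weights[0] * x[0] + weights[1] * x[1] + weights[2]
            return 1.0 / (1.0 + math.exp(-s))

        def error(weights):
            # lower is better: squared deviation from the requirement
            return sum((respond(weights, x) - t) ** 2 for x, t in zip(INPUTS, TARGET))

        def evolve(pop_size=30, generations=200, sigma=0.5):
            pop = [[random.gauss(0, 1) for _ in range(3)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=error)
                parents = pop[: pop_size // 2]   # truncation selection
                children = [[w + random.gauss(0, sigma) for w in random.choice(parents)]
                            for _ in range(pop_size - len(parents))]
                pop = parents + children         # elitist: the best half survives
            return min(pop, key=error)

        best = evolve()
        print("evolved interface weights:", best, "error:", error(best))

    The patent's evolvable neural interfaces are of course far more elaborate; the sketch only shows the selection-mutation loop acting on interface parameters.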
ESA ACT

Microfibre-nanowire hybrid structure for energy scavenging : Abstract : Nature - 0 views

  •  
    This work establishes a methodology for scavenging light-wind energy and body-movement energy using fabrics.
Tobias Seidl

Global Futures Studies & Research by the MILLENNIUM PROJECT - 0 views

  •  
    The Millennium Project is a global participatory futures research think tank of futurists, scholars, business planners, and policy makers who work for international organizations, governments, corporations, NGOs, and universities. The Millennium Project manages a coherent and cumulative process that collects and assesses judgements from its several hundred participants to produce the annual "State of the Future", "Futures Research Methodology" series, and special studies such as the State of the Future Index, Future Scenarios for Africa, Lessons of History, Environmental Security, Applications of Futures Research to Policy, and a 700+ annotated scenarios bibliography.
  •  
    very nice page - we should use some of its resources!!
Nicholas Lan

Advancing Aeronautics: A Decision Framework for Selecting Research Agendas | RAND - 1 views

  •  
    Possibly some of you might find this interesting: a methodology for selecting research agendas, particularly with respect to NASA, from the RAND Corporation. "Develops a unified decisionmaking approach for selecting aeronautics research agendas that quantifies the social and economic reasons for the research, balances competing perspectives, and enables transparent explanation of the resulting decisions."
pacome delva

[1107.5728] The network of global corporate control - 1 views

  • Abstract: The structure of the control network of transnational corporations affects global market competition and financial stability. So far, only small national samples were studied and there was no appropriate methodology to assess control globally. We present the first investigation of the architecture of the international ownership network, along with the computation of the control held by each global player. We find that transnational corporations form a giant bow-tie structure and that a large portion of control flows to a small tightly-knit core of financial institutions. This core can be seen as an economic "super-entity" that raises new important issues both for researchers and policy makers.
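    To see what the paper's "bow-tie" means operationally: the core is the largest strongly connected component of the ownership graph, and the IN/OUT wings are whatever can reach it or be reached from it. A minimal sketch with networkx, on invented toy data (node names and edges are hypothetical, not from the paper's dataset):

        import networkx as nx

        # hypothetical ownership edges: A -> B means "A holds shares of B"
        edges = [("FundA", "BankX"), ("BankX", "BankY"), ("BankY", "BankX"),
                 ("BankY", "FirmC"), ("BankX", "FirmD"), ("FundB", "BankY")]
        G = nx.DiGraph(edges)

        # the core: largest set of firms that all (indirectly) own each other
        core = max(nx.strongly_connected_components(G), key=len)
        seed = next(iter(core))
        in_set = nx.ancestors(G, seed) - core     # shareholders upstream of the core
        out_set = nx.descendants(G, seed) - core  # firms ultimately controlled by it

        print("core:", core)                      # {'BankX', 'BankY'}
        print("IN:", in_set, "OUT:", out_set)

    On the real data, the paper finds this core to be a small, tightly-knit set of financial institutions through which a large portion of global control flows.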
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making teamwork nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, whatever the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and it is scaling just fine as the number of contributors increases. I agree that what they achieved is remarkable, but more for "what" they achieved than for "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million-lines-of-code range.
    My personal opinion of why these open models are having so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an operating system or writing a new balancing-tree algorithm for efficiently storing files on a filesystem.
    Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together: open-science journals vs traditional journals, conservatism of the science community vs programmers (to me one of the reasons for this might be the average age of the two groups, which probably differs by more than 10 years ...), and then emailing vs other collaboration tools ... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize"; what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively thanks to wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that keeps coming back, and so far it has always boiled down to the question of how much there needs to be one team project to which everybody in the team contributes in his / her way, versus how much we should let smaller, flexible teams form and progress within the team, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it for their own personal attitude or be it for pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work, where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and the others waiting. Sometimes, even, some people don't do what they are supposed to do, so nothing happens ... this cannot be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team work, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc. about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And it is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow, it just does not work. Simply lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only the brains, and not the people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of the "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that now that it is mathematicians who are doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :)
    I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given the variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school").
    Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework where this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different, because they rely on non-specialists for the work to be done). This raises several problems, mainly that of credit, which has not really been solved as far as I read in the wiki (if an article is written, who writes it, whose names go on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealized view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much of a problem with giving credit. It is not the first time a group of researchers has collectively taken credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible, there is no way to fake contributions or to give undue credit, and one could cite a group paper without problems in one's CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the actual system works. Let's say you want a grant or a position; then the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that I should visit you guys at ESTEC... :-)
  •  
    urgently!! btw: we will have the ACT Christmas dinner on the 9th in the evening ... are you coming?
Ma Ru

Estimating the reproducibility of psychological science - 1 views

  •  
    Apparently, between 33 and 50%. But I'm not convinced the results are reproducible...
Dario Izzo

If you're going to do good science, release the computer code too!!! - 3 views

  • Les Hatton, an international expert in software testing resident in the Universities of Kent and Kingston, carried out an extensive analysis of several million lines of scientific code. He showed that the software had an unacceptably high level of detectable inconsistencies.
  •  
    haha. this guy won't make any new friends with this article! I kind of agree, but making your code public doesn't mean you are doing good science... and vice versa! He takes experimental physics as a counter-example, but even there, some teams keep their little secrets on the details of the experiment to keep a bit of a lead over other labs. Research is competitive in its current state, and I think only collaborations can overcome this fact.
  • ...1 more comment...
  •  
    well, sure, competitiveness is good, but for verification (and that should be the norm for scientific experiments) the code should be public. It would be nice to have something like BibTeX for code libraries or the versions used (see the sketch at the end of this thread)... :) btw I fully agree that the code should go public; I had lots of trouble reproducing (reprogramming) some papers in the past ... grr
  •  
    My view is that the only proper way to do scientific communication is full transparency: methodologies, tests, codes, etc. Everything else should be unacceptable. This should hold both for publicly funded science (for which there is the additional moral requirement to give back to the public domain what was produced with taxpayers' money) and privately funded science (where the need to turn a profit should be of lesser importance than the proper application of the scientific method).
  •  
    The same battle we have been fighting for a few years now....
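    On the "BibTeX for code libraries" wish in the thread above: biblatex does define an @software entry type with version, url and date fields, so something along these lines is already possible (the entry below is entirely made up for illustration):

        @software{mysolver_1_4_2,
          author  = {Doe, Jane},
          title   = {mysolver: a hypothetical sparse PDE solver},
          version = {1.4.2},
          date    = {2011-03-01},
          url     = {https://example.org/mysolver},
          note    = {commit a1b2c3d, used for all results in the paper}
        }

    Pinning the exact version (or commit hash) used is what would make rerunning a paper's computations feasible years later.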
Luís F. Simões

Seminar: You and Your Research, Dr. Richard W. Hamming (March 7, 1986) - 10 views

  • This talk centered on Hamming's observations and research on the question "Why do so few scientists make significant contributions and so many are forgotten in the long run?" From his more than forty years of experience, thirty of which were at Bell Laboratories, he has made a number of direct observations, asked very pointed questions of scientists about what, how, and why they did things, studied the lives of great scientists and great contributions, and has done introspection and studied theories of creativity. The talk is about what he has learned in terms of the properties of the individual scientists, their abilities, traits, working habits, attitudes, and philosophy.
  •  
    Here's the link related to one of the lunch time discussions. I recommend it to every single one of you. I promise it will be worth your time. If you're lazy, you have a summary here (good stuff also in the references, have a look at them): Erren TC, Cullen P, Erren M, Bourne PE (2007) Ten Simple Rules for Doing Your Best Research, According to Hamming. PLoS Comput Biol 3(10): e213.
  • ...3 more comments...
  •  
    I'm also pretty sure that the ones who are remembered are not the ones who tried to be... so why all these rules!? I think it's bullshit...
  •  
    The seminar is not a manual on how to achieve fame, but rather an analysis of how others were able to perform very significant work. The two things are in some cases related, but the seminar's focus is on the second.
  •  
    Then read a good book on the life of Copernicus; it's the anti-manual of Hamming... he breaks all the rules!
  •  
    honestly I think that some of these rules actually do make sense ... but I am always curious to get a good book recommendation (which book on Copernicus would you recommend?) btw Pacome: we are in Paris ... in case you have some time ...
  •  
    I warmly recommend this book, a bit old but fascinating: The Sleepwalkers by Arthur Koestler. It shows that progress in science is not a straight line and does not obey any rule... It is not as rational as most people seem to believe today. http://www.amazon.com/Sleepwalkers-History-Changing-Universe-Compass/dp/0140192468/ref=sr_1_1?ie=UTF8&qid=1294835558&sr=8-1 Otherwise, yes, I have some time! My phone number: 0699428926. We live around Denfert-Rochereau and Montparnasse. We could go for a beer this evening?
Luís F. Simões

The Fantastical Promise of Reversible Computing  - Technology Review - 2 views

  • Reversible logic could cut the energy wasted by computers to zero. But significant challenges lie ahead.
  • By some estimates, the difference between the amount of energy required to carry out a computation and the amount that today's computers actually use is some eight orders of magnitude. Clearly, there is room for improvement.
  • There are one or two caveats, of course. The first is that nobody has succeeded in building a properly reversible logic gate, so this work is entirely theoretical. But there are a number of computing schemes that have the potential to work like this. Thapliyal and Ranganathan point in particular to the emerging technology of quantum cellular automata and show how their approach might be applied.
  • ...1 more annotation...
  • Ref: arxiv.org/abs/1101.4222: Reversible Logic Based Concurrent Error Detection Methodology For Emerging Nanocircuits
  •  
    We did look at making computation more energy-efficient from the bio perspective (efficiency of computation in the brain). This paper was actually the basis for our discussion on a new approach to computing: http://atlas.estec.esa.int/ACTwiki/images/6/68/Sarpeshkar.pdf and it led to several ACT internal studies
  •  
    here is the paper I told you about, on the computational power of analog computing: http://dx.doi.org/10.1016/0304-3975(95)00248-0 you can also get it here: http://www.santafe.edu/media/workingpapers/95-09-079.pdf
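    For context on the excerpts above: the "eight orders of magnitude" figure is essentially the gap between Landauer's bound (erasing one bit must dissipate at least kT ln 2, roughly 3e-21 J at room temperature) and what present hardware dissipates per logic operation; reversible gates sidestep the bound by never erasing information. A back-of-the-envelope sketch in Python, with a toy Toffoli (controlled-controlled-NOT) gate; note the per-operation energy for "real" hardware below is an assumed round number for illustration, not a measured value:

        import math

        k = 1.380649e-23                  # Boltzmann constant, J/K
        landauer = k * 300 * math.log(2)  # minimum cost of erasing one bit at 300 K
        print(f"Landauer limit: {landauer:.2e} J")            # ~2.87e-21 J

        assumed_energy_per_op = 1e-13     # hypothetical energy per logic op, J
        print(f"gap: ~10^{math.log10(assumed_energy_per_op / landauer):.0f}")

        def toffoli(a, b, c):
            # flips the target bit c only when both control bits are set;
            # no information is destroyed, so the gate is its own inverse
            return a, b, c ^ (a & b)

        state = (1, 1, 0)
        once = toffoli(*state)
        assert toffoli(*once) == state    # applying it twice recovers the input
        print("toffoli:", state, "->", once)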
Luís F. Simões

When Astronomy Met Computer Science | Cosmology | DISCOVER Magazine - 1 views

  • “That’s impossible!” he told Borne. “Don’t you realize that the entire data set NASA has collected over the past 45 years is one terabyte?”
  • The LSST, producing 30 terabytes of data nightly, will become the centerpiece of what some experts have dubbed the age of petascale astronomy - that's 10^15 bits (what Borne jokingly calls "a tonabytes").
  • "A major sky survey might detect millions or even billions of objects, and for each object we might measure thousands of attributes in a thousand dimensions. You can get a data-mining package off the shelf, but if you want to deal with a billion data vectors in a thousand dimensions, you're out of luck even if you own the world's biggest supercomputer. The challenge is to develop a new scientific methodology for the 21st century."
  •  
    Francesco, please look at this and get back wrt the /. question ... thanks
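    A sense of what "a billion data vectors in a thousand dimensions" forces on you methodologically: algorithms that stream over the data in bounded memory instead of loading it whole. A minimal sketch using scikit-learn's MiniBatchKMeans; the chunk generator merely fakes survey data, and all sizes and names here are invented:

        import numpy as np
        from sklearn.cluster import MiniBatchKMeans

        def survey_chunks(n_chunks=100, chunk_size=10_000, n_dims=1_000, seed=0):
            # stand-in for reading successive chunks of a sky-survey catalogue
            rng = np.random.default_rng(seed)
            for _ in range(n_chunks):
                yield rng.standard_normal((chunk_size, n_dims))

        model = MiniBatchKMeans(n_clusters=50)
        for chunk in survey_chunks():
            model.partial_fit(chunk)         # update centroids from this chunk only

        print(model.cluster_centers_.shape)  # (50, 1000): the model fits in memory

    The point is that memory use is bounded by the chunk size, not by the survey size; scaling further then becomes an I/O and parallelism problem rather than a RAM problem.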
LeopoldS

Futures wheel - Wikipedia, the free encyclopedia - 2 views

  •  
    of interest to us? Tobias? Kevin?
  •  
    Bullshit!!!