Advanced Concepts Team / Group items tagged "linux"

LeopoldS

Linux Foundation Training Prepares the International Space Station for Linux Migration ... - 1 views

  •  
    linux goes ISS ...
  •  
    From there, it's just a tiny step to OSX :-)
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are quoting out of context! Sharing messages does not make a collaboration, nor does a forum, .... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing to scale just fine as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion on why these open models face so many difficulties in the scientific world is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of change. There is also the fact that the barrier to entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an operating system or writing a new balancing-tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open-science journals vs traditional journals, the conservatism of the science community compared to programmers (to me one of the reasons for this might be the average age of the two groups, which is probably more than 10 years apart ...) and then email vs other collaboration tools .... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize" - what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a means of coordination. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively thanks to wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame, and then your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to be, and should be, a team project to which everybody in the team contributes in his / her own way, or how much we should let smaller, flexible teams within the team form and progress, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team - but hey, if there is a real will for one larger team project to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it because of their own personal attitude or by pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team work where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and others waiting. Sometimes some people don't even do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for 15-member team work, and that is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings, hate, anger, envy, sympathy, love, etc. about the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but the same group of people cannot. As far as I am concerned, this happened many times during my ACT period. And it is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I have efficient collaborations with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow, it just does not work. Simply a lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, why it worked: only the brains, and not the people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I did not make myself clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people); it is something that has been done countless times before. Do you think that the fact that it is now mathematicians doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant in a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different because they rely on non-specialists for the work to be done). This raises several problems, mainly the one of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, whose names go on the paper). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an idealised view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so this model of research does not fit, in my opinion, with the way research is governed. Anyway we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much of a problem with giving credit. It is not the first time a group of researchers has collectively taken credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite a group paper without problems in his/her CV, research grant application, etc.
  •  
    Well my point was more that it could be a problem with how the current system works. Let's say you want a grant or a position: the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of those journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that i should visit you guys at estec... :-)
  •  
    urgently!! btw: we will have the ACT christmas dinner on the 9th in the evening ... are you coming?
Alexander Wittig

Ubuntu on Windows -- The Ubuntu Userspace for Windows Developers - 2 views

  •  
    Sounds like Microsoft is developing a full Linux binary compatibility layer for Windows 10 using syscall translation (like, for example, FreeBSD has for Linux binaries). In simpler terms: you can run any Linux binary (from the Ubuntu base or any of the packages) directly on Windows. No virtual machine. No emulation layer. No recompiling as with SUA or Cygwin.
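    Purely as an illustrative sketch (not Microsoft's actual implementation - the nt_* helpers and the dispatch table below are hypothetical placeholders), a syscall-translation layer of this kind forwards each Linux syscall number to an equivalent native routine instead of emulating a machine:

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        typedef int64_t (*host_impl_t)(uint64_t, uint64_t, uint64_t);

        /* Hypothetical host-side routines backing two Linux syscalls; a real
           layer would invoke native NT services here, not the C library. */
        static int64_t nt_write_file(uint64_t fd, uint64_t buf, uint64_t len)
        {
            return (int64_t)fwrite((const void *)(uintptr_t)buf, 1, (size_t)len,
                                   fd == 1 ? stdout : stderr);
        }

        static int64_t nt_exit_process(uint64_t code, uint64_t a1, uint64_t a2)
        {
            (void)a1; (void)a2;
            return (int64_t)code; /* a real layer would terminate the process */
        }

        /* Map Linux x86-64 syscall numbers (1 = write, 60 = exit) to host code. */
        static host_impl_t table[64] = { [1] = nt_write_file, [60] = nt_exit_process };

        /* Invoked whenever the unmodified Linux binary issues a syscall:
           translate the call, do not emulate hardware. */
        static int64_t dispatch_linux_syscall(uint64_t nr, uint64_t a0,
                                              uint64_t a1, uint64_t a2)
        {
            if (nr >= 64 || table[nr] == NULL)
                return -38; /* -ENOSYS: syscall not covered by the layer */
            return table[nr](a0, a1, a2);
        }

        int main(void)
        {
            const char msg[] = "hello from a \"Linux\" binary\n";
            /* Pretend the guest program issued: write(1, msg, strlen(msg)). */
            return dispatch_linux_syscall(1, 1, (uint64_t)(uintptr_t)msg,
                                          sizeof msg - 1) >= 0 ? 0 : 1;
        }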
Tobias Seidl

Wombats detected from space - 4 views

  •  
    Demonstrates how useful space technology can be.
  •  
    Also, this reminds me of a poem that once sprung out on my Linux console login:
    The wombat lives across the seas,
    Among the far Antipodes.
    He may exist on nuts and berries,
    Or then again, on missionaries;
    His distant habitat precludes
    Conclusive knowledge of his moods,
    But I would not engage the wombat
    In any form of mortal combat.
  •  
    sprung out of your console????? my mac never talks like this to me ....
  •  
    See? Even the console can be user-friendly ;-) If I remember correctly, it was Slackware Linux, and at every console start-up the fortune program was launched: http://linux.die.net/man/6/fortune
  •  
    so you are still not convinced about macs being superior after working for a year with martin?
  •  
    Apparently not - I just got a brand new sexy Sony Vaio S :-)
  •  
    I am sorry for you ... :-)
ESA ACT

LinuxDNA Supercharges Linux with the Intel C/C++ Compiler - 0 views

  •  
    Exciting news!!!
LeopoldS

Global Innovation Commons - 4 views

  •  
    nice initiative!
  •  
    Any viral license is a bad license...
  •  
    I'm pretty confident I'm about to open a can of worms, but mind explaining why? :)
  •  
    I am less worried about the can of worms ... actually eager to open it ... so why????
  •  
    Well, the topic of the GPL vs other open-source licenses (e.g., BSD, MIT, etc.) is as old as the internet and it has provided material for long and glorious flame wars. The executive summary is that the GPL license (the one used by Linux) is a license which imposes some restrictions on the way you are allowed to (re)use the code. Specifically, if you re-use or modify GPL code and re-distribute it, you are required to make it available again under the GPL license. It is called "viral" because once you use a bit of GPL code, you are required to make the whole application GPL - so in this sense GPL code replicates like a virus. On the other end of the spectrum, there are the so-called BSD-like licenses which have more relaxed requirements. Usually, the only obligation they impose is to acknowledge somewhere (e.g., in a README file) that you have used some BSD code and who wrote it (this is called an "attribution clause"), but they do not require you to re-distribute the whole application under the same license. GPL critics usually claim that the license is not really "free" because it does not allow you to do whatever you want with the code without restrictions. GPL proponents claim that the requirements imposed by the GPL are necessary to safeguard the freedom of the code, in order to prevent people from re-using GPL code without giving anything back to the community (which the BSD licenses allow: early versions of Microsoft Windows, for instance, had the networking code basically copy-pasted from BSD-licensed versions of Unix). In my opinion (and this point is often brought up in the debates) the pro/anti GPL division somehow mirrors the anti/pro anarchism division. Anarchists claim that the only way to be really free is the absence of laws, while non-anarchists maintain that the only practical way to be free is to have laws (which by definition limit certain freedoms). So you can see how the topic can quickly become inflammatory :) GPL at the current time is used by aro
  •  
    whoa, the comment got cut off. Anyway, I was just saying that at the present time the GPL license is used by around 65% of open source projects, including the Linux kernel, KDE, Samba, GCC, all the GNU utils, etc. The topic is much deeper than this brief summary, so if you are interested in it, Leopold, we can discuss it at length in another place.
  •  
    Thanks for the record-long comment - I am sure it is the longest ever made on an ACT diigo post! On the topic, I would rather lean towards the GPL license (which I also advocated for the Marek viewer programme we put on SourceForge btw), mainly because I don't trust that open source by nature delivers a better product and thus will prevail, but I still would like it to succeed, which I am not sure it would if there were mainly BSD-like licenses around. ... but clearly, this is an outsider talking :-)
  •  
    btw: did not know the anarchist penchant of Marek :-)
  •  
    Well, not going into the discussion about GPL/BSD, the viral license in this particular case in my view simply undermines the "clean and clear" motivations of the initiative's authors - why should *they* be credited for using something they have no rights to? And I don't like viral licenses because they prevent all those people who want to release their stuff under a different license from using things released under the viral license, thus limiting the usefulness of the stuff released under that license :) BSD is not a perfect license either, it also had major flaws. And I'm not an anarchist, lol
Luzi Bergamin

Gesture controlled Linux Desktop - 6 views

  •  
    Demo of a Linux Desktop controlled by a PS3 move controller.
LeopoldS

How I ended up with Mac - Miguel de Icaza - 2 views

  •  
    from a linux guru ...
Luís F. Simões

Raspberry Pi in space: Putting the Linux PC into orbit | ZDNet - 0 views

  • A thriving home-brew community is already putting the credit card-sized PC to use in drones and robots. The device's designer, Eben Upton, wants to see it in rockets and satellites, too.
  •  
    related: Raspberry Pi Computer To Cross The Atlantic Ocean In Autonomous Boat
LeopoldS

How Apple Killed the Linux Desktop and Why That Doesn't Matter | Wired Enterprise | Wir... - 2 views

  •  
    nice read ... let's hope so, since Apple has anyway already become too powerful
Francesco Biscani

Pi Computation Record - 4 views

  •  
    For Dario: the pi computation record was established on a single desktop computer using a cache-optimized algorithm. The previous record was obtained by a cluster of hundreds of computers. The cache-optimized algorithm was 20 times faster.
  •  
    Teeeeheeeeheeee... assembler programmers greet Java/Python/Etc. programmers :)
  •  
    And he seems to have done everything in his free time!!! I like the first FAQ.... "why did you do it?"
  •  
    did you read any of the books he recommends? He suggests:
    - Modern Computer Arithmetic by Richard Brent and Paul Zimmermann, version 0.4, November 2009 (full text available online).
    - The Art of Computer Programming, Volume 2: Seminumerical Algorithms by Donald E. Knuth, Addison-Wesley, third edition, 1998 (more information online).
  •  
    btw: we will very soon have the very same processor in the new iMac .... what record are you going to beat with it?
  •  
    Zimmermann is the same guy behind the MPFR multiprecision floating-point library, if I recall correctly: http://www.mpfr.org/credit.html I've not read the book... Multiprecision arithmetic is a huge topic though, at least from the scientific and number-theory point of view, if not for its applications to engineering problems. "The Art of Computer Programming" is probably the closest thing to a bible for computer scientists :) (A minimal MPFR usage sketch follows at the end of this thread.)
  •  
    "btw: we will very soon have the very same processor in the new iMac .... what record are you going to beat with it?" Fastest Linux install on an iMac :)
  •  
    "Fastest Linux install on an iMac :)" that is going to be a though one but a worthy aim! ""The art of computer programming" is probably the closest thing to a bible for computer scientists :)" yep! Programming is art ;)
Francesco Biscani

LaTeX Lab - Welcome - 6 views

shared by Francesco Biscani on 11 May 10
  •  
    Finally LaTeX for Google docs?
  •  
    mmm seems better (more options and direct preview) than spartantex
  •  
    Great!!! seems like the tool we were looking for....
  •  
    does not seem to work (at least not with Safari)
  •  
    Excellent!!!!!!!!! and works with Linux :-)
  •  
    Here it works fine in Chromium, Firefox and Opera (Linux).
  •  
    Worked fine, but after saving a document I can't get it back to the LaTeX mode...
Francesco Biscani

The Great Linux World Map - 6 views

  •  
    Friday afternoon geeky humour.
Francesco Biscani

iTWire - London Stock Exchange gets the facts and dumps Windows for Linux - 1 views

  • Microsoft’s marketing arm excitedly churned out a case study in 2005 when the London Stock Exchange (LSE) rolled out a C# stock exchange ticker system on Windows Server 2003 and SQL Server 2000. Four years later the LSE has scrapped the whole system in favour of a Linux-based solution instead.
  •  
    Microsoft "Gets the facts".
LeopoldS

Microsoft Offers Secure Windows … But Only to the Government | Threat Level - 0 views

  •  
    why didn't they take linux as a basis?
LeopoldS

ESA startet Summer of Code in Space « NEWS « Linux-Magazin Online - 6 views

  •  
    post here more of these if you see them ....
Kevin de Groote

Gephi, an open source graph visualization and manipulation software - 5 views

shared by Kevin de Groote on 05 Jun 12
  •  
    Gephi is an interactive visualization and exploration platform for all kinds of networks and complex systems, dynamic and hierarchical graphs. Runs on Windows, Linux and Mac OS X. Gephi is open-source and free. Gephi 0.8.1-beta has been released! Discover a new Timeline, dynamic ranking and weighted community detection.
Dario Izzo

Google ditches Windows for Mac/Linux - 2 views

  •  
    it is done.... now maybe ESA will consider doing it too?
  •  
    Yeah... and finally give all staff those neat silver Macs...
LeopoldS

French National Police Force saves €2 million a year with Ubuntu | Canonical - 0 views

  •  
    Be careful, the article is written by the company that did the migration to Ubuntu. Here is a comment by a police IT guy (originally in French, translated here). In brief he says that the migration was not a problem for most of the people, except for some problems with Access. But it did cost money! and the savings were not the main argument. "Personally affected by this news, which is not really news, I can assure you that Canonical's message is above all commercial... Ubuntu was chosen because of its dominance and the fact that it is based on Debian, which is considered very stable. The distribution is easier to maintain than most of those that were tested. "4500 workstations" means "4500 gendarmerie units", so in the brigades you know... As for OpenOffice, the transition went fairly smoothly, except for the Access applications, which had a bit of trouble moving to the Base module... Most of them were rewritten as php/mysql applications or as centralised applications... Today the gendarmes, who, I remind you, are not IT specialists but live for you (in the strictest sense, I assure you), use firefox/thunderbird and openoffice as thick clients, the rest being intranet applications or applications "invisible" to the user. The switch to Ubuntu does not hinder use at all, because the trio mentioned above is already known and mastered by many of my colleagues. I am not supposed to speak in place of my superiors, but personally I think the choice of Ubuntu is a smart one, because it is a distribution that is very easy to pick up and really easy to maintain for the IT specialists, of whom I am one... One should not forget that a more elitist distribution would have been mastered by fewer people, and so maintenance would have been more costly... So today we "master" this part of our infrastructure and the trans
  •  
    Lotus Notes doesn't run on Linux anyway...
Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research one year ago; we dropped it as no one was really considering it ... but in space low CPU power consumption is crucial!! Maybe we should look back into this?
  •  
    Q1: For the time being, for what purposes are computers mainly used on board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not already there, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer to this may not be obvious, a potential study could address this very issue. For instance one can consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies on the exact route or +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is, and what can be achieved using it (e.g. the result is probabilistic but falls within a defined range of precision), etc. Did they build a complete chip or at least a sub-circuit, or still only logic gates...
  •  
    Satellites use old CPUs also because, with the trend towards higher power consumption, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation also), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at the level of (say) 10^-6 (see the small numerical sketch at the end of this thread). All in all you are right, a first study should assess for what applications this would be useful at all ... and at what precision / power levels
  •  
    The interest of this could be a high fault tolerance for some math operations, ... which would have the effect of simplifying the job of coders! I don't think this is a good idea regarding power consumption for the CPU (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance a LEON SPARC (e.g. used on some platforms for ESA) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings, considering a usual 10-15 kW spacecraft
  •  
    What about speed then? Seven times faster could allow some real-time navigation at higher speed (e.g. the velocity of terminal guidance for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?) Another issue is the radiation tolerance of the technology ... if the PCMOS chips are more tolerant to radiation they could more easily get space-qualified.....
  •  
    I don't remember what the speed factor is, but I guess this might do it! Although, I remember when using an IMU that you cannot get the data above a given rate (e.g. 20 Hz, even though the ADC samples the sensor at a slightly faster rate), so somehow it is not just the CPU that must be re-thought. When I say qualification I also imply the "hardened" phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does this technology need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip with this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel code floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit together with a traditional CPU, but then again you have more implementation overhead creeping in. Finally, recent processors by Intel (e.g., the Atom) and especially ARM boast really low power-consumption levels, while at the same time offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
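    To make the precision discussion above concrete, here is a tiny illustrative sketch (plain C, unrelated to the actual PCMOS hardware): emulate a degraded-precision result by rounding a double down to single precision and check whether the relative error stays within the ~1e-6 tolerance mentioned earlier:

        #include <math.h>
        #include <stdio.h>

        int main(void)
        {
            double exact  = 149597870.7;   /* e.g. a range in km (about 1 AU) */
            float  cheap  = (float)exact;  /* "degraded precision" version    */
            double relerr = fabs((double)cheap - exact) / exact;

            printf("exact      = %.4f km\n", exact);
            printf("degraded   = %.4f km\n", (double)cheap);
            printf("rel. error = %.2e -> %s the 1e-6 tolerance\n",
                   relerr, relerr <= 1e-6 ? "within" : "outside");
            return 0;
        }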