
tvinko

Massively collaborative mathematics : Article : Nature - 28 views

  •  
    peer-to-peer theorem-proving
  • ...14 more comments...
  •  
    Or: mathematicians catch up with open-source software developers :)
  •  
    "Similar open-source techniques could be applied in fields such as [...] computer science, where the raw materials are informational and can be freely shared online." ... or we could reach the point, unthinkable only few years ago, of being able to exchange text messages in almost real time! OMG, think of the possibilities! Seriously, does the author even browse the internet?
  •  
    I do not agree with you F., you are citing out of context! Sharing messages does not make a collaboration, nor does a forum ... You need a set of rules and a common objective. This is clearly observable in "some team", where these rules are lacking, making team work nonexistent. The additional difficulties here are that it involves people who are almost strangers to each other, and the immateriality of the project. The support they are using (web, wiki) is only secondary. What they achieved is remarkable, regardless of the subject!
  •  
    I think we will just have to agree to disagree then :) Open source developers have been organizing themselves with emails since the early '90s, and most projects (e.g., the Linux kernel) still do not use anything else today. The Linux kernel mailing list gets around 400 messages per day, and they are managing just fine to scale as the number of contributors increases. I agree that what they achieved is remarkable, but it is more for "what" they achieved than "how". What they did does not remotely qualify as "massively" collaborative: again, many open source projects are managed collaboratively by thousands of people, and many of them are in the multi-million lines of code range. My personal opinion of why in the scientific world these open models are having so many difficulties is that the scientific community today is (globally, of course there are many exceptions) a closed, mostly conservative circle of people who are scared of changes. There is also the fact that the barrier of entry in a scientific community is very high, but I think that this should merely scale down the number of people involved and not change the community "qualitatively". I do not think that many research activities are so much more difficult than, e.g., writing an O(1) scheduler for an Operating System or writing a new balancing tree algorithm for efficiently storing files on a filesystem. Then there is the whole issue of scientific publishing, which, in its current form, is nothing more than a racket. No wonder traditional journals are scared to death by these open-science movements.
  •  
    here we go ... nice controversy! but maybe too many things mixed up together - open-science journals vs traditional journals, the conservatism of the science community compared to programmers (to me one of the reasons for this might be the average age of both groups, which is probably more than 10 years apart ...) and then email vs other collaboration tools ... will have to look at the paper now more carefully ... (I am surprised to see no comment from José or Marek here :-)
  •  
    My point about your initial comment is that it is simplistic to infer that emails imply collaborative work. You actually use the word "organize" - what does it mean, indeed? In the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review). Mailing is just a coordination means. In collaborations and team work, it is about rules, not only about the technology you use to potentially collaborate. Otherwise, all projects would be successful, and we would not learn management at school! They did not write that they managed the collaboration exclusively because of wikipedia and emails (or other 2.0 technology)! You are missing the part that makes it successful and remarkable as a project. On his blog the guy put a list of 12 rules for this project. None are related to emails, wikipedia, forums ... because that would be lame and your comment would make sense. Following your argumentation, the tools would be sufficient for collaboration. In the ACT, we have plenty of tools, but no team work. QED
  •  
    the question of ACT team work is one that comes back continuously, and so far it has always boiled down to the question of how much there needs to be and should be a team project to which everybody in the team contributes in his/her way, or how much we should let smaller, flexible teams form and progress within the team, following a bottom-up initiative rather than imposing one from the top down. At this very moment, there are at least 4 to 5 teams with their own tools and mechanisms which are active and operating within the team. - but hey, if there is a real will for one larger project of the team to which all or most members want to contribute, let's go for it .... but in my view, it should be on a convince rather than oblige basis ...
  •  
    It is, though, indicative that some of the team members do not see all the collaboration and team work happening around them. We always leave the small and agile sub-teams to form and organize themselves spontaneously, but clearly this method leaves out some people (be it because of their own personal attitude or by pure chance). For those cases we could think of providing the possibility to participate in an alternative, more structured team effort where we actually manage the hierarchy and meritocracy and perform the project review (to use Joris' words).
  •  
    I am, and was, involved in "collaboration", but I can say from experience that we are mostly a sum of individuals. In the end, it is always one or two individuals doing the job, and others waiting. Sometimes, some people don't even do what they are supposed to do, so nothing happens ... this could not be defined as team work. Don't get me wrong, this is the dynamic of the team and I am OK with it ... in the end it is less work for me :) team = 3 members or more. I am personally not looking for a 15-member team effort, and it is not what I meant. Anyway, this is not exactly the subject of the paper.
  •  
    My opinion about this is that a research team, like the ACT, is a group of _people_ and not only brains. What I mean is that people have feelings - hate, anger, envy, sympathy, love, etc. - towards the others. Unfortunately(?), this can lead to situations where, in theory, a group of brains could work together, but not the same group of people. As far as I am concerned, this happened many times during my ACT period. And it is happening now with me in Delft, where I have the chance to be in an even more international group than the ACT. I collaborate efficiently with those people who are "close" to me not only in scientific interest, but also in some private sense. And I have people around me who have interesting topics and might need my help and knowledge, but somehow it just does not work. Simply a lack of sympathy. You know what I mean, don't you? About the article: there is nothing new, indeed. However, here is why it worked: only the brains, and not the people, worked together on a very specific problem. Plus maybe they were motivated by the idea of e-collaboration. No revolution.
  •  
    Joris, maybe I made myself not clear enough, but my point was only tangentially related to the tools. Indeed, it was the original article's mention of "development of new online tools" which prompted my reply about emails. Let me try to say it more clearly: my point is that what they accomplished is nothing new methodologically (i.e., online collaboration of a loosely knit group of people), it is something that has been done countless times before. Do you think that the fact that it is now mathematicians doing it makes it somehow special or different? Personally, I don't. You should come over to some mailing lists of mathematical open-source software (e.g., SAGE, Pari, ...), there's plenty of online collaborative research going on there :) I also disagree that, as you say, "in the case of Linux, what makes the project work is the rules they set and the management style (hierarchy, meritocracy, review)". First of all, I think the main engine of any collaboration like this is the objective, i.e., wanting to get something done. Rules emerge from self-organization later on, and they may be completely different from project to project, ranging from almost anarchy to BDFL (benevolent dictator for life) style. Given this kind of variety that can be observed in open-source projects today, I am very skeptical that any kind of management rule can be said to be universal (and I am pretty sure that the overwhelming majority of project organizers never went to any "management school"). Then there is the social aspect that Tamas mentions above. From my personal experience, communities that put technical merit above everything else tend to remain very small and generally become irrelevant. The ability to work and collaborate with others is the main asset that a participant of a community can bring. I've seen many times on the Linux kernel mailing list contributions deemed "technically superior" being disregarded and not considered for inclusion in the kernel because it was clear that
  •  
    hey, just caught up on the discussion. For me what is very new is mainly the framework in which this collaborative (open) work is applied. I haven't seen this kind of open working in any other field of academic research (except for BOINC-type projects, which are very different because they rely on non-specialists for the work to be done). This raises several problems, mainly the one of credit, which has not really been solved as I read in the wiki (if an article is written, who writes it, whose names are on the paper?). They chose to refer to the project, and not to the individual researchers, as a temporary solution... It is not so surprising to me that this type of work was first done in the domain of mathematics. Perhaps I have an ideal view of this community, but it seems that the result obtained is more important than who obtained it... In many areas of research this is not the case, and one reason is how the research is financed. To obtain money you need to have (scientific) credit, and to have credit you need to have papers with your name on them... so, in my opinion, this model of research does not fit with the way research is governed. Anyway, we had a discussion on the Ariadnet on how to use it, and one idea was to do this kind of collaborative research; an idea that was quickly abandoned...
  •  
    I don't really see much the problem with giving credit. It is not the first time a group of researchers collectively take credit for a result under a group umbrella, e.g., see Nicolas Bourbaki: http://en.wikipedia.org/wiki/Bourbaki Again, if the research process is completely transparent and publicly accessible there's no way to fake contributions or to give undue credit, and one could cite without problems a group paper in his/her CV, research grant application, etc.
  •  
    Well, my point was more that it could be a problem with how the current system works. Let's say you want a grant or a position; the jury will count the number of papers with you as first author, and the other papers (at least in France)... and look at the impact factor of these journals. Then you would have to set up a rule for classifying the authors (endless and pointless discussions), and give an impact factor to the group...?
  •  
    it seems that I should visit you guys at ESTEC... :-)
  •  
    urgently!! btw: we will have the ACT Christmas dinner on the 9th in the evening ... are you coming?
LeopoldS

House Approves Flat 2011 Budget for Most Science Agencies - ScienceInsider - 0 views

  •  
    "Some segments of the research community would get their preferences under the House spending bill. For example, it matches the president's request for a 1.5% increase for NASA, to $19 billion, including a 12% increase, to $5 billion, for the space science program. Legislators had already worked out a deal with the White House on the future of the manned space program, and they included funding for an additional shuttle flight in 2011. They even added $35 million to the $20 million increase that the president requested for NASA's education programs, boosting them by a whopping 30% to $180 million. "
Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research one year ago; we dropped it as no one was really considering it ... but in space a low CPU power consumption is crucial!! Maybe we should look back into this?
  • ...6 more comments...
  •  
    Q1: For the time being, for what purposes are computers mainly used on-board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not there yet, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer to this may not be obvious, a potential study could address this very issue. For instance one can consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies the exact route or +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is, and what can be achieved using it (like the result is probabilistic but falls within a defined range of precision), etc. Did they build a complete chip or at least a sub-circuit, or still only logic gates...
  •  
    Satellites use old CPUs also because, with the trend towards higher power consumption, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation also), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at a level of (say) 10^-6. All in all you are right: a first study should assess for which applications this would be useful at all, and at what precision / power levels (a toy sketch of such a precision check is appended at the end of this thread).
  •  
    The interest of this could be a high fault tolerance for some math operations, ... which would have the effect of simplifying the job of coders! I don't think this is a good idea regarding CPU power consumption (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance a LEON SPARC (e.g. used on some platforms for ESA) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings considering a usual 10-15 kW spacecraft
  •  
    What about speed then? Seven times faster could allow some real-time navigation at higher speed (e.g. the velocity of terminal guidance for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?) Another issue is the radiation tolerance of the technology ... if the PCMOS are more tolerant to radiation they could be space qualified more easily.....
  •  
    I don't remember what the speed factor is, but I guess this might do it! Although, I remember when using an IMU that you cannot get the data above a given rate (e.g. 20 Hz even though the ADC samples the sensor at a slightly faster rate), so somehow it is not just the CPU that must be re-thought. When I say qualification I also include the "hardened" phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does this technology need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel code floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit together with a traditional CPU, but then again you have more implementation overhead creeping in. Finally, recent processors by Intel (e.g., the Atom) and especially ARM boast really low power-consumption levels, while at the same time offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
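    A minimal sketch (not the PCMOS hardware itself, and not any existing flight code) of how the precision question raised above could be made concrete: propagate a simple two-body LEO orbit once in double and once in single precision and compare the relative position error. All numbers and the integrator choice are illustrative assumptions only.

      import numpy as np

      def propagate(r0, v0, dt, steps, dtype):
          """Two-body propagation with a semi-implicit Euler step at a given float precision."""
          mu = dtype(3.986004418e14)        # Earth's gravitational parameter [m^3/s^2]
          r = np.array(r0, dtype=dtype)
          v = np.array(v0, dtype=dtype)
          dt = dtype(dt)
          for _ in range(steps):
              a = -mu * r / np.linalg.norm(r) ** 3   # point-mass gravity
              v = v + a * dt
              r = r + v * dt
          return r

      r0 = [7000.0e3, 0.0, 0.0]             # ~7000 km geocentric radius [m]
      v0 = [0.0, 7546.0, 0.0]               # roughly circular orbital velocity [m/s]
      r_double = propagate(r0, v0, 1.0, 6000, np.float64)
      r_single = propagate(r0, v0, 1.0, 6000, np.float32)
      rel_err = np.linalg.norm(r_double - r_single) / np.linalg.norm(r_double)
      print(f"relative position error after ~100 minutes: {rel_err:.1e}")

    Swapping np.float32 for a noisy, probabilistic arithmetic model would be the natural next step of such a study; the point here is only that the tolerable relative error (the 10^-6 mentioned above) can be made measurable with a few lines of code.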
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. Also the article "Behavioural Categorisation: Behaviour makes up for bad vision" is interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be shown quite conclusively by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form) an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time as complexity of the body (sensors, processors, actuators) increases the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who is just dead and trying to restart the brain by means of electric shocks.
  •  
    Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However that doesn't get lost. Basic, low-level processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every step deviation from its desired trajectory using the complicated system of nerve endings and motor commands. Reflexes that were built over the years, as those massive amounts of data slowly get integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept formed out of all of these many many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstractly you can form concepts and ideas), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have) and another is the relevance of the information (this is where embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to a random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Luís F. Simões

Shell energy scenarios to 2050 - 6 views

  •  
    just in case you were feeling happy and optimistic
  • ...7 more comments...
  •  
    An energy scenario published by an oil company? Allow me to be sceptical...
  •  
    Indeed, Shell has been an energy company, not just oil, for some time now ... The two scenarios are, in their approach, dependent on the economic and political situation, which is right now impossible to forecast. The reference to Kyoto is surprising, almost outdated! But overall, I find it rather optimistic at some stages, and the timeline (p37-39) is probably unlikely given recent events.
  •  
    the report was published in 2008, which explains the reference to Kyoto, as the follow-up to it was much more uncertain at that point. The Blueprint scenario is indeed optimistic, but also quite unlikely I'd say. I don't see humanity suddenly becoming so wise and coordinated. Sadly, I see something closer to the Scramble scenario as much more likely to occur.
  •  
    not an oil company??? please have a look at the percentage of their revenues coming from oil and gas and then compare this with all their other energy activities together and you will see very quickly that it is only window dressing ... they are an oil and gas company ... and nothing more
  •  
    not JUST oil. From a description: "Shell is a global group of energy and petrochemical companies." Of course the revenues coming from oil are the biggest; the investment turnover on other energy sources is small for now. Knowing that most of their revenue comes from a finite source, they invest elsewhere to guarantee their future. They have invested >$1b in renewable energy, including biofuels. They had the largest wind power business among the so-called "oil" companies. Oil only defines what they do "best". As a comparison, some time ago Apple was selling only computers and now they sell phones. But I would not say Apple is just a phone company.
  •  
    window dressing only ... e.g.: net cash from operating activities (pre-tax) in 2008: $70 billion; net income in 2008: $26 billion; revenues in 2008: $88 billion. Their investments and revenues in renewables don't even show up in their annual financial reports, probably because they fall under the heading of "marketing", which is already $1.7 billion ... this is what they report on their investments: Capital investment, portfolio actions and business development - Capital investment in 2009 was $24 billion. This represents a 26% decrease from 2008, which included over $8 billion in acquisitions, primarily relating to Duvernay Oil Corp. Capital investment included exploration expenditure of $4.5 billion (2008: $11.0 billion). In Abu Dhabi, Shell signed an agreement with Abu Dhabi National Oil Company to extend the GASCO joint venture for a further 20 years. In Australia, Shell and its partners took the final investment decision (FID) for the Gorgon LNG project (Shell share 25%). Gorgon will supply global gas markets to at least 2050, with a capacity of 15 million tonnes (100% basis) of LNG per year and a major carbon capture and storage scheme. Shell has announced a front-end engineering and design study for a floating LNG (FLNG) project, with the potential to deploy these facilities at the Prelude offshore gas discovery in Australia (Shell share 100%). In Australia, Shell confirmed that it has accepted Woodside Petroleum Ltd.'s entitlement offer of new shares at a total cost of $0.8 billion, maintaining its 34.27% share in the company; $0.4 billion was paid in 2009 with the remainder paid in 2010. In Bolivia and Brazil, Shell sold its share in a gas pipeline and in a thermoelectric power plant and its related assets for a total of around $100 million. In Canada, the Government of Alberta and the national government jointly announced their intent to contribute $0.8 billion of funding towards the Quest carbon capture and sequestration project. Quest, which is at the f
  •  
    thanks for the info :) They still have their 50% share in the wind farm in Noordzee (you can see it from ESTEC on a clear day). Look for Shell International Renewables, other subsidiaries and joint-ventures. I guess, the report is about the oil branch. http://sustainabilityreport.shell.com/2009/servicepages/downloads/files/all_shell_sr09.pdf http://www.noordzeewind.nl/
  •  
    no - it's about Shell globally - all of Shell .. these participations are just peanuts. Please read the intro of the CEO in the pdf you linked to: he does not even mention renewables! Their entire sustainability strategy is about oil and gas - just making it (look) nicer and environmentally friendlier
  •  
    Fair enough; for me even peanuts are worth something, and I am not really able to judge. Not all big-profit companies, like Shell, are evil :( Look in the pdf at what is in the upstream and downstream segments you mentioned above. Non-Shell sources for examples and more objectivity: http://www.nuon.com/company/Innovative-projects/noordzeewind.jsp http://www.e-energymarket.com/news/single-news/article/ferrari-tops-bahrain-gp-using-shell-biofuel.html thanks.
jmlloren

Scientists discover how to turn light into matter after 80-year quest - 5 views

  •  
    Theorized 80 years ago was Breit-Wheeler pair production, in which two photons result in an electron-positron pair (via a virtual electron). It is a relatively simple Feynman diagram, but the problem is/was how to build in practice a high-energy photon-photon collider... The collider experiment that the scientists have proposed involves two key steps. First, the scientists would use an extremely powerful high-intensity laser to speed up electrons to just below the speed of light. They would then fire these electrons into a slab of gold to create a beam of photons a billion times more energetic than visible light. The next stage of the experiment involves a tiny gold can called a hohlraum (German for 'empty room'). Scientists would fire a high-energy laser at the inner surface of this gold can, to create a thermal radiation field, generating light similar to the light emitted by stars. They would then direct the photon beam from the first stage of the experiment through the centre of the can, causing the photons from the two sources to collide and form electrons and positrons. It would then be possible to detect the formation of the electrons and positrons when they exited the can. Now this is a good experiment... :)
  • ...6 more comments...
  •  
    The solution of thrusting in space.
  •  
    Thrusting in space is solved already. Maybe you wanted to say something different?
  •  
    Thrusting until your fuel runs out is solved; this way one could produce mass directly from, among other sources, solar/stellar energy. What I like about this experiment is that we already have the technology to do it; many parts have been designed for inertial confinement fusion.
  •  
    I am quite certain that it would be more efficient to use the photons directly for thrust instead of converting them into matter. Also, I am a bit puzzled by the asymmetric layout for photon creation. Typically, colliders use two beams of particles with equal but opposite momentum. Because the total momentum of the two colliding particles is zero, the reaction products are produced more efficiently, as a minimum of collision energy is wasted on accelerating the products. I guess in this case the thermal radiation in the cavity is chosen instead of an opposing gamma-ray beam to increase the photon density and thus the number of collisions (even if the efficiency decreases because of the asymmetry). However, a danger of using a high-temperature cavity might be that a lot of thermionic emission creates lots of free electrons within the cavity. This could reduce the positron yield through recombination and would allow the high-energy photons to lose energy through Compton scattering instead of Breit-Wheeler pair production.
  •  
    Well, the main benefit from e-p pair creation might be that one can accelerate these subsequently to higher energies again. I think the photon-photon cross-section is extremely low, such that direct beam-beam interactions are basically not happening (below 1/20.. so basically 0 according to quantum probability :P), in this way, the central line of the hohlraum actually has a very high photon density and if timed correctly maximizes the reaction yield such that it could be measured.
  •  
    I agree about the reason for the hohlraum - but I also keep my reservations about the drawbacks. About pair production as fuel: I'm pretty sure that your energy would be used more smartly by using photons (not necessarily high-energy photons) for thrust directly, instead of putting tons of energy into creating a rest mass and then accelerating that. If you look at E² = (p c)²+(m0 c)² then putting energy into the mass term will always reduce your maximum value of p.
  •  
    True, but isn't it E² = (pc)² + (m0c²)², such that for photons E ∝ pc and for mass E ∝ mc²? I agree it will take a lot of energy, but this assumes that that won't be the problem, at least. The question therefore is whether the mass flow of the photon rocket (fuel consumed to create photons, e.g. fission/fusion) is higher or lower than the mass flow for e-p creation. You are probably right that the low e-p cross-section will favour direct use of photons to create low thrust for long periods of time, but with significant power available the ISP might be higher for e-p pair creation.
  •  
    In essence the equation tells you that for photons, with zero rest mass m0, all the energy is converted into momentum of the particles. If you want to accelerate an e-p pair then you first spend part of the energy on creating them (~511 keV each) and can only use the remaining energy to accelerate them. In this case the equation gives you a lower particle momentum, which leads to lower thrust (even when assuming 100% acceleration efficiency). ISP is a tricky concept in this case because there are different definitions which clash in the relativistic context (due to the concept of mass flow). R. Tinder gets to an I_SP = c (the speed of light) for a photon rocket (using the relativistic mass of the photons), which is the maximum possible relativistic I_SP: http://goo.gl/Zz5gyC . (A small numeric check of this kinematic argument is appended below.)
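    A small numeric check of the kinematic argument in the last few comments, assuming the created electron-positron pair shares a fixed energy budget equally and using E² = (pc)² + (m0c²)². Cross-sections, creation efficiency and any subsequent acceleration stages are ignored, and the energy budgets are arbitrary illustrative values.

      # Compare the momentum obtained from a fixed energy budget when it is
      # (a) emitted as photons (p = E/c) and (b) converted into an e-/e+ pair
      # that shares the energy equally, each particle obeying
      # E_particle^2 = (p c)^2 + (m0 c^2)^2.
      c = 299792458.0             # speed of light [m/s]
      eV = 1.602176634e-19        # electron volt [J]
      m0c2 = 0.511e6 * eV         # electron rest energy [J]

      for E_MeV in (1.2, 2.0, 10.0, 100.0):     # illustrative energy budgets
          E = E_MeV * 1e6 * eV                  # total budget [J]
          p_photons = E / c                     # all energy into photon momentum
          E_half = E / 2.0                      # each of e- and e+ gets half
          p_pair = 2.0 * (E_half**2 - m0c2**2) ** 0.5 / c
          print(f"E = {E_MeV:6.1f} MeV  ->  pair/photon momentum ratio = {p_pair / p_photons:.3f}")

    Just above the 1.022 MeV pair threshold the rest-mass term eats roughly half of the momentum, while far above threshold the penalty becomes negligible: for the same energy, photons always carry at least as much momentum, though only marginally more at high energy.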
Francesco Biscani

STLport: An Interview with A. Stepanov - 2 views

  • Generic programming is a programming method that is based in finding the most abstract representations of efficient algorithms.
  • I spent several months programming in Java.
  • for the first time in my life programming in a new language did not bring me new insights
  • ...2 more annotations...
  • it has no intellectual value whatsoever
  • Java is clearly an example of a money oriented programming (MOP).
  •  
    One of the authors of the STL (C++'s Standard Template Library) explains generic programming and slams Java. (A toy sketch of the generic-programming idea is appended at the end of this thread.)
  • ...6 more comments...
  •  
    "Java is clearly an example of a money oriented programming (MOP)." Exactly. And for the industry it's the money that matters. Whatever mathematicians think about it.
  •  
    It is actually a good thing that it is "MOP" (even though I do not agree with this term): that is what makes it interoperable, light and easy to learn. There is no point in writing fancy code if it does not bring anything to the end-user, but only gives geeks incomprehensible things to discuss in forums. Anyway, I am pretty sure we can find a Java guy slamming C++ ;)
  •  
    Personally, I never understood what the point of Java is, given that: 1) I do not know of any developer (maybe Marek?) that uses it for intellectual pleasure/curiosity/fun whatever, given the possibility of choice - this to me speaks more loudly about the objective qualities of the language than any industrial-corporate marketing bullshit (for the record, I argue that Python is more interoperable, lighter and easier to learn than Java - which is why, e.g., Google is using it heavily); 2) I have used software developed in Java maybe a total of 5 times on any computer/laptop I owned over 15 years. I cannot name one single Java project that I find necessary or even useful; for my usage of computers, Java could disappear overnight without me even noticing. Then of course one can argue as much as one wants about the "industry choosing Java", to which I would counterargue with examples of industry doing stupid things and making absurd choices. But I suppose it would be a kind of pointless discussion, so I'll just stop here :)
  •  
    "At Google, python is one of the 3 "official languages" alongside with C++ and Java". Java runs everywhere (the byte code itself) that is I think the only reason it became famous. Python, I guess, is more heavy if it were to run on your web browser! I think every language has its pros and cons, but I agree Java is not the answer to everything... Java is used in MATLAB, some web applications, mobile phones apps, ... I would be a bit in trouble if it were to disappear today :(
  •  
    I personally do not believe in interoperability :)
  •  
    Well, I bet you'd notice an overnight disappearance of java, because half of the internet would vanish... J2EE technologies are just omnipresent there... I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank online services are written in Java. Certainly not in PHP+MySQL :) Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you develop robustly, reliably, error-prone, modular, well integrated etc... software. And the costs? Well, using java technologies you can set-up enterprise-quality web application servers, get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... Since many years now, the central issue in software development is not implementing algorithms, it's building applications. And that's where Java outperforms many other technologies. The final remark, because I may be mistakenly taken for an apostle of Java or something... I love the idea of generic programming, C++ is my favourite programming language (and I used to read Stroustroup before sleep), at leisure time I write programs in Python... But if I were to start a software development company, then, apart from some very niche applications like computer games, it most probably would use Java as main technology.
  •  
    "I'd rather not even *think* about developing a web application/webservice/web-whatever in standard C++... is it actually possible?? Perhaps with some weird Microsoft solutions... I bet your bank online services are written in Java. Certainly not in PHP+MySQL :)" Doing in C++ would be awesomely crazy, I agree :) But as I see it there are lots of huge websites that operate on PHP, see for instance Facebook. For the banks and the enterprise market, as a general rule I tend to take with a grain of salt whatever spin comes out from them; in the end behind every corporate IT decision there is a little smurf just trying to survive and have the back covered :) As they used to say in the old times, "No one ever got fired for buying IBM". "Industry has chosen Java not because of industrial-corporate marketing bullshit, but because of economics... it enables you develop robustly, reliably, error-prone, modular, well integrated etc... software. And the costs? Well, using java technologies you can set-up enterprise-quality web application servers, get a fully featured development environment (which is better than ANY C/C++/whatever development environment I've EVER seen) at the cost of exactly 0 (zero!) USD/GBP/EUR... Since many years now, the central issue in software development is not implementing algorithms, it's building applications. And that's where Java outperforms many other technologies." Apart from the IDE considerations (on which I cannot comment, since I'm not a IDE user myself), I do not see how Java beats the competition in this regard (again, Python and the huge software ecosystem surrounding it). My impression is that Java's success is mostly due to Sun pushing it like there is no tomorrow and bundling it with their hardware business.
  •  
    OK, I think there is a bit of everything, wrong and right, but you have to acknowledge that Python is not always the simplest. For info, Facebook uses Java (if you upload a picture, for instance), and PHP is very limited. So definitely, in companies, engineers like you and me select the language; it is not a marketing or political thing. And in the case of fb, they came to the conclusion that PHP and Java don't do everything on their own but complement each other. As you say, Python has many things around it, but it might be too much for simple applications. Otherwise, I would seriously be interested in a study of how to implement a Python-like system on board spacecraft and what the advantages are over mixing C, Ada and Java.
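    Since the interview is really about generic programming rather than Java, here is a toy sketch of the idea in Python (not Stepanov's C++, just an illustration): the algorithm is written once, against the most abstract requirement it needs (here: elements that support "<"), and then works unchanged for any conforming type, much like std::max_element in the STL.

      from typing import Iterable, Protocol, TypeVar

      class SupportsLessThan(Protocol):
          def __lt__(self, other): ...

      T = TypeVar("T", bound=SupportsLessThan)

      def max_element(items: Iterable[T]) -> T:
          """Return the largest element of a non-empty iterable; only '<' is required."""
          it = iter(items)
          best = next(it)
          for x in it:
              if best < x:
                  best = x
          return best

      print(max_element([3, 1, 4, 1, 5]))             # ints
      print(max_element(["pear", "apple", "fig"]))    # strings
      print(max_element([2.5, -1.0, 3.75]))           # floats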
LeopoldS

Helix Nebula - Helix Nebula Vision - 0 views

  •  
    The partnership brings together leading IT providers and three of Europe's leading research centres, CERN, EMBL and ESA in order to provide computing capacity and services that elastically meet big science's growing demand for computing power.

    Helix Nebula provides an unprecedented opportunity for the global cloud services industry to work closely on the Large Hadron Collider through the large-scale, international ATLAS experiment, as well as with the molecular biology and earth observation. The three flagship use cases will be used to validate the approach and to enable a cost-benefit analysis. Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed.

    This game-changing strategy will boost scientific innovation and bring new discoveries through novel services and products. At the same time, Helix Nebula will ensure valuable scientific data is protected by a secure data layer that is interoperable across all member states. In addition, the pan-European partnership fits in with the Digital Agenda of the European Commission and its strategy for cloud computing on the continent. It will ensure that services comply with Europe's stringent privacy and security regulations and satisfy the many requirements of policy makers, standards bodies, scientific and research communities, industrial suppliers and SMEs.

    Initially based on the needs of European big-science, Helix Nebula ultimately paves the way for a Cloud Computing platform that offers a unique resource to governments, businesses and citizens.
  •  
    "Helix Nebula will lead these communities through a two year pilot-phase, during which procurement processes and governance issues for the public/private partnership will be addressed." And here I was thinking cloud computing was old news 3 years ago :)
Dario Izzo

Optimal Control Problem in the CR3BP solved!!! - 7 views

  •  
    This guy solved a problem many people are trying to solve!!! The optimal control problem for the three-body problem (restricted, circular) can be solved using continuation of the secondary gravity parameter and some clever adaptation of the boundary conditions!! His presentation was an eye-opener ... making the work of many pretty useless now :) (A toy sketch of the continuation idea is appended at the end of this thread.)
  • ...13 more comments...
  •  
    Riemann hypothesis should be next... Which paper on the linked website is this exactly?
  •  
    hmmm, last year at the AIAA conference in Toronto I presented a continuation approach to design a DRO (three-body problem). Nothing new here, unfortunately. I know the work of Caillau; although interesting, what is presented was solved 10 years ago by others. The interest of his work is not in the applications (CR3BP), but in the research on particular regularity conditions that unfortunately make the problem of limited practical use. Look also at the work of Mingotti, Russell, Topputo and others for the (C)RTBP. SMART-1 inspired a bunch of researchers :)
  •  
    Topputo and some of the other 'inspired' researchers you mention are actually here at the conference and they are all quite depressed :) Caillau really solves the problem: as a single-phase transfer, no tricks, no misconvergence, in general and using none of the usual cheats. What was produced so far by others were only local solutions valid for the particular case considered. In any case I will give him your paper, so that he knows he is working on already solved stuff :)
  •  
    Answer to Marek: the paper you may look at is: Discrete and differential homotopy in circular restricted three-body control
  •  
    Ah! with one single phase and a first-order method then it is amazing (but it is still just the very particular CRTBP case). The trick, however, is the homotopy map he selected! Why this one? Any conjugate point? Did I misunderstand the title? I solved it in one phase with second-order methods for the less restrictive problem, RTBP or simply 3-body... but as a strict answer to your title, the problem has been solved before. Note: in "Russell, R. P., "Primer Vector Theory Applied to Global Low-Thrust Trade Studies," JGCD, Vol. 30, No. 2", he does solve the RTBP with a first-order method in one phase.
  •  
    I think what is interesting is not what he solved, but how he solved the problem. But are the means more important than the end ... I dunno
  •  
    I also loved his method, and it looked to me that it is far more general than the CRTBP. As for the title of this post, OK, maybe it is an exaggeration as it suggests that no solution was ever given before; on the other hand, as Marek would say, "come on guys!!!!!"
  •  
    The generality has to be checked. Don't you think his choice of mapping is too specific? He doesn't really demonstrate that it works better than others. In addition, the minimum-time choice makes the problem very regular (I guess you've experienced that solving min time is much easier than max mass, optimality-wise). There is still a long way to go before maximum mass + RTBP; Topputo et al. should be reassured :p Did you give him my paper? He may find it interesting since I mention the homotopy on mu, but for max mass :)
  •  
    Joris, that is the point I was excited about: at the conference HE DID present solutions to the maximum mass problem!! One phase, from LEO to an orbit around the Moon .. amazing :) You will find his presentation online.... (according to the organizers) I gave him the reference to your paper anyway, but no pdf though, as you did not upload it on our web pages and I could not find it on the web. So I gave him some bibliography I had with me from the Russians, and from Russell, Petropoulos and Howell. As far as I know these are the only ones that can hope to compete with this guy!!
  •  
    for info only, my PhD, in one phase: http://pdf.aiaa.org/preview/CDReadyMAST08_1856/PV2008_7363.pdf I preferred Mars to the dead rock Moon though!
  •  
    If you send me the pdf I can give it to the guy .. the link you gave contains only the first page ... (I have no access to the AIAA thingy till Monday)
  •  
    this is why I like this Diigo thingy so much more than delicious ...
  •  
    What do you mean by this comment, Leopold? ;-) Jokes apart: I am following the Diigo thingy with Google Reader (rss). Obviously, I am getting the new postings. But if someone later on adds a comment to a post, then I can miss it, because the rss doesn't get updated. Not that it's a big problem, but do you guys have a better solution for this? How are you following these comments? (I know that if you have commented an entry, then you get the later updates in email.) (For example, in google reader I can see only the first 5 comments in this entry.)
  •  
    I like when there are discussions evolving around entries
  •  
    and on your problem with the RSS, Tamas: it's the same for me, you get the comments only for entries that you have posted or that you have commented on ...
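    Going back to the original topic: a generic, minimal sketch of the continuation (homotopy) idea mentioned at the top of this thread - not Caillau's differential homotopy, and not a CR3BP solver. The hard problem is reached by starting from an easy value of a parameter mu and walking it towards the target value, reusing each solution as the initial guess for the next step. The 'shooting function' below is a toy stand-in; all names and numbers are illustrative.

      import numpy as np
      from scipy.optimize import fsolve

      def shooting_residual(x, mu):
          """Toy stand-in for a shooting function: trivial at mu = 0, coupled at mu = 1."""
          return np.array([
              x[0]**3 + mu * x[1] - 1.0,
              x[1] + mu * x[0]**2 - 0.5,
          ])

      x = np.array([1.0, 0.5])                # exact solution of the easy (mu = 0) problem
      for mu in np.linspace(0.0, 1.0, 21):    # walk the homotopy parameter to its target
          x, info, ok, msg = fsolve(shooting_residual, x, args=(mu,), full_output=True)
          if ok != 1:
              raise RuntimeError(f"continuation failed at mu = {mu:.2f}: {msg}")
      print("solution at mu = 1:", x)

    The same pattern - solve, nudge the parameter, re-solve from the previous solution - is what makes continuation on the secondary gravity parameter attractive for trajectory problems, where a cold start of the full problem usually does not converge.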
Luzi Bergamin

First circuit breaker for high voltage direct current - 2 views

  •  
    Doesn't really sound sexy, but this is of utmost importance for next generation grids for renewable energy.
  •  
    I agree on the significance indeed - a small boost also for my favourite Desertec project ... Though their language is a bit too "grandiose": "ABB has successfully designed and developed a hybrid DC breaker after years of research, functional testing and simulation in the R&D laboratories. This breaker is a breakthrough that solves a technical challenge that has been unresolved for over a hundred years and was perhaps one the main influencers in the 'war of currents' outcome. The 'hybrid' breaker combines mechanical and power electronics switching that enables it to interrupt power flows equivalent to the output of a nuclear power station within 5 milliseconds - that's as fast as a honey bee takes per flap of its wing - and more than 30 times faster than the reaction time of an Olympic 100-meter medalist to react to the starter's gun! But its not just about speed. The challenge was to do it 'ultra-fast' with minimal operational losses and this has been achieved by combining advanced ultrafast mechanical actuators with our inhouse semiconductor IGBT valve technologies or power electronics (watch video: Hybrid HVDC Breaker - How does it work). In terms of significance, this breaker is a 'game changer'. It removes a significant stumbling block in the development of HVDC transmission grids where planning can start now. These grids will enable interconnection and load balancing between HVDC power superhighways integrating renewables and transporting bulk power across long distances with minimal losses. DC grids will enable sharing of resources like lines and converter stations that provides reliability and redundancy in a power network in an economically viable manner with minimal losses. ABB's new Hybrid HVDC breaker, in simple terms will enable the transmission system to maintain power flow even if there is a fault on one of the lines. This is a major achievement for the global R&D team in ABB who have worked for years on the challeng
LeopoldS

Prepare and transmit electronic text - American Institute of Physics - 2 views

  •  
    new RevTeX version available ... what do they mean by this? How do they use XML, and how do they convert LaTeX to XML? Would this also be an option for Acta Futura? "While we appreciate the benefits to authors of preparing manuscripts in TeX, especially for math-intensive manuscripts, it is neither a cost-effective composition tool (for the volume of pages AIP currently produces) nor is it a format that can be used effectively for online publishing."
  •  
    Dunno really, they may have some in-house process that converts LaTeX to XML for some reason. Probably they are using some subset of SGML, the standard generalized markup language from which both HTML and XML derive. I don't think it is really relevant for Acta Futura, and the rest of the world seems to get along with TeX just fine...
Thijs Versloot

A Groundbreaking Idea About Why Life Exists - 1 views

  •  
    Jeremy England, a 31-year-old assistant professor at the Massachusetts Institute of Technology, has derived a mathematical formula that he believes explains this capacity. The formula, based on established physics, indicates that when a group of atoms is driven by an external source of energy (like the sun or chemical fuel) and surrounded by a heat bath (like the ocean or atmosphere), it will often gradually restructure itself in order to dissipate increasingly more energy. This could mean that under certain conditions, matter inexorably acquires the key physical attribute associated with life. The simulation results made me think of Jojo's attempts to make a self-assembling space structure. Seems he may have been on the right track, just not thinking big enough
  •  
    :-P Thanks Thijs... I do not agree with the premise of the article that a possible correlation between energy dissipation in living systems and their fitness means that one is the cause of the other - it may just be that both go hand in hand because of the nature of the world that we live in. Maybe there is such a drive for pre-biotic systems (like crystals and amino acids), but once life as we know it exists (i.e., heredity + mutation) it is hard to see the need for an amendment of Darwin's principles. The following just misses the essence of Darwin: "If England's approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that "the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve." Darwin's principle in its simplest expression just says that if a genome is more effective at reproducing it is more likely to dominate the next generation. The beauty of it is that there is NO need for a steering mechanism (like maximizing energy dissipation): any random set of mutations will still lead to an increase in reproductive effectiveness. BTW: what does "better at dissipating energy" even mean? If I run around all the time, will I have more babies? Most species that prove to be very successful end up being very good at conserving energy: trees, turtles, worms. Even the complexity of an organism is not a recipe for evolutionary success: jellyfish have been successful for hundreds of millions of years while polar bears seem to be on the way out.
LeopoldS

Operation Socialist: How GCHQ Spies Hacked Belgium's Largest Telco - 4 views

  •  
    interesting story with many juicy details on how they proceed ... (similarly interesting nickname for the "operation" chosen by our british friends) "The spies used the IP addresses they had associated with the engineers as search terms to sift through their surveillance troves, and were quickly able to find what they needed to confirm the employees' identities and target them individually with malware. The confirmation came in the form of Google, Yahoo, and LinkedIn "cookies," tiny unique files that are automatically placed on computers to identify and sometimes track people browsing the Internet, often for advertising purposes. GCHQ maintains a huge repository named MUTANT BROTH that stores billions of these intercepted cookies, which it uses to correlate with IP addresses to determine the identity of a person. GCHQ refers to cookies internally as "target detection identifiers." Top-secret GCHQ documents name three male Belgacom engineers who were identified as targets to attack. The Intercept has confirmed the identities of the men, and contacted each of them prior to the publication of this story; all three declined comment and requested that their identities not be disclosed. GCHQ monitored the browsing habits of the engineers, and geared up to enter the most important and sensitive phase of the secret operation. The agency planned to perform a so-called "Quantum Insert" attack, which involves redirecting people targeted for surveillance to a malicious website that infects their computers with malware at a lightning pace. In this case, the documents indicate that GCHQ set up a malicious page that looked like LinkedIn to trick the Belgacom engineers. (The NSA also uses Quantum Inserts to target people, as The Intercept has previously reported.) A GCHQ document reviewing operations conducted between January and March 2011 noted that the hack on Belgacom was successful, and stated that the agency had obtained access to the company's
  •  
    I knew I wasn't using TOR often enough...
  •  
    Cool! It seems that after all it is best to restrict employees' internet access to work-critical areas only... @Paul: TOR works on the network level, so it would not help much here, since it was cookies (application level) that were exploited.
johannessimon81

Facebook is buying WhatsApp for ~ $ 19e9 - 1 views

  •  
    That is about € 14e9 - enough to pay more than a million YGTs for half a year. Could we maybe use just half a million YGTs for half a year to build a similar platform and keep the remaining € 7e9 for ourselves? Keep in mind that WhatsApp only has 45 employees (according to AllThingsD: http://goo.gl/NtJcSj ), so we would have an advantage of more than 10000:1. On the other hand, does this mean that every employee at WhatsApp now gets enough money to survive comfortably for ~5000 years, or will the inevitable social inequality strike, with most people getting next to nothing while a few get money to live comfortably for ~1000000 years? Also: does Facebook think about these numbers before they pay them? Or is it just a case of "That looks tasty - let's have it"? Also (2): as far as I can see, all these internet companies (Google, Facebook, Yahoo, WhatsApp, Twitter...) seem to make most of their income from advertising. For all these companies together that must be a lot of advertising money (it turns out that in 2013 the world spent about $ 500 billion on advertising: http://goo.gl/vYog15 ). For that money you could of course have 20 million YGTs roaming the Earth and advertising stuff door-to-door...
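    For what it's worth, a quick back-of-envelope check of the YGT numbers above; the monthly YGT cost used here is a guessed placeholder, not an official figure.

```python
# Rough sanity check of the "more than a million YGTs for half a year" claim.
# The per-month cost of a YGT is an assumed placeholder value.
deal_eur = 14e9                 # ~EUR 14e9 paid for WhatsApp
ygt_cost_per_month_eur = 2_000  # assumed all-in monthly cost of one YGT (guess)
months = 6                      # half a year

ygts = deal_eur / (ygt_cost_per_month_eur * months)
print(f"{ygts / 1e6:.1f} million YGTs for half a year")  # ~1.2 million with these assumptions
```

    So the "more than a million" figure holds up as an order of magnitude, at least under this guessed monthly cost.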
  • ...1 more comment...
  •  
    Jo, that's just brilliant... 500 billion USD total on advertising, that sounds absolutely ridiculous... I always wondered whether this giant advertisement machine is just one big Ponzi-like scheme waiting to crash down on us one day when they realize cat-picture-twittering, fb-ing, whatsapping consumers just ain't worth it...
  •  
    The whole valuation of those internet companies is a bit scary. Things like the Facebook and Twitter IPO numbers seem just ridiculous.
  •  
    Facebook is not so much buying into a potentially good business deal as buying out risky competition. Popular trends need to be killed fast before they get too far off the ground. Also, the amount of personal data that WhatsApp is amassing is staggering: I have never seen an app request so many phone permissions in my life.
Luís F. Simões

Singularity University, class of 2010: projects that aim to impact a billion people wit... - 8 views

  •  
    At the link below you find additional information about the projects: Education: Ten weeks to save the world http://www.nature.com/news/2010/100915/full/467266a.html
  • ...8 more comments...
  •  
    this is the podcast I was listening to ...
  •  
    We can do it in nine :)
  •  
    why wait then?
  •  
    hmm, wonder how easy it is to get funding for that, 25k is a bit steep for 10weeks :)
  •  
    well, we wait for the same funding they get and then we will do it in nine... as we say in Rome, "a mettece un cartello so bboni tutti" - roughly, "anyone is good at just putting up a sign". (Italian check for Juxi)
  •  
    and what do you think about the project subjects?
  •  
    I like the fact that there are quite a lot of space projects ... and these are not even bad in my view: The space project teams have developed imaginative new solutions for space and spinoffs for Earth. The AISynBio project team is working with leading NASA scientists to design bioengineered organisms that can use available resources to mitigate harsh living environments (such as lack of air, water, food, energy, atmosphere, and gravity) - on an asteroid, for example, and also on Earth. The SpaceBio Labs team plans to develop methods for doing low-cost biological research in space, such as 3D tissue engineering and protein crystallization. The Made in Space team plans to bring 3D printing to space to make space exploration cheaper, more reliable, and fail-safe ("send the bits, not the atoms"). For example, they hope to replace some of the $1 billion worth of spare parts and tools that are on the International Space Station.
  •  
    and all in only a three-month summer graduate program!! That is impressive. God, I feel so stupid!!!
  •  
    well, most good ideas probably take only a second to be formulated, it's the details that take years :-)
  •  
    I do not think the point of the SU is to formulate new ideas (in fact, there is nothing new in the projects chosen). Their mission is to build and maintain a network of contacts among those they believe will be the 'future leaders' of space ... very similar to our beloved ISU.
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what the things you don't want - nuclear reactors, missiles, other weapons - look like on satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one could train one of those image recognition algorithms to do it automatically (a rough sketch of what that could look like is below). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
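    A hedged sketch of the kind of pipeline meant above: fine-tune a pretrained CNN to label small image tiles as "seal" or "background" (counting would then be a sweep of the classifier, or a proper object detector, over the full scene). The directory layout, class names, and hyper-parameters are placeholders, and a reasonably recent PyTorch/torchvision is assumed.

```python
# Sketch only: transfer learning for "seal" vs "background" tiles cut from satellite images.
# Assumes tiles/seal/ and tiles/background/ exist (hypothetical layout) and torchvision >= 0.13.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # pretrained ImageNet models expect 224x224 inputs
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder("tiles", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# start from ImageNet weights and replace the final layer with a 2-class head
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new head
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    running_loss = 0.0
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    print(f"epoch {epoch}: loss {running_loss / len(loader):.3f}")
```

    Whether a generic ImageNet backbone transfers well to half-metre-resolution b/w satellite tiles is exactly the open question that would make this a project rather than a weekend script.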
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference, which I attended at the beginning of this year, there was a big trend of CUDA GPUs vs. FPGAs for hardware-accelerated image processing. Most of it orbited around discussing which was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands about the solution they were working on (GPU/CUDA). I gathered that NVIDIA GPUs best suit applications that do not depend so much on the hardware itself, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored, 'harsh' programming languages. I don't know what the level of rad hardness of NVIDIA's GPUs is... FPGAs are therefore the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy or other types of telescopes). I think that for a specific purpose like the one you mentioned, this FPGA vs. GPU trade-off should be assessed first before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels' worth of power while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion of how to make FPGA hardware acceleration solutions easier to use for the 'layman' is starting, by the way: http://reconfigurablecomputing4themasses.net/.
Luís F. Simões

Why Randomly-Selected Politicians Would Improve Democracy - Technology Review - 4 views

  • If Pluchino sounds familiar, it's because we've talked about him and his pals before in relation to the Peter Principle that incompetence always spreads through big organisations. Back in 2009, he and his buddies created a model that showed how promoting people at random always improves the efficiency of the organisation. These guys went on to win a well-deserved IgNobel prize for this work.
  • Ref: arxiv.org/abs/1103.1224: Accidental Politicians: How Randomly Selected Legislators Can Improve Parliament Efficiency
  •  
    I think I'm starting to understand why Italian politics does so horribly badly...
  •  
    ... because they don't follow this rule!
  •  
    According to the authors we have four types of people in the parliament: 1) intelligent people, whose actions produce a gain both for themselves and for other people; 2) helpless/naive people, in the top-left quadrant, whose actions produce a loss for themselves but a gain for others; 3) bandits, whose actions produce a gain for themselves but a loss for other people; 4) stupid people, in the bottom-left quadrant, whose actions produce a loss for themselves and also for other people. According to the above definitions it is clear that their model does not apply to the Italian parliament, where we only have stupid people and bandits.
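    For the curious, a tiny toy sketch of the quadrant classification described above, using randomly drawn agents; it is only the labelling scheme, not the actual parliament model simulated in the arXiv paper.

```python
# Classify randomly drawn "legislators" by the sign of their personal and social gain,
# following the four-quadrant scheme described above (illustrative toy, not the paper's model).
import random

def classify(personal_gain, social_gain):
    if personal_gain >= 0 and social_gain >= 0:
        return "intelligent"
    if personal_gain < 0 and social_gain >= 0:
        return "helpless/naive"
    if personal_gain >= 0 and social_gain < 0:
        return "bandit"
    return "stupid"

random.seed(1)
parliament = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(500)]

counts = {}
for personal, social in parliament:
    kind = classify(personal, social)
    counts[kind] = counts.get(kind, 0) + 1

print(counts)  # roughly 25% per quadrant when legislators are drawn uniformly at random
```

    The paper's efficiency argument comes from how such agents vote on proposed acts; the sketch only reproduces the labelling, not that voting dynamics.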
santecarloni

[1101.6015] Radio beam vorticity and orbital angular momentum - 1 views

  • It has been known for a century that electromagnetic fields can transport not only energy and linear momentum but also angular momentum. However, it was not until twenty years ago, with the discovery in laser optics of experimental techniques for the generation, detection and manipulation of photons in well-defined, pure orbital angular momentum (OAM) states, that twisted light and its pertinent optical vorticity and phase singularities began to come into widespread use in science and technology. We have now shown experimentally how OAM and vorticity can be readily imparted onto radio beams. Our results extend those of earlier experiments on angular momentum and vorticity in radio in that we used a single antenna and reflector to directly generate twisted radio beams and verified that their topological properties agree with theoretical predictions. This opens the possibility to work with photon OAM at frequencies low enough to allow the use of antennas and digital signal processing, thus enabling software controlled experimentation also with first-order quantities, and not only second (and higher) order quantities as in optics-type experiments. Since the OAM state space is infinite, our findings provide new tools for achieving high efficiency in radio communications and radar technology.
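    As a side note (textbook background, not taken from the paper itself): the reason the OAM state space is "infinite" is that a beam carrying orbital angular momentum has an azimuthal phase factor with an integer winding number, and modes with different winding numbers are mutually orthogonal, so each could in principle carry an independent channel. In LaTeX form:

```latex
% Standard OAM mode structure (background sketch, not quoted from the paper)
\[
  E(r,\phi,z) \;\propto\; A(r,z)\, e^{i\ell\phi},
  \qquad \ell \in \mathbb{Z},
  \qquad
  \int_0^{2\pi} e^{i(\ell-\ell')\phi}\, \mathrm{d}\phi \;=\; 2\pi\,\delta_{\ell\ell'} .
\]
```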
  •  
    and how can we use this?
LeopoldS

Schumpeter: More than just a game | The Economist - 3 views

  •  
    remember the discussion I tried to trigger in the team a few weeks ago ...
  • ...5 more comments...
  •  
    main quote I take from the article: "gamification is really a cover for cynically exploiting human psychology for profit"
  •  
    I would say that it applies to management in general :-)
  •  
    which is exactly why it will never work .... and surprisingly "managers" fail to understand this very simple fact.
  •  
    ... "gamification is really a cover for cynically exploiting human psychology for profit" --> "Why Are Half a Million People Poking This Giant Cube?" http://www.wired.com/gamelife/2012/11/curiosity/
  •  
    I think the "essence" of the game is its uselessness... workers need exactly the opposite: to find meaning in what they do!
  •  
    I love the linked article provided by Johannes! It expresses very elegantly why I still fail to understand even extremely smart and busy people who, in my view, apparently waste their time playing computer games - but I recognise that there is something in games that we apparently need / that gives us something we cherish .... "In fact, half a million players so far have registered to help destroy the 64 billion tiny blocks that compose that one gigantic cube, all working in tandem toward a singular goal: discovering the secret that Curiosity's creator says awaits one lucky player inside. That's right: After millions of man-hours of work, only one player will ever see the center of the cube. Curiosity is the first release from 22Cans, an independent game studio founded earlier this year by Peter Molyneux, a longtime game designer known for ambitious projects like Populous, Black & White and Fable. Players can carve important messages (or shameless self-promotion) onto the face of the cube as they whittle it to nothing. Molyneux is equally famous for his tendency to overpromise and under-deliver on his games. In 2008, he said that his upcoming game would be "such a significant scientific achievement that it will be on the cover of Wired." That game turned out to be Milo & Kate, a Kinect tech demo that went nowhere and was canceled. Following this, Molyneux left Microsoft to go indie and form 22Cans. Not held back by the past, the Molyneux hype train is going full speed ahead with Curiosity, which the studio grandiosely promises will be merely the first of 22 similar "experiments." Somehow, it is wildly popular. The biggest challenge facing players of Curiosity isn't how to blast through the 2,000 layers of the cube, but rather successfully connecting to 22Cans' servers. So many players are attempting to log in that the server cannot handle it. Some players go for utter efficiency, tapping rapidly to rack up combo multipliers and get more
  •  
    why are video games so much different from collecting stamps or spotting birds or planes? One could say they are all just hobbies.
johannessimon81

Google combines skycrane, VTOL and lifting wing to make drone deliveries - 6 views

  •  
    Nice video featuring the technology - plus it comes with a good soundtrack! Google's Project Wing uses a lifting-wing concept (more fuel-efficient than normal airplane layouts and MUCH more efficient than quadrocopters) but equips the plane with engines strong enough to hover in a nose-up position, allowing vertical takeoff and landing. For the delivery of packages the drone does not even need to land - it can lower them on a wire, much like the skycrane concept used to deliver the Curiosity rover on Mars. Not sure if the skycrane is really necessary, but it is certainly cool. Anyway, the video is great for its soundtrack alone! ;-P
  • ...4 more comments...
  •  
    could we just use genetic algorithms to evolve these shapes and layouts? :P
  •  
    > Not sure if the skycrane is really necessary but it is certainly cool. I think apart from coolness using a skycrane helps keep the rotating knives away from the recipient...
  •  
    Honest question: are we ever going to see this in practice? I mean, besides some niche application somewhere, isn't it fundamentally flawed, or do I need to keep my window open on the 3rd floor without a balcony when I order something from DX? It's pretty cool, yes, but is it practical?
  •  
    Package delivery is indeed more complicated than it may seem at first sight, although solutions are possible, for instance by restricting delivery to distribution centers. What we really need, of course, is some really efficient and robust AI to navigate urban areas without any problems : ) The hybrid is interesting since it combines the advantages of vertical takeoff and landing (and hover) with a wing for more efficient forward flight. Challenges lie in controlling the vehicle at any attitude and in all that this entails for the higher levels of control. Our lab first used this concept a few years ago for the DARPA UAVForge challenge, and we had two hybrids in our entry for IMAV 2013 last year (for some shaky images: https://www.youtube.com/watch?v=Z7XgRK7pMoU ).
  •  
    Fair enough, but even if you assume advanced/robust/efficient AI, why would you use a drone? Do we envision hundreds of drones above our heads in the street instead of UPS vans or postmen, considering that delivering letters might be more easily achievable? I am not so sure personal delivery will take this route. On the other hand, if the system worked smoothly, I can imagine being sent a mail asking whether I'm home (or they might already know from my personal GPS tracker) and then being notified that they are launching my DVD and it will come crashing into my door in 5 minutes.
  •  
    I'm more curious how they're planning to keep people from stealing the drones. I could do with a drone army myself, and having cheap Amazon or Google drones flying about sounds like a decent source.