NASA has it ... I WANT IT TOO!!!!
240 threads on 60 cores ...
Imagine the possibilities of this new toy!!
Francesco also has it in his new "kill the seals" job
Sure looks interesting, hopefully it will gain some traction. Bonus point: it uses Python heavily :)
First versions are coming out in Sept/Oct according to their roadmap, so we could start playing with it as soon as it's out.
This is the first three-planet resonance ever seen.
The three planets are in a 4:2:1 resonance: the innermost giant completes four orbits in the time the middle one completes two, and the newfound outermost world completes one.
I would love to see people working on touchscreens all day; orthopedists would have a field day :)
Anyway, the answer to the original question is: because the JesusPhone is an appliance, while a PC is not.
Luckily there is open source, so neither Steve Jobs nor the NY Times decides for us what sort of OS we have to like! I'll join the "Jihad for the command line" troop!!
Francesco pointed out this research one year ago; we dropped it as no one was really considering it ... but in space low CPU power consumption is crucial!! Maybe we should look back into this?
Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application that can be done sufficiently well by such computers, or a completely new application that doesn't exist yet, for instance because of power consumption constraints...
Q2 would then be: for which of these purposes is strict determinism of the results not crucial?
As the answer to this may not be obvious, a potential study could address this very issue. For instance one can consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies exactly on the planned route or +/-10 km to the left/right?
...and so on for the other systems.
Another thing is understanding what exactly this probabilistic computing is, and what can be achieved using it (like the result is probabilistic but falls within a defined range of precision), etc. Did they build a complete chip or at least a sub-circuit, or still only logic gates...
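Just to make the idea concrete for myself, here is a toy Python model of what "probabilistic but within a defined range of precision" could mean. This is my guess at the spirit of it, not the actual PCMOS circuit: the low-order bits of an addition are allowed to flip with some probability, and the error stays bounded by those bits.

# Toy model of probabilistic arithmetic (illustration only, not the real PCMOS design):
# the lowest bits of an integer add may flip with some probability, so the result is
# random but its error is bounded by the noisy bits.
import random

def noisy_add(a, b, noisy_bits=4, flip_prob=0.1):
    """Exact integer add, then randomly flip each of the lowest `noisy_bits` bits."""
    result = a + b
    for bit in range(noisy_bits):
        if random.random() < flip_prob:
            result ^= (1 << bit)
    return result

exact = 12345 + 67890
errors = [abs(noisy_add(12345, 67890) - exact) for _ in range(10000)]
print("max error:", max(errors), "(bounded by 2**4 - 1 =", 2**4 - 1, ")")
print("mean error:", sum(errors) / len(errors))

In the real thing the trade-off would presumably be energy per operation vs. how many bits you let be noisy; here it only shows that the error stays within a known range.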
Satellites use old CPUs also because, with the trend towards higher power, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation included), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at the level of (say) 10^-6. All in all you are right: a first study should assess for which applications this would be useful at all... and at what precision / power levels.
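To make the 10^-6 budget concrete, here is a rough sketch of how such a precision-budget check could look. The dynamics are a deliberately crude toy (plain Euler two-body steps); only the relative-error comparison between double and reduced precision is the point:

# Run the same toy propagation in double and single precision and check whether
# the relative error stays below a 1e-6 budget. Toy dynamics, illustration only.
import numpy as np

def propagate(r0, v0, dt, steps, dtype):
    mu = dtype(398600.4418)           # km^3/s^2, Earth
    r = np.array(r0, dtype=dtype)
    v = np.array(v0, dtype=dtype)
    dt = dtype(dt)
    for _ in range(steps):
        a = -mu * r / np.linalg.norm(r) ** 3
        v = v + a * dt
        r = r + v * dt
    return r

r0, v0 = [7000.0, 0.0, 0.0], [0.0, 7.546, 0.0]   # roughly LEO, km and km/s
ref = propagate(r0, v0, 1.0, 6000, np.float64)
low = propagate(r0, v0, 1.0, 6000, np.float32)
rel_err = np.linalg.norm(ref - low.astype(np.float64)) / np.linalg.norm(ref)
print("relative error:", rel_err, "-> within 1e-6 budget?", rel_err < 1e-6)

The same kind of comparison could be run against whatever error model PCMOS actually has, once we know what it is.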
The interest of this could be a high fault tolerance for some math operations... which would have the effect of simplifying the job of coders!
I don't think this is a good idea as far as CPU power consumption (strictly speaking) is concerned.
The reason we use old chips is just a matter of qualification for space, not power. For instance a LEON SPARC (e.g. used on some ESA platforms) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings considering a usual 10-15 kW spacecraft (rough numbers at the end of this post).
What about speed then? Seven times faster could allow some real-time navigation at higher speed (e.g. the velocity of terminal guidance for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?)
Another issue is the radiation tolerance of the technology ... if PCMOS is more tolerant to radiation it could get space-qualified more easily...
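Putting rough numbers on the two points above (the 5 mW/MHz and the 7x / 10 km/s figures are the ones quoted in this thread; the 100 MHz clock is my assumption):

# Back-of-envelope numbers, not authoritative specs.
cpu_power_per_mhz = 5e-3      # W/MHz, LEON SPARC figure quoted above
clock_mhz = 100.0             # assumed clock
spacecraft_power = 10e3       # W, lower end of the 10-15 kW quoted above

cpu_power = cpu_power_per_mhz * clock_mhz
print("CPU power: %.2f W -> %.4f%% of the spacecraft power budget"
      % (cpu_power, 100.0 * cpu_power / spacecraft_power))

# Speed side: if the guidance loop rate is CPU-bound, a 7x faster processor allows
# ~7x the loop rate, i.e. the same spatial correction interval at ~7x the closing
# velocity (purely a scaling argument, ignoring sensors, actuators, etc.).
speedup = 7.0
v_max_today = 10.0            # km/s, the terminal-guidance limit quoted above
print("naively scaled velocity limit: ~%.0f km/s" % (speedup * v_max_today))

So on the power side the CPU really is noise in the budget, while the speed argument only holds if the CPU is actually the bottleneck (see the IMU data-rate remark below).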
I don't remember what the speed factor is, but I guess this might do it! Although I remember, when using an IMU, that you cannot get the data above a given rate (e.g. 20 Hz, even though the ADC samples the sensor a bit faster), so somehow it is not just the CPU that must be re-thought.
When I say qualification I also imply the "hardened" phase.
I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this.
For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does it need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified.
Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit together with a traditional CPU, but then again you have more implementation overhead creeping in (see the sketch after this post).
Finally, recent processors by Intel (e.g. the Atom) and especially ARM boast really low power-consumption levels, while at the same time offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if only because of economic/industrial inertia?
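On the float vs. integer point above, a tiny sketch of what "degraded but still useful" floating point could look like next to integer logic that must stay exact (the 12-bit mantissa cut is an arbitrary choice for illustration, not anything from the PCMOS papers):

# Emulate a low-accuracy float by truncating the float32 mantissa; the same trick
# is unacceptable for the integers used as array indices / addresses.
import struct

def degrade(x, keep_bits=12):
    """Keep only `keep_bits` of the 23-bit float32 mantissa."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= ~((1 << (23 - keep_bits)) - 1)     # zero the low mantissa bits
    return struct.unpack('<f', struct.pack('<I', bits))[0]

x = 3.14159265
print(x, "->", degrade(x), "(relative error ~", abs(degrade(x) - x) / x, ")")

data = list(range(100))
idx = 42
print("exact index lookup:", data[idx])
# A perturbed index would silently read the wrong element, which is why any
# probabilistic FP unit would still need a conventional, exact integer/control
# path alongside it -- exactly the integration overhead mentioned above.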
Nahh... it's cloud com-poo-ting, it's all about taking control away from the user and handing it to the corporation. To quote Stallman:
"It's stupidity. It's worse than stupidity: it's a marketing hype campaign [...] Somebody is saying this is inevitable - and whenever you hear somebody saying that, it's very likely to be a set of businesses campaigning to make it true."
I don't think so; it is just a code optimizer for JavaScript. Unless there are some big JavaScript (web 2.0) applications running somewhere, it is not of much interest for us.
Other Google Labs systems, e.g. FriendConnect, could be useful for Ariadnet; maybe also the Visualization and Social Graph APIs.