
Advanced Concepts Team / Group items tagged: cpu


Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research a year ago; we dropped it as no one was really considering it ... but in space low CPU power consumption is crucial!! Maybe we should look into this again?
  •  
  •  
    Q1: For the time being, what are the main purposes computers are used for on-board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not there yet, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer may not be obvious, a potential study could address this very issue. For instance, one could consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies the exact route or ends up +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is and what can be achieved with it (e.g. the result is probabilistic but falls within a defined range of precision), etc. Did they build a complete chip, or at least a sub-circuit, or still only logic gates...
  •  
    Satellites use old CPUs also because, with the trend towards higher power, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence, the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation too), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at the level of (say) 10^-6. All in all you are right, a first study should assess for which applications this would be useful at all... and at what precision / power levels (a toy sketch of how such a relative error propagates is appended after this thread).
  •  
    The interest of this could be high fault tolerance for some math operations, ... which would have the effect of simplifying the coders' job! I don't think this is a good idea regarding CPU power consumption (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance a LEON SPARC (e.g. used on some ESA platforms) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings on a usual 10-15 kW spacecraft.
  •  
    What about speed then? Seven times faster could allow some real-time navigation at higher speed (e.g. the terminal guidance velocity for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?). Another issue is the radiation tolerance of the technology ... if PCMOS is more tolerant to radiation, it could be space qualified more easily.....
  •  
    I don't remember what the speed factor is, but I guess this might do it! Although, I remember when using an IMU that you cannot get the data above a given rate (e.g. 20 Hz, even though the ADC samples the sensor at a somewhat faster rate), so it is not just the CPU that must be re-thought. When I say qualification I also include the "hardened" phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does it need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel code floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit with a traditional CPU, but then again more implementation overhead creeps in. Finally, recent processors by Intel (e.g. the Atom) and especially ARM boast really low power consumption, while offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
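
Following up on the error-tolerance point above, here is a minimal sketch (purely illustrative, not a model of actual PCMOS hardware) of how a bounded per-operation relative error propagates through a simple trajectory integration; the step count, noise level and dynamics are assumptions chosen only to make the point.

import random

def approx_mul(a, b, eps=1e-6):
    # Multiply with a uniformly distributed relative error in [-eps, +eps]
    # (a stand-in for one "probabilistic" arithmetic operation).
    return a * b * (1.0 + random.uniform(-eps, eps))

def integrate(steps=100000, dt=0.1, accel=0.02, approximate=True, eps=1e-6):
    # Integrate a 1-D trajectory x'' = accel with either exact or noisy multiplies.
    mul = (lambda a, b: approx_mul(a, b, eps)) if approximate else (lambda a, b: a * b)
    x, v = 0.0, 7000.0  # position [m], velocity [m/s]; values picked for illustration
    for _ in range(steps):
        v += mul(accel, dt)
        x += mul(v, dt)
    return x

exact = integrate(approximate=False)
noisy = integrate(approximate=True, eps=1e-6)
print("exact position :", exact)
print("noisy position :", noisy)
print("relative error :", abs(noisy - exact) / abs(exact))

Running it a few times gives a feel for whether a per-operation error around 10^-6 stays near the tolerable level discussed above or accumulates beyond it over a long integration.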
Dario Izzo

Stacked Approximated Regression Machine: A Simple Deep Learning Approach - 5 views

  •  
    from one of the reddit threads discussing this: "bit fishy, crazy if real". "Incredible claims: - Train only using about 10% of imagenet-12, i.e. around 120k images (i.e. they use 6k images per arm) - get to the same or better accuracy as the equivalent VGG net - Training is not via backprop but a simpler PCA + sparsity regime (see section 4.1), shouldn't take more than 10 hours just on CPU probably" (a rough sketch of the PCA + sparsity idea is appended after this thread)
  •  
    clicking the link says the manuscript was withdrawn :))
  •  
    This "one-shot learning" paper by Googe Deepmind also claims to be able to learn from very few training data. Thought it might be interesting for you guys: https://arxiv.org/pdf/1605.06065v1.pdf
Friederike Sontag

New supercomputer at the German Climate Computing Center in Hamburg - 2 views

ESA ACT

ZetaGrid - 0 views

shared by ESA ACT on 24 Apr 09
  •  
    ZetaGrid is a platform independent grid system that uses idle CPU cycles from participating computers. Problem: The ZetaGrid activities must come to a final end! Now all services are down and this domain will be closed soon. The official last update note
Dario Izzo

Global Climate Models Powered by Intel® Xeon Phi™ Coprocessors - 1 views

  •  
    NASA has it ... I WANT IT TOO!!!! 240 threads on 60 cores ... Imagine the possibilities of this new toy!! Francesco also has it in his new "kill the seals" job
LeopoldS

CPU and password strength - 4 views

  •  
    true?
  •  
    Isn't that why systems have a "wait for 15 minutes before trying again" after 3 or 5 wrong guesses? All the brute force in the world can't save you from real-life latency.
  •  
    Oh, so you haven't heard about Diceware yet? http://world.std.com/~reinhold/diceware.html And, of course, a related XKCD... (a quick back-of-the-envelope on passphrase entropy follows this thread)
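
A quick back-of-the-envelope on the Diceware point above. The guess rates are assumptions for illustration (10^12 guesses/s for an offline attack, 10 guesses/s for a rate-limited online one), not measured figures.

import math

WORDLIST_SIZE = 7776  # the standard Diceware list has 6^5 words

def entropy_bits(num_words):
    # Each uniformly chosen word adds log2(7776) ~ 12.9 bits of entropy.
    return num_words * math.log2(WORDLIST_SIZE)

def years_to_search(num_words, guesses_per_second):
    # Expected time to hit the passphrase: half the keyspace / guess rate.
    return (WORDLIST_SIZE ** num_words / 2) / guesses_per_second / (3600 * 24 * 365)

for words in (4, 5, 6):
    print(words, "words:",
          round(entropy_bits(words), 1), "bits |",
          "offline 1e12 guess/s:", "%.2e years" % years_to_search(words, 1e12), "|",
          "rate-limited 10 guess/s:", "%.2e years" % years_to_search(words, 10))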
Ma Ru

Neural Network simulation chip from IBM - 1 views

  •  
    There you go, the latest and greatest chip is here. Now the only remaining tiny detail: program it.
  •  
    Let's buy it first and we'll figure the rest out later :P
Daniel Hennes

Google Just Open Sourced the Artificial Intelligence Engine at the Heart of Its Online ... - 2 views

  •  
    TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
  •  
    And the interface even looks a bit less retarded than Theano (a minimal graph-building sketch follows below).
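
For the curious, a minimal sketch of what the data-flow-graph idea described above looks like in code, using the graph-and-session style of the early TensorFlow 1.x releases this post refers to (later 2.x versions default to eager execution); the toy linear model and numbers are just for illustration.

import numpy as np
import tensorflow as tf

# Build the data-flow graph: nodes are ops, edges carry tensors.
x = tf.placeholder(tf.float32, shape=[None])
y_true = tf.placeholder(tf.float32, shape=[None])
w = tf.Variable(0.0)
b = tf.Variable(0.0)
y_pred = w * x + b
loss = tf.reduce_mean(tf.square(y_pred - y_true))
train_op = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

# Execute the same graph in a session (on CPU or GPU, whichever is available).
x_data = np.linspace(0.0, 1.0, 50).astype(np.float32)
y_data = 3.0 * x_data + 1.0
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train_op, feed_dict={x: x_data, y_true: y_data})
    print(sess.run([w, b]))  # should approach [3.0, 1.0]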