Advanced Concepts Team - Group items tagged: arithmetics

Francesco Biscani

Pi Computation Record - 4 views

  •  
    For Dario: the pi computation record was established on a single desktop computer using a cache-optimized algorithm; the previous record had required a cluster of hundreds of computers. The cache-optimized algorithm was 20 times faster (a rough sketch of the series behind such computations is appended at the end of this thread).
  • ...6 more comments...
  •  
    Teeeeheeeeheeee... assembler programmers greet Java/Python/Etc. programmers :)
  •  
    And he seems to have done everything in his free time!!! I like the first FAQ.... "why did you do it?"
  •  
    Did you read any of the books he recommends? He suggests: Modern Computer Arithmetic by Richard Brent and Paul Zimmermann (version 0.4, November 2009; full text available online), and The Art of Computer Programming, Volume 2: Seminumerical Algorithms by Donald E. Knuth (Addison-Wesley, third edition, 1998).
  •  
    btw: we will very soon have the very same processor in the new iMac .... what record are you going to beat with it?
  •  
    Zimmermann is the same guy behind the MPFR multiprecision floating-point library, if I recall correctly: http://www.mpfr.org/credit.html I've not read the book... Multiprecision arithmetic is a huge topic though, at least from the scientific and number-theory point of view, if not for its applications to engineering problems. "The Art of Computer Programming" is probably the closest thing to a bible for computer scientists :)
  •  
    "btw: we will very soon have the very same processor in the new iMac .... what record are you going to beat with it?" Fastest Linux install on an iMac :)
  •  
    "Fastest Linux install on an iMac :)" that is going to be a though one but a worthy aim! ""The art of computer programming" is probably the closest thing to a bible for computer scientists :)" yep! Programming is art ;)
Athanasia Nikolaou

The drawbacks of open office - 1 views

  •  
    The natural multitaskers benefit the least from this configuration. And then there is Thijs
  •  
    haha :) "The psychologist Nick Perham, who studies the effect of sound on how we think, has found that office commotion impairs workers' ability to recall information, and even to do basic arithmetic. Listening to music to block out the office intrusion doesn't help: even that, Perham found, impairs our mental acuity." Actually, I grew up doing my homework in my parents' shop downstairs. Having no noise whatsoever is what drives me insane :)
Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research one year ago; we dropped it as no one was really considering it ... but in space low CPU power consumption is crucial!! Maybe we should look back into this?
  • ...6 more comments...
  •  
    Q1: For the time being, for what purposes are computers mainly used on-board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not there yet, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer to this may not be obvious, a potential study could address this very issue. For instance one can consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies the exact route or +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is, and what can be achieved using it (like the result is probabilistic but falls within a defined range of precision), etc. Did they build a complete chip or at least a sub-circuit, or still only logic gates...
  •  
    Satellites use old CPUs also because, with the trend towards higher power consumption, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation included), but I guess we are not talking about 10 km as an absolute value, rather about a relative error that can be tolerated at a level of (say) 10^-6. All in all you are right: a first study should assess for what applications this would be useful at all... and at what precision / power levels.
  •  
    The interest of this could be a high fault tolerance for some math operations, ... which would have the effect of simplifying the job of coders! I don't think this is a good idea regarding CPU power consumption (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance a LEON SPARC (e.g. used on some platforms for ESA) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings on a usual 10-15 kW spacecraft.
  •  
    What about speed then? Seven times faster could allow some real-time navigation at higher speed (e.g. the velocity of a terminal guidance for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?). Another issue is the radiation tolerance of the technology ... if the PCMOS are more tolerant to radiation they could be space-qualified more easily.....
  •  
    I don't remember what the speed factor is, but I guess this might do it! Although I remember, when using an IMU, that you cannot get the data above a given rate (e.g. 20 Hz even though the ADC samples the sensor at a slightly faster rate), so somehow it is not just the CPU that must be re-thought. When I say qualification I also imply the "hardened" phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does this technology need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel code floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit together with a traditional CPU, but then again you have more implementation overhead creeping in. Finally, recent processors by Intel (e.g., the Atom) and especially ARM boast really low power-consumption levels, while at the same time offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
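  •  
    To make the precision/power trade-off discussed above a bit more concrete: none of us has the actual PCMOS hardware, but the idea can be mimicked in a few lines of Python by modelling an adder whose low-order bits are allowed to flip with some probability and then measuring the relative error that survives. The bit-flip probabilities, operand widths, and trial count below are an invented toy model, not the real PCMOS error behaviour; the point is only to show how one could quantify, by simulation, whether an error budget like the ~10^-6 relative tolerance mentioned above is actually met.

        import random

        def noisy_add(a, b, bits=32):
            """Add two unsigned integers on a hypothetical noisy adder.
            Bit i of the exact sum flips with probability 0.1 * 2**-i, so the
            low-order (cheap) bits are the unreliable ones. Purely illustrative."""
            s = (a + b) % (1 << bits)
            for i in range(bits):
                if random.random() < 0.1 * 2 ** -i:
                    s ^= 1 << i
            return s

        # crude Monte-Carlo estimate of the relative error such an adder introduces
        random.seed(0)
        trials, worst = 10000, 0.0
        for _ in range(trials):
            a, b = random.getrandbits(30) + 1, random.getrandbits(30) + 1
            exact = a + b
            worst = max(worst, abs(noisy_add(a, b) - exact) / exact)
        print(f"worst relative error over {trials} trials: {worst:.1e}")

    Whether the real chips behave anything like this, and whether the energy savings survive once such a unit is wrapped in a conventional CPU, is exactly what a first study would have to check.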
ESA ACT

Baby birds can do arithmetic - 0 views

  •  
    Chicks showed the ability to add and subtract objects as they were moved behind two screens.
zoervleis

Ancient Babylonian astronomers calculated Jupiter's position from the area under a time-velocity graph - 2 views

shared by zoervleis on 29 Jan 16
LeopoldS liked it
  •  
    Ancient Babylonian astronomers developed many important concepts that are still in use, including the division of the sky into 360 degrees. They could also predict the positions of the planets using arithmetic. Ossendrijver translated several Babylonian cuneiform tablets from 350 to 50 BCE and found that they contain a sophisticated calculation of the position of Jupiter.
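  •  
    The geometric trick described in the paper is, in modern terms, the trapezoid rule: Jupiter's displacement along the ecliptic is obtained as the area under a graph of its velocity against time. A minimal Python sketch follows; the 12 and 9.5 arcminutes-per-day figures over 60 days are only the values commonly quoted for this tablet, used here as assumed, illustrative inputs:

        def trapezoid_distance(v_start, v_end, days):
            """Distance covered when the velocity changes linearly with time:
            the area of a trapezoid under the time-velocity graph."""
            return (v_start + v_end) / 2 * days

        # assumed numbers: Jupiter slowing from ~12 to ~9.5 arcmin/day over 60 days
        arcmin = trapezoid_distance(12.0, 9.5, 60)
        print(f"{arcmin} arcminutes = {arcmin / 60} degrees")   # 645.0 arcminutes = 10.75 degrees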