Advanced Concepts Team: Group items tagged intel®

Dario Izzo

Global Climate Models Powered by Intel® Xeon Phi™ Coprocessors - 1 views

  •  
    NASA has it ... I WANT IT TOO!!!! 240 threads on 60 cores ... Imagine the possibilities of this new toy!! Francesco also has it in his new "kill the seals" job
santecarloni

Intel Reveals Neuromorphic Chip Design  - Technology Review - 1 views

  •  
    Intel's goal is to build chips that work more like the human brain. Now its engineers think they know how.
pacome delva

New Intel Sensor Could Cut Electricity Bill - 3 views

  • Once connected, the sensor will wirelessly connect to all electrical devices in the house and self-configure to record the voltages from each source in real time.
  •  
    "The first thing everyone did after seeing the energy graph on the family PC was to turn off the lights". Kind-of we are becoming slaves of the technology. Do we really need a sensor to tell us to turn-off the light when we are leaving the room!?
ESA ACT

Intel Announces Winner of University Notebook Challenge - 0 views

  •  
    Pedal Power: the Healthy Solution for the Energy-Efficient Laptop of the Future
ESA ACT

PC Pro: Product Reviews: Intel Atom - 0 views

  •  
    Yet it does all this with a thermal design power of around 2W - incredibly, less than three per cent that of an everyday Core 2 Duo. Average power consumption is promised to be in the milliwatt range, with idle draw as low as 30mW.
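    A quick arithmetic check of the quoted claim (a sketch; the Core 2 Duo TDP values are assumed typical desktop figures, not stated in the review):

        # Quick check of the quoted Atom figure against a typical Core 2 Duo.
        # The 65-75 W Core 2 Duo TDPs are assumptions, not from the review.
        atom_tdp_w = 2.0          # "around 2W" (quoted)
        for core2duo_tdp_w in (65.0, 75.0):
            ratio = atom_tdp_w / core2duo_tdp_w
            print(f"vs {core2duo_tdp_w:.0f} W Core 2 Duo: {ratio:.1%}")
        # -> 3.1% and 2.7%: consistent with the quoted "less than three
        #    per cent" for the higher-TDP desktop parts.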
ESA ACT

LinuxDNA Supercharges Linux with the Intel C/C++ Compiler - 0 views

  •  
    Exciting news!!!
ESA ACT

Intel CTO predicts singularity by 2050 - Engadget - 0 views

  •  
    something for Luzi and Marek - didn't this guy read our report? :-)
Francesco Biscani

Intel Shows 48-Core x86 Processor - 1 views

  •  
    Finally a massively multi-core general-purpose architecture.
  •  
    Well, nice, but I wonder how much cache per core will be available... With 48 cores a single memory bus becomes nothing more than one big (small? :) ) bottleneck.
  •  
    Apparently they have separate L2 cache per tile (i.e., every two processors) and a high-speed bus connecting the tiles. As usual, whether it will be fast enough will depend on the specific applications (which BTW is also true for other current multi-core architectures). The nice thing is of course that porting software to this architecture will be one order of magnitude less difficult than going to Tesla/Fermi/CELL architectures. This architecture will also be suitable for tasks other than floating point computations (damn engineers polluting computer science :P) and it has the potential to be more future-proof than other solutions. (A toy scaling sketch follows.)
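    To make the bandwidth worry concrete, a toy weak-scaling sketch (plain Python/NumPy, nothing specific to the 48-core chip): run one task per core and compare a cache-friendly compute-bound job, which should keep near-ideal efficiency, against a memory-streaming job that competes for the shared memory bus.

        # Toy weak-scaling experiment: one task per core, same work per core.
        # Ideal weak scaling means the n-core run takes as long as the 1-core
        # run, i.e. efficiency t1/tn stays near 1.0.
        import multiprocessing as mp
        import time

        import numpy as np

        def compute_bound(_):
            # Repeated math on a small, cache-resident array: mostly ALU work.
            x = np.arange(1000, dtype=np.float64)
            for _ in range(5000):
                x = np.sin(x)
            return x.sum()

        def memory_bound(_):
            # Allocates and sums a large array: limited by memory bandwidth.
            x = np.ones(10_000_000, dtype=np.float64)
            return x.sum()

        def timed(f, nprocs):
            with mp.Pool(nprocs) as pool:
                t0 = time.perf_counter()
                pool.map(f, range(nprocs))
                return time.perf_counter() - t0

        if __name__ == "__main__":
            n = mp.cpu_count()
            for f in (compute_bound, memory_bound):
                t1 = timed(f, 1)   # one task on one core
                tn = timed(f, n)   # n tasks on n cores
                print(f"{f.__name__}: weak-scaling efficiency {t1 / tn:.2f}")

    On an ordinary multi-core machine the streaming job's efficiency typically drops well below 1.0; packing 48 cores behind one memory interface amplifies exactly this effect.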
eblazquez

Intel launches its next-generation neuromorphic processor-so, what's that again? | Ars ... - 1 views

  •  
    Seems to be a fun playground for Spiking Neural Networks, right (from my newbie PoV)?
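    For fellow newbies: the basic unit such chips implement is a spiking neuron. A minimal leaky integrate-and-fire sketch (the generic textbook model with made-up parameters, not Loihi's actual neuron model):

        # Minimal leaky integrate-and-fire (LIF) neuron: the textbook building
        # block of spiking neural networks. All parameters are illustrative.
        import numpy as np

        dt, tau = 1e-3, 20e-3          # time step and membrane time constant [s]
        v_thresh, v_reset = 1.0, 0.0   # spike threshold and reset potential

        rng = np.random.default_rng(0)
        v, spike_times = 0.0, []

        for step in range(1000):                       # simulate 1 s
            i_in = 1.2 + 0.5 * rng.standard_normal()   # noisy input current
            v += dt / tau * (i_in - v)                 # leaky integration
            if v >= v_thresh:                          # threshold crossing -> spike
                spike_times.append(step * dt)
                v = v_reset

        print(f"{len(spike_times)} spikes in 1 s of simulated time")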
johannessimon81

18-year-old massively improves supercapacitors during Intel International Science and E... - 1 views

  •  
    "Her goal was to design and synthesise a super capacitor with increased energy density while maintaining power density and long cycle life. She designed, synthesised and characterised a novel core-shell nanorod electrode with hydrogemated TiO2(H-TiO2) core and polyaniline shell. H-TiO2 acts as the double layer electrostatic core. Good conductivity of H-TiO2 combined with the high pseudo capacitance of polyaniline results in significantly higher overall capacitance and energy density while retaining good power density and cycle life. This new electrode was fabricated into a flexible solid-state device to light an LED to test it in a practical application. Khare then evaluated the structural and electrochemical properties of the new electrode. It demonstrated high capacitance of 203.3 mF/cm2 (238.5 F/g) compared to the next best alternative super capacitor in previous research of 80 F/g, due to the design of the core-shell structure. This resulted in excellent energy density of 20.1 Wh/kg, comparable to batteries, while maintaining a high power density of 20540 W/kg. It also demonstrated a much higher cycle life compared to batteries, with a low 32.5% capacitance loss over 10,000 cycles at a high scan rate of 200 mV/s."
Luke O'Connor

Astronomer Captures Enormous True-Color Photo of Night Sky - 4 views

  •  
    And some interactive versions: http://skysurvey.org/
  •  
    this is very very nice! thanks for sharing
LeopoldS

American Innovation Losing its Shine? - 4 views

  •  
    interesting reflections by MIT head on innovation in US
  •  
    Interesting, especially since in all Commission papers US innovation is praised and the only changes expected relate to China/India (for the better)... The article mixes a lot of talk on innovation with numbers that I do not see as necessarily connected (trade deficit, GDP growth, etc.). Seems to me the real problem behind the article is just the next planned distribution of federal funds and where they should cut...
  •  
    Well, I understand her point. Spending cuts are only vicious short-term remedies against an economic downturn, since growth (GDP is an interesting measure indeed) comes from innovation, research and production. Nonetheless, what she is describing is happening in the EU too. So who will take the lead? I am not certain China is the one: in my view, it has not yet solved its domestic issues... and the US still has more Nobel Prizes than China. One thing is for sure: the way things are, the EU is only a "wagon" of the train...
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other weapons you don't want look like on satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one can train one of those image recognition algorithms to do it automatically (a sketch follows at the end of this thread). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year there was a big trend of CUDA GPUs vs FPGAs for hardware-accelerated image processing. Most of it orbited around discussing which was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist, working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands, about the solution they were working on (GPU/CUDA). I gathered that NVIDIA GPUs best suit applications that do not rely on custom hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored 'harsh' programming languages. I don't know what the level of rad hardness of NVIDIA's GPUs is... Therefore FPGAs are indeed the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy or other types of telescopes). I think that for a specific purpose such as the one you mentioned, the FPGA vs GPU question should be assessed before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels' worth of power while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task.... for now.
  •  
    The discussion of how to make FPGA hardware-acceleration solutions easier to use for the 'layman' is starting, btw: http://reconfigurablecomputing4themasses.net/.
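    As a concrete starting point for the seal-counting idea above, a minimal sketch of a tile-based classifier (PyTorch; the network is untrained and every name and size is illustrative, so treat it as pseudocode that happens to run):

        # Sketch of the "count seals from satellite images" idea: slide a
        # window over a large grayscale scene and run a small CNN on each
        # tile. Untrained weights here; a real system would be trained on
        # labelled seal / no-seal tiles and handle overlapping windows.
        import torch
        import torch.nn as nn

        TILE = 64  # tile edge length in pixels (assumed)

        class SealClassifier(nn.Module):
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(),
                    nn.Linear(32 * (TILE // 4) ** 2, 2),  # logits: [no seal, seal]
                )

            def forward(self, x):
                return self.net(x)

        def count_seals(scene, model, thresh=0.9):
            # Slide a non-overlapping TILE x TILE window over a (H, W) scene
            # and count tiles classified as containing a seal.
            model.eval()
            count = 0
            with torch.no_grad():
                for y in range(0, scene.shape[0] - TILE + 1, TILE):
                    for x in range(0, scene.shape[1] - TILE + 1, TILE):
                        tile = scene[y:y + TILE, x:x + TILE][None, None]
                        prob_seal = torch.softmax(model(tile), dim=1)[0, 1]
                        count += int(prob_seal > thresh)
            return count

        # Smoke test on a random "scene" standing in for a GeoEye-1 b/w image.
        scene = torch.rand(512, 512)
        print(count_seals(scene, SealClassifier()))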
Dario Izzo

Probabilistic Logic Allows Computer Chip to Run Faster - 3 views

  •  
    Francesco pointed out this research one year ago; we dropped it as no one was really considering it... but in space low CPU power consumption is crucial!! Maybe we should look back into this?
  •  
    Q1: For the time being, for what purposes are computers mainly used on-board?
  •  
    for navigation, control, data handling and so on .... why?
  •  
    Well, because the point is to identify an application in which such computers would do the job... That could be either an existing application which can be done sufficiently well by such computers, or a completely new application which is not there yet, for instance because of power consumption constraints... Q2 would then be: for which of these purposes is strict determinism of the results not crucial? As the answer may not be obvious, a potential study could address this very issue. For instance one can consider on-board navigation systems with limited accuracy... I may be talking bullshit now, but perhaps in some applications it doesn't matter whether a satellite flies on the exact route or +/-10 km to the left/right? ...and so on for the other systems. Another thing is understanding what exactly this probabilistic computing is and what can be achieved using it (like: the result is probabilistic but falls within a defined range of precision), etc. (a toy model follows at the end of this thread). Did they build a complete chip, or at least a sub-circuit, or still only logic gates?
  •  
    Satellites use old CPUs also because, with the trend of going for higher power, modern CPUs are not very convenient from a system design point of view (TBC)... as a consequence the constraints put on on-board algorithms can be demanding. I agree with you that double precision might just not be necessary for a number of applications (navigation also), but I guess we are not talking about 10 km as an absolute value, rather a relative error that can be tolerated at the level of (say) 10^-6. All in all you are right: a first study should assess what applications this would be useful for at all... and at what precision / power levels.
  •  
    The interest of this could be a high fault tolerance for some math operations, ... which would have the effect of simplifying the job of coders! I don't think this is a good idea regarding power consumption for the CPU (strictly speaking). The reason we use old chips is just a matter of qualification for space, not power. For instance a LEON SPARC (e.g. used on some platforms for ESA) consumes something like 5 mW/MHz, so it is definitely not where an engineer will look for power savings on a usual 10-15 kW spacecraft.
  •  
    What about speed then? Seven times faster could allow some real-time navigation at higher speed (e.g. the velocity of terminal guidance for an asteroid impactor is limited to 10 km/s ... would a higher velocity be possible with faster processors?). Another issue is the radiation tolerance of the technology ... if PCMOS is more tolerant to radiation it could be space-qualified more easily...
  •  
    I don't remember what the speed factor is, but I guess this might do it! Although I remember, when using an IMU, that you cannot get the data above a given rate (e.g. 20 Hz, even though the ADC samples the sensor at a slightly faster rate), so somehow it is not just the CPU that must be re-thought. When I say qualification I also mean the "hardened" phase.
  •  
    I don't know if the (promised) one-order-of-magnitude improvements in power efficiency and performance are enough to justify looking into this. For one, it is not clear to me what embracing this technology would mean from an engineering point of view: does it need an entirely new software/hardware stack? If that were the case, in my opinion any potential benefit would be nullified. Also, is it realistic to build an entire self-sufficient chip on this technology? While the precision of floating-point computations may be degraded and still be useful, how does all this play with integer arithmetic? Keep in mind that, e.g., in the Linux kernel floating-point calculations are not even allowed/available... It is probably possible to integrate an "accelerated" low-accuracy floating-point unit with a traditional CPU, but then again more implementation overhead creeps in. Finally, recent processors by Intel (e.g., the Atom) and especially ARM boast really low power consumption, while offering performance-boosting features such as multi-core and vectorization capabilities. Don't such efforts have more potential, if anything because of economic/industrial inertia?
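    To pin down what "probabilistic" could buy, here is a toy software model of the PCMOS idea (my reading of it, not the authors' actual circuit): only the low-significance bits of an adder are allowed to fail, so the numerical error stays bounded while, on real hardware, exactly those gates could run at a lower, energy-saving voltage.

        # Toy model of probabilistic arithmetic: exact 16-bit add, then flip
        # each of the lowest `unreliable_bits` result bits with probability
        # p_flip (a stand-in for undervolted, occasionally-wrong gates).
        import random

        BITS = 16

        def noisy_add(a, b, p_flip, unreliable_bits):
            s = (a + b) & (2**BITS - 1)
            for bit in range(unreliable_bits):
                if random.random() < p_flip:
                    s ^= 1 << bit
            return s

        random.seed(0)
        pairs = [(random.getrandbits(BITS - 1), random.getrandbits(BITS - 1))
                 for _ in range(10_000)]

        for unreliable in (0, 4, 8):
            err = [abs(noisy_add(a, b, 0.1, unreliable) - (a + b))
                   for a, b in pairs]
            rel = sum(err) / len(err) / 2**BITS
            print(f"{unreliable:2d} unreliable low bits -> "
                  f"mean relative error {rel:.2e}")

    With the error confined to the low 4-8 bits the mean relative error stays around 10^-5 to 10^-4, which gives a feel for how the tolerable error level discussed above maps onto how many bits one can afford to leave unreliable.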
LeopoldS

Intel wins top spots in SSD storage drive ranking - 1 views

  •  
    for all those considering purchasing one of these SSDs
ESA ACT

Intel says to prepare for 'thousands of cores' - 0 views

  •  
    The end of BOINC, DiGMO, P2P, etc... for massive computation
ESA ACT

Zero Email Friday - 0 views

  •  
    I like this particularly and would even go one step further: take one day or half a day during the week internet-free - just unplug the cable. We did this during my PhD in our lab and it really helped! LS
Marion Nachon

Frontier Development Lab (FDL): AI technologies to space science - 3 views

Applications might be of interest to some: https://frontierdevelopmentlab.org/blog/2019/3/1/application-deadline-extended-cftt4?fbclid=IwAR0gqMsHJCJx5DeoObv0GSESaP6VGjNKnHCPfmzKuvhFLDpkLSrcaCwmY_c ...

technology AI space science

started by Marion Nachon on 08 Apr 19 no follow-up yet