
Advanced Concepts Team: Group items tagged nvidia


Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other unwanted weapons look like on satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one could train one of those image recognition algorithms to do it automatically (a toy sketch of the detection step follows this thread). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, a big trend was CUDA GPUs vs FPGAs for hardware-accelerated image processing. Most of it orbited around who was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands about the solution they were working on (GPU/CUDA).

    I gathered that NVIDIA GPUs suit best applications that do not rely on custom hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist (the sketch after this thread gives a flavour). FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored, 'harsh' programming languages. I don't know what level of radiation hardness NVIDIA's GPUs have... FPGAs are therefore the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy and other types of telescopes). For a specific purpose like the one you mentioned, this FPGA vs GPU question should be assessed before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels' worth of power, while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion of how to make FPGA hardware acceleration solutions easier to use for the 'layman' is starting, btw: http://reconfigurablecomputing4themasses.net/.
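
To give a flavour of both points above (automating the seal counting, and how approachable CUDA programming is from a scientist's toolbox), here is a deliberately naive sketch in Python with Numba. A CUDA kernel thresholds a bright-on-dark satellite tile on the GPU, and connected bright blobs are counted on the CPU. Everything here is an illustrative assumption: the random tile stands in for real GeoEye-1 imagery, and the fixed brightness threshold stands in for a trained recognition model.

```python
# Naive GPU-accelerated blob counting - a stand-in for a real trained detector.
# Assumes a CUDA-capable GPU with numba, numpy and scipy installed.
import numpy as np
from numba import cuda
from scipy import ndimage

@cuda.jit
def threshold_kernel(img, thresh, mask):
    # One thread per pixel: mark pixels brighter than the threshold.
    x, y = cuda.grid(2)
    if x < img.shape[0] and y < img.shape[1]:
        mask[x, y] = 1 if img[x, y] > thresh else 0

tile = np.random.rand(1024, 1024).astype(np.float32)  # placeholder for a b/w satellite tile
d_tile = cuda.to_device(tile)
d_mask = cuda.device_array(tile.shape, dtype=np.uint8)

threads = (16, 16)
blocks = ((tile.shape[0] + 15) // 16, (tile.shape[1] + 15) // 16)
threshold_kernel[blocks, threads](d_tile, np.float32(0.95), d_mask)

# Count connected bright blobs on the CPU - each blob is a candidate 'seal'.
_, n_candidates = ndimage.label(d_mask.copy_to_host())
print("candidate seals:", n_candidates)
```

A real project would replace the threshold with a learned classifier scored over sliding windows, but the division of labour (massively parallel per-pixel work on the GPU, light bookkeeping on the CPU) would look much the same.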
Francesco Biscani

NVIDIA GF100 Architecture and Feature Preview - HotHardware - 3 views

  •  
    NVIDIA Fermi preview...
  •  
    sllllll, nerd porn ;) awesome
eblazquez

NVIDIA Releases Open-Source GPU Kernel Modules - 0 views

https://developer.nvidia.com/blog/nvidia-releases-open-source-gpu-kernel-modules/

technology AI

started by eblazquez on 12 May 22 no follow-up yet
Dario Izzo

A little bit of ACT and NVIDIA Goes to the Moon with CUDA and Tegra - 3 views

shared by Dario Izzo on 08 Dec 11
LeopoldS liked it
  •  
    The famous Mars Rover Simulator was a piece of the Evolution in Robotic Island Ariadna!!!!! But again, it's only an algorithm :) what's new?
  •  
    Go Plymouth!
dejanpetkow

Torvalds comments on NVIDIA - 3 views

  •  
    Soooo beautiful!!! Just start at 49:25.
LeopoldS

NVIDIA Newsroom - 2 views

  •  
    the fastest supercomputer seems to be in China now!
LeopoldS

Tilera Corporation - 2 views

  •  
    who wants 100 cores ... future of PaGMO?
  •  
    Well, nVidia provides 10,000 "cores" in a single rack on their Teslas...
  •  
    remember that you were recommending its purchase some time ago already ... still strong reasons to do so?
  •  
    The problem with this flurry of activity today regarding multicore architectures is that it is really unclear which one will be the winner in the long run. Never underestimate the power of inertia, especially in the software industry (after all, people are still programming in COBOL and Fortran today). For instance, NVIDIA gives you the Teslas with 10,000 cores, but then you have to rewrite extensive parts of your code in order to take advantage of them. Is this an investment worth undertaking? Difficult to say; it would certainly be if the whole software world moved in that direction (which is not happening - yet?). But then you have other approaches coming out, such as the Cell processor by IBM (the one in the PS3), which has really impressive floating-point performance and, of course, a completely different programming model. The nice thing about this Tilera processor seems to be that it is a general-purpose processor, which may not require extensive re-engineering of existing code (but I'm really hypothesizing, since the technical details are not very abundant on their website).
  •  
    Moreover, PaGMO's computation model leans towards systems with distributed memory rather than shared memory (i.e. multi-core). In the latter, at a certain point memory access becomes the bottleneck. (See the sketch below for how the distributed island model looks in practice.)
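
To illustrate the distributed-memory island model mentioned above, here is a minimal sketch using the pygmo 2 Python API (PaGMO's Python frontend, which postdates this thread); the `rosenbrock` problem and `de` algorithm are just stand-ins for whatever you actually want to optimize. Each island evolves its own population concurrently and only exchanges individuals through migration, so there is no contention on a shared memory bus.

```python
# Minimal island-model sketch with pygmo 2 - assumes pygmo is installed;
# the problem and algorithm below are illustrative placeholders.
import pygmo as pg

prob = pg.problem(pg.rosenbrock(dim=10))   # toy test problem
algo = pg.algorithm(pg.de(gen=100))        # differential evolution on each island

# 8 islands, each with its own population of 20, evolving concurrently.
archi = pg.archipelago(n=8, algo=algo, prob=prob, pop_size=20)
archi.evolve()   # launch asynchronous evolution on all islands
archi.wait()     # block until every island has finished

# Best objective value found on each island.
print([isl.get_population().champion_f[0] for isl in archi])
```

Each island maps naturally onto a separate node of a cluster with its own memory; the only communication is the occasional migration of good individuals, which is exactly what makes the model suit distributed memory better than shared memory.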
Francesco Biscani

CUDA-Enabled Apps: Measuring Mainstream GPU Performance : Help For the Rest of Us - Rev... - 0 views

  •  
    What will the name of the CUDA-enabled PaGMO be? CuDMO, CyGMO?
ESA ACT

NVIDIA Tesla - GPU Computing Solutions for HPC - 0 views

  •  
    Graphics cards at the core of a supercomputer.
Marcus Maertens

GTC On-Demand Featured Talks | GPU Technology Conference - 3 views

  •  
    NVIDIA published around 154 talks focused on AI from their conference this year...
Marcus Maertens

Drink up! Beer Tasting Robot Uses AI to Assess Quality - NVIDIA Developer News CenterNV... - 1 views

  •  
    Meanwhile in Australia...
Marcus Maertens

MIT, Mass Gen Aim Deep Learning at Sleep Research | NVIDIA Blog - 2 views

  •  
    Neural Networks to analyse sleeplessness.
eblazquez

Refik Anadol at the AI Art Gallery at GTC 2021 | NVIDIA - 0 views

  •  
    A very nice expo currently showcased in Berlin, well worth a visit: https://www.koeniggalerie.com/exhibitions/38455/machine-hallucinations-nature-dreams/ The concept is quite cool and the result just stunning (I recommend giving the YouTube videos a good look on your TV if you have the chance).
Francesco Biscani

[Phoronix] NVIDIA Readies Its OpenCL Linux Driver - 0 views

  •  
    Dario: this was the CUDA standardization thing I told you about.
Marion Nachon

Frontier Development Lab (FDL): AI technologies to space science - 3 views

Applications might be of interest to some: https://frontierdevelopmentlab.org/blog/2019/3/1/application-deadline-extended-cftt4?fbclid=IwAR0gqMsHJCJx5DeoObv0GSESaP6VGjNKnHCPfmzKuvhFLDpkLSrcaCwmY_c ...

technology AI space science

started by Marion Nachon on 08 Apr 19 no follow-up yet