Advanced Concepts Team: Group items tagged GPU

Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other weapons you don't want look like in satellite pictures and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In that publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one could train one of those image recognition algorithms to do it automatically (a rough sketch of how that could look follows below this thread). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, a big trend was CUDA GPUs vs. FPGAs for hardware-accelerated image processing. Most of it revolved around who is faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing together with the radio telescope institute in the Netherlands about the solution they were working on (GPU/CUDA). My impression was that NVIDIA GPUs suit applications that do not depend strongly on the underlying hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored, 'harsher' programming languages. I don't know what level of radiation hardness NVIDIA's GPUs reach... So FPGAs are indeed the standard choice for image processing in space missions (a talk with the microelectronics department could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy and other types of telescopes). For a specific purpose like the one you mention, this FPGA vs. GPU trade-off should be assessed first before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels' worth of power while FPGAs can run on a potato. Since space applications are highly power-limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion of how to make FPGA hardware-acceleration solutions easier to use for the 'layman' is starting, btw: http://reconfigurablecomputing4themasses.net/.
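  •
    To make the seal-counting idea above a bit more concrete, here is a minimal sketch, assuming PyTorch/torchvision and a hypothetical folder of labelled image tiles cut from the satellite scene: fine-tune a small pretrained CNN as a seal / no-seal tile classifier, then count positive tiles as a crude population estimate. The paths, tile size, network choice and two-class layout are all illustrative assumptions, not part of the publication.

```python
# Sketch only: fine-tune a pretrained CNN to flag satellite-image tiles that
# contain seals, then count positive tiles. Folder layout is hypothetical:
#   tiles/train/seal/*.png, tiles/train/no_seal/*.png
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

tf = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("tiles/train", transform=tf)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # seal / no seal
model = model.to(device)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()

# At inference time, slide over the full scene tile by tile and count the
# tiles predicted as "seal" to get a (very crude) population estimate.
```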
Marcus Maertens

Neurokernel - 4 views

  •  
    A nice GPU-based framework that is basically an emulator of the brain of the fruit fly. If you need a fruit fly brain - here it comes!
ESA ACT

NVIDIA Tesla - GPU Computing Solutions for HPC - 0 views

  •  
    Graphics cards at the core of a supercomputer.
ESA ACT

Twilight of the GPU - 1 views

  •  
    Announcement of the end of an era.
eblazquez

NVIDIA Releases Open-Source GPU Kernel Modules - 0 views

https://developer.nvidia.com/blog/nvidia-releases-open-source-gpu-kernel-modules/

tags: technology, AI

started by eblazquez on 12 May 22 - no follow-up yet
Guido de Croon

Convolutional networks start to rule the world! - 2 views

  •  
    Recently, many competitions in the computer vision domain have been won by huge convolutional networks. In the ImageNet competition, the convolutional network approach roughly halved the error, from ~30% to ~15%! Key changes that made this happen: weight sharing to reduce the search space, and training at massive scale on GPUs (a toy sketch of the weight-sharing idea follows below this thread). See also the work at IDSIA: http://www.idsia.ch/~juergen/vision.html This should please Francisco :)
  • ...1 more comment...
  •  
    where is Francisco when one needs him ...
  •  
    ...mmmmm... they use 60 million parameters and 650,000 neurons on a task that one can somehow consider easier than (say) predicting a financial crisis ... still they get 15% of errors .... reminds me of a comic we saw once ... cat http://www.sarjis.info/stripit/abstruse-goose/496/the_singularity_is_way_over_there.png
  •  
    I think the ultimate solution is still to put a human brain in a jar and use it for pattern recognition. Maybe we should get a stagiaire (trainee) for this..?
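  •
    As a toy illustration of the weight-sharing point above (not the ImageNet-winning network itself, which used ~60 million parameters): in a convolutional layer the same few small kernels are reused at every image position, so the parameter count stays tiny compared to a fully connected net on the same input. A minimal PyTorch sketch with made-up layer sizes:

```python
# Toy weight-sharing demo: a small convolutional classifier in PyTorch.
# Layer sizes are arbitrary; this is not the ImageNet architecture.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        # Each Conv2d learns a handful of 3x3 kernels that are shared
        # across all spatial positions of the image.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)  # assumes 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = TinyConvNet()
print(sum(p.numel() for p in net.parameters()))   # ~25k parameters, not millions
print(net(torch.randn(1, 3, 32, 32)).shape)       # torch.Size([1, 10])
```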
LeopoldS

CPU and password strength - 4 views

  •  
    true?
  •  
    Isn't that why systems have a "wait for 15 minutes before trying again" after 3 or 5 wrong guesses? All the brute force in the world can't save you from real-life latency (a rough back-of-the-envelope estimate follows below this thread).
  •  
    Oh, so you haven't heard about diceware yet? http://world.std.com/~reinhold/diceware.html And, of course, a related XKCD...
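  •
    To put rough numbers on both points (rate limiting vs. raw guessing speed, and diceware-style entropy), a small back-of-the-envelope sketch; the guess rates are invented round figures for illustration, not measurements of any real system:

```python
# Back-of-the-envelope: brute-forcing a diceware passphrase at different guess rates.
# Guess rates are made-up round numbers, not benchmarks.
import math

WORDLIST = 7776                              # standard diceware list: 6**5 words
words = 5                                    # passphrase length in words
keyspace = WORDLIST ** words
entropy_bits = words * math.log2(WORDLIST)   # ~12.9 bits per word

print(f"{words}-word diceware passphrase: ~{entropy_bits:.0f} bits of entropy")
for label, guesses_per_s in [("online, throttled (10 guesses/s)", 10),
                             ("offline, GPU cracker (1e10 guesses/s, assumed)", 1e10)]:
    avg_years = keyspace / guesses_per_s / 2 / (3600 * 24 * 365)
    print(f"{label}: ~{avg_years:.1e} years on average")
```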
Francesco Biscani

CUDA-Enabled Apps: Measuring Mainstream GPU Performance : Help For the Rest of Us - Rev... - 0 views

  •  
    What will the name of the CUDA-enabled PaGMO be? CuDMO, CyGMO?
Francesco Biscani

[1104.1824] Simulating Spiking Neural P systems without delays using GPUs - 4 views

  •  
    Might be interesting for Giusy...
Daniel Hennes

Google Just Open Sourced the Artificial Intelligence Engine at the Heart of Its Online ... - 2 views

  •  
    TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.
  •  
    And the interface even looks a bit less clunky than Theano's (a minimal example follows below).
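  •
    For a flavour of the data-flow-graph idea from the description above, a minimal sketch using today's tf.function API (the 2015 release used the explicit graph/session API, but the idea is the same): Python code is traced into a graph of ops whose edges carry tensors, and the graph can be placed on CPU or GPU without changing the code.

```python
# Minimal TensorFlow sketch: ops as graph nodes, tensors on the edges.
import tensorflow as tf

w = tf.Variable(tf.random.normal([3, 2]))
b = tf.Variable(tf.zeros([2]))

@tf.function                      # traces the Python function into a data-flow graph
def affine(x):
    return tf.matmul(x, w) + b    # matmul and add become nodes in the graph

x = tf.random.normal([4, 3])
print(affine(x))                  # runs on a GPU if one is available, CPU otherwise
```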
Marcus Maertens

GTC On-Demand Featured Talks | GPU Technology Conference - 3 views

  •  
    NVIDIA published around 154 talks focussed on AI from their conference this year...
dharmeshtailor

Facebook does Go - 3 views

  •  
    Took 2000 GPUs over 2 weeks to train :)
pablo_gomez

Introducing Triton: Open-Source GPU Programming for Neural Networks - 1 views

shared by pablo_gomez on 28 Jul 21
  •  
    Might be of interest for torchquad and other projects
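  •
    For a flavour of what Triton code looks like, here is essentially the vector-add example from the Triton tutorials, lightly trimmed (it needs a CUDA-capable GPU plus the triton and torch packages):

```python
# Vector addition in Triton (adapted from the official tutorial).
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                      # one program instance per block
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                      # guard the last partial block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
print(torch.allclose(out, x + y))  # True
```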