
Advanced Concepts Team / Group items tagged: deep learning


anonymous

Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable ... - 4 views

  •  
    Other possible study: take a textbook example image of a pen and evolve it just enough that the NN can no longer recognize it, while minimizing the distance between the original and evolved images. EDIT: It's been done already: http://cs.nyu.edu/~zaremba/docs/understanding.pdf (a rough gradient-based sketch of this idea follows after these comments).
  •  
    Of course, you can't really use them to extrapolate. The unknown unknown is always the trickiest :P They should really just add another class, "random bullshit", and dump all of this stuff in there. I think there's a potential paper right there.
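A minimal sketch of the gradient-based version of the pen-image idea above. This is the simple one-step gradient-sign variant, not the box-constrained L-BFGS optimization used in the linked manuscript; `model`, `image`, and `label` are hypothetical PyTorch inputs.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, eps=0.01):
    # Take one step in the direction that most increases the classification loss,
    # keeping the L-infinity distance to the original image bounded by eps.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    return (image + eps * image.grad.sign()).detach()
```

Increasing eps until the prediction flips gives (in this crude metric) the smallest perturbation that fools the network.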
dharmeshtailor

Opening the Black Box of Deep Neural Networks via Information Theory - 1 views

Dario Izzo

Stacked Approximated Regression Machine: A Simple Deep Learning Approach - 5 views

  •  
    from one of the reddit threads discussing this: "a bit fishy, crazy if real". "Incredible claims: train using only about 10% of ImageNet-12, i.e. around 120k images (6k images per arm); get the same or better accuracy as the equivalent VGG net; training is not via backprop but a simpler PCA + sparsity regime (see section 4.1), and probably shouldn't take more than 10 hours on a CPU." (A generic sketch of PCA-derived filters follows after these comments.)
  •  
    clicking the link says the manuscript was withdrawn :))
  •  
    This "one-shot learning" paper by Googe Deepmind also claims to be able to learn from very few training data. Thought it might be interesting for you guys: https://arxiv.org/pdf/1605.06065v1.pdf
johannessimon81

IBM Neuromorphic chip hits DARPA milestone and has been used to implement deep learning - 2 views

  •  
    "IBM delivered on the DARPA SyNAPSE project with a one million neuron brain-inspired processor. The chip consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt-literally a synaptic supercomputer in your palm." --- No memristors..., yet.: https://www.technologyreview.com/s/537211/a-better-way-to-build-brain-inspired-chips/
Paul N

Deep Learning, an Overview - 2 views

  •  
    For those interested in AI, it's a good idea to keep track of what's recently been put out there.
  •  
    From a quick glance, a serious contender for the title of most unreadable article ever: 60% of the pages of this document are references... Still, in its use of abbreviations it doesn't even come close to aerospace...
Alexander Wittig

Picture This: NVIDIA GPUs Sort Through Tens of Millions of Flickr Photos - 2 views

  •  
    Strange and exotic cityscapes. Desolate wilderness areas. Dogs that look like wookies. Flickr, one of the world's largest photo sharing services, sees it all. And, now, Flickr's image recognition technology can categorize more than 11 billion photos like these. And it does it automatically. It's called "Magic View." Magical deep learning! Buzzword attack!
  • ...4 more comments...
  •  
    and here comes my standard question: how can we use this for space? fast detection of natural disasters onboard?
  •  
    Even on the ground. You could, for example, teach it what nuclear reactors, missiles, or other weapons you don't want look like in satellite pictures, and automatically scan the world for them (basically replacing intelligence analysts).
  •  
    In fact, I think this could make a nice ACT project: counting seals from satellite imagery is an actual (and quite recent) thing: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0092613 In this publication they did it manually from a GeoEye-1 b/w image, which sounds quite tedious. Maybe one could train one of those image recognition algorithms to do it automatically (a rough sketch follows after these comments). Or maybe it's a bit easier to count larger things, like elephants (also a thing).
  •  
    At the HiPEAC (High Performance and Embedded Architecture and Compilation) conference I attended at the beginning of this year, there was a big trend of CUDA GPU vs FPGA for hardware-accelerated image processing. Most of it revolved around discussing who was faster and cheaper, with people from NVIDIA on one side and people from Xilinx and Intel on the other. I remember talking with an IBM scientist working on hardware-accelerated data processing, together with the radio telescope institute in the Netherlands, about the solution they were working on (GPU/CUDA). I gathered that NVIDIA GPUs suit best those applications that do not depend heavily on the hardware, with the advantage of being programmable in an 'easy' way accessible to a scientist. FPGAs are highly reliable components with the advantage of being available in rad-hard versions, but they require specific knowledge of physical circuit design and tailored 'harsh' programming languages. I don't know what the level of rad-hardness of NVIDIA's GPUs is... FPGAs are therefore indeed the standard choice for image processing in space missions (a talk with the microelectronics department guys could expand on this), whereas GPUs are currently used in some ground-based systems (radio astronomy or other types of telescopes). I think that for a specific purpose such as the one you mentioned, this FPGA vs GPU question should be assessed first before going further.
  •  
    You're forgetting power usage. GPUs need 1000 hamster wheels worth of power while FPGAs can run on a potato. Since space applications are highly power limited, putting any kind of GPU monster in orbit or on a rover is a failed idea from the start. Also, in FPGAs, if a gate burns out from radiation you can just reprogram around it. Looking for seals offline in high-res images is indeed definitely a GPU task... for now.
  •  
    The discussion of how to make FPGA hardware-acceleration solutions easier to use for the 'layman' is starting, btw: http://reconfigurablecomputing4themasses.net/.
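Regarding the seal-counting idea a few comments up, a minimal sketch of how such a detector could look (all sizes, labels, and data are hypothetical; a single-band GeoEye-style scene is assumed): cut the image into small patches, hand-label a training set, train a binary patch classifier, then slide it over new scenes and count the "seal" detections.

```python
import torch.nn as nn

# Hypothetical 64x64 single-band patches, two classes: seal vs. background.
patch_classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)
```

Counting larger animals like elephants would mostly change the patch size and the ground resolution required, not the approach.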
Juxi Leitner

Convolutional Neural Networks for Visual Recognition - 3 views

  •  
    pretty impressive stuff!
  • ...3 more comments...
  •  
    Amazing how some guys from another university did pretty much the same thing (although they didn't use the bidirectional stuff) and published it just last month. Just goes to show you can dump pretty much anything into an RNN, train it for long enough, and it'll produce magic. http://arxiv.org/pdf/1410.1090v1.pdf
  •  
    Seems like quite the trend. And the fact that Google still tries to use LSTMs is even more surprising.
  •  
    LSTMs: that was also the first thing in the paper that caught my attention! :) I hadn't seen them in the wild in years... My oversight, most likely. The paper seems to be getting ~100 citations a year, so someone's using them.
  •  
    There are a few papers on them, though you have to be lucky to get them to work. The backprop is horrendous.
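For anyone who, like above, hasn't seen LSTMs in a while: a minimal numpy sketch of one step of a standard LSTM cell (variable names are mine). Backprop through time has to flow through all of these gates at every timestep, which is part of why training them is considered fiddly.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    # W stacks the input, forget, output and candidate weights: shape (4*H, D+H); b has shape (4*H,)
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)   # gates in (0, 1)
    g = np.tanh(g)                                 # candidate cell update
    c = f * c_prev + i * g                         # cell state carries long-range memory
    h = o * np.tanh(c)                             # hidden state / output at this timestep
    return h, c
```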
Daniel Hennes

Google Just Open Sourced the Artificial Intelligence Engine at the Heart of Its Online ... - 2 views

  •  
    TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. (A minimal graph example follows after these comments.)
  •  
    And the interface even looks a bit less clunky than Theano.
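To illustrate the "data flow graph" description above, a minimal example in the graph/session style TensorFlow shipped with at release (the numbers are arbitrary):

```python
import tensorflow as tf

# Build the graph: nodes are operations, edges carry tensors.
a = tf.placeholder(tf.float32, name="a")
b = tf.constant(2.0, name="b")
c = a * b   # a multiplication node wired to the two tensors above

# Run the graph; the session maps it onto the available CPUs/GPUs.
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: 3.0}))   # 6.0
```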
dharmeshtailor

A Universal Training Algorithm for Quantum Deep Learning - 5 views

  •  
    Just out - I wish I could understand this :(
  •  
    ignorance is bliss :)
darioizzo2

Scientists Have Trained an AI to Spot Obesity From Space - 5 views

  •  
    If it can be done for obesity, I guess noise is also an option, right? :)
  •  
    love it