NeuroNex - Odor2Action - 0 views
-
Let's keep an eye on this... Animals use odor cues to navigate through their environments, helping them locate targets and assess danger. Much of how animal brains organize, read out, and respond to odor stimuli across spatial and temporal scales is not well understood. To tackle these questions, Odor2Action uses a highly interdisciplinary team science approach. Our work uses fruit fly, honeybee, and mouse models to determine how neural representations of odor are generated, reformatted, and translated to generate useful behaviors that guide how animals interact with their environment.
-
reminds me of the methane-smelling source-finding study we did ...
Scientists Have Trained an AI to Spot Obesity From Space - 5 views
Better Language Models and Their Implications - 1 views
-
Just read some of the samples of text generated with their neural networks, insane.
- ...3 more comments...
-
It's really lucky that it was OpenAI who made that development, and that Elon Musk is so worried about AI. This way at least they try to assess the whole spectrum of abilities and applications of this model before releasing the full research to the public.
-
They released a smaller model, I got it running on Sandy. It's fairly straightforward: https://github.com/openai/gpt-2
Artificial Neural Nets Grow Brainlike Navigation Cells - 0 views
FB pre-trained deep neural net on billion image user-hashtag dataset - 0 views
Opening the Black Box of Deep Neural Networks via Information Theory - 1 views
-
This has attracted a lot of attention in the DL community lately. Article: https://lilianweng.github.io/lil-log/2017/09/28/anatomize-deep-learning-with-information-theory.html Paper: https://arxiv.org/pdf/1703.00810.pdf
MIT, Mass Gen Aim Deep Learning at Sleep Research | NVIDIA Blog - 2 views
Comeback for Genetic Algorithms...Deep Neuroevolution! - 5 views
-
Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. For paper see: https://arxiv.org/pdf/1712.06567.pdf
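A minimal sketch of the mutation-only GA idea from that paper. Everything here is a toy stand-in (my own hypothetical setup, not from the paper): a tiny hand-coded tanh network fit to XOR plays the role of a deep RL policy, and negative squared error plays the role of episode reward.

```python
import math
import random

random.seed(0)

# Toy task: fit XOR with a 2-2-1 tanh network. In deep neuroevolution the
# "genome" is the full parameter vector of a deep policy network and
# fitness is the episode reward; here fitness is just -squared error.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_PARAMS = 9  # 2*2 hidden weights + 2 hidden biases + 2 output weights + 1 bias

def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(w):
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(pop_size=50, generations=200, sigma=0.1, elite=10):
    pop = [[random.gauss(0, 1) for _ in range(N_PARAMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]  # elites survive unmutated (elitism)
        # Mutation-only GA: offspring are Gaussian perturbations of
        # randomly chosen elite parents; no crossover.
        pop = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - elite)
        ]
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 3))
```

Because the elites carry over unmutated, the best fitness is monotonically non-decreasing, which is what makes the population-based search usable as a drop-in alternative to gradient-based RL training.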
-
Interesting pointers in this one! I would like to explore neuroevolution as well, although it seems extremely resource-demanding?
-
Not necessarily, I think it can be made much faster by hybridizing it with backprop and Taylor maps. It's one of the ideas in the closet we still have not explored (Differential Intelligence: accelerating neuroevolution).
Neural-network quantum state tomography | Nature Physics - 4 views
When AI is made by AI, results are impressive - 6 views
-
This has been around for over a year. The current trend in deep learning is "deeper is better", but a consequence of this is that for a given network depth we can only feasibly evaluate a tiny fraction of the "search space" of NN architectures. The current approach to choosing a network architecture is to iteratively add more layers/units and keep the architecture which gives an increase in accuracy on some held-out data set, i.e. we have the following information: {NN, accuracy}. Clearly, this process can be automated by using the accuracy as a 'signal' to a learning algorithm. The novelty in this work is that they use reinforcement learning with a recurrent neural network controller trained by a policy gradient - a gradient-based method - where previously evolutionary algorithms would typically have been used. In summary, yes, the results are impressive - BUT this was only possible because they had access to Google's resources. An evolutionary approach would probably end up with the same architecture; it would just take longer. This is part of a broader research area in deep learning called 'meta-learning', which seeks to automate all aspects of neural network training.
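To make the "{NN, accuracy} as a signal" loop concrete, here is a minimal sketch. Note the substitutions: plain random search stands in for the RNN controller trained by policy gradient, and a hypothetical proxy_accuracy function stands in for actually training each candidate network on data.

```python
import random

random.seed(1)

# A toy architecture search space (hypothetical, for illustration only).
SEARCH_SPACE = {"depth": [2, 4, 8, 16], "width": [32, 64, 128, 256]}

def proxy_accuracy(arch):
    # Stand-in for the expensive step: "train the child network, then
    # evaluate it on held-out data". Here we pretend accuracy peaks at a
    # moderate depth/width and is slightly noisy.
    d, w = arch["depth"], arch["width"]
    return 0.9 - 0.01 * abs(d - 8) - 0.0005 * abs(w - 128) + random.gauss(0, 0.005)

def random_search(trials=100):
    best_arch, best_acc = None, float("-inf")
    for _ in range(trials):
        # Sample an architecture; the real work uses an RNN controller
        # whose sampling distribution is updated by policy gradient using
        # the accuracy as the reward signal.
        arch = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        acc = proxy_accuracy(arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

arch, acc = random_search()
print(arch, round(acc, 3))
```

The point of the sketch is only the shape of the loop: propose, evaluate, feed the accuracy back. Whether the proposer is random search, an evolutionary algorithm, or an RL-trained controller changes the sample efficiency, not the basic scheme.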
-
Btw that techxplore article was cringeworthy to read - if interested, read this article instead: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
Google's AI Wizard Unveils a New Twist on Neural Networks - 2 views
-
"Hinton's new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton's capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits." Links to papers: https://arxiv.org/abs/1710.09829 https://openreview.net/forum?id=HJWLfGWRb&noteId=HJWLfGWRb
-
impressive!
-
seems a very impressive guy: "Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. "Everyone has been waiting for it and looking for the next great leap from Geoff," says Kyunghyun Cho, a professor"