
Advanced Concepts Team: Group items matching "neural" in title, tags, annotations or url

dodo47

Plenoxels: Radiance Fields without Neural Networks - 1 views

shared by dodo47 on 11 Dec 21
eblazquez

Intel launches its next-generation neuromorphic processor-so, what's that again? | Ars Technica - 1 views

  •  
    Seems to be a fun playground for Spiking Neural Networks, right (from my newbie PoV)?
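The basic unit in such a playground is the leaky integrate-and-fire (LIF) neuron. A minimal numpy sketch, not tied to Loihi or any particular neuromorphic toolchain (all names and parameter values here are illustrative):

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: dv/dt = (v_rest - v + I) / tau,
    with a spike and reset whenever the membrane potential crosses threshold."""
    v = v_rest
    spikes, trace = [], []
    for i_t in input_current:
        v += dt * (v_rest - v + i_t) / tau
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset          # reset after spiking
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# a constant suprathreshold input produces regular spiking
spikes, trace = simulate_lif(np.full(200, 1.5))
print(spikes.sum(), "spikes in 200 steps")
```

Because information lives in spike timing rather than activations, this is also why SNNs map naturally onto event-driven hardware like the chip in the article.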
pablo_gomez

Introducing Triton: Open-Source GPU Programming for Neural Networks - 1 views

shared by pablo_gomez on 28 Jul 21
  •  
    Might be of interest for torchquad and other projects
marenr

NeuroNex - Odor2Action - 0 views

  •  
    Let's keep an eye on this... Animals use odor cues to navigate through their environments, helping them locate targets and assess danger. Much of how animal brains organize, read out, and respond to odor stimuli across spatial and temporal scales is not well understood. To tackle these questions, Odor2Action uses a highly interdisciplinary team science approach. Our work uses fruit fly, honeybee, and mouse models to determine how neural representations of odor are generated, reformatted, and translated to generate useful behaviors that guide how animals interact with their environment.
  •  
    reminds me of the methane-smelling source-finding study we did ...
microno95

Differences between deep neural networks and human perception | MIT News - 2 views

  •  
    The generated inputs are quite strange, I wonder where else something like this occurs.
microno95

Introducing LCA: Loss Change Allocation for Neural Network Training | Uber Engineering Blog - 2 views

  •  
    Fascinating insight into the question of how networks learn.
mkisantal

Robots Made Out of Branches Use Deep Learning to Walk - IEEE Spectrum - 1 views

  •  
    Random branches are collected, scanned in 3D, and connected with servos. Then a neural network is trained to control this "robot".
darioizzo2

Scientists Have Trained an AI to Spot Obesity From Space - 5 views

  •  
    If it can be done for obesity, I guess noise is also an option right? :)
  •  
    love it
mkisantal

Better Language Models and Their Implications - 1 views

  •  
    Just read some of the samples of text generated with their neural networks, insane.
  •  
    "Pérez and his friends were astonished to see the unicorn herd. These creatures could be seen from the air without having to move too much to see them - they were so close they could touch their horns. While examining these bizarre creatures the scientists discovered that the creatures also spoke some fairly regular English. Pérez stated, "We can see, for example, that they have a common 'language,' something like a dialect or dialectic."
  •  
    Shocking. I assume that this could indeed have severe implications if it gets in the "wrong hands".
  •  
    "Feed it the first few paragraphs of a Guardian story about Brexit, and its output is plausible newspaper prose, replete with "quotes" from Jeremy Corbyn, mentions of the Irish border, and answers from the prime minister's spokesman." https://www.youtube.com/watch?time_continue=37&v=XMJ8VxgUzTc "Feed it the opening line of George Orwell's Nineteen Eighty-Four - "It was a bright cold day in April, and the clocks were striking thirteen" - and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: "I was in my car on my way to a new job in Seattle. I put the gas in, put the key in, and then I let it run. I just imagined what the day would be like. A hundred years from now. In 2045, I was a teacher in some school in a poor part of rural China. I started with Chinese history and history of science." (https://www.theguardian.com/technology/2019/feb/14/elon-musk-backed-ai-writes-convincing-news-fiction)
  •  
    It's really lucky that it was OpenAI who made that development and Elon Musk is so worried about AI. This way at least they try to assess the whole spectrum of abilities and applications of this model before releasing the full research to the public.
  •  
    They released a smaller model, I got it running on Sandy. It's fairly straightforward: https://github.com/openai/gpt-2
mkisantal

Visualizing the Loss Landscape of Neural Nets - 1 views

shared by mkisantal on 10 Dec 18
  •  
    Really nice visualizations for rectifier (ReLU) neural nets, illustrating the effects of skip-connections, depth, width, etc. on the loss function curvature.
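The core trick in that paper is simple enough to sketch: pick random directions in parameter space, rescale them to the weights' own scale (a simplified stand-in for the paper's filter-wise normalization), and evaluate the loss on a 2D slice through the trained minimum. A toy numpy version on linear regression, purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy "trained" model: linear regression solved exactly on synthetic data
X = rng.normal(size=(100, 5))
y = X @ np.array([1., -2., 0.5, 0., 3.])
w_star = np.linalg.lstsq(X, y, rcond=None)[0]

def loss(w):
    return np.mean((X @ w - y) ** 2)

# two random directions, rescaled to the norm of the trained weights
d1 = rng.normal(size=w_star.shape)
d2 = rng.normal(size=w_star.shape)
d1 *= np.linalg.norm(w_star) / np.linalg.norm(d1)
d2 *= np.linalg.norm(w_star) / np.linalg.norm(d2)

# evaluate the loss on a 2D slice centered on the minimum
alphas = np.linspace(-1, 1, 21)
surface = np.array([[loss(w_star + a * d1 + b * d2) for b in alphas]
                    for a in alphas])
print("loss at center:", surface[10, 10])
```

For a real network you would plot `surface` as a contour map; the paper's point is that with proper normalization these slices make effects like skip-connections visibly smooth out the landscape.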
jaihobah

Artificial Neural Nets Grow Brainlike Navigation Cells - 0 views

  •  
    Faced with a navigational challenge, neural networks spontaneously evolved units resembling the grid cells that help living animals find their way.
dharmeshtailor

FB pre-trained deep neural net on billion image user-hashtag dataset - 0 views

  •  
    Dataset automatically constructed from public images uploaded by users on FB/Instagram, with the hashtags used as labels! They refer to this as 'weakly supervised learning'. The neural net was then fine-tuned for ImageNet and achieved a record 85.4% accuracy.
jcunha

Alibaba is making its own neural network chip - 3 views

  •  
    The race for AI chips intensifies.
dharmeshtailor

Opening the Black Box of Deep Neural Networks via Information Theory - 1 views

Marcus Maertens

MIT, Mass Gen Aim Deep Learning at Sleep Research | NVIDIA Blog - 2 views

  •  
    Neural Networks to analyse sleeplessness.
dharmeshtailor

Comeback for Genetic Algorithms...Deep Neuroevolution! - 5 views

  •  
    Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning. For paper see: https://arxiv.org/pdf/1712.06567.pdf
  •  
    Interesting pointers in this one! I would like to explore neuroevolution as well, although it seems extremely resource-demanding?
  •  
    Not necessarily, I think it can be made much faster by hybridizing it with backprop and Taylor maps. It's one of the ideas in the closet we still have not explored (Differential Intelligence: accelerating neuroevolution).
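The GA in the Uber paper is refreshingly simple: a population of flat weight vectors, truncation selection, Gaussian mutation, and elitism, no crossover and no gradients. A toy numpy sketch of the same loop, on XOR with a tiny MLP instead of Atari (all sizes and rates are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR task: fitness = negative mean squared error of a tiny 2-4-1 MLP
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0., 1., 1., 0.])

def fitness(theta):
    W1 = theta[:8].reshape(2, 4); b1 = theta[8:12]
    W2 = theta[12:16]; b2 = theta[16]
    h = np.tanh(X @ W1 + b1)
    out = h @ W2 + b2
    return -np.mean((out - y) ** 2)

# GA: truncation selection + Gaussian mutation + elitism
pop = rng.normal(size=(200, 17))
for gen in range(300):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-20:]]            # keep the top 10%
    parents = elite[rng.integers(0, 20, size=200)]
    pop = parents + 0.1 * rng.normal(size=parents.shape)
    pop[0] = elite[-1]                               # best survives unmutated

best = max(pop, key=fitness)
print("best MSE:", -fitness(best))
```

The resource demand comes purely from fitness evaluations, which is also why the approach parallelizes so trivially compared to backprop.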
jcunha

When AI is made by AI, results are impressive - 6 views

  •  
    This has been around for over a year. The current trend in deep learning is "deeper is better". But a consequence of this is that for a given network depth, we can only feasibly evaluate a tiny fraction of the "search space" of NN architectures. The current approach to choosing a network architecture is to iteratively add more layers/units and keeping the architecture which gives an increase in the accuracy on some held-out data set i.e. we have the following information: {NN, accuracy}. Clearly, this process can be automated by using the accuracy as a 'signal' to a learning algorithm. The novelty in this work is they use reinforcement learning with a recurrent neural network controller which is trained by a policy gradient - a gradient-based method. Previously, evolutionary algorithms would typically be used. In summary, yes, the results are impressive - BUT this was only possible because they had access to Google's resources. An evolutionary approach would probably end up with the same architecture - it would just take longer. This is part of a broader research area in deep learning called 'meta-learning' which seeks to automate all aspects of neural network training.
  •  
    Btw that techxplore article was cringeworthy to read - if interested, read this article instead: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
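The "accuracy as a signal" idea above doesn't need an RNN controller to demonstrate; the cheapest stand-in is random search over architectures scored on held-out data. A toy numpy sketch where the "architecture" is just a polynomial degree (an illustrative simplification, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic task: the true function is a cubic
x = rng.uniform(-1, 1, 200)
y = x ** 3 - x + 0.05 * rng.normal(size=200)
x_tr, y_tr, x_val, y_val = x[:100], y[:100], x[100:], y[100:]

def fit_eval(degree):
    """Train a model of the given 'architecture' (polynomial degree) and
    return its held-out error -- the signal the search optimizes."""
    w = np.linalg.lstsq(np.vander(x_tr, degree + 1), y_tr, rcond=None)[0]
    return np.mean((np.vander(x_val, degree + 1) @ w - y_val) ** 2)

# random search over architectures, keeping the best validation score
candidates = rng.integers(1, 15, size=30)
scores = {int(d): fit_eval(int(d)) for d in candidates}
best = min(scores, key=scores.get)
print("chosen degree:", best, "val MSE:", scores[best])
```

The RL-with-policy-gradient controller in the paper replaces this blind sampling with a learned proposal distribution over {architecture, accuracy} pairs, which is where the Google-scale compute goes.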
jaihobah

Google's AI Wizard Unveils a New Twist on Neural Networks - 2 views

  •  
    "Hinton's new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton's capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits." Links to papers: https://arxiv.org/abs/1710.09829 https://openreview.net/forum?id=HJWLfGWRb&noteId=HJWLfGWRb
  •  
    impressive!
  •  
    Seems a very impressive guy: "Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. "Everyone has been waiting for it and looking for the next great leap from Geoff," says Kyunghyun Cho, a professor"
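One concrete piece of the capsule papers is easy to show: the "squash" nonlinearity, which keeps a capsule vector's direction but squashes its length into [0, 1) so the length can be read as the probability that the entity exists. A small numpy sketch of the formula from the Sabour, Frosst & Hinton paper (the `eps` guard is my addition for numerical safety):

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Capsule squash nonlinearity:
    v = (|s|^2 / (1 + |s|^2)) * (s / |s|)
    Direction is preserved; length is mapped into [0, 1)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    norm = np.sqrt(sq_norm + eps)
    return (sq_norm / (1.0 + sq_norm)) * s / norm

caps = np.array([[0.1, 0.0], [3.0, 4.0]])   # one short, one long capsule
lengths = np.linalg.norm(squash(caps), axis=-1)
print(lengths)   # short vector -> near 0, long vector -> near 1
```

Everything else in the architecture (routing-by-agreement between capsule layers) is built on top of vectors shaped by this function.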