Group items tagged: learning

LeopoldS

physicists explain what AI researchers are actually doing - 5 views

  •  
    love this one ... it seems to take physicists to explain to the AI crowd what they are actually doing ... From the abstract:

    "Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping from the variational renormalization group, first introduced by Kadanoff, to deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data."
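For the curious, the heart of the mapping, as I understand it from the paper (notation mine, may differ from theirs): Kadanoff's variational RG defines a coarse-grained Hamiltonian over hidden spins $h$ via an operator $T_\lambda(v,h)$, while an RBM with energy $E_\lambda(v,h)$ defines one via its marginal over the visible spins $v$:

$$
e^{-H^{\mathrm{RG}}_\lambda[h]} = \operatorname{Tr}_v\, e^{\,T_\lambda(v,h) - H(v)},
\qquad
e^{-H^{\mathrm{RBM}}_\lambda[h]} = \operatorname{Tr}_v\, e^{-E_\lambda(v,h)}.
$$

Choosing $T_\lambda(v,h) = -E_\lambda(v,h) + H(v)$ makes the two coarse-grained Hamiltonians identical, so each stacked RBM layer can be read as one variational RG step.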
Luís F. Simões

Poison Attacks Against Machine Learning - Slashdot - 1 views

  • Support Vector Machines (SVMs) are fairly simple but powerful machine learning systems. They learn from data and are usually trained before being deployed.
  • In many cases they need to continue to learn on the job, and this raises the possibility of feeding them data that causes them to make bad decisions. Three researchers have recently demonstrated how to do this with the minimum of poisoned data for maximum effect. What they discovered is that their method was capable of having a surprisingly large impact on the performance of the SVMs tested. They also point out that it could be possible to direct the induced errors so as to produce particular types of error. (A toy illustration follows this item.)
  •  
    http://arxiv.org/abs/1206.6389v2 for Guido; an interesting example of "takeover" research
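As promised above, a toy illustration of training-data poisoning against an SVM, using scikit-learn. Note this is naive label flipping with made-up numbers, NOT the paper's gradient-based attack, which chooses its poison points optimally:

```python
# Toy poisoning demo against an SVM. This is naive label flipping,
# not the optimal attack of arXiv:1206.6389; all numbers are invented.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_blobs(n_samples=400, centers=2, cluster_std=2.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = SVC(kernel="linear").fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Inject a few points deep inside class 1, mislabelled as class 0.
n_poison = 15
centre = X_train[y_train == 1].mean(axis=0)
X_poison = centre + 0.5 * np.random.RandomState(1).randn(n_poison, 2)
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, np.zeros(n_poison, dtype=int)])

poisoned = SVC(kernel="linear").fit(X_bad, y_bad)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```

Even a handful of badly placed points shifts a linear SVM's boundary; the paper's contribution is picking those points via gradient ascent on the validation error rather than by hand.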
Luís F. Simões

How to Grow a Mind: Statistics, Structure, and Abstraction - 4 views

  •  
    a nice review on the wonders of Hierarchical Bayesian models. It cites a paper on probabilistic programming languages that might be relevant given our recent discussions. At Hippo's farewell lunch there was a discussion on how kids are able to learn something as complex as language from a limited number of observations, while machine learning algorithms, no matter how many millions of instances you throw at them, don't improve beyond some point. If that subject interests you, you might like this paper. (A toy sketch of a hierarchical model follows this item.)
  •  
    Had an opportunity to listen to JBT and TLG at a summer school... if they're half as good in writing as they are in speaking, it should be a decent read...
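To make the "hierarchical" part concrete, here is a minimal toy (all data and parameters invented, not from the paper): several coins share an unknown population-level bias, so flipping a few coins already tells you what to expect from a brand-new one, which is the sort of transfer from sparse data the review discusses.

```python
# Minimal hierarchical Bayesian sketch (all numbers invented):
# each coin's bias theta_i ~ Beta(c*m, c*(1-m)); the population mean m
# is unknown and shared across coins. Seeing a few heads-heavy coins
# shifts our prediction for a brand-new coin before we ever flip it.
import numpy as np
from scipy.stats import betabinom

coins = [(9, 10), (8, 10), (10, 10)]    # (heads, flips) per coin, made up
c = 10.0                                # fixed concentration, assumed
m_grid = np.linspace(0.01, 0.99, 197)   # grid over population mean m

log_post = np.zeros_like(m_grid)        # flat prior on m
for heads, n in coins:
    # per-coin bias integrated out analytically -> beta-binomial likelihood
    log_post += betabinom.logpmf(heads, n, c * m_grid, c * (1 - m_grid))

post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean of m:", np.dot(m_grid, post))
# That posterior, not raw counts, is what predicts the next, unseen coin.
```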
jcunha

When AI is made by AI, results are impressive - 6 views

  •  
    This has been around for over a year. The current trend in deep learning is "deeper is better". But a consequence of this is that, for a given network depth, we can only feasibly evaluate a tiny fraction of the "search space" of NN architectures. The current approach to choosing a network architecture is to iteratively add more layers/units, keeping the architecture that gives an increase in accuracy on some held-out data set, i.e. we have the following information: {NN, accuracy}. Clearly, this process can be automated by using the accuracy as a 'signal' to a learning algorithm (a cartoon of this loop follows this item). The novelty in this work is that they use reinforcement learning with a recurrent neural network controller which is trained by a policy gradient - a gradient-based method. Previously, evolutionary algorithms would typically be used. In summary, yes, the results are impressive - BUT this was only possible because they had access to Google's resources. An evolutionary approach would probably end up with the same architecture - it would just take longer. This is part of a broader research area in deep learning called 'meta-learning', which seeks to automate all aspects of neural network training.
  •  
    Btw, that techxplore article was cringeworthy to read - if interested, read this article instead: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
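A cartoon of the accuracy-as-reward-signal loop described above. This is plain REINFORCE over a made-up two-decision search space, not the paper's RNN controller; evaluate() is a stand-in for actually training a child network:

```python
# Cartoon of RL-based architecture search: held-out accuracy is the
# reward signal for a softmax policy trained with REINFORCE.
# Search space and evaluate() are invented stand-ins.
import math, random

DEPTHS = [2, 4, 8, 16]
WIDTHS = [32, 64, 128, 256]

def evaluate(depth, width):
    # Stand-in for "train the child network, return held-out accuracy".
    return 1.0 - 1.0 / (depth * math.log2(width)) + random.gauss(0, 0.01)

def softmax(prefs):
    w = [math.exp(p) for p in prefs]
    s = sum(w)
    return [x / s for x in w]

prefs = {"depth": [0.0] * 4, "width": [0.0] * 4}
baseline, lr = 0.0, 2.0

for step in range(300):
    probs = {k: softmax(v) for k, v in prefs.items()}
    choice = {k: random.choices(range(4), weights=probs[k])[0] for k in prefs}
    reward = evaluate(DEPTHS[choice["depth"]], WIDTHS[choice["width"]])
    baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
    adv = reward - baseline
    for k in prefs:                            # REINFORCE update per decision
        for j in range(4):
            grad = (1.0 if j == choice[k] else 0.0) - probs[k][j]
            prefs[k][j] += lr * adv * grad

print("best depth:", DEPTHS[max(range(4), key=lambda j: prefs["depth"][j])])
print("best width:", WIDTHS[max(range(4), key=lambda j: prefs["width"][j])])
```

Swap the REINFORCE loop for a population with mutation and you have the evolutionary alternative mentioned above; the {NN, accuracy} interface is identical.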
ESA ACT

PLoS Computational Biology - Machine Learning and Its Applications to Biology - 0 views

  •  
    A tutorial on machine learning. Especially the part on unsupervised learning could be interesting.
anonymous

Home - Toronto Deep Learning - 2 views

  •  
    An online implementation of a deep-learning-based image classifier. Try taking a picture with your phone and uploading it there. Pretty impressive results. EDIT: Okay, it works best with well-exposed simple objects (pen, mug).
jcunha

The thermodynamics of learning - 3 views

  •  
    It is typically considered that the brain's learning process is highly energy-efficient. While investigating how efficiently the brain can learn new information, physicists have found that, at the neuronal level, learning efficiency is ultimately limited by the laws of thermodynamics.
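My guess is that the relevant bound is Landauer's principle (the blurb doesn't spell out the formula): erasing one bit of information at temperature $T$ dissipates at least

$$
E_{\min} = k_B T \ln 2 \approx 3 \times 10^{-21}\ \mathrm{J} \quad \text{at } T \approx 300\ \mathrm{K},
$$

so any physical learner that overwrites its stored information, neurons included, pays an unavoidable energy cost per bit.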
pacome delva

Superconductors could simulate the brain - 2 views

  • [The researchers] have shown how networks of artificial neurons containing two Josephson junctions would outpace more traditional computer-simulated brains by many orders of magnitude. Studying such junction-based systems could improve our understanding of long-term learning and memory, along with factors that may contribute to disorders like epilepsy.
  • The existing design does not permit learning, since the weighting of connections between synapses cannot be changed over time, but Segall believes that if this feature can be added, their neurons might allow a lifetime's worth of learning to be simulated in five or ten minutes. This, he adds, should help us to understand how learning changes with age and might give us clues as to how long-term disorders like Parkinson's disease develop.
  •  
    What I don't get is how they measure the extent of matching: how "close", or realistic, is the model they achieve with different methods? And moreover, if weights cannot adapt and there are no direct connections between neurons and layers of neurons, isn't that a very arbitrary matching?
ESA ACT

[0705.0693v1] Learning to Bluff - 0 views

  •  
    Learning to Bluff
ESA ACT

Molecular circuits for associative learning in single-celled organisms - 0 views

  •  
    Unicellular organisms learn. How they do it is explained somewhere in the paper.
Thijs Versloot

Scikit-learn is an open-source machine learning library for Python. Give it a try here! - 5 views

  •  
    Browsing Kaggle...
  •  
    Very nice library, we actually use it for GTOC7.
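If you want to try it beyond the browser demo, a minimal end-to-end example (dataset and classifier are arbitrary choices, just for illustration):

```python
# Minimal scikit-learn example: load data, train, evaluate.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Every estimator in the library follows this same fit/predict interface, which is much of why it is so pleasant to use.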
mkisantal

Reinforcement Learning with Prediction-Based Rewards - 3 views

  •  
    Prediction-based method for encouraging reinforcement learning agents to explore their environments through curiosity (reward for unfamiliar states). Learns some games without any extrinsic reward! (A toy sketch of the mechanism follows this item.)
  •  
    Fun failure case: agent gets stuck in front of TV.
  •  
    I've not read this article, but on a related note: curiosity, and various metrics for it, has been explored for some time in robotics (outside of RL) as a framework for exploring (partially) unfamiliar environments. I came across some papers on this topic applied to UAVs when prepping for a PhD application. This one (http://www.cim.mcgill.ca/~yogesh/publications/crv2014.pdf) comes to mind - it used a topic-modelling approach.
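For the curious, the core trick as I understand the OpenAI post, in a toy numpy sketch (all sizes and rates invented): a fixed random network embeds each state, a second network is trained to predict that embedding, and the prediction error is the curiosity bonus, which shrinks as states become familiar.

```python
# Toy sketch of random-network-distillation-style curiosity.
# A fixed random net f(s) embeds states; a predictor g(s) is trained to
# match it. Intrinsic reward = ||g(s) - f(s)||^2: large for novel states,
# shrinking where the predictor has trained often. Sizes are invented.
import numpy as np

rng = np.random.default_rng(0)
D_STATE, D_EMB = 4, 8

W_target = rng.normal(size=(D_STATE, D_EMB))       # fixed, never trained
W_pred = 0.1 * rng.normal(size=(D_STATE, D_EMB))

def target(s):  return np.tanh(s @ W_target)
def predict(s): return np.tanh(s @ W_pred)

def intrinsic_reward_and_update(s, lr=0.05):
    global W_pred
    err = predict(s) - target(s)
    reward = float(err @ err)                      # curiosity bonus
    # one SGD step on the squared error (tanh derivative: 1 - g^2)
    W_pred -= lr * np.outer(s, err * (1 - predict(s) ** 2))
    return reward

familiar = rng.normal(size=D_STATE)
for _ in range(500):                               # visit one state repeatedly
    intrinsic_reward_and_update(familiar)
print("familiar state bonus:", intrinsic_reward_and_update(familiar))
print("novel state bonus:   ",
      intrinsic_reward_and_update(rng.normal(size=D_STATE)))
```

The TV failure case above falls out naturally: a noise source keeps producing unpredictable states, so the bonus never decays.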
Luís F. Simões

The AI Revolution: Why Deep Learning Is Suddenly Changing Your Life - 1 views

  • Indeed, corporations just may have reached another inflection point. “In the past,” says Andrew Ng, chief scientist at Baidu Research, “a lot of S&P 500 CEOs wished they had started thinking sooner than they did about their Internet strategy. I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy.” Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”
  •  
    A good historical overview of the Deep Learning revolution. If you think the quote above is an exaggeration, here is some fresh news from Microsoft: Internal email: Microsoft forms new 5,000-person AI division
Dario Izzo

High-speed light-based systems could replace supercomputers for certain 'deep learning'... - 3 views

  •  
    A new optics-based computer architecture.
htoftevaag

Machine Learning for Accelerated and Inverse Metasurface Design - 0 views

  •  
    If you have 45 minutes and you want to learn a bit about inverse design of metasurfaces using machine learning, then I would highly recommend this talk. I found it very easy to follow both the physics and machine learning parts of it.
darioizzo2

Integrating Machine Learning for Planetary Science: Perspectives for the Next Decade - 3 views

Hey! We also have a review paper on ML/AI and G&C -> https://link.springer.com/article/10.1007/s42064-018-0053-6; weird that they found those other papers instead ... I guess the keyword machine...

AI PHY

Marcus Maertens

New Techniques from Google and Ray Kurzweil Are Taking Artificial Intelligence to Anoth... - 1 views

  •  
    Winter is coming... and deep learning, too!
  •  
    "Sergey Brin has said he wants to build a benign version of HAL in 2001: A Space Odyssey" ... didn't they try to do that in that movie called "2001: A Space Odyssey" ?
Dario Izzo

Bold title ..... - 3 views

  •  
    I got a fever. And the only prescription is more cat faces! ...../\_/\ ...(=^_^) ..\\(___)

    The article sounds quite interesting, though. I think the idea of a "fake" agent that tries to trick the classifier while both co-evolve is nice, as it allows the classifier to first cope with the lower-order complexity of the problem. As the fake agent mimics the real agent better and better, the classifier has time to add complexity to itself instead of trying to do it all at once. It would be interesting to see whether this is later reflected in the neural net's structure, i.e. having core regions that deal with lower-order approximation / classification and peripheral regions (added at a later stage) that deal with nuances as they become apparent. Also, this approach will develop not just a classifier for agent behavior but at the same time a model of the same. The latter may be useful in itself and might in some cases be the actual goal of the "researcher". I suspect, however, that the problem of producing / evolving the "fake agent" model might in most cases be at least as hard as producing a working classifier...
  •  
    This paper from 2014 seems to describe something pretty similar (except for not using physical robots, etc...): https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf
  •  
    Yes, this IS basically adversarial learning, except that the generator, instead of being a neural net, is some kind of swarm parametrization. I just love how they rebranded it, though. :))
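For reference, the adversarial objective from the Goodfellow et al. paper linked above, in its standard form:

$$
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\!\big(1 - D(G(z))\big)\right]
$$

The discriminator D learns to tell real samples from the generator G's fakes, while G is trained to fool D; in the article discussed here, the swarm parametrization plays the role of G.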
jcunha

Why does deep and cheap learning work so well? - 2 views

  •  
    Physicists from MIT use physics to explain why deep learning is so successful.
Dario Izzo

Stacked Approximated Regression Machine: A Simple Deep Learning Approach - 5 views

  •  
    from one of the reddit threads discussing this: "bit fishy, crazy if real". "Incredible claims:
    - train using only about 10% of ImageNet-12, i.e. around 120k images (they use 6k images per arm)
    - get the same or better accuracy as the equivalent VGG net
    - training is not via backprop but a much simpler PCA + sparsity regime (see section 4.1); shouldn't take more than 10 hours on CPU, probably"
    (A crude sketch of backprop-free, layer-wise training follows this item.)
  •  
    clicking the link says the manuscript was withdrawn :))
  •  
    This "one-shot learning" paper by Googe Deepmind also claims to be able to learn from very few training data. Thought it might be interesting for you guys: https://arxiv.org/pdf/1605.06065v1.pdf