
Advanced Concepts Team / Group items tagged: machine learning


LeopoldS

physicists explain what AI researchers are actually doing - 5 views

  •  
    love this one ... it seems it takes physicists to explain to the AI crowd what they are actually doing ... "Deep learning is a broad set of techniques that uses multiple layers of representation to automatically learn relevant features directly from structured data. Recently, such techniques have yielded record-breaking results on a diverse set of difficult machine learning tasks in computer vision, speech recognition, and natural language processing. Despite the enormous success of deep learning, relatively little is understood theoretically about why these techniques are so successful at feature learning and compression. Here, we show that deep learning is intimately related to one of the most important and successful techniques in theoretical physics, the renormalization group (RG). RG is an iterative coarse-graining scheme that allows for the extraction of relevant features (i.e. operators) as a physical system is examined at different length scales. We construct an exact mapping between the variational renormalization group, first introduced by Kadanoff, and deep learning architectures based on Restricted Boltzmann Machines (RBMs). We illustrate these ideas using the nearest-neighbor Ising Model in one and two dimensions. Our results suggest that deep learning algorithms may be employing a generalized RG-like scheme to learn relevant features from data."
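
For the curious, a minimal sketch of the two ingredients the paper connects (my own toy sizes and sampler, not the authors' code): sample 1D nearest-neighbour Ising configurations with a crude Metropolis sampler, then fit a tiny Restricted Boltzmann Machine to them with one-step contrastive divergence (CD-1).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ising_1d(n_spins=16, n_samples=500, beta=0.8, n_sweeps=200):
    """Crude Metropolis sampler for the 1D nearest-neighbour Ising model."""
    data = np.empty((n_samples, n_spins))
    for k in range(n_samples):
        s = rng.choice([-1, 1], size=n_spins)
        for _ in range(n_sweeps):
            i = rng.integers(n_spins)
            dE = 2 * s[i] * (s[(i - 1) % n_spins] + s[(i + 1) % n_spins])
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i] = -s[i]
        data[k] = (s + 1) / 2          # map spins {-1,1} -> {0,1} for the RBM
    return data

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Tiny RBM: the hidden units play the role of coarse-grained "block spins".
n_vis, n_hid, lr = 16, 4, 0.05
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

data = sample_ising_1d()
for epoch in range(50):
    for v0 in data:
        # positive phase
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hid) < p_h0).astype(float)
        # negative phase: one Gibbs step (CD-1)
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = (rng.random(n_vis) < p_v1).astype(float)
        p_h1 = sigmoid(v1 @ W + b_h)
        # gradient step
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

# Inspect W: hidden units often end up coupling most strongly to groups of
# neighbouring spins -- the qualitative analogue of Kadanoff's block spins.
print(np.round(W, 2))
```
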
Luís F. Simões

Poison Attacks Against Machine Learning - Slashdot - 1 views

  • Support Vector Machines (SVMs) are fairly simple but powerful machine learning systems. They learn from data and are usually trained before being deployed.
  • In many cases they need to continue to learn as they do the job, and this raises the possibility of feeding them data that causes them to make bad decisions. Three researchers have recently demonstrated how to do this with the minimum of poisoned data for maximum effect. What they discovered is that their method was capable of having a surprisingly large impact on the performance of the SVMs tested. They also point out that it could be possible to direct the induced errors so as to produce particular types of error. (A minimal illustration of the idea follows below.)
  •  
    http://arxiv.org/abs/1206.6389v2 for Guido; an interesting example of "takeover" research
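
A minimal, hand-rolled illustration of the idea (not the paper's attack, which optimizes the poison points' positions by gradient ascent on the SVM objective; here we just inject a few deliberately mislabeled points). Assumes scikit-learn is available; the dataset and numbers are made up.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Two well-separated classes.
X, y = make_blobs(n_samples=400, centers=[(-2, 0), (2, 0)], cluster_std=1.0,
                  random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

clean = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
print("clean accuracy:   ", clean.score(X_te, y_te))

# Naive poisoning: add a handful of deliberately mislabeled points placed
# deep inside the opposite class (the paper instead optimizes their position
# for maximum damage, so its effect is far larger per poison point).
n_poison = 10
X_poison = rng.normal(loc=(2.5, 0.0), scale=0.3, size=(n_poison, 2))
y_poison = np.zeros(n_poison, dtype=int)          # wrong label on purpose

X_bad = np.vstack([X_tr, X_poison])
y_bad = np.concatenate([y_tr, y_poison])

poisoned = SVC(kernel="linear", C=1.0).fit(X_bad, y_bad)
print("poisoned accuracy:", poisoned.score(X_te, y_te))   # typically lower
```
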
darioizzo2

Integrating Machine Learning for Planetary Science: Perspectives for the Next Decade - 3 views

Hey! We also have a review paper on ML/AI and G&C -> https://link.springer.com/article/10.1007/s42064-018-0053-6; weird that they found those other papers instead ... I guess the keyword machine...


ESA ACT

PLoS Computational Biology - Machine Learning and Its Applications to Biology - 0 views

  •  
    A tutorial on machine learning. Especially the unsupervised learning part could be interesting.
Luís F. Simões

How to Grow a Mind: Statistics, Structure, and Abstraction - 4 views

  •  
    a nice review on the wonders of hierarchical Bayesian models. It cites a paper on probabilistic programming languages that might be relevant given our recent discussions. At Hippo's farewell lunch there was a discussion on how kids are able to learn something as complex as language from a limited amount of observations, while machine learning algorithms, no matter how many millions of instances you throw at them, don't learn beyond some point. If that subject interests you, you might like this paper. (A toy sketch of the Bayesian concept-learning idea follows below.)
  •  
    Had the opportunity to listen to JBT and TLG during a summer school... if they're half as good in writing as they are in speaking, it should be a decent read...
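
To make the "learning from a handful of examples" point concrete, here is a toy Bayesian concept-learning example in the spirit of the paper (hypotheses, priors and numbers are invented for illustration): with structured hypotheses and the size principle, four observations are enough to commit to a specific rule.

```python
import numpy as np

# Toy concept learning: "which rule generated these numbers?"
hypotheses = {
    "powers of two": {2, 4, 8, 16, 32, 64},
    "even numbers":  set(range(2, 101, 2)),
    "numbers < 100": set(range(1, 100)),
}
prior = {h: 1 / len(hypotheses) for h in hypotheses}

observations = [16, 8, 2, 64]

posterior = {}
for h, extension in hypotheses.items():
    if all(x in extension for x in observations):
        # Size principle: each consistent example has likelihood 1/|h|,
        # so smaller (more specific) hypotheses win after just a few examples.
        likelihood = (1 / len(extension)) ** len(observations)
    else:
        likelihood = 0.0
    posterior[h] = prior[h] * likelihood

Z = sum(posterior.values())
for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{h:>15}: {p / Z:.4f}")
```
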
htoftevaag

Machine Learning for Accelerated and Inverse Metasurface Design - 0 views

  •  
    If you have 45 minutes and you want to learn a bit about inverse design of metasurfaces using machine learning, then I would highly recommend this talk. I found it very easy to follow both the physics and machine learning parts of it.
darioizzo2

Machine learning leads mathematicians to unsolvable problem - 1 views

  •  
    Learnability cannot be proven! An important theoretical brick in our understanding of machine learning's capabilities.
Thijs Versloot

Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master L... - 1 views

  •  
    In a world first, an artificial intelligence machine plays chess by evaluating the board rather than using brute force to work out every possible move. It's been almost 20 years since IBM's Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules. (A toy sketch of learned board evaluation follows below.)
  •  
    The disadvantage in this kind of engine lies exactly in its inability to extrapolate. You might actually be able to beat it if you play like an idiot.
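
A bare-bones sketch of what "evaluating the board" means (my own toy encoding and an untrained network; the actual engine learns its weights, reportedly through self-play): encode the position as a feature vector and have a small network map it to a score, which a shallow search can then use to rank candidate moves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Board encoding: 64 squares x 12 piece types, one-hot (a common simple choice;
# the real engine used a richer, hand-designed feature vector).
def encode(board_piece_ids):
    """board_piece_ids: length-64 array, 0 = empty square, 1..12 = piece type."""
    x = np.zeros((64, 13))
    x[np.arange(64), board_piece_ids] = 1.0
    return x[:, 1:].ravel()            # drop the 'empty' channel -> 768 features

# Tiny MLP evaluator (untrained here; the engine learns these weights).
W1 = 0.01 * rng.standard_normal((768, 64))
W2 = 0.01 * rng.standard_normal((64, 1))

def evaluate(board_piece_ids):
    h = np.tanh(encode(board_piece_ids) @ W1)
    return (h @ W2).item()

# In search, only a shallow tree is expanded and the leaves are ranked by this
# learned score, instead of brute-forcing every line to great depth.
random_position = rng.integers(0, 13, size=64)
print(evaluate(random_position))
```
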
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains is not my thing).. I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough ...... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed was the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most is that I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is, the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question of whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but it will be rather useless since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to, stuff like the Lorenz attractor (a quick numerical illustration of this sensitivity follows after this thread). In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine as long as a brain gets started :) that is the goal :)
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much in order for the brain to function and exhibit macro-characteristics useful to make science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly and in a way that stably increases the "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting inputs from our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these 'artificial brain' proposals to what extent they are trying to simulate the whole system and not just the brain." In Artificial Life the whole environment + bodies & brains is simulated. You also have the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (my take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
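
As promised above, a quick numerical illustration of the sensitivity argument (the Lorenz system standing in for any strongly nonlinear system; step size and numbers are arbitrary): two trajectories started 1e-9 apart become completely decorrelated within a few tens of time units, which is why even a perfect model does not help if the initial conditions cannot be pinned down.

```python
import numpy as np

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system (crude but sufficient here)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # almost identical initial condition

for step in range(1, 8001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        print(f"t = {step * 0.005:5.1f}  separation = {np.linalg.norm(a - b):.3e}")
```
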
jaihobah

Machine Learning's 'Amazing' Ability to Predict Chaos - 2 views

  •  
    Researchers have used machine learning to predict the chaotic evolution of a model flame front.
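
The underlying work uses reservoir computing; here is a minimal single-variable echo state network in that spirit (my own sizes, with the logistic map standing in for the flame-front data): only the linear readout is trained, yet the free-running forecast tracks the chaotic series for a short horizon before diverging, as chaos dictates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Chaotic training series: the logistic map (a stand-in for the real data).
def logistic_series(n, x0=0.2, r=3.9):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

series = logistic_series(3000)

# Echo state network: a fixed random recurrent reservoir; only the linear
# readout is trained, by ridge regression.
n_res = 300
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1

def step(state, u):
    return np.tanh(W @ state + W_in[:, 0] * u)

# Drive the reservoir with the series and collect states.
states = np.zeros((len(series) - 1, n_res))
s = np.zeros(n_res)
for t in range(len(series) - 1):
    s = step(s, series[t])
    states[t] = s

washout = 100
X, y = states[washout:], series[washout + 1:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

# Free-running forecast: feed the ESN its own predictions.
u, preds = series[-1], []
for _ in range(20):
    s = step(s, u)
    u = s @ W_out
    preds.append(u)
print(np.round(preds, 3))
```
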
Thijs Versloot

Scikit-learn is an open-source machine learning library for Python. Give it a try here! - 5 views

  •  
    Browsing Kaggle...
  •  
    Very nice library, we actually use it for GTOC7.
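
For anyone who wants the flavour before clicking through, a minimal scikit-learn example (standard usage of the library, nothing GTOC7-specific):

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Handwritten-digit classification in a few lines.
X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```
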
jcunha

When AI is made by AI, results are impressive - 6 views

  •  
    This has been around for over a year. The current trend in deep learning is "deeper is better". But a consequence of this is that, for a given network depth, we can only feasibly evaluate a tiny fraction of the "search space" of NN architectures. The current approach to choosing a network architecture is to iteratively add more layers/units and keep the architecture which gives an increase in accuracy on some held-out data set, i.e. we have the following information: {NN, accuracy}. Clearly, this process can be automated by using the accuracy as a 'signal' to a learning algorithm. The novelty in this work is that they use reinforcement learning with a recurrent neural network controller which is trained by a policy gradient - a gradient-based method. Previously, evolutionary algorithms would typically have been used. In summary, yes, the results are impressive - BUT this was only possible because they had access to Google's resources. An evolutionary approach would probably end up with the same architecture - it would just take longer. This is part of a broader research area in deep learning called 'meta-learning' which seeks to automate all aspects of neural network training. (A toy policy-gradient controller sketch follows below.)
  •  
    Btw, that techxplore article was cringeworthy to read - if interested, read this article instead: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
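
A toy sketch of the controller idea described above: REINFORCE over a few discrete architecture choices, with a made-up proxy "accuracy" so it runs instantly (in the real work every sample means training a full child network, hence the Google-scale resources, and the controller is a recurrent network rather than independent softmaxes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete architecture choices the controller picks from (illustrative only).
choices = {
    "n_layers":      [2, 4, 8, 16],
    "width":         [32, 64, 128, 256],
    "learning_rate": [1e-2, 1e-3, 1e-4],
}

# Controller: independent softmax logits per decision.
logits = {k: np.zeros(len(v)) for k, v in choices.items()}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def proxy_accuracy(arch):
    """Fake reward: pretend a mid-sized, mid-LR network is best.
    In reality this would be the validation accuracy of a trained child net."""
    score = (1.0 - abs(arch["n_layers"] - 8) / 16
                 - abs(arch["width"] - 128) / 256
                 - abs(np.log10(arch["learning_rate"]) + 3) / 4)
    return score + 0.02 * rng.standard_normal()

lr, baseline = 0.1, 0.0
for it in range(500):
    # Sample an architecture from the controller's current policy.
    idx, arch = {}, {}
    for k, vals in choices.items():
        p = softmax(logits[k])
        i = rng.choice(len(vals), p=p)
        idx[k], arch[k] = i, vals[i]

    reward = proxy_accuracy(arch)
    baseline = 0.95 * baseline + 0.05 * reward        # variance reduction

    # REINFORCE: push up log-prob of the sampled choices, scaled by advantage.
    for k in choices:
        grad = -softmax(logits[k])
        grad[idx[k]] += 1.0
        logits[k] += lr * (reward - baseline) * grad

best = {k: choices[k][int(np.argmax(logits[k]))] for k in choices}
print("controller's preferred architecture:", best)
```
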
zoervleis

Moral Machine - 1 views

shared by zoervleis on 17 Aug 16
  •  
    "A platform for public participation in and discussion of the human perspective on machine-made moral decisions" Machine Ethics is basically the return of philosophy through code. Here you can learn a bit about it, and help the MIT collect data on how humans make choices when faced with ethical dilemmas, and how we perceive AIs making such choices.
Dario Izzo

Detexify LaTeX handwritten symbol recognition - 2 views

  •  
    For hardcore LaTeX users (btw ... implemented in Haskell ... a classical machine learning app, but useful)
  •  
    Also available as an Android app (not sure if it's called "texify" or "detexify").
  •  
    works actually quite well!!
Luís F. Simões

Lockheed Martin buys first D-Wave quantum computing system - 1 views

  • D-Wave develops computing systems that leverage the physics of quantum mechanics in order to address problems that are hard for traditional methods to solve in a cost-effective amount of time. Examples of such problems include software verification and validation, financial risk analysis, affinity mapping and sentiment analysis, object recognition in images, medical imaging classification, compressed sensing and bioinformatics.
  •  
    According to the company's Wikipedia page, the computer costs $10 million. Can we then declare that quantum computing has officially arrived?! Quotes from elsewhere on the site: "first commercial quantum computing system on the market"; "our current superconducting 128-qubit processor chip is housed inside a cryogenics system within a 10 square meter shielded room". Link to the company's scientific publications. Interestingly, this company also seems to have been running a BOINC project, AQUA@home, to "predict the performance of superconducting adiabatic quantum computers on a variety of hard problems arising in fields ranging from materials science to machine learning. AQUA@home uses Internet-connected computers to help design and analyze quantum computing algorithms, using Quantum Monte Carlo techniques". List of papers coming out of it.
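
For concreteness, the kind of problem an annealer like this targets can be written as a QUBO (quadratic unconstrained binary optimization); below is a toy max-cut instance phrased that way and solved by classical brute force, just to make the problem class explicit (nothing D-Wave-specific).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# A toy max-cut instance on 12 nodes, written as a QUBO: minimize x^T Q x over
# binary x. An annealer is built to sample low-energy states of objectives of
# exactly this form; with 12 variables we can simply enumerate all of them.
n = 12
A = np.triu((rng.random((n, n)) < 0.3).astype(float), k=1)
A = A + A.T                                  # symmetric adjacency matrix

Q = A.copy()
np.fill_diagonal(Q, -A.sum(axis=1))          # with this diagonal, x^T Q x == -cut(x)

best_x, best_E = None, np.inf
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits, dtype=float)
    E = x @ Q @ x
    if E < best_E:
        best_x, best_E = x, E

print("best cut value:", -best_E)
print("partition:", best_x.astype(int))
```
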
ESA ACT

YouTube - HRI2008 - Phobot - 0 views

shared by ESA ACT on 24 Apr 09
  •  
    A robot that is afraid of things. Brilliant: Machines learn human weaknesses instead of human strengths...
jcunha

Quantum machine learning - 1 views

  •  
    Quantum computing and machine learning in the same sentence. The association started to be put forward by Google and NASA playing with D-Wave computers. Meanwhile, in the academic literature: https://journals.aps.org/prx/pdf/10.1103/PhysRevX.4.031002
jmlloren

Unsupervised Generative Modeling Using Matrix Product States - 2 views

  •  
    Our work sheds light on many interesting directions of future exploration in the development of quantum-inspired algorithms for unsupervised machine learning, which may possibly be realized on quantum devices.
thomasvas

AI researchers allege that machine learning is alchemy - 9 views

http://www.sciencemag.org/news/2018/05/ai-researchers-allege-machine-learning-alchemy


started by thomasvas on 04 May 18 no follow-up yet