
Advanced Concepts Team - Group items tagged: machine learning


Marcus Maertens

Breakthrough Initiatives - 2 views

  •  
    Machine learning yields detection of 72 new fast radio bursts from a distant galaxy. SETI folks are getting excited.
Marcus Maertens

Aroma: Using ML for code recommendation - 2 views

  •  
    A simple but neat helper for coding: ML suggests idiomatic usage patterns to semi-automate daily development work.
  •  
    Machine learning to write better machine learning code... count me in, haha
koskons

Translating lost languages using machine learning | MIT News | Massachusetts Institute ... - 0 views

  •  
    System developed at MIT CSAIL aims to help linguists decipher languages that have been lost to history.
anonymous

Home - Toronto Deep Learning - 2 views

  •  
    An online implementation of a deep-learning-based image classifier. Try taking a picture with your phone and uploading it there. Pretty impressive results. EDIT: Okay, it works best with well-exposed, simple objects (pen, mug).
Luís F. Simões

The AI Revolution: Why Deep Learning Is Suddenly Changing Your Life - 1 views

  • Indeed, corporations just may have reached another inflection point. “In the past,” says Andrew Ng, chief scientist at Baidu Research, “a lot of S&P 500 CEOs wished they had started thinking sooner than they did about their Internet strategy. I think five years from now there will be a number of S&P 500 CEOs that will wish they’d started thinking earlier about their AI strategy.” Even the Internet metaphor doesn’t do justice to what AI with deep learning will mean, in Ng’s view. “AI is the new electricity,” he says. “Just as 100 years ago electricity transformed industry after industry, AI will now do the same.”
  •  
    A good historical overview of the Deep Learning revolution. If you think the quote above is an exaggeration, here is some fresh news from Microsoft: Internal email: Microsoft forms new 5,000-person AI division
Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me when people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google "representational bottleneck"). The fact is, your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps deal with what, to a computer scientist, looks like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that to produce consciousness (even in a lower form), an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention; whatever does is a highly reduced representation of the data you get. But the rest doesn't get lost. Basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation of each step from its desired trajectory using the complicated system of nerve endings and motor commands - reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept built out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed - but now we've arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as any other alternative assumption you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and the level of detail in which you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do is to a purpose, tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions - a constraint space, if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to google to random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Daniel Hennes

Google Just Open Sourced the Artificial Intelligence Engine at the Heart of Its Online ... - 2 views

  •  
    TensorFlow is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google's Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well. (A minimal sketch of this graph model follows after the comments below.)
  •  
    And the interface even looks a bit less clunky than Theano
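    A minimal sketch of the dataflow-graph model described above, written in the later TensorFlow 2.x style (@tf.function) rather than the original graph-and-session API; the function name and tensor shapes are illustrative, not taken from the TensorFlow docs.

        # Nodes in the traced graph are operations; the edges between them
        # carry multidimensional arrays (tensors).
        import tensorflow as tf

        @tf.function  # traces the Python function into a dataflow graph
        def affine(x, w, b):
            # two graph nodes: a matmul op whose output tensor feeds an add op
            return tf.matmul(x, w) + b

        x = tf.random.normal([4, 3])
        w = tf.random.normal([3, 2])
        b = tf.zeros([2])
        print(affine(x, w, b))  # the same graph can run on CPU or GPU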
anonymous

Physicists extend quantum machine learning to infinite dimensions - 1 views

Dario Izzo

Stacked Approximated Regression Machine: A Simple Deep Learning Approach - 5 views

  •  
    from one of the reddit threads discussing this: "bit fishy, crazy if real". "Incredible claims: - train using only about 10% of imagenet-12, i.e. around 120k images (they use 6k images per arm) - get the same or better accuracy as the equivalent VGG net - training is not via backprop but a much simpler PCA + sparsity regime (see section 4.1), and shouldn't take more than 10 hours just on CPU, probably." (A crude sketch of the layer-wise PCA idea follows after these comments.)
  •  
    clicking the link says the manuscript was withdrawn :))
  •  
    This "one-shot learning" paper by Googe Deepmind also claims to be able to learn from very few training data. Thought it might be interesting for you guys: https://arxiv.org/pdf/1605.06065v1.pdf
Francesco Biscani

Slashdot | Computers With Opinions On Visual Aesthetics - 0 views

  •  
    Cristina learns aesthetics.
Juxi Leitner

IDSIA Robotics | IM-CLeVeR - 1 views

  •  
    Toward Autonomous Humanoids: check out our new video with the iCub in the IM-CLeVeR project
  •  
    Admit it ... You have fallen in love ....
  •  
    you don't know how often we had to shoot that scene :) but it is an adorable baby robot (if it works :))
LeopoldS

Characterizing Quantum Supremacy in Near-Term Devices - 2 views

shared by LeopoldS on 04 Sep 16
  •  
    Google paper on quantum computers... anybody with further insight into how realistic this is?
  •  
    Not an answer to Leopold's question, but here is a little primer on quantum computers for those who are (like me) still confused about what they actually do: http://www.dwavesys.com/tutorials/background-reading-series/quantum-computing-primer It gives a good intuitive idea of the kinds of problems that an adiabatic quantum computer can tackle, an easy analogy for the computation, and an explanation of how this gets set up in the computer. Also, there is emphasis on how and why quantum computers lend themselves to machine learning (and maybe trajectory optimization??? - ;-) ).
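    To make the primer's point concrete, here is a toy illustration (my own, not from the D-Wave tutorial) of the kind of problem an adiabatic quantum computer is built for: finding the lowest-energy assignment of binary variables in a QUBO, solved here by classical brute force; the coefficient matrix is hypothetical.

        # Brute-force minimization of a tiny QUBO (quadratic unconstrained
        # binary optimization); an adiabatic machine anneals toward this
        # minimum-energy state physically instead of enumerating it.
        import itertools
        import numpy as np

        Q = np.array([[-1.0, 2.0],   # hypothetical QUBO coefficients
                      [ 0.0, -1.0]])

        def energy(bits):
            x = np.array(bits)
            return x @ Q @ x

        best = min(itertools.product([0, 1], repeat=2), key=energy)
        print(best, energy(best))  # (0, 1) -1.0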
johannessimon81

The Neural Network Zoo - The Asimov Institute (...love that name!) - 2 views

  •  
    Cute infographics on different machine learning architectures
jcunha

Accelerated search for materials with targeted properties by adaptive design - 0 views

  •  
    There has been much recent interest in accelerating materials discovery. High-throughput calculations and combinatorial experiments have been the approaches of choice to narrow the search space. The emphasis has largely been on feature or descriptor selection or the use of regression tools, such as least squares, to predict properties. The regression studies have been hampered by small data sets, large model or prediction uncertainties and extrapolation to a vast unexplored chemical space with little or no experimental feedback to validate the predictions. Thus, they are prone to be suboptimal. Here an adaptive design approach is used that provides a robust, guided basis for the selection of the next material for experimental measurements by using uncertainties and maximizing the 'expected improvement' from the best-so-far material in an iterative loop with feedback from experiments. It balances the goal of searching materials likely to have the best property (exploitation) with the need to explore parts of the search space with fewer sampling points and greater uncertainty.
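    A minimal sketch of the "expected improvement" criterion mentioned above, in the form commonly used in Bayesian optimization (illustrative, not the authors' code); mu and sigma stand for the predictive mean and uncertainty of a surrogate model such as a Gaussian process, and the numbers are made up.

        # Expected improvement (maximization form): the expected gain over
        # the best property value measured so far, for each candidate with
        # predicted mean mu and uncertainty sigma.
        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, best_so_far):
            sigma = np.maximum(sigma, 1e-12)  # guard against zero uncertainty
            z = (mu - best_so_far) / sigma
            return (mu - best_so_far) * norm.cdf(z) + sigma * norm.pdf(z)

        mu = np.array([1.0, 1.2, 0.8])     # predicted property of 3 candidates
        sigma = np.array([0.1, 0.5, 0.9])  # predictive uncertainties
        print(expected_improvement(mu, sigma, best_so_far=1.1))
        # high mu rewards exploitation; high sigma rewards exploration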
Alexander Wittig

Google AI experiment: fast drawing for everyone - 0 views

  •  
    AutoDraw is a new kind of drawing tool. It pairs machine learning with drawings from talented artists to help everyone create anything visual, fast. There's nothing to download. Nothing to pay for. And it works anywhere: smartphone, tablet, laptop, desktop, etc. AutoDraw's suggestion tool uses the same technology used in QuickDraw, to guess what you're trying to draw. Right now, it can guess hundreds of drawings and we look forward to adding more over time. If you are interested in creating drawings for others to use with AutoDraw, contact us here. We hope AutoDraw will help make drawing and creating a little more accessible and fun for everyone.
Daniel Hennes

The World's Largest Solar Plant Started Creating Electricity Today - 3 views

  •  
    The enormous solar plant (jointly owned by NRG Energy, BrightSource Energy and Google) opened for business today... well, yesterday, but still impressive!
  • ...1 more comment...
  •  
    impressive! and google is among the owners.
  •  
    impressive pictures - looking at the 2nd-to-last and 4th-to-last ones, I am wondering how this distributed, individual control of the mirrors works - any idea?
  •  
    Machine learning obviously. Most likely neural networks :P On the other hand: http://sploid.gizmodo.com/the-worlds-largest-solar-plant-is-killing-birds-meltin-1525107821
Francesco Biscani

Official Google Blog: Announcing Google's Focused Research Awards - 0 views

  • Today, we're announcing the first-ever round of Google Focused Research Awards — funding research in areas of study that are of key interest to Google as well as the research community. These awards, totaling $5.7 million, cover four areas: machine learning, the use of mobile phones as data collection devices for public health and environment monitoring, energy efficiency in computing, and privacy.
  •  
    Might be of some interest to Christos?
Thijs Versloot

Test shows big data text analysis inconsistent, inaccurate - 1 views

  •  
    Big data analytic systems are reputed to be capable of finding a needle in a universe of haystacks without having to know what a needle looks like. One of the best ways to sort large databases of unstructured text is a technique called latent Dirichlet allocation (LDA). Unfortunately, LDA is also inaccurate enough at some tasks that the results of any topic model created with it are essentially meaningless, according to Luis Amaral, a physicist whose specialty is the mathematical analysis of complex systems and networks in the real world, and one of the senior researchers on the multidisciplinary team from Northwestern University that wrote the paper. Even for an easy case, big data analysis is proving to be far more complicated than many of the companies selling analysis software want people to believe.
  •  
    Most of those companies are using outdated algorithms like this LDA and just applying them blindly to those huge datasets. Of course they're going to come out with bad solutions. No amount of data can make up for bad algorithms.
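    For reference, a toy sketch of the LDA topic modelling discussed above, using scikit-learn on a hypothetical three-document corpus; rerunning with a different random_state can yield noticeably different topics, which is exactly the instability the study criticizes.

        # Fit a tiny LDA model; fit_transform returns per-document topic mixes.
        from sklearn.decomposition import LatentDirichletAllocation
        from sklearn.feature_extraction.text import CountVectorizer

        docs = [
            "solar plant energy electricity mirrors",
            "machine learning neural network training data",
            "quantum computer qubit annealing",
        ]
        counts = CountVectorizer().fit_transform(docs)  # bag-of-words matrix
        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        print(lda.fit_transform(counts))  # rows: documents, columns: topics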
Daniel Hennes

Discovery with Data: Leveraging Statistics with Computer Science to Transform Science ... - 3 views

  •  
    Responding to calls from the National Science Foundation (NSF) and White House Office of Science and Technology Policy (OSTP), a working group of the American Statistical Association has developed a whitepaper detailing how statisticians and computer scientists can contribute to administration research initiatives and priorities. The whitepaper includes a lot of topics central to machine learning and data mining, so please take a look.
  •  
    I guess Norvig is trumping Chomsky big time if this is the attitude of the NSF :)))
Marcus Maertens

Python is becoming the world's most popular coding language - Daily chart - 3 views

  •  
    In the past 12 months Americans have searched for Python on Google more often than for Kim Kardashian, a reality-TV star. The number of queries has trebled since 2010, while those for other major programming languages have been flat or declining.
  •  
    Likely this is correlated with the increased interest in machine learning in the past decade - all the popular DL libraries are Python-based after all...