I'll just leave this here :)
They only cut corners on the lens, which allegedly should be a Nikkor 8mm f/8.0; that lens is rare and much more expensive than the final product itself (if you manage to find one used, that is).
Turns out even ants can profit from a siesta on a hot day, and they use network security and repair mechanisms. Maybe there is still something undiscovered that we can apply to our own networks.
Hey Guys,
Does any of you have an idea of how it's coded? I mean, is it just a basic database of already-written answers, or something more sophisticated?
Anyway, have fun!
Alex
"A platform for public participation in and discussion of the human perspective on machine-made moral decisions"
Machine ethics is basically the return of philosophy through code. Here you can learn a bit about it and help MIT collect data on how humans make choices when faced with ethical dilemmas, and on how we perceive AIs making such choices.
from one of the reddit threads discussing this:
"bit fishy, crazy if real".
"Incredible claims:
- Train only using about 10% of imagenet-12, i.e. around 120k images (i.e. they use 6k images per arm)
- get to the same or better accuracy as the equivalent VGG net
- Training is not via backprop but a simpler PCA + sparsity regime (see section 4.1); shouldn't take more than 10 hours on CPU, probably"
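As a rough illustration of what a "PCA + sparsity" training regime (as opposed to backprop) can look like: a hypothetical sketch in the spirit of PCANet-style methods, not the paper's actual pipeline, where convolutional filters are learned as the top principal components of image patches.

```python
import numpy as np

def pca_filters(patches, n_filters):
    """Learn a filter bank as the top principal components of patches.

    patches: (n_patches, patch_dim) array of flattened image patches.
    Returns an (n_filters, patch_dim) filter bank. No backprop involved:
    one eigendecomposition of the patch covariance does all the "training".
    """
    X = patches - patches.mean(axis=0)          # center the patches
    cov = X.T @ X / len(X)                      # patch covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    # Take the leading eigenvectors (largest variance directions) as filters.
    return eigvecs[:, ::-1][:, :n_filters].T

# Toy usage: 1000 random 5x5 "patches" (stand-ins for real image patches).
rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 25))
filters = pca_filters(patches, n_filters=8)
print(filters.shape)  # (8, 25)
```

In a full pipeline these filters would be applied convolutionally, with a sparsity step (e.g. thresholding the responses) between stages; the point is that each stage is fit in closed form rather than by gradient descent.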
This "one-shot learning" paper by Google DeepMind also claims to be able to learn from very little training data.
Thought it might be interesting for you guys: https://arxiv.org/pdf/1605.06065v1.pdf
"IBM delivered on the DARPA SyNAPSE project with a one million neuron brain-inspired processor. The chip consumes merely 70 milliwatts, and is capable of 46 billion synaptic operations per second, per watt; literally a synaptic supercomputer in your palm." No memristors... yet: https://www.technologyreview.com/s/537211/a-better-way-to-build-brain-inspired-chips/
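For scale, my own back-of-the-envelope arithmetic on the quoted figures: at 70 mW and 46 billion synaptic operations per second per watt, the chip performs roughly 3.2 billion synaptic operations per second in total.

```python
# Quoted figures from the article, combined: power times efficiency
# gives the absolute throughput of the chip.
power_watts = 0.070              # 70 milliwatts
sops_per_watt = 46e9             # 46 billion synaptic ops / s / W
total_sops = power_watts * sops_per_watt
print(f"{total_sops:.2e}")       # ~3.22e9 synaptic ops per second
```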
Very interesting presentation on how the brain can back-propagate error signals during learning (using time derivatives to encode errors). Hinton discusses how back-propagation can be achieved with very limited, unsophisticated tools and in excessively noisy environments.
Implementation of the deep learning-based image classifier (online).
Try taking a picture with your phone and uploading it there. Pretty impressive results.
EDIT: Okay, it works best with well-exposed simple objects (a pen, a mug).
There has been much recent interest in accelerating materials discovery. High-throughput calculations and combinatorial experiments have been the approaches of choice to narrow the search space. The emphasis has largely been on feature or descriptor selection or the use of regression tools, such as least squares, to predict properties. The regression studies have been hampered by small data sets, large model or prediction uncertainties and extrapolation to a vast unexplored chemical space with little or no experimental feedback to validate the predictions. Thus, they are prone to be suboptimal. Here an adaptive design approach is used that provides a robust, guided basis for the selection of the next material for experimental measurements by using uncertainties and maximizing the 'expected improvement' from the best-so-far material in an iterative loop with feedback from experiments. It balances the goal of searching materials likely to have the best property (exploitation) with the need to explore parts of the search space with fewer sampling points and greater uncertainty.
A nice read.
IBM's Watson wowed the tech industry with its 2011 win against two of the television show Jeopardy!'s greatest champions.
Using what seemed to me like a sort of tree search, IBM's DeepQA algorithm managed to ingest sparse data (clues), process it into a single answer, understand what that answer means, and come up with the question that leads to that answer.
Now, IBM tells us that the same system can tackle medical diagnosis and financial risk problems.
We have discussed a few times whether it is possible to determine the quality of a paper by extracting visual features from it and then learning a mapping to a measure of quality, such as the number of citations. This paper, circulated at CVPR 2010, does exactly that: it maps visual features to an estimate of whether a paper was accepted to the main conference or to the workshops.
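As a toy sketch of the general idea (my own illustration on synthetic data, not the paper's actual features or model): represent each paper by a few numeric "visual" statistics and fit a plain logistic-regression classifier for main conference vs. workshop.

```python
import numpy as np

# Synthetic stand-ins for visual features of a paper, e.g. fraction of
# page area covered by figures, equations per page, text density.
# Labels: 1 = accepted to the main conference, 0 = workshop.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -0.5, 0.8])          # hidden "ground truth" weights
y = (X @ true_w + 0.3 * rng.normal(size=200) > 0).astype(float)

# Plain logistic regression trained by batch gradient descent.
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))           # predicted probabilities
    w -= 0.1 * X.T @ (p - y) / len(y)        # gradient of the log loss

accuracy = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(round(accuracy, 2))
```

On real data the interesting part would of course be the feature extraction from rendered pages; the classifier on top can stay this simple.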