
Home/ Advanced Concepts Team/ Group items tagged Google


Luzi Bergamin

Gmail Move - 5 views

  • :D
  • I believed it ... :-(
Francesco Biscani

xkcd: Future Timeline - 5 views

shared by Francesco Biscani on 18 Apr 11
  • Our job is now useless :P
  • this entry tells it all: "2066 - Cyprus achieves its goal"
  • > Francesco: And all it took was a simple, well-written Google bot...
LeopoldS

Ron Paul Would Erase Billions in Research Spending - ScienceInsider - 2 views

  • your Ron Paul from yesterday ...
  • Not surprising: just do a Google search on 'ron paul evolution' and boom, first result: Ron Paul: I don't accept the theory of evolution http://www.cbsnews.com/stories/2011/08/29/scitech/main20098876.shtml
santecarloni

Google Reader (1000+) - 2 views

  • For the astronomy lovers... visually very impressive...
Thijs Versloot

Real-Time Recognition and Profiling of Home Appliances through a Single Electricity Sensor - 3 views

  • A personal interest of mine that I want to explore a bit more in the future. I just bought a ZigBee electricity monitor and I am wondering whether, from the mains signal alone, one could reliably detect the oven turning on, lights, etc. It probably requires neural network training. The idea would be to make a simple device that saves you money by telling you how much electricity you are wasting. Then again, it's probably already been done by Google...
  • ...3 more comments...
  • nice project!
  • For those interested, this is what/where I ordered: http://openenergymonitor.org/emon/
  • Update two: the RF chip is faulty and tonight I have to solder a new chip into place. That's open-source hardware for you!
  • haha, yep, that's it... but we can do better than that, right! :)
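The appliance-detection idea discussed above can be sketched very naively: match step changes in the mains power reading against known appliance wattages. All signature values and the matching rule here are hypothetical toy assumptions, not a real algorithm from the linked paper; a real system would indeed need learned models.

```python
# Naive appliance detection from a mains power signal (toy sketch).
# The wattage signatures below are assumed values, purely illustrative.

SIGNATURES = {      # appliance -> typical step change in watts (assumed)
    "oven": 2000,
    "kettle": 1200,
    "lights": 60,
}

def detect_events(power, tolerance=0.15):
    """Detect on/off events by matching step changes to known signatures."""
    events = []
    for t in range(1, len(power)):
        step = power[t] - power[t - 1]
        for name, watts in SIGNATURES.items():
            if abs(abs(step) - watts) <= tolerance * watts:
                events.append((t, name, "on" if step > 0 else "off"))
    return events

# A short reading trace: oven on, then lights on, then both off again.
readings = [100, 100, 2100, 2100, 2160, 160, 100]
print(detect_events(readings))
```

Real mains data is far messier (overlapping appliances, variable loads), which is exactly why the comment above suggests training a neural network instead of threshold matching.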
annaheffernan

How to make a tougher quantum computer - 0 views

  • A system of nine quantum bits (qubits) that is robust to errors that would normally destroy a quantum computation has been created by researchers at the University of California, Santa Barbara (UCSB) and Google. The device relies on a quantum error-correction protocol, which the team says could be deployed in practical quantum computers of the future.
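The intuition behind error-correction protocols like the nine-qubit one above can be illustrated with the classical repetition code (a deliberate simplification; the quantum case must also handle phase errors and cannot simply copy states). Encode one bit into n noisy copies and decode by majority vote:

```python
# Classical repetition-code sketch (an assumed simplification of the
# error-correction idea, NOT the UCSB/Google protocol itself).
import random

random.seed(1)

def transmit(bit, n=9, p=0.1):
    """Send n noisy copies of a bit (each flipped with probability p),
    then decode by majority vote."""
    copies = [bit ^ (random.random() < p) for _ in range(n)]
    return int(sum(copies) > n // 2)

def logical_error_rate(n, p=0.1, trials=20000):
    """Fraction of transmissions where the decoded bit is wrong."""
    return sum(transmit(0, n, p) for _ in range(trials)) / trials

print(logical_error_rate(1))   # roughly the raw error rate p
print(logical_error_rate(9))   # much smaller: redundancy suppresses errors
```

With nine copies, an error survives only if five or more copies flip at once, which is why the logical error rate drops far below the physical one.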
jcunha

Quantum machine learning - 1 views

  • Quantum computing and machine learning in the same sentence. The association was first put forward by Google and NASA experimenting with D-Wave computers. Meanwhile, in the academic media: https://journals.aps.org/prx/pdf/10.1103/PhysRevX.4.031002
zoervleis

Google's Go AI Beats Professional Player - 0 views

  • This is the biggest breakthrough in game AI (and one of the biggest in AI in general) since Deep Blue beat Kasparov in chess: for the first time, a human professional player was defeated in the game of Go. The approach was a combination of tree search and deep neural networks. Very proud of a former colleague on the team at Google DeepMind!
  • Funnily enough, Facebook also had a very similar paper around the same time.
Marcus Maertens

Big Hero 6's Programmable Nanobots Are on the Horizon - 2 views

  • This collaborating swarm of drones acts as 3D pixels (voxels) to create giant, flying interactive displays.
  • I have never understood the flying part of these things. Isn't it really impractical to have all those tiny quadrocopters zooming around? My money is on holography or still a Google Glass type of device, if only considering the energy requirements for doing anything kinetically.
Marion Nachon

Frontier Development Lab (FDL): AI technologies to space science - 3 views

Applications might be of interest to some: https://frontierdevelopmentlab.org/blog/2019/3/1/application-deadline-extended-cftt4?fbclid=IwAR0gqMsHJCJx5DeoObv0GSESaP6VGjNKnHCPfmzKuvhFLDpkLSrcaCwmY_c ...

technology AI space science

started by Marion Nachon on 08 Apr 19; no follow-up yet
Juxi Leitner

How To Make The World's Easiest $1 Billion - 7 views

  • wow, I want to do that!!! The suggestion of raising the funds on Facebook is a good idea :) Look at this video on the future of banking, frightening isn't it? http://www.youtube.com/watch?v=cqESjpfb3OE&feature=player_embedded
  • ...2 more comments...
  • ah yeah, The Long Johns, very cool; try googling their video on the subprime crisis
  • If it worked, they wouldn't write about it - they'd do it.
  • the first step already seems not that trivial to me: STEP 1: Form a bank.
  • depends on the country and of course the type of the bank :)
Marcus Maertens

Google AI Blog: Introducing AdaNet: Fast and Flexible AutoML with Learning Guarantees - 2 views

  • Taking the trial and error out of network design, and adding ensembles.
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  • As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also ... now that the Commission has funded it, the Blue Brain project is a rather big hit. Btw, Nicolelis is a rather credible neuroscientist.
  • ...14 more comments...
  • nice article; Luzi would agree as well, I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  • This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence, AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to), AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove true, same with the second one, and because I don't really care about the third (brains are not my thing), I'm not worried.
  • In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather how to program them well enough ... and that's not bringing us anywhere near the singularity.
  • It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed were the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (And research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  • [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all, they have been proved useful to humankind beyond any reasonable doubt.
  • Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops; 2) what worries me most is that I didn't get the retail store pun at the end of the article...
  • True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  • Crazy scientist in residence... interesting marketing move, I suppose.
  • Unfortunately, I can't upload my two kids to the cloud to make them sleep; that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis's point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but it will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible nonlinearities... Btw: this is not my idea; it was pointed out by Bohr more than 100 years ago...
  • I guess chaos is what you are referring to - stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as far as any brain gets started :) that is the goal :)
  • Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with a suitable interaction with its environment, so these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (that are impossible to determine) depend on your initial conditions (that are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but it will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  • Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting, but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  • I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  • Johannes: "I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain." In Artificial Life, the whole environment + bodies & brains is simulated. You also have the whole embodied cognition movement that basically advocates for just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies, getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (My take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  • @Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain; it's called psychology... @Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why do you expect that a brain detached from its complete input/output system actually still works? I'm pretty sure it does not!
  • @Luzi: That was exactly my point :-)
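The sensitivity to initial conditions invoked above (the Lorenz attractor) is easy to demonstrate numerically: two trajectories starting a billionth apart end up macroscopically separated. This toy integration (simple Euler steps, standard Lorenz parameters) is only an illustration of the chaos argument, not a brain model of any kind.

```python
# Two Lorenz-system trajectories, initially 1e-9 apart, diverge to
# macroscopic separation - the "initial value problem" objection above.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit Euler step of the Lorenz equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_separation(steps):
    """Largest distance observed between the two trajectories."""
    a, b = (1.0, 1.0, 1.0), (1.0 + 1e-9, 1.0, 1.0)
    peak = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        d = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        peak = max(peak, d)
    return peak

print(max_separation(1000))   # still microscopic
print(max_separation(8000))   # macroscopic: prediction has broken down
```

The perfect model and near-perfect initial data still lose all predictive power after a short time, which is exactly Luzi's point about simulating any one particular brain.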
LeopoldS

The Moon's mantle unveiled - 2 views

  • first science results reported in Nature (as far as I know) from the Yutu-2 and Chang'e mission .... and they look very good!
  • Sure, they are very useful! It will be even better if they manage to fit the data to the modelled circulation of the lunar magma ocean that formed after the "Theia" body's collision with Earth; the collision was the cause of the magma ocean in the first place. The question now is how this circulation pattern of the lava-moon "froze" in time upon the phase transition to solid. Because what crystallizes last in the sequence is richer in elements "incompatible" with the crystal structure, we might combine data + models to predict their location. Those incompatible tracers are mainly radioactively decaying elements that produce heat (google publications about lunar KREEP elements: potassium (K), rare earth elements (REE), and phosphorus (P)). By knowing where the KREEP is: we know where to dig for mining (if they are useful for something, e.g. phosphorus for plants to be grown on the Moon), and we avoid planning to build the future human colony on top of radioactives, of course. The hope is that the Moon, due to its lack of plate tectonics, has preserved this "signature of the freezing sequence". Let's see.
  • thanks Nasia! very interesting comment
Marcus Maertens

Google AI Blog: Curiosity and Procrastination in Reinforcement Learning - 2 views

  • What happens if you put a TV in the maze your robot is supposed to navigate (driven by curiosity)?
  • Does the fact that I follow this process of learning make me a meta-learner? Or a pre-robot?
Dario Izzo

Space4Life - Lab2Moon - 3 views

  • Cyanobacteria to shield from radiation. An idea from Italians flying to the Moon via Team Indus.
  • Nice idea, but is it really new? The resistance of cyanobacteria to UV radiation has been known, but studies have been inconclusive as to under what resource limitations it works. According to what we see from evolution, on Earth it works, since they survived the pre-ozone atmosphere! Some papers from a quick google search: 1999 http://www.tandfonline.com/doi/pdf/10.1080/09670269910001736392 2014 https://www.ncbi.nlm.nih.gov/pubmed/25463663
jaihobah

Google's AI Wizard Unveils a New Twist on Neural Networks - 2 views

  • "Hinton's new approach, known as capsule networks, is a twist on neural networks intended to make machines better able to understand the world through images or video. In one of the papers posted last week, Hinton's capsule networks matched the accuracy of the best previous techniques on a standard test of how well software can learn to recognize handwritten digits." Links to papers: https://arxiv.org/abs/1710.09829 https://openreview.net/forum?id=HJWLfGWRb&noteId=HJWLfGWRb
  • impressive!
  • seems a very impressive guy: "Hinton formed his intuition that vision systems need such an inbuilt sense of geometry in 1979, when he was trying to figure out how humans use mental imagery. He first laid out a preliminary design for capsule networks in 2011. The fuller picture released last week was long anticipated by researchers in the field. "Everyone has been waiting for it and looking for the next great leap from Geoff," says Kyunghyun Cho, a professor"
jcunha

When AI is made by AI, results are impressive - 6 views

  • This has been around for over a year. The current trend in deep learning is "deeper is better", but a consequence is that for a given network depth we can only feasibly evaluate a tiny fraction of the "search space" of NN architectures. The current approach to choosing a network architecture is to iteratively add more layers/units and keep the architecture which gives an increase in accuracy on some held-out data set, i.e. we have the following information: {NN, accuracy}. Clearly, this process can be automated by using the accuracy as a 'signal' to a learning algorithm. The novelty in this work is that they use reinforcement learning with a recurrent neural network controller trained by a policy gradient - a gradient-based method - where previously evolutionary algorithms would typically be used. In summary, yes, the results are impressive - BUT this was only possible because they had access to Google's resources. An evolutionary approach would probably end up with the same architecture; it would just take longer. This is part of a broader research area in deep learning called 'meta-learning', which seeks to automate all aspects of neural network training.
  • Btw, that techxplore article was cringeworthy to read - if interested, read this article instead: https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
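The "accuracy as a signal" loop described above can be sketched with the simplest possible search strategy: sample architectures, score each by a held-out accuracy, keep the best. The `evaluate` function is a hypothetical stand-in for a full training run, and the toy accuracy landscape is assumed; Google's actual work uses an RNN controller trained by policy gradient, which this deliberately does not reproduce.

```python
# Toy architecture-search loop: accuracy is the only signal.
# evaluate() is a stand-in for "train the network, measure held-out
# accuracy"; its peak at depth 6 / width 64 is an assumed toy landscape.
import random

random.seed(0)

def evaluate(depth, width):
    """Stand-in for a full training run returning held-out accuracy."""
    return 1.0 - 0.01 * abs(depth - 6) - 0.001 * abs(width - 64)

def random_search(trials=50):
    """Sample architectures, keep whichever scores best."""
    best_arch, best_acc = None, -1.0
    for _ in range(trials):
        arch = (random.randint(1, 12), random.choice([16, 32, 64, 128]))
        acc = evaluate(*arch)
        if acc > best_acc:
            best_arch, best_acc = arch, acc
    return best_arch, best_acc

print(random_search())
```

Replacing the random sampler with an evolutionary algorithm or an RL controller changes only how the next candidate is proposed; the {NN, accuracy} feedback loop stays the same, which is the point made in the comment above.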
pablo_gomez

Quanta Magazine - 0 views

  • Can some of our quantum experts elaborate a bit on the implications etc.? :) I'm not sure I follow.