
Physics of the Future / Contents contributed and discussions participated by Max Herm

Max Herm

AI Example - 0 views

shared by Max Herm on 05 Mar 14
  •  
    If you have not already, I recommend attempting to intelligently interact with an artificial intelligence such as this one. It helps you understand how far we have come in making a computer as intelligent as we are (which is not very impressive, in my opinion).
Max Herm

Technological Singularity - 0 views

shared by Max Herm on 05 Mar 14
  •  
    "What happens when machines become smarter than humans? Humans steer the future not because we're the strongest or the fastest but because we're the smartest. When machines become smarter than humans, we'll be handing them the steering wheel. What promises, and perils, will these powerful machines present?" I found this source while looking for information on the "technological singularity" that many of Kaku's interviewees predict is coming. It is difficult to find legitimate sources on this topic, but I think this one will be helpful. The link is not to any one specific article, but there are many to be explored. I recommend at least skimming some of them, as they will give us a more specific idea of how such a singularity would happen.
Max Herm

Alan Turing - 0 views

  •  
    "Alan Turing drew much between 1928 and 1933 from the work of the mathematical physicist and populariser A. S. Eddington, from J. von Neumann's account of the foundations of quantum mechanics, and then from Bertrand Russell's mathematical logic. Meanwhile, his lasting fascination with the problems of mind and matter was heightened by emotional elements in his own life (Hodges 1983, p. 63). In 1934 he graduated with an outstanding degree in mathematics from Cambridge University, followed by a successful dissertation in probability theory which won him a Fellowship of King's College, Cambridge, in 1935. This was the background to his learning, also in 1935, of the problem which was to make his name." As far as history goes, Alan Turing was essentially the father of AI. He was one of the first people to even work with computers, serving as a computer scientist during WWII, where he worked on cracking German codes with machines that were advanced for their time.
Max Herm

Neurobiology - 0 views

  •  
    "Our basic computational element (model neuron) is often called a node or unit. It receives input from some other units, or perhaps from an external source. Each input has an associated weight w, which can be modified so as to model synaptic learning." This is a simple and brief article, but I think that a better understanding of neuroscience and how neurons work will help us better grasp what research is being done regarding AI. This is an important research category, because if we wish to recreate our own intelligence, we will need to better analyze our neural relays and systems. According to Kaku's book, it is a very complicated endeavor that has so far met with limited success. Hopefully, we can better understand the basics of neuron structure and function with this article.
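The model neuron described in the quote can be sketched in a few lines of code. This is a minimal illustration, not the article's actual model: the sigmoid activation and the simple error-driven weight update are my assumptions, chosen only to show what "input times weight, modified by learning" means in practice.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """A model neuron (node/unit): weighted sum of inputs, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def update_weights(inputs, weights, target, output, rate=0.5):
    """Crude 'synaptic learning': nudge each weight w in the direction that
    reduces the difference between the target and the actual output."""
    error = target - output
    return [w + rate * error * x for x, w in zip(inputs, weights)]
```

Repeatedly calling `update_weights` moves the unit's output toward the target, which is the basic idea behind modifying the weights "so as to model synaptic learning."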
Max Herm

Inductive Reasoning - 0 views

  •  
    "In any realistic learning application, the entire instance space will be so large that any learning algorithm can expect to see only a small fraction of it during training. From this small fraction, a hypothesis must be formed that classifies all the unseen instances. If the learning algorithm performs well then most of these unseen instances should be classified correctly. However, if no restrictions are placed on the hypothesis space and no "preference criterion" [124] is supplied for comparing competing hypotheses, then all possible classifications of the unseen instances are equally possible and no inductive method can do better on average than random guessing [26]. Hence all learning algorithms employ some mechanism whereby the space of hypotheses is restricted or whereby some hypotheses are preferred a priori over others. This is known as inductive bias." I hope to use this source to learn more about how artificial intelligence learns, as I have read elsewhere that the kind that learns from the "bottom up" does so by making mistakes and learning from them. AI, if it is to be truly intelligent, will probably have to learn the way we do: by experience and example. In Kaku's book, he mentions the differences between two artificially intelligent robots that he "met". One, STAIR, had a limited database and was programmed to do what it did. Another, LAGR, piloted itself through a park, bumping into miscellaneous objects and learning their locations so that on the next pass it would not hit them. I hope to learn more about that kind of logic by reading this article, as I think it is important to have a better understanding of exactly how artificial intelligence learns.
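The LAGR-style "learn by bumping" behavior described above can be sketched very simply. This is a hypothetical toy model, not LAGR's actual navigation code: the grid cells, the `traverse` function, and the idea of a shared obstacle memory are all my own illustrative assumptions.

```python
def traverse(path, obstacles, known_obstacles):
    """Drive along a path of grid cells. Colliding with an unknown obstacle
    counts as a bump and adds it to memory; known obstacles are steered
    around and cause no bump. Returns the number of bumps this pass."""
    bumps = 0
    for cell in path:
        if cell in obstacles and cell not in known_obstacles:
            bumps += 1
            known_obstacles.add(cell)  # remember the obstacle for next time
    return bumps
```

On the first pass through a park the robot bumps into every obstacle on its route; on the second pass over the same route it bumps into none, because it learned their locations by experience rather than from a pre-programmed map.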