
Home/ Advanced Concepts Team/ Group items tagged human


Guido de Croon

Will robots be smarter than humans by 2029? - 2 views

  •  
    Nice discussion about the singularity. Made me think of drinking coffee with Luis... It raises some issues such as the necessity of embodiment, etc.
  • ...9 more comments...
  •  
    "Kurzweilians"... LOL. Still not sold on embodiment, btw.
  •  
    The biggest problem with embodiment is that, since the passive walkers (with which it all started), it hasn't delivered anything really interesting...
  •  
    The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
  •  
    I like how he attacks Moore's Law. It always looks a bit naive to me if people start to (ab)use it to make their point. No strong opinion about embodiment.
  •  
    @Paul: How would embodiment be done RIGHT?
  •  
    Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s. However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting, where an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" in which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment the effort for recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect. In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless. This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
  •  
    @Paul While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google for "representational bottleneck"). The fact is your brain does *not* have the resources to deal with big data. The idea therefore is that it is the body that helps to deal with what to a computer scientist appears like "big data". Understanding how this happens is key. Whether it is a problem of scale or of actually understanding what happens should be quite conclusively shown by the outcomes of the Blue Brain project.
  •  
    Wouldn't one expect that, to produce consciousness (even in a lower form), an approach resembling that of nature would be essential? All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness. On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
  •  
    Actually on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention. Whatever makes it into conscious processing is a highly reduced representation of the data you get. However, that doesn't get lost. Basic, minimally processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion by actuating every muscle and correcting every deviation from the desired trajectory using the complicated system of nerve endings and motor commands. Reflexes that were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the incipient parts of the brain. But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body, and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color and maybe the sound that it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of taking it to our mouths, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form this abstract concept out of all of these many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence. That's point 1. Point 2 is that mere pattern recogn
  •  
    All information *that gets processed* gets processed, but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. And until you show its superiority experimentally, it's as good as all the other alternative assumptions you can make. I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson, 2002 as a crash course.
  •  
    These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are and at what level of detail you perceive all the information you have) and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment and ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as, for example, energy usage and training time). "More processing" is not exactly compatible with some aspects of embodiment, but it is important for human-level intelligence. Embodiment is necessary for establishing an environmental context for actions, a constraint space if you will; failure of human minds (e.g. schizophrenia) is ultimately a failure of perceived embodiment. What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out, and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
Dario Izzo

Miguel Nicolelis Says the Brain Is Not Computable, Bashes Kurzweil's Singularity | MIT ... - 9 views

  •  
    As I said ten years ago, and psychoanalysts 100 years ago. Luis, I am so sorry :) Also... now that the Commission has funded the project, Blue Brain is a rather big hit. Btw, Nicolelis is a rather credited neuroscientist.
  • ...14 more comments...
  •  
    nice article; Luzi would agree as well I assume; one aspect not clear to me is the causal relationship it seems to imply between consciousness and randomness ... anybody?
  •  
    This is the same thing Penrose has been saying for ages (and yes, I read the book). IF the human brain proves to be the only conceivable system capable of consciousness/intelligence AND IF we'll forever be limited to the Turing machine type of computation (which is what the "Not Computable" in the article refers to) AND IF the brain indeed is not computable, THEN AI people might need to worry... Because I seriously doubt the first condition will prove to be true, same with the second one, and because I don't really care about the third (brains is not my thing).. I'm not worried.
  •  
    In any case, all AI research is going in the wrong direction: the mainstream is not about how to go beyond Turing machines, but rather about how to program them well enough... and that's not bringing us anywhere near the singularity.
  •  
    It has not been shown that intelligence is not computable (only some people saying the human brain isn't, which is something different), so I wouldn't go so far as saying the mainstream is going in the wrong direction. But even if that indeed were the case, would it be a problem? If so, well, then someone should quickly go and tell all the people trading in financial markets that they should stop using computers... after all, they're dealing with uncomputable, undecidable problems. :) (and research on how to go beyond Turing computation does exist, but how much would you want to devote your research to a non-existent machine?)
  •  
    [warning: troll] If you are happy with developing algorithms that serve the financial market ... good for you :) After all they have been proved to be useful for humankind beyond any reasonable doubt.
  •  
    Two comments from me: 1) an apparently credible scientist takes Kurzweil seriously enough to engage with him in polemics... oops 2) what worries me most: I didn't get the retail store pun at the end of the article...
  •  
    True, but after Google hired Kurzweil he is de facto being taken seriously ... so I guess Nicolelis reacted to this.
  •  
    Crazy scientist in residence... interesting marketing move, I suppose.
  •  
    Unfortunately, I can't upload my two kids to the cloud to make them sleep, that's why I comment only now :-). But, of course, I MUST add my comment to this discussion. I don't really get what Nicolelis' point is; the article is just too short and at too popular a level. But please realize that the question is not just "computable" vs. "non-computable". A system may be computable (we have a collection of rules called a "theory" that we can put on a computer and run in finite time) and still it need not be predictable. Since the lack of predictability pretty obviously applies to the human brain (as it does to any sufficiently complex and nonlinear system), the question whether it is computable or not becomes rather academic. Markram and his fellows may come up with an incredible simulation program of the human brain, but this will be rather useless, since they cannot solve the initial value problem, and even if they could, they would be lost in randomness after a short simulation time due to horrible non-linearities... Btw: this is not my idea, it was pointed out by Bohr more than 100 years ago...
  •  
    I guess chaos is what you are referring to. Stuff like the Lorenz attractor. In which case I would say that the point is not to predict one particular brain (in which case you would be right): any initial conditions would be fine, as long as some brain gets started :) that is the goal :)
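The chaotic sensitivity being debated here is easy to demonstrate. Below is a minimal, illustrative Python sketch (not from the discussion; a crude forward-Euler integration with the classic Lorenz parameters) showing how two trajectories that start 1e-8 apart end up completely decorrelated:

```python
# Two Lorenz-system trajectories started 1e-8 apart diverge until their
# separation is of the order of the attractor itself -- the reason why
# predicting one *particular* trajectory (or brain) is hopeless.
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)            # tiny perturbation in one coordinate
for _ in range(3000):                  # ~30 time units of forward Euler
    a, b = lorenz_step(a), lorenz_step(b)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)                      # many orders of magnitude above 1e-8
```

Any model with comparable non-linearities inherits this behaviour, which is why the argument above targets statistical properties of "a brain" rather than one particular brain's trajectory.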
  •  
    Kurzweil talks about downloading your brain to a computer, so he has a specific brain in mind; Markram talks about identifying the neural basis of mental diseases, so he has at least pretty specific situations in mind. Chaos is not the only problem: even a perfectly linear brain (which a biological brain is not) is not predictable, since one cannot determine a complete set of initial conditions of a working (viz. living) brain (after having determined about 10%, the brain is dead and the data useless). But the situation is even worse: from all we know, a brain will only work with suitable interaction with its environment. So these boundary conditions one has to determine as well. This is already twice impossible. But the situation is worse again: from all we know, the way the brain interacts with its environment at a neural level depends on its history (how this brain learned). So your boundary conditions (which are impossible to determine) depend on your initial conditions (which are impossible to determine). Thus the situation is rather impossible squared than twice impossible. I'm sure Markram will simulate something, but this will rather be the famous Boltzmann brain than a biological one. Boltzmann brains work with any initial conditions and any boundary conditions... and are pretty dead!
  •  
    Say one has an accurate model of a brain. It may be the case that the initial and boundary conditions do not matter that much for the brain to function and exhibit macro-characteristics useful for doing science. Again, if it is not one particular brain you are targeting but the 'brain' as a general entity, this would make sense if one has an accurate model (also to identify the neural basis of mental diseases). But in my opinion, the construction of such a model of the brain is impossible using a reductionist approach (that is, taking the naive approach of putting together some artificial neurons and connecting them in a huge net). That is why both Kurzweil and Markram are doomed to fail.
  •  
    I think that in principle some kind of artificial brain should be feasible. But making a brain by just throwing together a myriad of neurons is probably as promising as throwing together some copper pipes and a heap of silica and expecting it to make calculations for you. Like in the biological system, I suspect, an artificial brain would have to grow from a tiny functional unit by adding neurons and complexity slowly, in a way that stably increases its "usefulness"/fitness. Apparently our brain's usefulness has to do with interpreting the inputs of our sensors to the world and steering the body, making sure that those sensors, the brain and the rest of the body are still alive 10 seconds from now (thereby changing the world -> sensor inputs -> ...). So the artificial brain might need sensors and a body to affect the "world", creating a much larger feedback loop than the brain itself. One might argue that the complexity of the sensor inputs is the reason why the brain needs to be so complex in the first place. I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain. Anyone? Or are they trying to simulate the human brain after it has been removed from the body? That might be somewhat easier, I guess...
  •  
    Johannes: "I never quite see from these "artificial brain" proposals to what extent they are trying to simulate the whole system and not just the brain." In Artificial Life, the whole environment + bodies & brains is simulated. You also have the whole embodied cognition movement that basically advocates just that: no true intelligence until you model the system in its entirety. And from that you then have people building robotic bodies and getting their "brains" to learn from scratch how to control them, and through the bodies, the environment. Right now, this is obviously closer to the complexity of insect brains than human ones. (My take on this is: yes, go ahead and build robots, if the intelligence you want to get in the end is to be displayed in interactions with the real physical world...) It's easy to dismiss Markram's Blue Brain for all their clever marketing pronouncements that they're building a human-level consciousness on a computer, but from what I read of the project, they seem to be developing a platform onto which any scientist can plug in their model of a detail of a detail of .... of the human brain, and get it to run together with everyone else's models of other tiny parts of the brain. This is not the same as getting the artificial brain to interact with the real world, but it's a big step in enabling scientists to study their own models in more realistic settings, in which the models' outputs get to affect many other systems, and through them feed back into their future inputs. So Blue Brain's biggest contribution might be in making model evaluation in neuroscience less wrong, and that doesn't seem like a bad thing. At some point the reductionist approach needs to start moving in the other direction.
  •  
    @ Dario: absolutely agree, the reductionist approach is the main mistake. My point: if you take the reductionist approach, then you will face the initial and boundary value problem. If one tries a non-reductionist approach, this problem may be much weaker. But off the record: there exists a non-reductionist theory of the brain, it's called psychology... @ Johannes: also agree, the only way the reductionist approach could eventually be successful is to actually grow the brain. Start with essentially one neuron and grow the whole complexity. But if you want to do this, bring up a kid! A brain without a body might be easier? Why would you expect that a brain detached from its complete input/output system still works? I'm pretty sure it does not!
  •  
    @Luzi: That was exactly my point :-)
LeopoldS

Common ecology quantifies human insurgency : Article : Nature - 0 views

  •  
    nice paper: like especially: To our knowledge, our model provides the first unified explanation of high-frequency, intra-conflict data across human insurgencies. Other explanations of human insurgency are possible, though any competing theory would also need to replicate the results of Figs 1, 2, 3. Our model's specific mechanisms challenge traditional ideas of insurgency based on rigid hierarchies and networks, whereas its striking similarity to multi-agent financial market models [24-26] hints at a possible link between collective human dynamics in violent and non-violent settings [1-19].
  •  
    There was also this paper ... Power Law Explains Insurgent Violence (http://sciencenow.sciencemag.org/cgi/content/full/2009/1216/1?rss=1)
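As a hedged illustration of the kind of analysis behind such power-law papers (synthetic data only, not the actual conflict datasets), the tail exponent of a power law can be estimated from observed event sizes with the standard maximum-likelihood formula alpha_hat = 1 + n / sum(ln(x_i / x_min)):

```python
import math
import random

# Draw synthetic power-law "event sizes" by inverse-transform sampling,
# then recover the exponent with the maximum-likelihood estimator
# alpha_hat = 1 + n / sum(ln(x_i / x_min)).
random.seed(2)
x_min, alpha_true, n = 1.0, 2.5, 50_000
samples = [x_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(n)]

alpha_hat = 1.0 + n / sum(math.log(x / x_min) for x in samples)
print(round(alpha_hat, 2))  # recovers a value close to alpha_true = 2.5
```

The exponent around 2.5 used here echoes the value often reported for conflict event sizes; real analyses also need a principled choice of x_min and a goodness-of-fit test.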
evo ata

Future Human Evolution - 1 views

  •  
    Scientific and speculative articles about the future of human evolution relating to artificial intelligence, genetic engineering, transhumanism, nanotechnology, space colonization, time travel, life extension and human enhancement.
LeopoldS

Sex differences in the structural connectome of the human brain - 0 views

  •  
    it seems that there are indications that we are differently wired .... Sex differences in human behavior show adaptive complementarity: Males have better motor and spatial abilities, whereas females have superior memory and social cognition skills. Studies also show sex differences in human brains but do not explain this complementarity. In this work, we modeled the structural connectome using diffusion tensor imaging in a sample of 949 youths (aged 8-22 y, 428 males and 521 females) and discovered unique sex differences in brain connectivity during the course of development. Connection-wise statistical analysis, as well as analysis of regional and global network measures, presented a comprehensive description of network characteristics. In all supratentorial regions, males had greater within-hemispheric connectivity, as well as enhanced modularity and transitivity, whereas between-hemispheric connectivity and cross-module participation predominated in females. However, this effect was reversed in the cerebellar connections. Analysis of these changes developmentally demonstrated differences in trajectory between males and females mainly in adolescence and in adulthood. Overall, the results suggest that male brains are structured to facilitate connectivity between perception and coordinated action, whereas female brains are designed to facilitate communication between analytical and intuitive processing modes.
  •  
    I like this abstract: sex, sex, sex, sex, SEX, SEX, SEX, SEX...!!! I wonder if the "sex differences" are related to gender-specific differences...
LeopoldS

Decreasing human body temperature in the United States since the industrial revolution ... - 1 views

shared by LeopoldS on 11 Jan 20 - No Cached
  •  
    Nice paper and linked to so many other factors.... curious "The question of whether mean body temperature is changing over time is not merely a matter of idle curiosity. Human body temperature is a crude surrogate for basal metabolic rate which, in turn, has been linked to both longevity (higher metabolic rate, shorter life span) and body size (lower metabolism, greater body mass). We speculated that the differences observed in temperature between the 19th century and today are real and that the change over time provides important physiologic clues to alterations in human health and longevity since the Industrial Revolution."
santecarloni

How to Entangle Humans (contd) - Technology Review - 1 views

  •  
    If physicists are ever to entangle humans they'll need to understand the role that noise plays in the experiments. Now they've carried out the first tests to find out
Luís F. Simões

Evolution of AI Interplanetary Trajectories Reaches Human-Competitive Levels - Slashdot - 4 views

  • "It's not the Turing test just yet, but in one more domain, AI is becoming increasingly competitive with humans. This time around, it's in interplanetary trajectory optimization. From the European Space Agency comes the news that researchers from its Advanced Concepts Team have recently won the Gold 'Humies' award for their use of Evolutionary Algorithms to design a spacecraft's trajectory for exploring the Galilean moons of Jupiter (Io, Europa, Ganymede and Callisto). The problem addressed in the awarded article (PDF) was put forward by NASA/JPL in the latest edition of the Global Trajectory Optimization Competition. The team from ESA was able to automatically evolve a solution that outperforms all the entries submitted to the competition by human experts from across the world. Interestingly, as noted in the presentation to the award's jury (PDF), the team conducted their work on top of open-source tools (PaGMO / PyGMO and PyKEP)."
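The winning work used the PaGMO/PyGMO tools named in the summary; as a hedged, self-contained sketch of the underlying idea only, here is a toy (1+λ) evolution strategy in Python minimizing a stand-in cost function (the function, dimensions and parameters are invented for illustration and have nothing to do with the actual competition problem):

```python
import random

def cost(x):
    # stand-in for an expensive trajectory cost (e.g. total delta-v);
    # the real problem evaluates a full multi-moon tour of Jupiter
    return sum(xi ** 2 for xi in x)

random.seed(0)
parent = [random.uniform(-5.0, 5.0) for _ in range(4)]
step = 0.5
for _ in range(200):
    # mutate the parent into 10 children; keep the best child only if
    # it improves on the parent (elitist selection)
    children = [[xi + random.gauss(0.0, step) for xi in parent]
                for _ in range(10)]
    best = min(children, key=cost)
    if cost(best) < cost(parent):
        parent = best
    step *= 0.99                      # slowly narrow the search

print(cost(parent))                   # far below the random starting cost
```

Real evolutionary trajectory optimization replaces the toy cost with astrodynamics evaluations and runs many cooperating populations, but the mutate-select loop is the same.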
  •  
    We made it to Slashdot's frontpage !!! :)
  •  
    Congratulations, gentlemen!
Marcus Maertens

New Method Confirms Humans and Neandertals Interbred - 0 views

  •  
    Looks like the genetic evidence for the interbreeding of Homo sapiens and Homo neanderthalensis is becoming significant.
  •  
    "Humans and Neanderthals interbred" - isn't that a bit weird since we call the result of the interbreeding "Humans" as well? This is a bit like saying "Ligers and Tigers interbred".
Tom Gheysens

Direct brain-to-brain communication demonstrated in human subjects -- ScienceDaily - 2 views

  •  
    In a first-of-its-kind study, an international team of neuroscientists and robotics engineers has demonstrated the viability of direct brain-to-brain communication in humans.
  •  
    Was just about to post it... :) It seems after transferring the EEG signals of one person, converting it to bits and stimulating some brain activity using magnetic stimulation (TMS), the receiving person actually sees 'flashes of light' in their peripheral vision. So it's using your vision sense to get the information across. Would it not be better to try to see if you can generate some kind of signal in the part of your brain that is connected to 'hearing'? Or would this be me thinking too naively?
  •  
    "transferring the EEG signals of one person, converting it to bits and stimulating some brain activity using magnetic stimulation (TMS)" How is this "direct"?
johannessimon81

Norwegian army driving tank using Oculus Rift - 1 views

  •  
    I guess it might also make sense to put a camera on an extension to look around corners without having to advance the vehicle to where it can be shot at? Could the Oculus be used to let humans control humanoid robots? I guess so. Could humans perform experiments using such robots? Probably. Could Oculus be used to control these robots on the ISS? I guess so. --> Finally we eliminated the last need for humans in space!!! :-D (Maybe we could replace humans on Earth with robots that control one another through Oculus Rift...)
  •  
    Even cooler would be to have like a swarm of drones around the tank to act as a sensor array and look around corners for you.
pacome delva

TeamParis-SynthEthics - 5 views

  •  
    This is an interesting report from a student in sociology who worked with a group of scientists on a synthetic biology project for the competition IGEM (http://2009.igem.org/Main_Page). This is what happens when you mix hard and soft sciences. For this project they won the special prize for "Best Human Practices Advance". You can read the part on self or exploded governance (p.34). When reading parts of this report, I thought that it could be good to have an intern (stagiaire) or a YGT in human sciences to see if we can raise interesting questions about ethics for the space sector. There are many questions, I'm sure, about governance, the legitimacy of spending millions to go to space, etc...
Luís F. Simões

The Space Age, as recorded on human written history - 4 views

  •  
    Google Books measurements of word frequencies across 15 million books (12% of all the books ever published). More about it in:
    - Google Opens Books to New Cultural Studies - John Bohannon, Science 2010-12-17
    - Slashdot: Google Books Makes a Word Cloud of Human History
    - http://ngrams.googlelabs.com/info
santecarloni

Documentary Tells the Tale of Nim Chimpsky, the Chimp Raised as a Human | 80beats | Dis... - 0 views

  •  
    The Tale of Nim Chimpsky, the Chimp Raised as a Human
ESA ACT

YouTube - HRI2008 - Phobot - 0 views

shared by ESA ACT on 24 Apr 09 - Cached
  •  
    A robot that is afraid of things. Brilliant: Machines learn human weaknesses instead of human strengths...
santecarloni

[1107.0392] Emergence of good conduct, scaling and Zipf laws in human behavioral sequen... - 3 views

  •  
    ... proof that humanity is good?
  •  
    "The dataset contains practically all actions of all players of the MMOG Pardus since the game went online in 2004 [18]. Pardus is an open-ended online game with a worldwide player base of currently more than 370,000 people. Players live in a virtual, futuristic universe in which they interact with others in a multitude of ways to achieve their self-posed goals [22]. Most players engage in various economic activities typically with the (self-posed) goal to accumulate wealth and status. Social and economical decisions of players are often strongly influenced and driven by social factors such as friendship, cooperation, and conflict." quite impressive ...
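For readers unfamiliar with the Zipf laws the paper reports in these behavioral sequences: here is a hedged, synthetic illustration (invented data, not the Pardus dataset) of the rank-frequency signature, where the r-th most common action appears with frequency proportional to 1/r, so rank times count stays roughly constant:

```python
import random
from collections import Counter

# Sample 100,000 "actions" from a Zipfian distribution over 50 action
# types; under Zipf's law, rank * count is roughly constant across ranks.
random.seed(1)
n_types = 50
weights = [1.0 / r for r in range(1, n_types + 1)]
actions = random.choices(range(n_types), weights=weights, k=100_000)

ranked_counts = [c for _, c in Counter(actions).most_common()]
products = [(rank + 1) * count
            for rank, count in enumerate(ranked_counts[:10])]
print(products)  # the ten products come out roughly the same size
```

In empirical data the test runs the other way: one ranks the observed action counts and checks whether these products are flat, which is the signature the paper finds.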
jcunha

Computer model matches humans at predicting how objects move - 0 views

  •  
    We humans take for granted our remarkable ability to predict things that happen around us. Here, a deep learning model trained on real-world videos and a 3D graphics engine matched human performance at inferring the physical properties of objects.
Alexander Wittig

The Whorfian Time Warp: Representing Duration Through the Language Hourglass. - 0 views

  •  
    How do humans construct their mental representations of the passage of time? The universalist account claims that abstract concepts like time are universal across humans. In contrast, the linguistic relativity hypothesis holds that speakers of different languages represent duration differently. The precise impact of language on duration representation is, however, unknown. Here, we show that language can have a powerful role in transforming humans' psychophysical experience of time. Contrary to the universalist account, we found language-specific interference in a duration reproduction task, where stimulus duration conflicted with its physical growth. When reproducing duration, Swedish speakers were misled by stimulus length, and Spanish speakers were misled by stimulus size/quantity. These patterns conform to preferred expressions of duration magnitude in these languages (Swedish: long/short time; Spanish: much/small time). Critically, Spanish-Swedish bilinguals performing the task in both languages showed different interference depending on language context. Such shifting behavior within the same individual reveals hitherto undocumented levels of flexibility in time representation. Finally, contrary to the linguistic relativity hypothesis, language interference was confined to difficult discriminations (i.e., when stimuli varied only subtly in duration and growth), and was eliminated when linguistic cues were removed from the task. These results reveal the malleable nature of human time representation as part of a highly adaptive information processing system.
jaihobah

Breakthrough method means CRISPR just got a lot more relevant to human health - 0 views

  •  
    "scientists at Harvard University say they've modified the CRISPR method so it can be used to effectively reverse mutations involving changes in one letter of the genetic code. That's important because two-thirds of genetic illness in humans involve mutations where there's a change in a single letter."
  •  
    "Efficient introduction of specific homozygous and heterozygous mutations using CRISPR/Cas9" http://www.nature.com/nature/journal/vaop/ncurrent/full/nature17664.html?WT.ec_id=NATURE-20160428&spMailingID=51249830&spUserID=MTEzODM0NjYzMzgS1&spJobID=903461217&spReportId=OTAzNDYxMjE3S0 As posted here previously, the number and importance of CRISPR applications are growing steadily, but there is still plenty of work to do to make it a reliable tool. Maybe next work for the Molecular Engineering RF?
jcunha

CRISPR/Cas9 and Targeted Genome Editing: A New Era in Molecular Biology | NEB - 1 views

  •  
    An increasingly popular genome re-writing tool. Might prevent future generations from being born with some types of disorders or disabilities! Also, for fun, it can be looked at as one step closer to having a real Wolverine...