an experiment at the University of Pennsylvania. It might be a bit far-fetched, but I thought it might be useful when exploring new planets.
Combined with AI, the robot would be able to assess the terrain and deploy another robot whose shape is chosen to best suit its environment. I was thinking of this in the context of exploring places on other planets that are inaccessible to regular rovers (e.g. caves on Mars).
quite well-reputed; they were founded as the American Association for AI (and only recently changed the name to be more international). They partly organize IJCAI and the AAAI conferences (mostly in the US), which are quite good. They also run symposia around specific topics, attended mostly by professors and researchers with high impact.
Thanks Juxi. I was contacted by one of the organisers for the Berlin edition of it in two years. Looking at your answer, it seems having the ACT associated with it is not a bad idea. I will check with the team. Would you yourself be interested?
Any idea whether what the Canadians claim to sell is closer to a quantum computer than what they sold in 2011? (I remember Luzi's comment back then that that machine had nothing to do with a quantum computer.)
Canada being a member state of ESA ... should we start getting interested?
The European Parliament's Legal Affairs Committee voted in favour of a resolution calling for new laws addressing robotics and artificial intelligence (AI), to sit alongside a new voluntary ethical conduct code that would apply to developers and designers.
AutoDraw is a new kind of drawing tool. It pairs machine learning with drawings from talented artists to help everyone create anything visual, fast. There's nothing to download. Nothing to pay for. And it works anywhere: smartphone, tablet, laptop, desktop, etc.
AutoDraw's suggestion tool uses the same technology as QuickDraw to guess what you're trying to draw. Right now, it can guess hundreds of drawings, and we look forward to adding more over time. If you are interested in creating drawings for others to use with AutoDraw, contact us here.
We hope AutoDraw will help make drawing and creating a little more accessible and fun for everyone.
The problem with embodiment is that it's done wrong. Embodiment needs to be treated like big data. More sensors, more data, more processing. Just putting a computer in a robot with a camera and microphone is not embodiment.
I like how he attacks Moore's Law. It always looks a bit naive to me when people start to (ab)use it to make their point. No strong opinion about embodiment.
Embodiment has some obvious advantages. For example, in the vision domain many hard problems become easy when you have a body with which you can take actions (like looking at an object you don't immediately recognize from a different angle) - a point already made by researchers such as Aloimonos and Ballard in the late '80s / early '90s.

However, embodiment goes further than gathering information and "mental" recognition. In this respect, the evolutionary robotics work by, for example, Beer is interesting: an agent discriminates between diamonds and circles by avoiding one and catching the other, without there being a clear "moment" at which the recognition takes place. "Recognition" is a behavioral property there, for which embodiment is obviously important. With embodiment, the effort of recognizing an object behaviorally can be divided between the brain and the body, resulting in less computation for the brain. The article "Behavioural Categorisation: Behaviour makes up for bad vision" is also interesting in this respect.

In the field of embodied cognitive science, some say that recognition is constituted by the activation of sensorimotor correlations. I wonder to what extent this is true, and whether it holds from extremely simple creatures up to more advanced ones, but it is an interesting idea nonetheless.
This being said, if "embodiment" implies having a physical body, then I would argue that it is not a necessary requirement for intelligence. "Situatedness", being able to take (virtual or real) "actions" that influence the "inputs", may be.
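To make the active-vision point concrete, here is a toy sketch (purely my own illustration with made-up names, not Beer's actual setup): two shapes cast the same silhouette from most viewpoints, so a passive glance is often ambiguous, while an agent that can act simply changes its viewpoint until the percept becomes informative.

```python
import random

# Toy active perception: a cylinder and a box look identical from most
# angles, so passive recognition often fails, while acting resolves it.
class Object3D:
    def __init__(self, shape):
        self.shape = shape   # "cylinder" or "box"
        self.angle = 0       # current viewing angle in degrees

def view(obj):
    """Return the percept from the current angle; only some angles are informative."""
    if 30 <= obj.angle % 90 <= 60:
        return obj.shape     # informative viewpoint
    return "ambiguous"       # both shapes project to the same silhouette

def recognize_passively(obj):
    return view(obj)         # stuck with whatever the first glance gives

def recognize_actively(obj, step=15):
    percept = view(obj)
    while percept == "ambiguous":
        obj.angle += step    # embodiment: act on the world to change the input
        percept = view(obj)
    return percept

obj = Object3D(random.choice(["cylinder", "box"]))
print(recognize_passively(obj))  # "ambiguous" from the starting viewpoint
print(recognize_actively(obj))   # always resolves to "cylinder" or "box"
```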
@Paul
While I completely agree about the "embodiment done wrong" (or at least "not exactly correct") part, what you say goes exactly against one of the major claims connected with the notion of embodiment (google "representational bottleneck"). The fact is, your brain does *not* have the resources to deal with big data. The idea, therefore, is that it is the body that helps deal with what appears to a computer scientist as "big data". Understanding how this happens is key.
Whether it is a problem of scale or of actually understanding what happens should be shown quite conclusively by the outcomes of the Blue Brain project.
Wouldn't one expect that, to produce consciousness (even in a lower form), an approach resembling that of nature would be essential?
All animals grow from a very simple initial state (just a few cells) and have only a very limited number of sensors AND processing units. This would allow for a fairly simple way to create simple neural networks and to start up stable neural excitation patterns. Over time, as the complexity of the body (sensors, processors, actuators) increases, the system should be able to adapt in a continuous manner and increase its degree of self-awareness and consciousness.
On the other hand, building a simulated brain that resembles (parts of) the human one in its final state seems to me like taking a person who has just died and trying to restart the brain by means of electric shocks.
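To illustrate what I mean by adapting in a continuous manner, here is a minimal sketch (my own toy construction, nothing to do with how Blue Brain actually works): a tiny network gains a sensor and a hidden unit whose new weights start at zero, so existing behaviour is preserved while capacity grows.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 2))   # 2 sensors -> 3 hidden units
W2 = rng.normal(size=(1, 3))   # 3 hidden units -> 1 actuator

def act(x, W1, W2):
    return W2 @ np.tanh(W1 @ x)

x = rng.normal(size=2)
before = act(x, W1, W2)

# "Development": add one hidden unit and one sensor. The new connections
# start at zero (and the new sensor reads zero), so behaviour is unchanged
# and adaptation can continue from where it left off.
W1 = np.vstack([W1, rng.normal(size=(1, 2))])  # new hidden unit's input weights
W1 = np.hstack([W1, np.zeros((4, 1))])         # new sensor, initially silent
W2 = np.hstack([W2, np.zeros((1, 1))])         # new hidden unit, initially mute
x = np.append(x, 0.0)                          # new sensor channel

after = act(x, W1, W2)
assert np.allclose(before, after)              # same behaviour, more capacity
```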
Actually, on a neuronal level all information gets processed. Not all of it makes it into "conscious" processing or attention; whatever does is a highly reduced representation of the incoming data. The rest doesn't get lost, however. Basic, lightly processed data forms the basis of proprioception and reflexes. Every step you take is a macro command your brain issues to the intricate sensory-motor system that puts your legs in motion, actuating every muscle and correcting every deviation of your step from its desired trajectory using the complicated system of nerve endings and motor commands - reflexes which were built over the years, as those massive amounts of data slowly got integrated into the nervous system and the more primitive parts of the brain.
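As a loose robotics analogy for that macro-command/reflex split (a toy sketch, not a model of real neural circuitry): the "brain" issues one coarse setpoint, while a fast low-level loop absorbs the noisy stream and corrects every deviation.

```python
import random

# Toy macro command + reflex loop: one coarse setpoint from above, dense
# fast corrections below that never reach "conscious" processing.
def reflex_step(position, setpoint, gain=0.5):
    error = setpoint - position                 # fast, unconscious correction
    disturbance = random.uniform(-0.05, 0.05)   # raw sensory/motor noise
    return position + gain * error + disturbance

position = 0.0
setpoint = 1.0          # the single "macro command": take one step forward
for _ in range(20):
    position = reflex_step(position, setpoint)
print(round(position, 2))   # ends up near the setpoint despite the noise
```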
But without all those sensors scattered throughout the body, all the little inputs in massive amounts that slowly get filtered through, you would not be able to experience your body and experience the world. Every concept that you conjure up from your mind is a sort of loose association of your sensorimotor input. How can a robot understand the concept of a strawberry if all it can perceive of it is its shape and color, and maybe the sound it makes as it gets squished? How can you understand the "abstract" notion of strawberry without the incredibly sensitive tactile feel, without the act of ripping off the stem, without the motor action of bringing it to your mouth, without its texture and taste? When we as humans summon the strawberry thought, all of these concepts and ideas converge (distributed throughout the neurons in our minds) to form an abstract concept built out of many, many correlations. A robot with no touch, no taste, no delicate articulate motions, no "serious" way to interact with and perceive its environment, no massive flow of information from which to choose and reduce, will never attain human-level intelligence.
That's point 1. Point 2 is that mere pattern recogn
All information *that gets processed* gets processed - but now we have arrived at a tautology. The whole problem is that ultimately nobody knows what gets processed (not to mention how). In fact, the absolute statement that "all information" gets processed is very easy to dismiss, because the characteristics of our sensors are such that a lot of information is filtered out already at the input level (e.g. in the eyes). I'm not saying it's not a valid and even interesting assumption, but it's still just an assumption, and the next step is to explore scientifically where it leads you. Until you show its superiority experimentally, it's as good as any other alternative assumption you can make.
I only wanted to point out that "more processing" is not exactly compatible with some of the fundamental assumptions of embodiment. I recommend Wilson (2002) as a crash course.
These deal with different things in human intelligence. One is the depth of the intelligence (how much of the bigger picture you can see, how abstract the concepts and ideas you can form are), another is the breadth of the intelligence (how well you can actually generalize, how encompassing those concepts are, and at what level of detail you perceive all the information you have), and another is the relevance of the information (this is where embodiment comes in: what you do serves a purpose, is tied into the environment and is ultimately linked to survival). As far as I see it, these form the pillars of human intelligence, and of the intelligence of biological beings. They are quite contradictory to each other, mainly due to physical constraints (such as energy usage and training time).
"More processing" is not exactly compatible with some aspects of embodiment, but it is important for human level intelligence. Embodiment is necessary for establishing an environmental context of actions, a constraint space if you will, failure of human minds (i.e. schizophrenia) is ultimately a failure of perceived embodiment.
What we do know is that we perform a lot of compression and a lot of integration on a lot of data in an environmental coupling. Imo, take any of these parts out and you cannot attain human+ intelligence. Vary the quantities and you'll obtain different manifestations of intelligence, from cockroach to cat to Google to a random Quake bot. Increase them all beyond human levels and you're on your way towards the singularity.
The Northeast Blackout of 2003 that forced the shut-down of over 100 power plants and affected 55M people - the largest blackout in US history - was precipitated by a single overloaded transmission line, in Ohio, sagging and touching overgrown vegetation.
Somehow, the mere fact does not surprise me. I always assumed that genetic information is encoded in multiple overlapping layers. I do not see how this transfers exactly to genetic algorithms, but a good encoding is important for them too, and I guess you could produce interesting effects by "overencoding" parameters, apart from it being more space-efficient.
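For example, something like the following toy genome (a hypothetical illustration, not any standard GA library): 12 bits carry two 8-bit parameters by letting their encodings overlap, so the representation is more compact and a single mutation in the shared bits moves both parameters at once.

```python
import random

# Toy "overencoded" genome: parameters A and B are decoded from overlapping
# slices of one bit string, sharing bits 4..7.
GENOME_LEN = 12

def decode(genome):
    a = int("".join(map(str, genome[0:8])), 2) / 255.0   # parameter A: bits 0..7
    b = int("".join(map(str, genome[4:12])), 2) / 255.0  # parameter B: bits 4..11
    return a, b

def mutate(genome, rate=0.1):
    return [bit ^ (random.random() < rate) for bit in genome]

genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
print(decode(genome))
print(decode(mutate(genome)))  # a flip in the shared bits changes both parameters
```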
I was actually thinking exactly about this question during my bike ride this morning. I am surprised that some codons would need to have a double meaning, though, because there is already a surplus of codons relative to the 20-22 amino acids they translate into (depending on the organism). So there should be about 44 codons' worth of redundancy left to prevent translation errors and, in addition, regulate gene expression. If - as the article suggests - a single codon can take a dual role, does it do so in different situations (needing some other regulator to discern those)? Or does it just perform two functions that always need to happen simultaneously?
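Quick sanity check on the numbers (back-of-the-envelope, standard genetic code assumed):

```python
n_codons = 4 ** 3                 # 64 triplets over {A, C, G, U}
amino_acids = 20                  # standard set (22 with Sec/Pyl)
surplus = n_codons - amino_acids  # ~44, matching the figure above
# Note: 3 of the 64 are stop codons, and the remaining redundancy is spread
# over synonymous codons rather than sitting unused.
print(n_codons, surplus)
```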
I tried to learn more from the underlying paper: https://www.sciencemag.org/content/342/6164/1367.full.pdf
All I got from that was a headache. :-\
Probably both. Likely a consequence of energy preservation during translation: if you can do the same thing with fewer genes, you save on the effort required to reproduce. I also suspect it has something to do with modularity. It makes sense that the gene regulating "foot" cells also triggers the genes that generate "toe" cells, for example. No point in having an extra if statement.
Nice video featuring the technology. Plus it comes with a good soundtrack!
Google's Project Wing uses a lifting-wing concept (more fuel-efficient than normal airplane layouts and MUCH more efficient than quadrocopters), but it equips the plane with engines strong enough to hover in a nose-up position, allowing vertical landing and takeoff. For the delivery of packages the drone does not even need to land - it can lower them on a wire, much like the skycrane concept used to deliver the Curiosity rover on Mars. Not sure if the skycrane is really necessary but it is certainly cool.
Anyways, the video is great for its soundtrack alone! ;-P
> Not sure if the skycrane is really necessary but it is certainly cool.
I think apart from coolness using a skycrane helps keep the rotating knives away from the recipient...
Honest question: are we ever going to see this in practice? I mean, besides some niche application somewhere, isn't it fundamentally flawed? Or do I need to keep my window open on the 3rd floor without a balcony when I order something from DX? It's pretty cool, yes, but is it practical?
Package delivery is indeed more complicated than it may seem at first sight, although solutions are possible, for instance by restricting delivery to distribution centers. What we really need, of course, is some really efficient and robust AI to navigate without any problems in urban areas : )
The hybrid is interesting since it combines the advantages of vertical takeoff and landing (and hover) with a wing for more efficient forward flight. Challenges lie in controlling the vehicle at any attitude, and in all that this entails for higher levels of control. Our lab first used this concept a few years ago for the DARPA UAVforge challenge, and we had two hybrids in our entry for the IMAV 2013 last year (for some shaky images: https://www.youtube.com/watch?v=Z7XgRK7pMoU ).
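To give a flavour of the "any attitude" difficulty (a generic sketch, not our actual controller): Euler angles hit a singularity at 90 degrees of pitch, which is exactly where a hybrid spends half its life, so attitude error is typically computed with quaternions, which stay well defined there.

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def attitude_error(q_meas, q_ref):
    """Rotation taking the measured attitude to the reference; well defined
    at 90 degrees of pitch, where Euler angles break down."""
    return quat_mul(quat_conj(q_meas), q_ref)

# Hover reference: nose straight up (90-degree pitch about the y-axis).
s = np.sin(np.pi / 4)
q_hover = np.array([np.cos(np.pi / 4), 0.0, s, 0.0])
q_now = np.array([1.0, 0.0, 0.0, 0.0])   # level forward flight
err = attitude_error(q_now, q_hover)
print(err)  # the vector part of err feeds the body-rate controller
```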
Fair enough, but even if you assume advanced/robust/efficient AI, why would you use a drone? Do we envision hundreds of drones above our heads in the street instead of UPS vans, or postmen, considering that delivering letters might be more easily achievable? I am not so sure personal delivery will take this route.
On the other hand, if the system worked smoothly, I can imagine being sent a mail asking whether I'm home (or they might know that already from my personal GPS tracker), and then a notification that they are launching my DVD and it will come crashing into my door in 5 min.
I'm more curious how they're planning to keep people from stealing the drones. I could do with a drone army myself, and cheap Amazon or Google drones flying about sound like a decent source.
From a quick glance, a serious contender for the most unreadable article ever. 60% of the pages of this document are references...
Still, in the use of abbreviations it doesn't even come close to aerospace...
You know what? If you exclude the title, it is not so much bullshit!
"Send a robot out into space - allow people to experience space via a virtual environment"
I have always loved this idea!
I looked at the wiki input on this, but I don't know what I should do. Aren't you writing the article? I definitely think there are some things on the list to pick up and develop further.