On the research side, said Dr Goertzel, virtual worlds also solved the problem of giving an AI a relatively unsophisticated environment in which it could live and learn.
A virtual child controlled by artificially intelligent software has passed a cognitive test regarded as a major milestone in human development. It could lead to smarter computer games able to predict human players' state of mind.
Children typically master the "false belief test" at age 4 or 5. It tests their ability to realise that the beliefs of others can differ from their own, and from reality.
The creators of the new character – which they called Eddie – say passing the test shows it can reason about the beliefs of others, using a rudimentary "theory of mind".
John Laird, a researcher in computer games and Artificial Intelligence (AI) at the University of Michigan in Ann Arbor, is not overly impressed. "It's not that challenging to get an AI system to do theory of mind," he says.
A more impressive demonstration, says Laird, would be a character that is initially unable to pass the test but learns how to do so – just as humans do.
Eddie can pass the test thanks to a simple logical statement added to the reasoning engine: if someone sees something, they know it; if they don't see it, they don't.
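To make the rule concrete, here is a minimal sketch of that "seeing is knowing" principle applied to a Sally-Anne-style false-belief scenario. The function name, the event representation, and the character names are illustrative assumptions, not Eddie's actual implementation:

```python
def believed_location(observer, events):
    """Return where `observer` believes the object is.

    The rule described in the article: if an agent sees an event,
    they know its outcome; if they don't see it, their belief is
    left unchanged.
    """
    belief = None
    for location, witnesses in events:
        if observer in witnesses:  # seen, so the agent updates its belief
            belief = location
        # unseen events leave the agent's belief as it was
    return belief

# Sally puts the ball in the basket (both characters see it);
# Anne then moves it to the box while Sally is out of the room.
events = [
    ("basket", {"sally", "anne"}),
    ("box", {"anne"}),
]

print(believed_location("sally", events))  # -> basket (a false belief)
print(believed_location("anne", events))   # -> box (a true belief)
```

Under this rule the character correctly predicts that Sally will look in the basket even though the ball is in the box, which is exactly what the false-belief test asks for.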
I'd call that cheating. With that algorithm applied, Eddie can never go wrong, and it has not had any kind of experience leading to the insight that enables it to pass the test. The cognitive structures that allow human children to pass the test are far more complex and "rich" than that simple algorithmic rule: they imply a whole world of (social) perspective-taking.