Learning by Doing: Automated Development through Real-World Interaction
The mechanical sophistication of a full-fledged humanoid body poses a formidable challenge to even the most robust learning technique. The more complex a humanoid body, the harder it is to place the constraints necessary for productive learning. If too few constraints are employed, learning becomes intractable; too many, on the other hand, and we may curtail the ability of learning to scale. Ultimately, the conventional learning techniques described above are limited by the fact that they are tools wielded by human designers rather than self-directed capabilities of the robot. This need not be the case. Although robots will always require an initial program, that fact does not preclude them from building indefinitely, willfully, and creatively upon it. After all, humans also begin with a program encoded in our DNA. The key is that in humans the majority of this genetic code is devoted not to mere behavior, but to laying the foundation necessary for future development.
Researchers at MIT believe that the key to creating human-like behavior is to create a robot that can learn from natural interactions with a human (much as a human infant learns).
A growing number of humanoid researchers believe it is this ability to appropriately 'seed' development that will make learning tractable for humanoids. The goal is no longer for robots to merely learn (acquire knowledge and skill in a particular area), but to also develop (enrich cognitive ability to learn and extend physical ability to apply learning). Truly autonomous humanoids must ultimately play some role as arbiters of their own development, able to channel and structure learning across layers of control. This will require generalized learning starting from the ground up and continuing throughout the life of the humanoid, affecting what the robot is, rather than merely what the robot does.
Before we can transform a cognitive architecture into a developing mind, there are a host of difficult questions to be answered. How do we give humanoids the ability to impress their own meaning onto the world? How can humanoids direct their own development? How do we motivate this development? How much a priori skill and knowledge do we build in? Using what level of representation? What, if any, bounds should be imposed?
While there may never be definitive answers to these questions, a learning approach is emerging that provides a unique, functional balance of human input, self-development, and real-world interaction. This technique, which we will call imitative learning, allows the robot to learn continuously through multi-modal interactions with a human trainer and the environment. The robot can pose questions, ask for actions to be demonstrated repeatedly, and use emotional states to communicate frustration, exhaustion, or boredom to the human trainer. Advocates of imitative learning see it as the cornerstone of a developmental foundation that can enable self-directed future learning.
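The interaction pattern described here (requesting repeated demonstrations and signaling frustration or boredom back to the trainer) can be sketched as a minimal loop. The class, thresholds, and signal names below are illustrative assumptions for the sake of the sketch, not an established architecture or real robot API:

```python
# Hypothetical sketch of an imitative-learning interaction loop: the robot
# imitates demonstrations, tracks its own confidence, and uses simple
# "emotional" signals (frustration, boredom) to steer the human trainer.
# All names and thresholds are illustrative assumptions.

class ImitativeLearner:
    def __init__(self):
        self.confidence = 0.0   # how well the robot believes it has learned the skill
        self.frustration = 0.0  # rises when imitation attempts keep failing

    def observe_demonstration(self, attempt_succeeded):
        """Update internal state after imitating one demonstration."""
        if attempt_succeeded:
            self.confidence = min(1.0, self.confidence + 0.2)
            self.frustration = max(0.0, self.frustration - 0.1)
        else:
            self.frustration = min(1.0, self.frustration + 0.3)

    def communicate(self):
        """Choose a social signal to send to the trainer."""
        if self.confidence >= 0.8:
            return "bored"        # skill acquired; ready to move on
        if self.frustration >= 0.6:
            return "frustrated"   # ask for a slower, repeated demonstration
        return "ask-again"        # request another demonstration


learner = ImitativeLearner()
signals = []
# Simulated outcomes of six imitation attempts: two failures, then successes.
for attempt_succeeded in [False, False, True, True, True, True]:
    learner.observe_demonstration(attempt_succeeded)
    signals.append(learner.communicate())
print(signals)
# → ['ask-again', 'frustrated', 'ask-again', 'ask-again', 'ask-again', 'bored']
```

The design choice worth noting is that the emotional signals are not decoration: they close the loop with the trainer, turning a one-way demonstration into a dialogue in which the robot itself regulates the pace and repetition of teaching.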