Building Intelligence from the Bottom-Up
Today, the question for humanoid robotics is how best to impart these primitive behaviors to robots. Many researchers find it ineffective to hard-code such low-level behavior directly in imperative languages like C or C++, and instead turn to more biologically motivated techniques such as artificial neural networks (ANNs). ANNs allow a 'supervised' learning approach in which a designer trains a system's response to stimuli by adjusting the weights between nodes of a network. The rise of ANNs brought much optimism: researchers believed they could use ANNs to simulate the distributed, parallel nature of computation in the brain, allowing skills and knowledge to be conditioned as implicit generalizations of repeated experience.
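This weight-adjustment idea can be illustrated with a minimal sketch: a single-layer perceptron trained on a toy stimulus-response mapping. The task and all names here are illustrative, not drawn from any particular robot controller.

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Supervised learning: nudge weights toward the desired responses."""
    n = len(samples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for stimulus, target in samples:
            # Current response: fire (1) if the weighted sum exceeds threshold.
            activation = sum(w * x for w, x in zip(weights, stimulus)) + bias
            response = 1 if activation > 0 else 0
            error = target - response
            # Adjust each weight in proportion to its input and the error.
            weights = [w + lr * error * x for w, x in zip(weights, stimulus)]
            bias += lr * error
    return weights, bias

# Toy task: respond (1) only when both sensor inputs are active (logical AND).
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(samples)
```

The designer never writes the rule itself; the mapping is conditioned implicitly into the weights by repeated exposure to the examples.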
As it turns out, ANNs fail to capture the recursive power of the human brain. Unlike an ANN, whose network structure is usually fixed, the brain's highly integrated, well-ordered structure emerges through competition between separately evolving collectives of neurons. Critics argue that ANNs' lack of such an architecture prohibits meta-level learning -- the ability not only to generalize, but also to extend acquired knowledge beyond the frontiers of experience. Although ANNs do not accurately model the cognitive capacities of the human cortex, they do offer a unique and effective way to encode motor skills and low-level behavior. It may be that, like the cerebellum and other, older structures of the brain, ANNs can provide a foundation on which high-level learning can be built. In any case, they have provided powerful insight into both machine and biological learning.
Other learning techniques, such as reinforcement learning and genetic algorithms, have also played a role in modeling various levels of learning. Reinforcement learning offers an 'unsupervised' learning-with-a-critic approach in which mappings from percepts to actions are learned inductively through trial and error. Evolutionary methods begin with an initial pool of program elements and apply genetic operators such as recombination and mutation to produce successive generations of increasingly 'fit' controllers. Using these and other approaches, robots can learn by adjusting parameters, exploiting patterns, evolving rule sets, generating entire behaviors, devising new strategies, predicting environmental changes, recognizing the strategies of opponents, and exchanging knowledge with other robots. Such robots have the potential to acquire new knowledge at a variety of levels and to adapt existing knowledge to new purposes. Robots now learn to solve problems in ways that humans can scarcely understand; indeed, one side effect of these learning methods is systems that are anything but 'explainable.' Emergent behavior is no longer suppressed by careful design but encouraged by similarly careful design.
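The trial-and-error, learning-with-a-critic idea can be sketched with tabular Q-learning on a toy corridor task. The environment, reward scheme, and parameter values here are invented for illustration: the robot starts at one end of a four-state corridor, and the critic rewards it only for reaching the far end.

```python
import random

def q_learn(n_states=4, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    """Learn a percept-to-action mapping by trial and error, guided by a critic."""
    random.seed(0)
    # Q[state][action]: estimated value of moving left (0) or right (1).
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Explore occasionally; otherwise exploit the current estimate.
            if random.random() < eps:
                a = random.randrange(2)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0  # the critic's signal
            # Nudge the estimate toward reward plus discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = q_learn()
# The learned policy: the best action in each non-terminal state.
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(3)]
```

No action is ever labeled correct by a teacher; the mapping emerges inductively from repeated episodes and the delayed reward alone.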
Those working with humanoid robots often code learning mechanisms directly into their design environments and use them to hone existing behaviors, to develop new behaviors, and even to string behaviors together. For instance, a designer can use a neural network to implicitly encode low-level motor control for an arm-reaching behavior and then use reinforcement learning to train the humanoid when to reach and when to grasp. If the humanoid still struggles, the designer might, for instance, optimize behavior using a genetic algorithm to tweak the parameters controlling rotational torque.
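That last step can be sketched as a minimal genetic algorithm. The 'ideal' torque law below, and the gain/offset parameterization, are hypothetical stand-ins for whatever objective a real controller would optimize; the point is the loop of selection, recombination, and mutation.

```python
import random

def evolve(fitness, pop_size=20, generations=40, mut_sigma=0.1):
    """Genetic-algorithm sketch: evolve real-valued parameter pairs."""
    random.seed(0)
    pop = [[random.uniform(-2, 2) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)            # lower error = fitter
        survivors = pop[: pop_size // 2]  # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            # Recombination (average the parents' genes), then mutation.
            child = [(x + y) / 2 + random.gauss(0, mut_sigma)
                     for x, y in zip(a, b)]
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

# Hypothetical objective: match an illustrative torque law tau = 1.5*angle + 0.3.
def tracking_error(genes):
    gain, offset = genes
    angles = [i / 10 for i in range(10)]
    return sum((gain * a + offset - (1.5 * a + 0.3)) ** 2 for a in angles)

best = evolve(tracking_error)
```

The designer specifies only a fitness measure; the parameter values themselves are discovered by the evolutionary search rather than tuned by hand.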
Although these capabilities have proved invaluable, the devastating complexity of humanoids has required specialization. The goal of human-like versatility has often bowed to the goal of engineering specific human-like behaviors. The result is humanoids that can exhibit impressive functionality within a restricted domain or task. The next step is for an increasing number of capabilities to reside on general-purpose machines, capable of many tasks because they are engineered for none in particular.