Past Problems with “Thinking Robots”
A humanoid project at Michigan State University attempts to integrate research from many disparate fields into its robot, SAIL, including cognitive science, biology, developmental psychology, robotics, and computer vision.
In their zeal to make robots "think like humans," early humanoid researchers focused on high-level cognition and provided no mechanism for building control from the bottom up. Although intended to model humans, most of these systems did not, like humans, acquire their knowledge through interaction with the real world. When situated in the real world, such robots possessed little mastery over it. Even in the fortunate event that sensors could accurately connect internal "archetypes" to real-world objects, the robots could extend the knowledge thrust upon them only in rudimentary, systematic ways. They carried out preconceived actions with no ability to react to unforeseen features of the environment or task.
Realizing the limitations of hard-coded, externally derived solutions, many within the AI community looked to fields such as neuroscience, cognitive psychology, and biology for new insight. Before long, the multidisciplinary field of cognitive science drove home the notion that the planning and high-level cognition humans are consciously aware of represent only the tip of a vast neurological iceberg.[4] The mainstay of human action, it was argued, derives from motor skills and implicit behavior encodings that lie beneath the level of conscious awareness. Building on this understanding, Agre and Chapman argued that robots should likewise spend less time deliberating and more time responding to a world in constant flux.[5] A new, behavior-based view of intelligence emerged, shifting the emphasis from intelligent processing to robust real-world action.
Neurobiology provided compelling evidence for the behavior-based approach with studies on the behavioral architecture of simpler animals. In one experiment, scientists severed the connection between a frog's spine and brain, removing any possibility of centralized, high-level control. They then stimulated particular points along the spinal cord and found that much of the frog's behavior was encoded directly in the spine.[6] Stimulating one location, for instance, prompted the frog to wipe its head, whereas another location encoded jumping behavior. It was this implicit, reactive control layer that classical AI methods had ignored.
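The reflex-like control described above can be sketched in a few lines of code. The following is an illustrative sketch only, not code from any published system: each "behavior" maps a sensed stimulus directly to an action, with no central planner or world model, and behaviors are simply checked in a fixed priority order. All function and key names here are invented for the example.

```python
# A minimal behavior-based (reactive) controller, in the spirit of the
# frog's spinal reflexes: stimuli map directly to actions, no deliberation.

def wipe_head(stimulus):
    # Reflex: an irritant on the head triggers a wiping motion.
    return "wipe-head" if stimulus.get("irritant_on_head") else None

def jump(stimulus):
    # Reflex: a looming object triggers an escape jump.
    return "jump" if stimulus.get("looming_object") else None

def sit_still(stimulus):
    # Default behavior, always applicable when nothing else fires.
    return "sit-still"

# Behaviors are consulted in priority order; the first that fires wins.
BEHAVIORS = [wipe_head, jump, sit_still]

def react(stimulus):
    """Map the current stimulus directly to an action, reflex-style."""
    for behavior in BEHAVIORS:
        action = behavior(stimulus)
        if action is not None:
            return action
```

For example, `react({"irritant_on_head": True})` yields `"wipe-head"`, while an empty stimulus falls through to the default `"sit-still"`. The point of the sketch is structural: competence comes from a layered set of simple stimulus-response rules rather than from a central reasoning engine.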