Understanding the Context of Classical AI
Classical AI spent decades trying to model human-like intelligence, using knowledge-based systems that processed representations at a high, symbolic level. Symbolic representation was considered of paramount importance because it allowed agents to operate on sophisticated human concepts and report on their actions at a linguistic level. As Donald Michie stated, “In AI-type learning, explainability is all.” (Michie 1988) Since the goal of early AI was to produce human-like intelligence, researchers used human-like approaches. Marvin Minsky, in many ways a father of the field of AI, believed an intelligent machine should, like a human, first build a model of its environment and then explore solutions abstractly before enacting strategies in the real world. (McCarthy et al. 1955) This emphasis on symbolic representation and planning strongly influenced robotics, spurring control strategies whose functionality was coded using languages and programming architectures that made conceptual sense to a human designer. Although many of the strategies developed were both elaborate and elegant, the problem was that the intelligence in these systems belonged to the designer. The robot itself had little or no autonomy and often failed to perform if the environment changed. While classical AI viewed intelligence as the ability of a program to process internal encodings, a behavior-based approach considers intelligence to be demonstrated through “meaningful and purposeful” action in an environment. (Arkin 1999)
While many perceived the behavior-based movement to have forsaken the goal of human-like intelligence, others maintained that high-level intelligence would indeed arise once a strong, low-level foundation had been laid. Agre and Chapman argued that, in fact, human beings are actually much more reactive than we imagine ourselves to be. (Agre and Chapman 1987) The planning and cognition that we are consciously aware of represent only the tip of a cerebral iceberg composed mostly of unconscious, reactive motor skills and implicit behavior encodings. In a sense, the behavioral approach did not abandon modeling human intelligence so much as human consciousness. One of the side effects has been that many behavior-based approaches produce systems that are anything but ‘explainable.’ High scientific aims aside, a main reason the behavior-based community is so intent on developing automated learning techniques is that a human designer often finds it excruciatingly tedious or impossibly difficult to orchestrate many behaviors operating in parallel. It is worse than frustrating to debug behavior that emerges from the interplay of many layers of asynchronous control. At times, a truly well-implemented, behavior-based approach will result in successful strategies the researchers themselves cannot explain or understand.