A major tenet of the behavior-based approach is action-oriented perception, in which the task frames the perceptual input: perception without the context of action is meaningless. Accordingly, expectations about the task and environment should be used to direct perception, focusing attention on the information deemed crucial for the current task. For example, before the vision system on a robot named “Yuppy” can recognize gestures, it must first perform 2-D image comparisons to identify motion and then zero in on the locus of that motion (the palm) before determining the gesture. No perceptual system should try to take in every aspect of an environment; humans actually parse very little of the environmental data available to them. Good perception depends much more on ‘what’ than ‘how much.’ The learning disability ADD, for instance, has been attributed to a child's attempts to absorb too much of the available stimulation. The ability to discriminate between important and unimportant environmental characteristics is the key to both biological and artificial learning.
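The idea of task-framed attention can be sketched in code. The following is a hypothetical illustration, not Yuppy's actual system; the function names and the pixel-difference threshold are invented. A toy 2-D image comparison flags changed pixels, and the gesture behavior attends only to the centroid of that motion rather than the whole frame.

```python
def detect_motion(frame, prev_frame, threshold=10):
    """Toy 2-D image comparison: list pixels that changed between frames."""
    return [
        (r, c)
        for r, row in enumerate(frame)
        for c, val in enumerate(row)
        if abs(val - prev_frame[r][c]) > threshold
    ]

def attend(frame, prev_frame):
    """Focus on the locus of motion instead of parsing the whole image."""
    changed = detect_motion(frame, prev_frame)
    if not changed:
        return None  # nothing task-relevant; attend to nothing
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    # Centroid of motion: the only region the gesture recognizer examines.
    return (sum(rows) // len(rows), sum(cols) // len(cols))
```

The point is architectural, not algorithmic: the expensive recognition step runs only on the small region the task has singled out.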
At times it is impossible to produce accurate data, even about the environmental information deemed crucial. This brings us to another important characteristic of good perceptual schemas: the ability to report uncertainty. If a controller is aware of a perceptual deficit, it can redirect perceptual resources accordingly. Controllers should be able to trade off among perceptual schemas in response to new needs and changes in the environment. A robot trying to stay on a winding, narrow path may need to focus on the edges of the path, whereas the same robot may need to focus elsewhere if it finds itself on a wide road with many moving vehicles. Perception should be actively generated on a need-to-know basis.
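One minimal way a schema can report uncertainty is to tag each percept with a confidence value, letting the controller fall back to another schema when confidence drops below a threshold. This is a sketch under invented assumptions (the schema names and the noise-based confidence model are made up for illustration):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Percept:
    value: Any
    confidence: float  # 0.0 = no trust, 1.0 = full trust

def edge_tracker(reading: dict) -> Percept:
    """Hypothetical path-edge schema: confidence degrades with sensor noise."""
    noise = reading.get("noise", 0.0)
    return Percept(reading.get("edge"), max(0.0, 1.0 - noise))

def obstacle_watcher(reading: dict) -> Percept:
    """Hypothetical fallback schema watching for moving vehicles."""
    return Percept(reading.get("obstacles", []), 0.9)

def select_percept(reading: dict, schemas: list, threshold: float = 0.5) -> Percept:
    """Redirect perceptual resources when a schema reports a deficit."""
    for schema in schemas:
        percept = schema(reading)
        if percept.confidence >= threshold:
            return percept
    return percept  # best effort: last schema's (uncertain) percept
```

The controller never inspects raw sensor data; it reasons only about the confidence each schema attaches to its own output.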
Active perception allows perceptual processes to control themselves. For instance, a vision system should control its own cameras, deciding when to swivel, zoom, focus, and so on. The humanoid robot COG has two cameras, one providing a broad view of the environment and another that focuses on an area of interest. Perceptual schemas should also permit the integration of diverse sensory modes such as sonar, infrared, laser scanning, ultrasound, vision, and thermal imaging. With multi-modal perception, it becomes even more important to focus attention on the crucial aspects of the environment. For many systems, a filtering mechanism can actively select or eliminate certain modes of perception for a particular behavior. Perceptual fission separates perceptual streams for specialized use by separate behaviors; fusion, on the other hand, combines percepts for a particular behavior. For instance, the SFX architecture uses an investigative phase to predict perceptual needs and strategically reconfigure its sensors, and then a performative phase to fuse raw sensor data, preprocessing it into a percept for a particular behavior. (Murphy & Arkin 1992)
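Fission and fusion can be illustrated schematically. This is not SFX's actual mechanism; the sensor names, weighting, and routing table below are invented. Fusion blends two range readings into one percept for a single behavior, while fission routes each raw stream to the behavior that needs it:

```python
def fuse_range(sonar_m: float, infrared_m: float, sonar_weight: float = 0.7) -> float:
    """Fusion: combine two range estimates into a single percept."""
    return sonar_weight * sonar_m + (1.0 - sonar_weight) * infrared_m

def fission(streams: dict, routing: dict) -> dict:
    """Fission: hand each behavior only the sensor stream assigned to it."""
    return {behavior: streams[sensor] for behavior, sensor in routing.items()}
```

For example, an obstacle-avoidance behavior might consume the fused range while a tracking behavior receives the vision stream alone.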
Using such behavior-based sensory strategies, researchers at Carnegie Mellon have created agents capable of driving real cars across America at speeds above 100 kph. (Pomerleau 1995) Robotic heads are another area where sensing must involve complex sensory integration. The humanoid robot COG is designed to have modes of perception similar to those of a human, the belief being that an agent that perceives the world as a human does will be better able to develop human-like intelligence. Perception influences learning, and if we want a robot to learn from humans, whether through emulation or through some other exchange of knowledge and skills, it must be able to relate its view of the universe to ours and map its body to ours. In an attempt to model human perception, researchers at MIT have given COG proprioception, a sense of where and how its body parts are oriented. When COG moves a limb, it receives a wealth of feedback indicating how well the motion matched the intent. They have also attempted to model the human vestibulo-ocular reflex, which allows the eyes to remain focused on a target even while the head moves.
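The proprioceptive compare-and-correct cycle can be sketched as a simple proportional feedback step. This is a generic control sketch, not COG's actual scheme; the joint-angle representation and gain are invented:

```python
def motion_error(intended, measured):
    """Per-joint gap between where the limb was told to go and where it is."""
    return [i - m for i, m in zip(intended, measured)]

def correct(intended, measured, gain=0.5):
    """Issue a new command nudging each joint back toward the intent."""
    return [m + gain * e for m, e in zip(measured, motion_error(intended, measured))]
```

The feedback the text describes is exactly this loop: the error signal tells the controller how well the executed motion realized the intended one.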