Our research to date has resulted in the development of a control architecture that supports four distinct modes of remote intervention:
Teleoperation: We have taken the interaction substrate used in previous INL teleoperated robotic systems and revamped it based on feedback from people who have deployed such systems. Within teleoperation mode, the user has full, continuous control of the robot at a low level. The robot takes no initiative except to stop after a specified time if it recognizes that communications have failed. Because the robot takes little or no initiative in this mode, much work has gone into providing appropriate situation awareness to the user using perceptual data fused from many different sensors. A tilt sensor provides data on whether the robot is in danger of overturning. Inertial effects (measured using a DMU) and abnormal torque on the wheels (not associated with acceleration) are fused to produce a measure of resistance, such as when the robot is climbing over or pushing against an obstacle. Even in teleoperated mode, the user can choose to activate a resistance limit that permits the robot to respond to high resistance and bump sensors. Also, a specialized interface provides the user with abstracted auditory, graphical and textual representations of the environment and task.
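The resistance measure and the optional resistance limit described above might be sketched as follows. All function names, thresholds, and the particular fusion rule (treating wheel torque not explained by acceleration as resistance) are illustrative assumptions, not the INL implementation:

```python
def expected_torque(accel_mps2, k_accel=1.0):
    """Approximate drive torque needed to produce the measured acceleration
    (k_accel is an assumed lumped mass/gearing constant)."""
    return k_accel * abs(accel_mps2)

def resistance(accel_mps2, wheel_torque_nm, k_accel=1.0):
    """Excess wheel torque not accounted for by inertial effects,
    indicating the robot is climbing over or pushing against an obstacle."""
    excess = wheel_torque_nm - expected_torque(accel_mps2, k_accel)
    return max(0.0, excess)

def should_stop(accel_mps2, wheel_torque_nm, bump_triggered,
                resistance_limit=5.0):
    """User-activated resistance limit: even under teleoperation, halt
    when resistance exceeds the limit or a bump sensor fires."""
    return bump_triggered or resistance(accel_mps2, wheel_torque_nm) > resistance_limit
```

With this formulation, high torque during normal acceleration does not register as resistance; only the unexplained excess does.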
Safe Mode: The user directs the movements of the robot, but the robot takes initiative to protect itself. In doing so, this mode allows the user to issue motion commands with impunity, greatly accelerating the speed and confidence with which the user can accomplish remote tasks. The robot assesses its own status and surrounding environment to decide whether commands are safe. For example, the robot has excellent proprioception and will stop its motion just before a collision, placing minimal limits on the user. The robot notifies the user of environmental features (e.g. box canyon, corner, hallway), immediate obstacles, tilt, resistance, etc., and also continuously assesses the validity of its diverse sensor readings and communication capabilities. The robot will refuse to undertake a task if it does not have the ability (i.e. sufficient power or perceptual resources) to safely accomplish it.
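The safe-mode behavior of vetting each command against the robot's self-assessed status could be sketched as below. The status fields, thresholds, and refusal reasons are illustrative assumptions about one way to structure such a check:

```python
from dataclasses import dataclass

@dataclass
class RobotStatus:
    min_obstacle_range_m: float   # closest sensed obstacle in the motion direction
    tilt_deg: float               # current tilt from the tilt sensor
    battery_fraction: float      # remaining power, 0.0 to 1.0
    sensors_valid: bool           # result of continuous sensor self-assessment

def vet_motion_command(status, stop_range_m=0.3, max_tilt_deg=20.0,
                       min_battery=0.1):
    """Return (allowed, reason): the robot refuses commands it lacks the
    power, perceptual resources, or safety margin to execute."""
    if not status.sensors_valid:
        return False, "sensor readings failed self-assessment"
    if status.battery_fraction < min_battery:
        return False, "insufficient power to safely accomplish task"
    if status.tilt_deg > max_tilt_deg:
        return False, "tilt exceeds safe limit"
    if status.min_obstacle_range_m < stop_range_m:
        return False, "stopping just before collision"
    return True, "command accepted"
```

The key design point is that the user's command stream is never blocked outright; each command is passed through, and only unsafe ones are intercepted with an explanation the interface can relay to the user.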
Shared Control: The robot takes the initiative to choose its own path, responds autonomously to the environment, and works to accomplish local objectives. This initiative, however, is primarily reactive rather than deliberative. In terms of navigation, the robot responds only to its local (~ 6-meter radius), sensed environment. Although the robot handles the low-level navigation and obstacle avoidance, the user supplies intermittent input, often at the robot's request, to guide the robot in general directions. The problem of deciding when the robot should ask for help has been a major line of HRI inquiry and will be a major issue in our upcoming human subject experiments. One of the most challenging efforts thus far has been developing a "Get Unstuck" behavior that allows the robot to autonomously extricate itself from highly cluttered areas that are difficult for a remote operator to handle.
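One way to pose the "when should the robot ask for help" question is as a trigger over the robot's local progress and local map. The conditions below (a progress stall, or most directions in the ~6 m sensed surroundings blocked) are an illustrative assumption, not the criteria used in the experiments:

```python
def should_request_help(recent_progress_m, elapsed_s,
                        blocked_directions, total_directions=8,
                        min_speed_mps=0.05, clutter_fraction=0.75):
    """Request intermittent operator guidance when reactive navigation
    is stalling (low net progress) or the local sensed environment is
    so cluttered that most candidate headings are blocked."""
    stalled = elapsed_s > 0 and (recent_progress_m / elapsed_s) < min_speed_mps
    cluttered = (blocked_directions / total_directions) >= clutter_fraction
    return stalled or cluttered
```

A trigger like this keeps the user's input intermittent: the robot solicits a general direction only when its reactive behaviors are no longer making headway on their own.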
Full Autonomy: The robot performs global path planning to select its own route, requiring no user input except high-level tasking such as "follow that target" or "search this area (specified by drawing a circle around a given area on the map created by the robot)." For all these levels, the intelligence resides wholly on the robot itself — no off-board processing is necessary. To permit deployment within shielded structures, we have developed a customized communication protocol, which allows very low bandwidth communications to pass over a serial radio link only when needed. The interface itself then unfolds these simple packets into a comprehensive interface. The system will use at least three separate communications channels with the ability to reroute data when one or more connections are lost. A critical issue for further research is whether, when, and on what basis we should allow the robot to recognize operator inefficiency or lack of input and autonomously adjust its own level of autonomy.
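The low-bandwidth, event-driven protocol might look something like the following: the robot sends a compact packet over the serial radio link only when a state change occurs, and the interface unfolds it into a full representation. The packet layout (a one-byte event type plus a small fixed payload) and the event codes are illustrative assumptions, not the actual INL wire format:

```python
import struct

# Assumed compact on-the-wire layout: 1-byte event type, 2-byte payload.
EVENT_OBSTACLE = 0x01
EVENT_TILT     = 0x02

def pack_tilt(tilt_deci_deg):
    """Robot side: encode a tilt update in 3 bytes, sent only on change,
    instead of streaming a full telemetry frame."""
    return struct.pack(">Bh", EVENT_TILT, tilt_deci_deg)

def unpack(packet):
    """Interface side: unfold the compact packet into a labeled reading
    that the comprehensive display can render."""
    event, value = struct.unpack(">Bh", packet)
    if event == EVENT_TILT:
        return {"event": "tilt", "degrees": value / 10.0}
    if event == EVENT_OBSTACLE:
        return {"event": "obstacle", "range_cm": value}
    return {"event": "unknown", "raw": value}
```

Because each update is a few bytes and is only sent when needed, the scheme fits the very low bandwidth available through shielded structures, while the interface reconstructs a rich picture from the sparse event stream.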