Dynamic Autonomy for Space and Planetary Exploration
NASA's Robonaut mounted on a modified Segway Robot Mobility Platform (RMP)
Modes of Autonomy
- In "Safe" Manual Mode, the operator manually controls both the arm and the mobility platform, while the robot takes initiative only to prevent collisions.
- In Mixed Mode, the human can set the mobility platform to operate autonomously while the arm remains under manual control.
- In Shared Mode, the robot operates autonomously, though primarily in a reactive rather than deliberative fashion, while the user supplies intermittent input, often at the robot's request, to guide the robot in general directions.
- Autonomous Mode consists of high-level tasking whereby the robot manages all decision-making and navigation.
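The four modes above can be read as a table of control authority over the two subsystems (base and arm), plus a collision-prevention override in the manual mode. The sketch below is illustrative only; the mode names mirror the list above, but the `control_authority` and `filter_command` helpers are hypothetical, not part of the INL Robot Intelligence Kernel's actual API.

```python
from enum import Enum, auto

class AutonomyMode(Enum):
    SAFE_MANUAL = auto()  # operator drives base and arm; robot only blocks collisions
    MIXED = auto()        # base autonomous, arm under manual control
    SHARED = auto()       # robot autonomous/reactive; user gives intermittent guidance
    AUTONOMOUS = auto()   # robot manages all decision-making and navigation

def control_authority(mode: AutonomyMode) -> dict:
    """Return which agent commands the mobility base and the arm in each mode."""
    return {
        AutonomyMode.SAFE_MANUAL: {"base": "operator", "arm": "operator"},
        AutonomyMode.MIXED:       {"base": "robot",    "arm": "operator"},
        AutonomyMode.SHARED:      {"base": "robot",    "arm": "robot"},
        AutonomyMode.AUTONOMOUS:  {"base": "robot",    "arm": "robot"},
    }[mode]

def filter_command(mode: AutonomyMode, velocity: float,
                   collision_imminent: bool) -> float:
    """In "Safe" Manual Mode the robot overrides the operator only to avoid a collision."""
    if mode is AutonomyMode.SAFE_MANUAL and collision_imminent:
        return 0.0  # robot initiative: stop the platform
    return velocity
```

Higher modes would route commands through the kernel's own planners rather than a simple velocity filter; the point is that one switch statement, not a rewrite of the behaviors, moves the system along the autonomy spectrum.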
NASA JSC has been working for several years with several universities, including Vanderbilt University, the Massachusetts Institute of Technology, and the University of Massachusetts Amherst, to develop perceptual capabilities and reaching and autonomous grasping behaviors. However, in order to function as a human surrogate, the overall system must be able to do more than vision-based dexterous behaviors. To accomplish a variety of real-world tasks in unstructured environments, it will be necessary to marry simultaneous localization and mapping, obstacle avoidance, path planning, and waypoint behaviors with the physical dexterity, visual perception, and autonomous manipulation capabilities necessary to open doors, assemble simple structures, and use tools designed for human hands. The goal of this effort is not to provide full autonomy, but rather to promote dynamic autonomy, such that the navigation and manipulation behaviors on board the robot can support whatever level of intervention is handed down from the user.
NASA Johnson Space Center (JSC) has provided the INL with the software protocol necessary to interface to the Robonaut system via CANbus. Also, under the Joint Robotics Program (JRP) Technology Transfer Program, the Space and Naval Warfare Systems Center in San Diego has already provided a specially modified Segway Robot Mobility Platform (RMP) to be used as the base platform for the Robonaut torso.
The INL Robot Intelligence Kernel has been adapted to the Segway RMP and modified to include a new 360° scanning laser that will provide the range information needed for obstacle avoidance and simultaneous localization and mapping. Through these efforts, the Robonaut system will soon inherit the entire suite of capabilities now included with the Intelligence Kernel, including obstacle avoidance, mapping and localization, path planning, laser and visual tracking, human presence detection, and real-time occupancy-grid change detection.
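Of the capabilities listed, occupancy-grid change detection is easy to illustrate: each cell of the grid holds an occupancy probability built from laser range returns, and a change is a cell that crosses the occupied/free threshold between a reference map and the current map. The function below is a minimal sketch of that idea, not the INL implementation; the grid representation and the 0.5 threshold are assumptions.

```python
def detect_changes(reference, current, threshold=0.5):
    """Flag grid cells whose occupancy state flipped between two maps.

    `reference` and `current` are equally sized 2-D lists of occupancy
    probabilities in [0, 1]; a cell is treated as occupied when its
    probability is at or above `threshold` (0.5 assumed here).
    Returns the list of (row, col) cells that changed state.
    """
    changed = []
    for r, (ref_row, cur_row) in enumerate(zip(reference, current)):
        for c, (ref_p, cur_p) in enumerate(zip(ref_row, cur_row)):
            if (ref_p >= threshold) != (cur_p >= threshold):
                changed.append((r, c))
    return changed
```

In a running system, cells flagged this way might mark a moved obstacle or a person entering the workspace, feeding the human-presence-detection and obstacle-avoidance behaviors mentioned above.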