Urban Search and Rescue
Search and Rescue Experiments
As it explores the environment either autonomously or with varying degrees of user input, the
robot builds a real-time map of the environment. The robot is able to detect human victims based on their heat signature. By pinpointing victims on a map, the
system permits search and rescue personnel to quickly locate these victims. A salient feature of the
approach to robotic search and rescue is the use of multiple, distinct modes of autonomy which allow the user to shift the level of robot initiative as needed throughout the task. Experiments with experienced and novice robot operators have shown that these levels of autonomy enable users to successfully utilize the system regardless of their experience or their level of trust. As capabilities and limitations change for both the human and robot due to workload, communication dropouts, and other factors, the system can shift seamlessly from one mode into another.
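The mode-shifting behavior described above can be sketched as a simple policy that maps operator workload and communication quality to an autonomy level. The mode names, thresholds, and triggers below are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical sketch of shifting between levels of robot autonomy.
# Mode names, inputs, and thresholds are assumptions for illustration.
from enum import Enum

class Mode(Enum):
    TELEOP = 1      # operator drives directly
    SAFE = 2        # operator drives, robot vetoes unsafe motion
    SHARED = 3      # robot drives, operator redirects attention to the task
    AUTONOMOUS = 4  # robot navigates and searches on its own

def select_mode(operator_workload: float, comm_quality: float) -> Mode:
    """Pick an autonomy level from operator workload (0-1) and
    communication link quality (0-1)."""
    if comm_quality < 0.2:       # dropouts: robot must fend for itself
        return Mode.AUTONOMOUS
    if operator_workload > 0.7:  # overloaded operator: robot drives
        return Mode.SHARED
    if operator_workload > 0.3:  # moderate load: guarded manual driving
        return Mode.SAFE
    return Mode.TELEOP           # low load, good link: full manual control

print(select_mode(0.9, 0.8))  # Mode.SHARED
print(select_mode(0.1, 0.1))  # Mode.AUTONOMOUS
```

The key design point is that the policy is re-evaluated continuously, so the system can slide between levels as conditions change rather than locking the operator into one mode for the whole task.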
This unique approach to robotic search and rescue allows the human operator to treat the robot as a teammate instead of a passive tool. Usability studies including Federal Emergency Management Agency (FEMA) personnel, military personnel, police officers, remote operators from a nuclear cleanup site, and over one thousand novice users indicate that the robot’s ability to navigate autonomously through difficult terrain exceeds the ability of human operators to teleoperate. The ability of the robot to protect itself, make decisions, and accomplish task objectives without human assistance challenges existing assumptions regarding authority and trust.
Urban search and rescue is a true challenge for robots. It tests not only the robustness of the robotic hardware and the overall agility of the robot, but also the methods of human-robot interaction (HRI). A usable interface can facilitate successful search and rescue operations, potentially saving lives. Conversely, an interface that is not usable can hamper search and rescue operations, ultimately risking lives.
The search arena, featuring five objects (in black) and numerous obstacles (in grey).
With human lives at stake, it is important to test search and rescue robots before they are put to use in an actual life and death situation. The National Institute of Standards and Technology (NIST) has developed test arenas for urban search and rescue, which have been used in robotics competitions such as RoboCup Rescue and the annual conference of the American Association for Artificial Intelligence (AAAI). The NIST test arenas are classified into three color-coded categories. The yellow category is the easiest: an urban-type arena with minimal obstructions and clear visibility. The orange category increases the complexity in moving through the environment and locating objects of interest. The orange arena is spread over two physical levels and may contain obstacles such as stairs, requiring greater robot agility. The red category is the most difficult, featuring highly obstructed terrain and mostly buried objects as would be typical in the rubble aftermath of a collapsed building. In a recent study, we approximated a NIST category yellow urban search and rescue arena. The study included 107 participants drawn at random from attendees of
’s annual community exposition. The participants consisted of 46 females and 61 males, ranging in age from 3 to 78 years old, with a mean age of 14. All participants were novice users of robotic interfaces.
The average number of objects found (out of five possible) according to age, gender, and mode of operation.
The participants used the interface to control the robot to search for five objects consisting of two dummies (representing injured humans), a stuffed dog, a disabled robot, and a simulated explosive device. They were given 60 seconds to search the 7 × 10 m arena (see Figure 3), with the robot in either safe mode (guarded motion) or shared mode. In order to facilitate realistic maneuvering through an urban environment, the robot’s search arena featured several obstacles. The central area was divided into quadrants using conventional office dividers, while the perimeter featured four pylons. The participants could not see the search arena while they controlled the robot, which forced them to rely on the available interface cues.
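Safe mode's guarded motion can be illustrated as speed governance: the operator commands a velocity, but the robot scales it down as obstacles get close and refuses to move into a collision. The thresholds and linear scaling below are assumptions for illustration, not the system's documented behavior.

```python
# Illustrative sketch of "guarded motion" as used in safe mode.
# stop_dist / slow_dist thresholds and linear scaling are assumed values.
def guarded_speed(commanded: float, obstacle_dist: float,
                  stop_dist: float = 0.3, slow_dist: float = 1.5) -> float:
    """Return the speed actually sent to the motors (m/s), given the
    operator's commanded speed and the nearest obstacle distance (m)."""
    if obstacle_dist <= stop_dist:   # too close: refuse to move
        return 0.0
    if obstacle_dist >= slow_dist:   # path clear: obey the operator fully
        return commanded
    # In between: scale the speed linearly with remaining clearance.
    scale = (obstacle_dist - stop_dist) / (slow_dist - stop_dist)
    return commanded * scale

print(guarded_speed(0.5, 2.0))  # 0.5  -> path clear, full speed
print(guarded_speed(0.5, 0.9))  # 0.25 -> half the clearance, half speed
print(guarded_speed(0.5, 0.2))  # 0.0  -> obstacle too close, stop
```

In shared mode, by contrast, the robot generates the velocity commands itself, so the operator is free to watch the camera feed for victims rather than drive.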
The total number of objects that were located and identified was studied with respect to participant age, gender, and operational mode. For analysis, ages were grouped in five-year intervals up to 20 years old; thereafter, they were grouped in ten-year intervals. This ensured that the analysis was sensitive to possible developmental differences in pre-adults.
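The age-grouping rule above can be sketched as follows; the exact bin edges are an assumption based on the stated intervals.

```python
# Sketch of the age-grouping rule: five-year bins up to 20 years old,
# ten-year bins thereafter. Bin-edge placement is assumed.
def age_group(age: int) -> str:
    if age < 20:
        lo = (age // 5) * 5
        return f"{lo}-{lo + 4}"
    lo = (age // 10) * 10
    return f"{lo}-{lo + 9}"

print(age_group(3))   # '0-4'   (youngest participant)
print(age_group(14))  # '10-14' (mean participant age)
print(age_group(19))  # '15-19'
print(age_group(78))  # '70-79' (oldest participant)
```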
There was no significant difference in the number of objects found across participants of different ages. Women and men found statistically equivalent numbers of objects, M = 2.54 and M = 2.68, respectively. Because we were interested in whether the method we used to increase robot autonomy improved the users’ ability to search, it was notable that there was a statistically significant difference due to operational mode. Participants who used shared mode (i.e., allowed the robot to do the driving) found an average of 2.87 objects, while those who used safe mode (i.e., manual operation with automatic obstacle avoidance) found an average of 2.35 objects. It was also important to us that these novice users were successfully able to operate the robot in an urban search and rescue scenario. This tells us that even when users are searching a very simple course in a very short period of time, there is a benefit to letting the robot navigate while the person focuses on searching. Because the person does not have to perform two tasks at once, and because our interface is easy to understand, the person is more likely to find “victims.”
The results highlight the value of iterative usability testing and redesign in making human-robot interfaces easy to use, as well as the value of carefully controlled HRI evaluation.