
Behavior-Based Robotics

Multi-Agent Control

Intelligent behavior can emerge not only through interaction with a chaotic environment, but also through interaction between multiple agents. Although the usefulness of multi-agent approaches has always been recognized, work with large numbers of real, physical robots is still a relatively new area of investigation. Increasingly, researchers are finding that multi-agent systems provide an excellent proving-ground for a behavior-based approach. Robots can be given simple rule sets that will produce impressively complex cooperative behavior. Although there are many hard problems yet to be solved, multi-agent approaches have already demonstrated a number of important advantages:

Co-evolutionary Learning:
Many different strategies can evolve simultaneously as they are developed, tested and shared by individuals of the population.
Cooperative Behavior:
Robots can share tasks and information to produce synergistic behavior. Robots can specialize to increase efficiency.
Distributed Processing:
Real-time response can be achieved by spreading the computational burden of control and information processing across a population.
Fault Tolerance:
By distributing the task across a population of robots, the collective can succeed even when individuals fail.
Extended Capabilities:
A population of robots can accomplish classes of tasks that no single robot could complete alone.
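The "simple rule sets" mentioned above are often arbitrated by priority, as in subsumption-style behavior-based control. The following sketch is illustrative only; the behavior names, sensor fields and thresholds are assumptions, not taken from any particular robot platform.

```python
# A minimal sketch of behavior-based control: each behavior maps sensor
# readings to a proposed action, and a fixed priority ordering arbitrates.
# All names and thresholds here are illustrative assumptions.

def avoid_obstacle(sensors):
    """Highest priority: turn away if anything is too close."""
    if sensors["front_distance"] < 0.3:   # metres, assumed threshold
        return "turn_left"
    return None                           # no opinion

def seek_goal(sensors):
    """Lower priority: steer toward the goal when the path is clear."""
    if sensors["goal_bearing"] > 0.1:
        return "turn_right"
    if sensors["goal_bearing"] < -0.1:
        return "turn_left"
    return "forward"

BEHAVIORS = [avoid_obstacle, seek_goal]   # ordered by priority

def arbitrate(sensors):
    """Return the action of the highest-priority behavior with an opinion."""
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:
            return action
    return "stop"

# An obstacle directly ahead overrides goal-seeking:
print(arbitrate({"front_distance": 0.2, "goal_bearing": 0.5}))  # turn_left
print(arbitrate({"front_distance": 2.0, "goal_bearing": 0.5}))  # turn_right
```

Because each behavior is independent of the others, rules like these can be copied unchanged onto every member of a population, which is one reason multi-agent systems are such a natural fit for the behavior-based approach.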

Multiple agents can improve efficiency in many tasks, and some tasks simply cannot be done by a single robot. Multi-agent strategies not only increase utility but also allow us to develop an important aspect of intelligence: social behavior. Some scientists, such as sociologists and psychologists, use simulated groups of robots to model social behavior in humans. Many aspects of human interaction can be studied in this way, including how diseases spread (Kennedy 1999) and how traffic jams form (Enee 1999).

Alongside the many advantages associated with multi-agent strategies, daunting obstacles exist. Designers must do much more than simply transfer traditional behavior-based strategies onto multiple robots. Multi-agent approaches have developed into an entirely new research area complete with its own prodigious class of problems. These problems demand revolutionary advances in size, communication and control. While military projects envision large, autonomous colonies of robots, the platforms necessary for implementation are still being developed. To lower cost and reduce training difficulties, robots used in multi-agent strategies are often miniaturized. Many researchers use robots no bigger than a quarter so that they can be trained and observed on a desktop rather than a highway, field or planet surface. Eventually, we will need social behavior to be built into robots of all shapes and sizes. In fact, one can imagine a future where the term ‘multi-agent strategy’ will be rendered obsolete by the fact that almost all robots can interact.

Although interaction is desirable, homogeneity is not. There is no universally optimal social behavior. Rather, different tasks require different strategies. Each designer must evaluate the following interrelated criteria:

Independence vs. Interdependence
The degree to which each robot depends on the others, described along a spectrum from loosely to tightly coupled.
Centralized vs. Distributed Control
Centralized control allows agents to be orchestrated and facilitates human interface, whereas a distributed approach lowers computational costs and improves fault tolerance.
Local vs. Global Communication
How loudly should each robot speak? Should each robot communicate only with its nearest neighbors, or with every other robot?
Specialization vs. Homogeneity
Specialization can enable difficult tasks and often improves efficiency, but the loss of homogeneity complicates control.

Independence vs. Interdependence

A group of robots used at the University of Southern California to demonstrate various cooperative behaviors such as flocking and foraging.

Clearly, it is useful to create multi-agent systems that do not require every agent's participation for success. A tight coupling between robots would be disastrous for a team sent to find and deactivate mines. However, the answer is not to blindly create as loose a coupling as possible: if robots cannot depend on one another at all, there can be no division of labor, and efficiency suffers. The intuitive response is to let each robot communicate which aspects of the task it has already completed. Unfortunately, this is easier said than done. Explicit communication slows the action of the system and requires hardware that is expensive in terms of time, cost and computation. Other systems rely on implicit communication, in which each robot observes the behavior and success of the other agents. This allows robots to share tasks and improve efficiency without becoming overly dependent on one another, though implicit communication is more likely to result in miscommunication. Ultimately, the degree of interdependence should depend on the task. A task such as foraging may permit each robot to trust the information supplied by other agents, while a critical task such as military reconnaissance may require that all information be validated repeatedly. Other tasks, such as map-building, may use a voting approach to bound the influence of any single robot.
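The voting approach to map-building can be sketched as follows. Each robot reports whether it believes a grid cell is occupied, and the team accepts the majority opinion, so no single robot's error dominates. The grid layout and report format are illustrative assumptions, not a specific system's interface.

```python
# A sketch of voting-based map fusion: the majority label wins per cell,
# limiting how much any one robot's mistaken observation can distort the map.
from collections import Counter

def fuse_votes(reports):
    """reports: dict mapping robot name -> {cell: 'occupied' | 'free'}.
    Returns the majority label for every cell any robot has seen."""
    votes = {}
    for robot_map in reports.values():
        for cell, label in robot_map.items():
            votes.setdefault(cell, Counter())[label] += 1
    return {cell: counts.most_common(1)[0][0]
            for cell, counts in votes.items()}

reports = {
    "r1": {(0, 0): "free", (0, 1): "occupied"},
    "r2": {(0, 0): "free", (0, 1): "free"},      # r2's error on (0, 1)...
    "r3": {(0, 0): "free", (0, 1): "occupied"},  # ...is outvoted
}
print(fuse_votes(reports))  # {(0, 0): 'free', (0, 1): 'occupied'}
```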

Centralized vs. Distributed Control

Sony Aibo robots used at the annual RoboCup competition, where robot teams play soccer matches.

A related question that must be answered by any multi-agent strategy is whether to use centralized or distributed control. A centralized controller provides the high-level control necessary for tasks such as taking seismographic readings or configuring a network of satellite dishes; these tasks demand a precision and directed intentionality that cannot merely emerge from low-level interaction. Another advantage of centralized control is that it facilitates the human interface. A commander must be able to tell a squadron of planes to abort their mission: with centralized control, the human need only issue one command, whereas without it, it is more difficult, though not impossible, for the command to reach each agent. On the other hand, a totally centralized approach places immense computational demands on the central controller and often prohibits real-time action. For tasks such as gathering rock samples on a planetary surface, NASA has found that a careful balance is necessary (Estlin, Gray, Mann, Rabideau, Castano, Chien, and Mjolsness 1999). Many of NASA's robotics applications use a centralized controller called MISUS to orchestrate intentionality, aid in planning and foster cooperation. MISUS enables rovers that can be directed at a high level and yet act autonomously as individuals.
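One way to picture this balance is a central planner that only assigns goals, leaving each rover to navigate on its own. The sketch below is an illustration of that division of labor, not NASA's MISUS; the greedy assignment and grid navigation are assumptions chosen for brevity.

```python
# Illustrative split between centralized planning and autonomous execution:
# the planner assigns each target to the nearest free rover, then each rover
# moves toward its goal one grid step at a time with no further supervision.

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])   # Manhattan distance

def central_plan(rovers, targets):
    """Greedily assign each target to the nearest unassigned rover."""
    assignments = {}
    free = dict(rovers)                          # name -> (x, y) position
    for target in targets:
        name = min(free, key=lambda n: dist(free[n], target))
        assignments[name] = target
        del free[name]
    return assignments

def rover_step(pos, goal):
    """Each rover moves autonomously, one grid step toward its goal."""
    x, y = pos
    gx, gy = goal
    if x != gx:
        return (x + (1 if gx > x else -1), y)
    if y != gy:
        return (x, y + (1 if gy > y else -1))
    return pos                                   # arrived

rovers = {"r1": (0, 0), "r2": (5, 5)}
plan = central_plan(rovers, [(1, 0), (5, 6)])
print(plan)                                      # {'r1': (1, 0), 'r2': (5, 6)}
print(rover_step(rovers["r1"], plan["r1"]))      # (1, 0)
```

The single point of human interface is `central_plan`; everything after that runs on the individual agents, which is why such systems degrade gracefully when the link to the planner is slow.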

Local vs. Global Communication

One of the greatest problems with a centralized approach is that it requires reliable, explicit communication, and this communication comes at a great cost. Many small robots simply do not have the capacity to carry sufficiently sophisticated communication devices. Hardware aside, the designer still faces the problem of synchronizing communication: how should the centralized controller handle simultaneous queries from multiple robots? In practice, global communication is often more trouble than it is worth. It can cause quiet voices to be drowned out, resulting in a loss of diversity within the team, and Paul Darwen (1999) has found that for co-evolutionary learning, diversity is crucial to avoid convergence to locally maximal solutions. Fortunately, many tasks, such as gathering environmental information or surveillance, have been shown to be possible using only simple, local communication. Scientists at the Naval Research Laboratory are developing control strategies for swarms of aerial surveillance vehicles that cooperate implicitly through local interaction (Wu, Schultz, and Agah 1999). While this capacity has so far been demonstrated only in simulation, it shows that intelligent swarming behavior can be created using only communication with nearest neighbors.
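Nearest-neighbor coordination of this kind can be sketched in a few lines: each vehicle aligns its heading only with its single nearest neighbor, yet the whole group converges toward a common heading. This is an illustrative alignment rule, not the Naval Research Laboratory's controller; the positions, headings and update rate are assumptions.

```python
# A sketch of swarming through purely local communication: no agent sees
# more than its nearest neighbor, yet headings converge group-wide.
import math

def nearest_neighbor(i, positions):
    """Index of the agent closest to agent i (local sensing only)."""
    return min((j for j in range(len(positions)) if j != i),
               key=lambda j: math.dist(positions[i], positions[j]))

def step_headings(positions, headings, rate=0.5):
    """Each agent nudges its heading toward its nearest neighbor's."""
    new = []
    for i, h in enumerate(headings):
        j = nearest_neighbor(i, positions)
        new.append(h + rate * (headings[j] - h))
    return new

positions = [(0, 0), (1, 0), (2, 0), (10, 0)]
headings = [0.0, 1.0, 2.0, 3.0]      # radians, initially scattered
for _ in range(20):
    headings = step_headings(positions, headings)
print(max(headings) - min(headings) < 0.5)  # headings have largely aligned
```

Note that no global broadcast ever occurs: the convergence is an emergent property of repeated local interactions, which is exactly the appeal of local communication for large swarms.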

Specialization vs. Homogeneity

One of the reasons that these swarming micro air vehicles can be controlled so effectively by simple rules is that the swarm is homogeneous in strategy, sensors and mechanics. On the other hand, some tasks demand specialization. R. P. Bonasso and David Kortenkamp have worked to produce cooperative behavior among very different robots, including small versatile robots, large delivery robots and stationary manipulators. One robot, Mortimer, uses a vision system to provide detailed sensory information but cannot carry much because the vision system gets in the way. A large delivery robot can carry large loads but has only simple sonar, while a smaller robot, SodaPup, can navigate into small spaces. Together with a robot arm mounted on a table top, the robots attempt tasks such as finding, loading and moving cumbersome objects (Kortenkamp 1995). The difficulty is how to benefit from specialization while retaining a system of agents that can easily communicate and interact. While specialization can produce synergistic interaction, it also introduces new complexity and the possibility of system degradation.

Robot soccer has been one of the most publicized areas of multi-agent research. Like any soccer team, soccer-playing robots should be specialized: at the very least there should be a goaltender, strikers and defenders. In robot soccer, the need for real-time processing prohibits centralization or global communication, so there can be no high-level control over how the different robots interact. Instead, the strategies that allow each robot to play its part must be built into the rules of the system. For example, the goaltender's controller should be coded with knowledge of how the defenders will move to confront an opposing attacker. Using only local communication, Carnegie Mellon's CMUnited team of mobile robots has developed sophisticated cooperative behavior, including the ability to pass, support and get open. Carnegie Mellon's success has spurred optimism about the potential of multi-agent strategies. Despite the challenges, some researchers envision a future in which myriad robots permeating all aspects of life will be innately wired to interact, sharing information, tasks, control code and even hardware. (Vasant 1999)
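Role-specialized rules of the kind described above can be sketched as separate controllers per position, with the goaltender's rule encoding its built-in expectation that defenders confront the attacker. The field coordinates, rules and action names below are illustrative assumptions, not CMUnited's actual controllers.

```python
# An illustrative sketch of per-role rules in robot soccer. Each controller
# is independent; cooperation is baked into the rules, not coordinated by
# any central referee process.

def striker_action(ball, my_pos):
    """Strikers chase the ball and shoot when they reach it."""
    return "shoot" if ball == my_pos else "move_to_ball"

def defender_action(ball, my_pos, own_goal):
    """Defenders position themselves between the ball and their own goal."""
    midpoint = ((ball[0] + own_goal[0]) / 2, (ball[1] + own_goal[1]) / 2)
    return "hold" if my_pos == midpoint else f"move_to {midpoint}"

def goaltender_action(ball, own_goal):
    """The goaltender assumes defenders confront the attacker, so it only
    shifts along the goal line toward the ball's side."""
    side = "left" if ball[1] < own_goal[1] else "right"
    return f"shift_{side}"

ball, own_goal = (3.0, 1.0), (0.0, 2.0)
print(striker_action(ball, (5.0, 5.0)))             # move_to_ball
print(defender_action(ball, (1.0, 1.0), own_goal))  # move_to (1.5, 1.5)
print(goaltender_action(ball, own_goal))            # shift_left
```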

