Abstract
Distributed multi-agent systems are becoming increasingly important for diverse robotics applications because of their scalability, efficiency, robustness, resilience, and ability to accomplish complex tasks. Controlling such large-scale swarms using only local information is very challenging. Centralized methods are generally efficient or even optimal, but they scale poorly and are often impractical. Given the difficulty of designing an efficient decentralized controller that uses only local information to accomplish a global task, we propose a learning-based approach to decentralized control via supervised learning. Our approach trains controllers to imitate the behavior of a centralized controller while using only local information to make decisions. The controller is parameterized by aggregation graph neural networks (GNNs), which integrate information from remote neighbors. We explore the segregation and aggregation of a swarm of heterogeneous agents in 2D and 3D point-mass systems as two use cases to illustrate the effectiveness of the proposed framework. The decentralized controller is trained on data from a centralized (expert) controller derived from the concept of artificial differential potential. The learned models transfer successfully to actual robot dynamics: physics-based Turtlebot3 robot swarms in Gazebo/ROS 2 simulations and on hardware, and Crazyflie quadrotor swarms in PyBullet simulations. Our experiments show that the learned controller performs comparably to the centralized controller and outperforms a local controller. Additionally, we show that the controller is scalable by evaluating larger teams and more diverse groups with up to 100 robots.