Abstract

Autonomously navigating robots are used in many applications, including assistive robotics, military operations, space exploration, and manufacturing. Unmanned ground vehicles (UGVs) are autonomous systems whose primary task is navigation: automatic transport and movement through real-world environments. Simultaneous Localization and Mapping (SLAM) provides an effective approach to the problems posed by unknown environments. Vision-based cameras, light detection and ranging (LiDAR) sensors, global positioning system (GPS) receivers, and inertial measurement units (IMUs) feed constant streams of data to the SLAM algorithm. These sensors allow the UGV to explore an unknown outdoor environment while traversing around obstacles. The objects of interest are construction barrels, cones, ramps, flags, and white lanes. Using open-source ROS packages, the UGV navigation algorithm fuses 2D LiDAR, IMU, GPS, and depth-camera data into a reliable robotic system. This paper demonstrates a robotic system combining an RGB depth camera, object detection, and conventional outdoor SLAM navigation in unknown environments. Once a ramp or flag is detected, an alternative path is followed in place of the global GPS path. The UGV combines all of these methods to reliably explore unknown environments without the need for teleoperation.
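The ramp/flag path-override behavior described above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's implementation: the function name `select_path`, the waypoint lists, and the detection labels are all assumptions, and in a real ROS system the labels would arrive from the depth-camera object detector and the paths from the global and local planners.

```python
# Hypothetical sketch (not from the paper): when the object detector reports
# a ramp or flag, the planner follows a local alternative path instead of the
# global GPS waypoint path.

def select_path(global_path, detour_path, detections):
    """Return the waypoint list the UGV should follow given detection labels."""
    # Ramps and flags trigger the alternative path; other objects
    # (barrels, cones, lanes) are handled by ordinary obstacle avoidance.
    if any(label in ("ramp", "flag") for label in detections):
        return detour_path
    return global_path

# Example waypoints as (x, y) coordinates in meters (illustrative values).
global_wps = [(0.0, 0.0), (5.0, 0.0), (10.0, 0.0)]
detour_wps = [(0.0, 0.0), (2.5, 2.0), (5.0, 2.0), (10.0, 0.0)]

print(select_path(global_wps, detour_wps, ["barrel", "cone"]))
print(select_path(global_wps, detour_wps, ["ramp"]))
```

In a full system this selection would typically run each planning cycle, so the UGV returns to the global GPS path once the ramp or flag is cleared.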
