Abstract
In this paper, we present a hierarchical representation of LiDAR point cloud data in off-road environments with the aim of generating 3D scene abstractions for autonomous vehicles. Different objects in the environment are best represented at different levels of detail, depending on the type of object, the application context, and the information available about the object. Despite this, prior work on point cloud scene processing has focused on detailed scene rendering and reconstruction, which are computationally expensive and often require prior training. In addition, these works center on structured on-road environments. Unlike on-road environments, off-road scenes consist of highly unstructured and irregularly shaped objects such as rough terrain, trees, and shrubs. As a result, segmenting, reconstructing, and consequently navigating such environments is inherently challenging for most algorithms. Furthermore, transmitting off-road scenes over a network to other vehicles operating in coordination, or to remote sites, is expensive when working with high-fidelity scene representations. In contrast, we present a geometric-statistical algorithm that is computationally inexpensive and provides a multi-level-of-detail scene abstraction for robot decision making and task execution. The low footprint of the abstraction also allows for lossless scene transmission over the network. We present qualitative results of scene abstractions for various outdoor scenes in the RELLIS-3D dataset, as well as a quantitative assessment of the abstractions compared to segmented objects in the scenes.