Visual Simultaneous Localization and Mapping (V-SLAM) is an active area of robotics research and the basis for autonomous, intelligent navigation. It is integral to vision-based applications including virtual reality, augmented reality, unmanned aerial vehicles, and unmanned ground vehicles. V-SLAM performs localization and mapping by extracting salient feature points from images and estimating camera pose from the correspondences between the camera and those feature points. It also denotes a robot's ability to navigate an uncharted environment effectively, using visual sensors and prior knowledge of the location, while incrementally constructing and updating a consistent map of the scene. However, because data association is challenged by illumination changes, viewpoint variation, and environment dynamics, deep learning has been rapidly adopted in visual SLAM for feature extraction and description, pose and depth estimation, mapping, loop-closure detection, and global optimization. This paper elucidates diverse applications of supervised and unsupervised deep learning methods across all aspects of visual SLAM. It also briefly presents a case study on the combined application of deep learning and SLAM in underground mining. Finally, it highlights recent research developments, discusses the limitations hindering their effective application, and examines how combining deep learning with other methods offers a promising direction for visual SLAM research.
