In field environments it is usually not possible to provide robots in advance with valid geometric models of their environment and the locations of task elements. The robot or robot team must create and use these models, locating critical task elements by performing appropriate sensor-based actions. Here, an information-based iterative algorithm is proposed to plan a robot's visual exploration strategy, enabling it to efficiently build 3D models of its environment and task elements. The method assumes a mobile robot or vehicle with cameras carried on articulated mounts. The algorithm uses the measured scene information to select the next camera pose based on the expected new information content of that pose. This is achieved with a metric derived from Shannon's information theory, which determines optimal sensing poses for the agent(s) mapping a highly unstructured environment. Once an appropriate environment model has been built, the quality of the information content in the model is used to determine the constraint-based optimum view for task execution. Experimental demonstrations on a cooperative robot platform performing an assembly task in the field show the effectiveness of the algorithm for single and multiple cooperating robotic systems.
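The abstract does not give the paper's exact formulation, but the core idea of entropy-driven next-best-view selection can be sketched as follows. This is a minimal illustration, assuming a binary occupancy-grid map and a hypothetical `visible` function that returns the cells a candidate camera pose can observe; the names and the greedy scoring are illustrative, not the authors' implementation.

```python
import math

def cell_entropy(p):
    """Shannon entropy (bits) of a binary occupancy estimate p.
    Unknown cells (p = 0.5) carry maximal entropy; confidently
    mapped cells (p near 0 or 1) carry almost none."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_information(pose, grid, visible):
    """Score a candidate camera pose by the total entropy of the
    cells it would observe: a proxy for expected new information."""
    return sum(cell_entropy(grid[c]) for c in visible(pose, grid))

def next_best_view(candidate_poses, grid, visible):
    """Greedy next-best-view: pick the pose covering the most
    remaining uncertainty in the current map."""
    return max(candidate_poses,
               key=lambda q: expected_information(q, grid, visible))

# Toy 1-D "map" of occupancy probabilities; a pose sees 3 cells
# starting at its own index (a stand-in for a real view frustum).
grid = [0.05, 0.5, 0.5, 0.9, 0.5, 0.02]

def visible(pose, grid):
    return range(pose, min(pose + 3, len(grid)))

best = next_best_view([0, 1, 2, 3], grid, visible)
```

In a full system this scoring would be repeated after each measurement, so the robot iteratively moves toward regions of the map that remain uncertain, as the abstract describes.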
