In this paper, we discuss methods to efficiently render stereoscopic scenes of large-scale point clouds on inexpensive VR systems. Terrestrial laser scanners have improved significantly in recent years, and they can easily capture tens of millions of points in a short time from large fields, such as engineering plants. If 3D stereoscopic scenes of large-scale point clouds could be rendered easily on inexpensive devices, they could be used in everyday product development. However, it is difficult to render a huge number of points on common PCs, because VR systems require high frame rates to avoid VR sickness. To solve this problem, we introduce an efficient culling method for large-scale point clouds. In our method, we project all points onto angle-space panoramic images, whose axes are the azimuth and elevation angles of the head direction. We then eliminate occluded and redundant points according to the resolution of the device. Once visible points are selected, they can be rendered at high frame rates. Visible points are updated when the user stays at a certain position to observe target objects. Since points are processed in image space, preprocessing is very fast. In our experiments, our method could render stereoscopic views of large-scale point clouds at high frame rates.
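The core culling idea described above can be sketched as follows: each point is mapped to an (azimuth, elevation) bin of a panoramic image centered at the viewpoint, and only the nearest point per bin is kept, so occluded and angularly redundant points are discarded. This is a minimal illustrative sketch, not the authors' implementation; the bin counts, the projection origin, and the function name `cull_points` are assumptions.

```python
# Illustrative sketch of angle-space culling: keep the nearest point
# per (azimuth, elevation) bin of a panoramic image (bin sizes assumed).
import math

def cull_points(points, viewpoint, az_bins=360, el_bins=180):
    """Approximate occlusion/redundancy culling on an angle-space image:
    only the point closest to the viewpoint survives in each bin."""
    buffer = {}  # (az_index, el_index) -> (distance, point)
    vx, vy, vz = viewpoint
    for p in points:
        dx, dy, dz = p[0] - vx, p[1] - vy, p[2] - vz
        dist = math.sqrt(dx * dx + dy * dy + dz * dz)
        if dist == 0.0:
            continue  # point coincides with the viewpoint; skip
        az = math.atan2(dy, dx)       # azimuth in [-pi, pi]
        el = math.asin(dz / dist)     # elevation in [-pi/2, pi/2]
        ai = min(int((az + math.pi) / (2 * math.pi) * az_bins), az_bins - 1)
        ei = min(int((el + math.pi / 2) / math.pi * el_bins), el_bins - 1)
        key = (ai, ei)
        if key not in buffer or dist < buffer[key][0]:
            buffer[key] = (dist, p)   # nearer point wins the bin
    return [p for _, p in buffer.values()]
```

Because the work is a per-point bucketing pass in image space, the cost is linear in the number of points, which is consistent with the fast preprocessing the abstract reports; the selected subset can then be rendered at high frame rates until the user moves to a new position.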
