Abstract

The combination of robot-assisted surgery and instrument-guiding technology provides an effective solution for complex, high-precision posterior segment surgeries. However, depth information is scarce both inside and outside the eye during surgery, and instruments must travel a long distance from outside the eye to the retina, making it difficult for image-based guiding methods to provide large-range, real-time navigation. Therefore, this paper constructs theoretical models for instrument pose reconstruction in both intraocular and extraocular situations based on the principles of monocular visual imaging. Theoretical analysis identifies a multi-valued mapping problem between the light-spot contour and the instrument pose, and a deterministic guiding method for posterior segment surgical instruments based on dual-fiber collaborative perception is proposed to resolve it. The method was validated through experiments on bench-eye models and ex vivo pig eyes, achieving a centimeter-level detection range and a detection rate of 28 Hz; the average pose detection accuracy reached 0.33 mm and 1.93° in the ex vivo pig eye experiments. These results show that the proposed method enables deterministic instrument guiding in both intraocular and extraocular situations.
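
The sketch below is only a toy illustration of the ambiguity the abstract describes, not the paper's actual model: a single fiber's light-spot extent on a flat surface is consistent with a whole curve of (height, tilt) poses, while a second fiber at a known offset pins the pose down. The half-angle `ALPHA`, the 2 mm tip offset `OFFSET`, and the planar-retina geometry are all hypothetical assumptions for demonstration.

```python
import numpy as np

ALPHA = np.deg2rad(6.0)  # fiber emission half-angle (hypothetical value)

def spot_major_axis(h, theta, alpha=ALPHA):
    """In-plane extent of a light cone hitting a flat surface.

    h: apex height above the surface, theta: axis tilt from the normal.
    Toy stand-in for the paper's monocular imaging model.
    """
    return h * (np.tan(theta + alpha) - np.tan(theta - alpha))

# --- Single fiber: the spot-size -> pose mapping is multi-valued ---
L1_meas = spot_major_axis(8.0, np.deg2rad(20.0))  # true pose: h=8 mm, 20 deg
hs = np.linspace(2.0, 12.0, 400)
thetas = np.linspace(0.0, np.deg2rad(40.0), 400)
H, T = np.meshgrid(hs, thetas)
residual1 = np.abs(spot_major_axis(H, T) - L1_meas)
# Many (h, theta) grid cells reproduce the same single-spot measurement:
print("poses matching one spot:", np.count_nonzero(residual1 < 1e-2))

# --- Two fibers with a known axial offset disambiguate the pose ---
OFFSET = 2.0  # hypothetical 2 mm offset between the two fiber tips
L2_meas = spot_major_axis(8.0 + OFFSET, np.deg2rad(20.0))
residual2 = np.abs(spot_major_axis(H + OFFSET, T) - L2_meas)
i = np.unravel_index(np.argmin(residual1 + residual2), H.shape)
print(f"recovered pose: h={H[i]:.2f} mm, theta={np.rad2deg(T[i]):.1f} deg")
```

In this toy model the disambiguation is exact: the ratio of the two spot extents depends only on the height, and the tilt then follows monotonically, which mirrors why a second, jointly perceived spot can make the guiding deterministic.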
