We present a computationally efficient approach for the intra-operative update of the feedback control policy of a steerable needle in the presence of motion uncertainty. Solving the dynamic programming (DP) equations to obtain the optimal control policy is difficult or intractable for nonlinear problems such as steering a flexible needle in soft tissue. We use the approximating Markov chain method to approximate the continuous (and controlled) process with a discrete, locally consistent counterpart. This provides the ground to examine a linear programming (LP) approach to solving the imposed DP problem, which significantly reduces the computational demand. A concrete example of two-dimensional (2D) needle steering is considered to investigate the effectiveness of the LP method for both deterministic and stochastic systems. We compare the performance of the LP-based policy with results obtained through a more computationally demanding algorithm, iterative policy-space approximation. Finally, we investigate the reliability of the LP-based policy in dealing with motion and parametric uncertainties, as well as the effect of the insertion point/angle on the probability of success.
Approximating Markov Chain Approach to Optimal Feedback Control of a Flexible Needle
Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received October 7, 2015; final manuscript received May 9, 2016; published online July 13, 2016. Assoc. Editor: Hashem Ashrafiuon.
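The LP approach to the DP problem described in the abstract can be illustrated on a toy discounted Markov decision process. The sketch below is not the paper's implementation: the three-state chain, the rewards, and the discount factor are hypothetical stand-ins for the locally consistent chain that the approximating Markov chain method would produce from the needle kinematics. It uses the standard LP formulation for discounted MDPs: minimize the sum of the values V(s) subject to V(s) >= R(s,a) + gamma * sum_s' P(s'|s,a) V(s') for every state-action pair.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy MDP standing in for the discretized needle-steering chain:
# 3 states, 2 actions, discount factor gamma (all values illustrative).
gamma = 0.9
n_states, n_actions = 3, 2

# P[a][s, s'] — transition probabilities of the locally consistent chain.
P = np.array([
    [[0.8, 0.2, 0.0],
     [0.0, 0.8, 0.2],
     [0.0, 0.0, 1.0]],
    [[0.5, 0.5, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.5, 0.5]],
])
# R[s, a] — expected one-step reward.
R = np.array([[0.0, 0.1],
              [0.5, 0.0],
              [1.0, 1.0]])

# LP: minimize sum_s V(s) subject to, for every (s, a),
#   V(s) >= R[s, a] + gamma * P[a][s] @ V,
# rewritten in <= form as (gamma * P[a][s] - e_s) @ V <= -R[s, a].
A_ub, b_ub = [], []
for a in range(n_actions):
    for s in range(n_states):
        row = gamma * P[a][s].copy()
        row[s] -= 1.0
        A_ub.append(row)
        b_ub.append(-R[s, a])

res = linprog(c=np.ones(n_states), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * n_states, method="highs")
V = res.x  # optimal value function

# Recover the greedy (optimal) policy from the value function.
Q = np.array([[R[s, a] + gamma * P[a][s] @ V for a in range(n_actions)]
              for s in range(n_states)])
policy = Q.argmax(axis=1)
```

Each state-action pair contributes one linear inequality, so the LP has one variable per state and `n_states * n_actions` constraints; at the optimum the binding constraints identify the optimal actions, which is what makes this a single-shot alternative to iterative policy-space approximation.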
Sovizi, J., Kumar, S., and Krovi, V. (July 13, 2016). "Approximating Markov Chain Approach to Optimal Feedback Control of a Flexible Needle." ASME. J. Dyn. Sys., Meas., Control. November 2016; 138(11): 111006. https://doi.org/10.1115/1.4033834