Adiabatic Markov Decision Process: Convergence of Value Iteration Algorithm

The Markov decision process (MDP) is a well-known framework for devising optimal decision-making strategies under uncertainty. Typically, the decision maker assumes a stationary environment, characterized by a time-invariant transition probability matrix. In many real-world scenarios, however, this assumption does not hold, and the optimal strategy may then fail to deliver the expected performance. In this paper, we study the performance of the classic value iteration algorithm for solving an MDP problem in a nonstationary environment. Specifically, the nonstationary environment is modeled as a sequence of time-variant transition probability matrices governed by an adiabatic evolution inspired by quantum mechanics. We characterize the performance of the value iteration algorithm as a function of the rate of change of the underlying environment, measured in terms of the convergence rate to the optimal average reward. We present two examples of queueing systems that make use of our analysis framework.
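The value iteration algorithm the abstract refers to can be sketched as follows. This is a minimal illustration for a discounted finite MDP (the paper itself analyzes the average-reward criterion); the toy two-state kernels, reward vectors, and the convex-mixture model of environmental drift are assumptions for illustration, not the paper's queueing examples.

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8, max_iter=10_000):
    """Classic value iteration on a finite MDP.
       P[a][s, s'] : transition matrix under action a (rows sum to 1).
       R[a][s]     : expected immediate reward for action a in state s.
       Returns the (approximately) optimal values and a greedy policy."""
    n_actions = len(P)
    n_states = P[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Bellman optimality backup: Q[a, s] = R[a][s] + gamma * (P[a] @ V)[s]
        Q = np.array([R[a] + gamma * P[a] @ V for a in range(n_actions)])
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:  # sup-norm stopping test
            V = V_new
            break
        V = V_new
    return V, Q.argmax(axis=0)

# Toy 2-state MDP whose kernel drifts slowly ("adiabatically"):
# at each stage the environment is a convex mix of two stationary kernels.
P0 = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative kernel at eps = 0
P1 = np.array([[0.5, 0.5], [0.5, 0.5]])   # illustrative kernel at eps = 1
R = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # rewards per action

for eps in (0.0, 0.5, 1.0):               # eps plays the role of the drift rate
    Pk = (1 - eps) * P0 + eps * P1        # current transition matrix
    V, pi = value_iteration([Pk, Pk], R)  # re-solve against the drifted kernel
    print(f"eps={eps}: V={np.round(V, 3)}, policy={pi}")
```

Re-solving at each drift step, as above, is only a baseline; the paper's contribution is bounding how well value iteration tracks the optimum when the kernel changes at a given rate.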
Contributed by the Dynamic Systems Division of ASME for publication in the JOURNAL OF DYNAMIC SYSTEMS, MEASUREMENT, AND CONTROL. Manuscript received November 7, 2014; final manuscript received February 22, 2016; published online April 6, 2016. Assoc. Editor: Srinivasa M. Salapaka.
Duong, T., Nguyen-Huu, D., and Nguyen, T. (April 6, 2016). "Adiabatic Markov Decision Process: Convergence of Value Iteration Algorithm." ASME. J. Dyn. Sys., Meas., Control. June 2016; 138(6): 061009. https://doi.org/10.1115/1.4032875