Abstract
Dynamic programming (DP) provides a systematic, closed-loop solution to optimal control problems. However, it suffers from the curse of dimensionality as the state dimension grows. Approximate dynamic programming (ADP) methods remedy this by seeking near-optimal rather than exactly optimal solutions. In essence, ADP uses function approximators, such as neural networks, to approximate the optimal control solution, and it can converge to a near-optimal solution using techniques such as reinforcement learning (RL). The two main challenges of this approach are finding a proper training domain and selecting a suitable neural network architecture so that the solution is approximated accurately with RL. Users typically select the training domain and the neural network architecture by trial and error, which is tedious and time-consuming. This paper proposes trading the fully closed-loop solution provided by ADP methods for a more effective selection of the training domain. To do so, we train a neural network on a small, moving domain centered on the reference signal. We assess the method’s effectiveness on a widely used benchmark problem, the Van der Pol oscillator, and on a real-world problem, controlling a quadrotor to track a reference trajectory. Simulation results demonstrate performance comparable to traditional methods while reducing computational requirements.
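The core idea summarized above is to draw training states only from a small region that moves with the reference signal, rather than from a large fixed domain. The following minimal sketch illustrates such a sampler under stated assumptions; the function and parameter names (e.g., `sample_moving_domain`, `half_width`) are illustrative and not taken from the paper.

```python
import numpy as np

def sample_moving_domain(reference, t, half_width, n_samples, rng=None):
    """Sample training states from a small box centered on the reference
    state at time t, instead of covering a large fixed training domain.

    Assumptions (not from the paper): `reference` is a callable returning
    the reference state x_ref(t); `half_width` sets the box size per
    state dimension.
    """
    rng = np.random.default_rng() if rng is None else rng
    x_ref = np.asarray(reference(t), dtype=float)
    # Uniform samples in [x_ref - half_width, x_ref + half_width].
    return x_ref + rng.uniform(-half_width, half_width,
                               size=(n_samples, x_ref.size))

# Example: a sinusoidal reference for a two-state system such as the
# Van der Pol oscillator (illustrative only).
reference = lambda t: np.array([np.sin(t), np.cos(t)])
states = sample_moving_domain(reference, t=1.0,
                              half_width=0.2, n_samples=64)
# `states` would then be used to train the value/policy network at this
# time step, keeping the training domain small but relevant.
```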