Abstract
This paper addresses the constrained H∞ optimal control problem for nonlinear active vehicle suspension systems, focusing on an approximate solution obtained through data-driven reinforcement learning in a differential-game setting. A dynamic model of the half-car active suspension system with input constraints is first established, in which the constrained control forces and the external road disturbance are treated as the two players of a zero-sum game. This formulation leads to the Hamilton–Jacobi–Isaacs (HJI) equation, whose solution yields the Nash equilibrium of the game. To solve the HJI equation efficiently and mitigate the impact of model-parameter uncertainties, an actor–critic neural-network framework is employed to approximate both the control policy and the value function of the system. A reinforcement learning algorithm based on the input–output data of the suspension system is then derived. Numerical examples demonstrate the effectiveness of the proposed approach for time-invariant suspension systems; under varying control-force constraints, the active suspension consistently exhibits excellent vibration-reduction performance.
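The zero-sum game structure underlying the HJI formulation can be illustrated with a minimal scalar example. The sketch below uses a model-based policy iteration on a scalar linear-quadratic game (the paper's method instead works from input–output data and neural-network approximators); all dynamics, weights, and the attenuation level γ are hypothetical choices for illustration, not values from the paper.

```python
import math

# Scalar zero-sum game: dx/dt = a*x + b*u + d*w,
# cost J = ∫ (q x^2 + r u^2 - γ^2 w^2) dt,
# control u minimizes J, disturbance w maximizes it.
# Quadratic value V(x) = p x^2 gives the Nash policies
# u* = -(b p / r) x  and  w* = (d p / γ^2) x.
# All parameter values below are illustrative assumptions.
a, b, d = -1.0, 1.0, 1.0
q, r, gamma = 1.0, 1.0, 2.0

def policy_iteration(iters=50):
    """Alternate policy evaluation and improvement for both players."""
    k, l = 0.0, 0.0          # initial (stabilizing) feedback gains
    p = 0.0
    for _ in range(iters):
        a_cl = a - b * k + d * l   # closed-loop drift under both policies
        # Policy evaluation: solve 2 a_cl p + q + r k^2 - γ^2 l^2 = 0 for p
        p = -(q + r * k**2 - gamma**2 * l**2) / (2 * a_cl)
        # Policy improvement: minimizer and maximizer updates
        k = b * p / r
        l = d * p / gamma**2
    return p, k, l

p, k, l = policy_iteration()

# Analytic Nash value from the scalar game Riccati equation:
# (d^2/γ^2 - b^2/r) p^2 + 2 a p + q = 0  (positive root)
c2 = d**2 / gamma**2 - b**2 / r
p_star = (-2 * a - math.sqrt(4 * a**2 - 4 * c2 * q)) / (2 * c2)
```

For these parameters the iteration converges to the positive root of the game Riccati equation, recovering the saddle-point gains of both players; in the paper the analogous fixed point of the HJI equation is approached by the critic network, with the actor supplying the constrained control policy.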