In this paper, we develop a unified framework to address the problem of optimal nonlinear analysis and feedback control for partial stability and partial-state stabilization of stochastic dynamical systems. Partial asymptotic stability in probability of the closed-loop nonlinear system is guaranteed by means of a Lyapunov function that is positive definite and decrescent with respect to part of the system state and that can be shown to be the solution to the steady-state form of the stochastic Hamilton–Jacobi–Bellman equation, thereby guaranteeing both partial stability in probability and optimality. The overall framework provides the foundation for extending optimal linear-quadratic stochastic controller synthesis to nonlinear-nonquadratic optimal partial-state stochastic stabilization. Connections to optimal linear and nonlinear regulation for linear and nonlinear time-varying stochastic systems with quadratic and nonlinear-nonquadratic cost functionals are also provided. Finally, we also develop optimal feedback controllers for affine stochastic nonlinear systems using an inverse optimality framework tailored to the partial-state stochastic stabilization problem and use this result to address polynomial and multilinear forms in the performance criterion.

Introduction

In Ref. [1], we extended the framework developed in Refs. [2,3] to address the problem of optimal partial-state stabilization, wherein stabilization with respect to a subset of the system state variables is desired. Partial-state stabilization arises in many engineering applications [4,5]. Specifically, in spacecraft stabilization via gimballed gyroscopes, asymptotic stability of an equilibrium position of the spacecraft is sought while requiring Lyapunov stability of the axis of the gyroscope relative to the spacecraft [5]. Alternatively, in the control of rotating machinery with mass imbalance, spin stabilization about a nonprincipal axis of inertia requires motion stabilization with respect to a subspace instead of the origin [4]. The most common application where partial stabilization is necessary is adaptive control, wherein asymptotic stability of the closed-loop plant states is guaranteed without necessarily achieving parameter error convergence.

In this paper, we extend the framework developed in Ref. [1] to address the problem of optimal partial-state stochastic stabilization. Specifically, we consider a notion of optimality that is directly related to a given Lyapunov function that is positive definite and decrescent with respect to part of the system state. In particular, an optimal partial-state stochastic stabilization control problem is stated, and sufficient Hamilton–Jacobi–Bellman conditions are used to characterize an optimal feedback controller. Another important application of partial stability and partial stabilization theory is the unification it provides between time-invariant stability theory and stability theory for time-varying systems [3,6]. We exploit this unification and specialize our results to address optimal linear and nonlinear regulation for linear and nonlinear time-varying stochastic systems with quadratic and nonlinear-nonquadratic cost functionals.

Our approach focuses on the role of the Lyapunov function guaranteeing stochastic stability of the closed-loop system and its connection to the steady-state solution of the stochastic Hamilton–Jacobi–Bellman equation characterizing the optimal nonlinear feedback controller. In order to avoid the complexity of solving the steady-state stochastic Hamilton–Jacobi–Bellman equation, we do not attempt to minimize a given cost functional; rather, we parameterize a family of stochastically stabilizing controllers that minimizes a derived cost functional, which provides flexibility in specifying the control law. This corresponds to addressing an inverse optimal stochastic control problem [7–13].

The inverse optimal control design approach provides a framework for constructing the Lyapunov function for the closed-loop system that serves as an optimal value function and, as shown in Refs. [11,12], achieves desired stability margins. Specifically, nonlinear inverse optimal controllers that minimize a meaningful (in the terminology of Refs. [11,12]) nonlinear-nonquadratic performance criterion involving a nonlinear-nonquadratic, non-negative-definite function of the state and a quadratic positive-definite function of the feedback control are shown to possess sector margin guarantees to component decoupled input nonlinearities in the conic sector (1/2, ∞).

The paper is organized as follows. In Sec. 2, we establish notation and definitions, and present some key results on partial stability of nonlinear stochastic dynamical systems. In Sec. 3, we consider a stochastic nonlinear system with a performance functional evaluated over the infinite horizon. The performance functional is then evaluated in terms of a Lyapunov function that guarantees partial asymptotic stability in probability. We then state a stochastic optimal control problem and provide sufficient conditions for characterizing an optimal nonlinear feedback controller guaranteeing partial asymptotic stability in probability of the closed-loop system. These results are then used to address a stochastic optimal control problem for uniform asymptotic stabilization in probability of nonlinear time-varying stochastic dynamical systems.

In Sec. 4, we develop optimal feedback controllers for affine stochastic nonlinear systems using an inverse optimality framework tailored to the partial-state stochastic stabilization problem. This result is then used to derive time-varying extensions of the results in Refs. [14,15] involving nonlinear feedback controllers minimizing polynomial and multilinear performance criteria. In Sec. 5, we provide two illustrative numerical examples that highlight the optimal partial-state stochastic stabilization framework. In Sec. 6, we present conclusions and highlight some future research directions. Finally, we note that a preliminary version of this paper appeared in Ref. [16]. The present paper considerably expands on Ref. [16] by providing detailed proofs of all the results along with examples and additional motivation.

Notation, Definitions, and Mathematical Preliminaries

In this section, we establish notation and definitions, and review some basic results on partial stability of nonlinear stochastic dynamical systems [17–22]. Specifically, ℝ denotes the set of real numbers, ℝ₊ denotes the set of positive real numbers, ℝ̄₊ denotes the set of non-negative real numbers, ℤ₊ denotes the set of positive integers, ℝ^n denotes the set of n × 1 real column vectors, ℝ^{n×m} denotes the set of n × m real matrices, ℕ^n denotes the set of n × n non-negative-definite matrices, and ℙ^n denotes the set of n × n positive-definite matrices. We write Bε(x) for the open ball centered at x with radius ε, ||·|| for the Euclidean vector norm or an induced matrix norm (depending on context), ||·||_F for the Frobenius matrix norm, A^T for the transpose of the matrix A, ⊗ for the Kronecker product, ⊕ for the Kronecker sum, and I_n or I for the n × n identity matrix. Furthermore, 𝔅^n denotes the σ-algebra of Borel sets in D ⊆ ℝ^n, and 𝔖 denotes a σ-algebra generated on a set S ⊆ ℝ^n.

We define a complete probability space as (Ω, F, ℙ), where Ω denotes the sample space, F denotes a σ-algebra, and ℙ defines a probability measure on the σ-algebra F; that is, ℙ is a non-negative countably additive set function on F such that ℙ(Ω) = 1 [20]. Furthermore, we assume that w(·) is a standard d-dimensional Wiener process defined by (w(·), Ω, F, ℙ^{w0}), where ℙ^{w0} is the classical Wiener measure [22, p. 10], with a continuous-time filtration {F_t}_{t≥0} generated by the Wiener process w(t) up to time t. We denote a stochastic dynamical system by G generating a filtration {F_t}_{t≥0}-adapted stochastic process x : ℝ̄₊ × Ω → D on (Ω, F, ℙ^{x0}) satisfying F_τ ⊆ F_t, 0 ≤ τ < t, such that {ω ∈ Ω : x(t, ω) ∈ B} ∈ F_t, t ≥ 0, for all Borel sets B ⊆ ℝ^n contained in the Borel σ-algebra 𝔅^n. Here, we use the notation x(t) to represent the stochastic process x(t, ω), omitting its dependence on ω.

We denote the set of equivalence classes of measurable, integrable, and square-integrable ℝ^n or ℝ^{n×m} (depending on context) valued random processes on (Ω, F, ℙ) over the semi-infinite parameter space [0, ∞) by L⁰(Ω, F, ℙ), L¹(Ω, F, ℙ), and L²(Ω, F, ℙ), respectively, where the equivalence relation is the one induced by ℙ-almost-sure equality. In particular, elements of L⁰(Ω, F, ℙ) take finite values ℙ-almost surely (a.s.). Hence, depending on the context, ℝ^n will denote either the set of n × 1 real variables or the subspace of L⁰(Ω, F, ℙ) comprising ℝ^n random processes that are constant almost surely. All inequalities and equalities involving random processes on (Ω, F, ℙ) are to be understood to hold ℙ-almost surely. Furthermore, E[·] and E^{x0}[·] denote, respectively, the expectation with respect to the probability measure ℙ and with respect to the classical Wiener measure ℙ^{x0}.

Finally, we write tr(·) for the trace operator, (·)^{−1} for the inverse operator, V′(x) ≜ ∂V(x)/∂x for the Fréchet derivative of V at x, V″(x) ≜ ∂²V(x)/∂x² for the Hessian of V at x, and H_n for the Hilbert space of random vectors x ∈ ℝ^n with finite average power, that is, H_n ≜ {x : Ω → ℝ^n : E[x^T x] < ∞}. For an open set D ⊆ ℝ^n, H_n^D ≜ {x ∈ H_n : x : Ω → D} denotes the set of all random vectors in H_n induced by D. Similarly, for every x0 ∈ ℝ^n, H_n^{x0} ≜ {x ∈ H_n : x = x0 a.s.}. Furthermore, C² denotes the space of real-valued functions V : D → ℝ that are two-times continuously differentiable with respect to x ∈ D ⊆ ℝ^n.

In this paper, we consider nonlinear stochastic autonomous dynamical systems G of the form 
dx1(t) = f1(x1(t), x2(t)) dt + D1(x1(t), x2(t)) dw(t),  x1(t0) = x10 a.s.,  t ≥ t0
(1)
 
dx2(t) = f2(x1(t), x2(t)) dt + D2(x1(t), x2(t)) dw(t),  x2(t0) = x20 a.s.
(2)

where, for every t ≥ t0, x1(t) ∈ H_{n1}^D and x2(t) ∈ H_{n2} are such that x(t) ≜ [x1^T(t), x2^T(t)]^T is an F_t-measurable random state vector, x(t0) ∈ H_{n1}^D × H_{n2}, D ⊆ ℝ^{n1} is an open set with 0 ∈ D, w(t) is a d-dimensional independent standard Wiener process (i.e., Brownian motion) defined on a complete filtered probability space (Ω, F, {F_t}_{t≥t0}, ℙ), x(t0) is independent of (w(t) − w(t0)), t ≥ t0, and f1 : D × ℝ^{n2} → ℝ^{n1} is such that, for every x2 ∈ ℝ^{n2}, f1(0, x2) = 0 and f1(·, x2) is locally Lipschitz continuous in x1, and f2 : D × ℝ^{n2} → ℝ^{n2} is such that, for every x1 ∈ D, f2(x1, ·) is locally Lipschitz continuous in x2. In addition, the function D1 : D × ℝ^{n2} → ℝ^{n1×d} is continuous and such that, for every x2 ∈ ℝ^{n2}, D1(0, x2) = 0, and D2 : D × ℝ^{n2} → ℝ^{n2×d} is continuous.

An ℝ^{n1+n2}-valued stochastic process x : [t0, τ] × Ω → D × ℝ^{n2} is said to be a solution of Eqs. (1) and (2) on the interval [t0, τ] with initial condition x(t0) = x0 a.s. if x(·) is progressively measurable (i.e., x(·) is nonanticipating and measurable in t and ω) with respect to {F_t}_{t≥t0}, f(x1, x2) ≜ [f1^T(x1, x2), f2^T(x1, x2)]^T ∈ L¹(Ω, F, ℙ), D(x1, x2) ≜ [D1^T(x1, x2), D2^T(x1, x2)]^T ∈ L²(Ω, F, ℙ), and
x(t) = x0 + ∫_{t0}^{t} f(x(s)) ds + ∫_{t0}^{t} D(x(s)) dw(s) a.s.,  t ∈ [t0, τ]
(3)

where the integrals in Eq. (3) are Itô integrals. Note that for each fixed t ≥ t0, the random variable ω ↦ x(t, ω) assigns a vector x(ω) to every outcome ω ∈ Ω of an experiment, and for each fixed ω ∈ Ω, the mapping t ↦ x(t, ω) is the sample path of the stochastic process x(t), t ≥ t0. A pathwise solution t ↦ x(t) of Eqs. (1) and (2) in (Ω, {F_t}_{t≥t0}, ℙ^{x0}) is said to be right maximally defined if x cannot be extended (either uniquely or nonuniquely) forward in time. We assume that all right maximal pathwise solutions to Eqs. (1) and (2) in (Ω, {F_t}_{t≥t0}, ℙ^{x0}) exist on [t0, ∞), and hence, we assume that Eqs. (1) and (2) are forward complete. Sufficient conditions for forward completeness or global solutions to Eqs. (1) and (2) are given by Corollary 6.3.5 of Ref. [20].
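Although no numerical scheme appears in the paper, the solution notion in Eq. (3) can be illustrated with a standard Euler–Maruyama discretization of the partitioned dynamics (1) and (2). The sketch below, including any particular drift and diffusion functions passed to it, is an illustrative assumption and not part of the framework.

```python
import numpy as np

def euler_maruyama(f1, f2, D1, D2, x10, x20, t0=0.0, T=5.0, dt=1e-3, rng=None):
    """One sample path of the partitioned Ito system (1)-(2) via the
    Euler-Maruyama scheme; f1, f2 are the drifts and D1, D2 the diffusion
    matrices, all driven by the same d-dimensional Wiener increment."""
    rng = np.random.default_rng() if rng is None else rng
    x1 = np.atleast_1d(np.asarray(x10, dtype=float)).copy()
    x2 = np.atleast_1d(np.asarray(x20, dtype=float)).copy()
    d = np.atleast_2d(D1(x1, x2)).shape[1]          # Wiener process dimension
    path = [(x1.copy(), x2.copy())]
    for _ in range(round((T - t0) / dt)):
        dw = rng.standard_normal(d) * np.sqrt(dt)   # Wiener increment
        x1_next = x1 + f1(x1, x2) * dt + np.atleast_2d(D1(x1, x2)) @ dw
        x2_next = x2 + f2(x1, x2) * dt + np.atleast_2d(D2(x1, x2)) @ dw
        x1, x2 = x1_next, x2_next
        path.append((x1.copy(), x2.copy()))
    return path
```

For instance, the hypothetical choice f1(x1, x2) = −x1, D1(x1, x2) = 0.5 x1, with x2 static, satisfies the partial equilibrium conditions f1(0, x2) = 0 and D1(0, x2) = 0 assumed above.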

Furthermore, we assume that f : D × ℝ^{n2} → ℝ^{n1+n2} and D : D × ℝ^{n2} → ℝ^{(n1+n2)×d} satisfy the uniform Lipschitz continuity condition 
||f(x) − f(y)|| + ||D(x) − D(y)||_F ≤ L ||x − y||,  x, y ∈ D × ℝ^{n2}
(4)
and the growth restriction condition 
||f(x)||² + ||D(x)||²_F ≤ L²(1 + ||x||²),  x ∈ D × ℝ^{n2}
(5)

for some Lipschitz constant L > 0, and hence, since x(t0) ∈ H_{n1}^D × H_{n2} and x(t0) is independent of (w(t) − w(t0)), t ≥ t0, it follows that there exists a unique solution x ∈ L²(Ω, F, ℙ) of Eqs. (1) and (2) in the following sense. For every x ∈ H_{n1}^D × H_{n2}, there exists τ_x > 0 such that, if x^I : [t0, τ1] × Ω → D × ℝ^{n2} and x^II : [t0, τ2] × Ω → D × ℝ^{n2} are two solutions of Eqs. (1) and (2), that is, if x^I, x^II ∈ L²(Ω, F, ℙ), with continuous sample paths almost surely, solve Eqs. (1) and (2), then τ_x ≤ min{τ1, τ2} and ℙ(x^I(t) = x^II(t), t0 ≤ t ≤ τ_x) = 1. Sufficient conditions for forward existence and uniqueness in the absence of the uniform Lipschitz continuity condition and growth restriction condition can be found in Refs. [23,24].

A solution t ↦ [x1^T(t), x2^T(t)]^T is said to be regular if and only if ℙ^{x0}(τ_e = ∞) = 1 for all x(0) ∈ H_{n1}^D × H_{n2}, where τ_e is the first stopping time of the solution to Eqs. (1) and (2) from every bounded domain in D × ℝ^{n2}. Recall that regularity of solutions implies that solutions exist for t ≥ t0 almost surely. Here, we assume regularity of solutions to Eqs. (1) and (2), and hence, τ_x = ∞ [18, p. 75]. Moreover, the unique solution determines an ℝ^{n1+n2}-valued, time-homogeneous Feller continuous Markov process x(·), and hence, its stationary Feller transition probability function is given by (Refs. [18, Theorem 3.4] and [20, Theorem 9.2.8]) ℙ(x(t) ∈ B | x(t0) =a.s. x0) = ℙ(t − t0, x0, 0, B) for all x0 ∈ D × ℝ^{n2} and t ≥ t0, and all Borel subsets B of D × ℝ^{n2}, where ℙ(s, x, t, B), t ≥ s, denotes the probability of transition of the point x ∈ D × ℝ^{n2} at time instant s into the set B ⊆ D × ℝ^{n2} at time instant t. Finally, recall that every continuous process with a Feller transition probability function is also a strong Markov process [18, p. 101].

Definition 2.1 [22, Definition 7.7]. Let x(·) be a time-homogeneous Markov process in H_{n1}^D × H_{n2} and let V : D × ℝ^{n2} → ℝ. Then, the infinitesimal generator L of x(t), t ≥ 0, with x(0) = x0 a.s., is defined by 
LV(x0) ≜ lim_{t → 0⁺} [ E^{x0}[V(x(t))] − V(x0) ] / t,  x0 ∈ D × ℝ^{n2}
(6)
If V ∈ C² and has compact support, and x(t), t ≥ t0, satisfies Eqs. (1) and (2), then the limit in Eq. (6) exists for all x ∈ D × ℝ^{n2} and the infinitesimal generator L of x(t), t ≥ t0, can be characterized by the system drift and diffusion functions f(x) and D(x) defining the stochastic dynamical system (1) and (2) with system state x(t), t ≥ t0, and is given by [22, Theorem 7.9] 
LV(x) = (∂V(x)/∂x) f(x) + (1/2) tr D^T(x) (∂²V(x)/∂x²) D(x),  x ∈ D × ℝ^{n2}
(7)
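The limit definition (6) and the closed form (7) can be checked against each other numerically: for small t, E^{x0}[V(x(t))] − V(x0) ≈ t LV(x0). The sketch below does this by Monte Carlo for an illustrative scalar SDE dx = −x dt + σx dw with V(x) = x², both of which are assumptions chosen for illustration; Eq. (7) then gives LV(x) = (σ² − 2)x².

```python
import numpy as np

# Monte Carlo illustration of the generator characterization (6)-(7):
# for small t, E^{x0}[V(x(t))] - V(x0) is approximately t * LV(x0).
# Illustrative scalar SDE dx = -x dt + sigma*x dw with V(x) = x^2
# (assumptions for illustration); Eq. (7) gives LV(x) = (sigma^2 - 2)*x^2.
def generator_estimate(x0, sigma, t=1e-3, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    dw = rng.standard_normal(n_paths) * np.sqrt(t)   # Wiener increments
    xt = x0 + (-x0) * t + sigma * x0 * dw            # one Euler step of length t
    return (np.mean(xt**2) - x0**2) / t              # difference quotient of Eq. (6)

lv_mc = generator_estimate(x0=1.0, sigma=0.5)
lv_formula = (0.5**2 - 2.0) * 1.0**2                 # closed form from Eq. (7)
```

The two quantities agree up to O(t) discretization bias and Monte Carlo noise.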

In the following definition, we introduce the notion of stochastic partial stability.

Definition 2.2. (i) The nonlinear stochastic dynamical system G given by Eqs. (1) and (2) is Lyapunov stable in probability with respect to x1 uniformly in x20 if, for every ε > 0 and ρ > 0, there exists δ = δ(ρ, ε) > 0 such that, for all x10 ∈ Bδ(0), 
ℙ^{x0}( sup_{t ≥ t0} ||x1(t)|| > ε ) ≤ ρ
(8)

for all x20 ∈ ℝ^{n2}.

(ii) G is asymptotically stable in probability with respect to x1 uniformly in x20 if G is Lyapunov stable in probability with respect to x1 uniformly in x20 and 
lim_{x10 → 0} ℙ^{x0}( lim_{t → ∞} ||x1(t)|| = 0 ) = 1
(9)

uniformly in x20 for all x20 ∈ ℝ^{n2}.

(iii) G is globally asymptotically stable in probability with respect to x1 uniformly in x20 if G is Lyapunov stable in probability with respect to x1 uniformly in x20 and ℙ^{x0}( lim_{t → ∞} ||x1(t)|| = 0 ) = 1 holds uniformly in x20 for all (x10, x20) ∈ ℝ^{n1} × ℝ^{n2}.

Remark 2.1. It is important to note that there is a key difference between the stochastic partial stability definitions given in Definition 2.2 and the definitions of stochastic partial stability given in Ref. [21]. In particular, the stochastic partial stability definitions given in Ref. [21] require that both the initial conditions x10 and x20 lie in a neighborhood of the origin, whereas in Definition 2.2, x20 can be arbitrary. As will be seen below, this difference allows us to unify autonomous stochastic partial stability theory with time-varying stochastic stability theory. An additional difference between our formulation of the stochastic partial stability problem and the stochastic partial stability problem considered in Ref. [21] is in the treatment of the equilibrium of Eqs. (1) and (2). Specifically, in our formulation, we require the weaker partial equilibrium condition f1(0, x2) = 0 and D1(0, x2) = 0 for every x2 ∈ ℝ^{n2}, whereas the author in Ref. [21] requires the stronger equilibrium condition f1(0, 0) = 0, f2(0, 0) = 0, D1(0, 0) = 0, and D2(0, 0) = 0.

Remark 2.2. A more general stochastic stability notion can also be introduced here involving stochastic stability and convergence to an invariant (stationary) distribution. In this case, state convergence is not to an equilibrium point but rather to a stationary distribution. This framework can relax the vanishing perturbation assumption D1(0, x2) = 0, x2 ∈ ℝ^{n2}, but requires a more involved analysis and synthesis framework showing stability of the underlying Markov semigroup [25].

As shown in Refs. [3] and [6], an important application of deterministic partial stability theory is the unification it provides between time-invariant stability theory and stability theory for time-varying systems. A similar unification can be provided for stochastic dynamical systems. Specifically, consider the nonlinear time-varying stochastic dynamical system given by 
dx(t) = f(t, x(t)) dt + D(t, x(t)) dw(t),  x(t0) = x0 a.s.,  t ≥ t0
(10)
where, for every t ≥ t0, x(t) ∈ H_n^D, D ⊆ ℝ^n is an open set with 0 ∈ D, f(t, 0) = 0, D(t, 0) = 0, and f : [t0, ∞) × D → ℝ^n and D : [t0, ∞) × D → ℝ^{n×d} are jointly continuous in t and x and satisfy Eqs. (4) and (5) for all x ∈ D uniformly in t for all t in compact subsets of [t0, ∞). Now, defining x1(τ) ≜ x(t) and x2(τ) ≜ t a.s., where τ ≜ t − t0, it follows that the solution x(t), t ≥ t0, to the nonlinear time-varying stochastic dynamical system (10) can be equivalently characterized by the solution x1(τ), τ ≥ 0, to the nonlinear autonomous stochastic dynamical system 
dx1(τ) = f(x2(τ), x1(τ)) dτ + D(x2(τ), x1(τ)) dw(τ),  x1(0) = x0 a.s.,  τ ≥ 0
(11)
 
dx2(τ) = dτ,  x2(0) = t0 a.s.
(12)

Note that Eqs. (11) and (12) are in the same form as the system given by Eqs. (1) and (2), and Definition 2.2 applied to Eqs. (11) and (12) specializes to the definitions of uniform Lyapunov stability in probability, uniform asymptotic stability in probability, and global uniform asymptotic stability in probability of Eq. (10); for details, see Refs. [17] and [20].
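The augmentation (11) and (12) is mechanical and can be sketched in code: time is carried as the second state component x2, so any time-varying drift and diffusion pair becomes autonomous. The particular functions f and D in the example call below are illustrative assumptions, not examples from the paper.

```python
import numpy as np

def augment(f, D):
    """Return autonomous drift/diffusion functions for the augmented
    state (x1, x2) = (x, t), following Eqs. (11)-(12)."""
    f1 = lambda x1, x2: f(x2, x1)    # Eq. (11) drift: time enters as x2
    D1 = lambda x1, x2: D(x2, x1)    # Eq. (11) diffusion
    f2 = lambda x1, x2: 1.0          # Eq. (12): dx2 = dtau
    D2 = lambda x1, x2: 0.0          # the time state carries no noise
    return f1, f2, D1, D2

# Illustrative time-varying SDE: dx = -(1 + 0.5 sin t) x dt + 0.1 x dw
f = lambda t, x: -(1.0 + 0.5 * np.sin(t)) * x
D = lambda t, x: 0.1 * x
f1, f2, D1, D2 = augment(f, D)
```

Evaluating f1 at (x1, x2) simply reads off f(t, x) with t = x2, which is exactly the identification used in Corollary 3.1 below.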

Next, we provide sufficient conditions for partial stability of the nonlinear stochastic dynamical system given by Eqs. (1) and (2). For the statement of this result, recall the definitions of class-K and class-K∞ functions given in Ref. [3, p. 162].

Theorem 2.1. Consider the nonlinear stochastic dynamical systems (1) and (2). Then, the following statements hold:

(i) If there exist a two-times continuously differentiable function V : D × ℝ^{n2} → ℝ and class-K functions α(·), β(·), and γ(·) such that, for all (x1, x2) ∈ D × ℝ^{n2}, 
α(||x1||) ≤ V(x1, x2) ≤ β(||x1||)
(13)
 
(∂V(x1, x2)/∂x1) f1(x1, x2) + (∂V(x1, x2)/∂x2) f2(x1, x2) + (1/2) tr D1^T(x1, x2) (∂²V(x1, x2)/∂x1²) D1(x1, x2) + (1/2) tr D2^T(x1, x2) (∂²V(x1, x2)/∂x2²) D2(x1, x2) ≤ −γ(||x1||)
(14)

then the nonlinear dynamical system given by Eqs. (1) and (2) is asymptotically stable in probability with respect to x1 uniformly in x20.

(ii) If there exist a two-times continuously differentiable function V : ℝ^{n1} × ℝ^{n2} → ℝ, class-K∞ functions α(·) and β(·), and a class-K function γ(·) satisfying Eqs. (13) and (14), then the nonlinear dynamical system given by Eqs. (1) and (2) is globally asymptotically stable in probability with respect to x1 uniformly in x20.

Proof. (i) Let x20 ∈ ℝ^{n2}, let ε > 0 be such that Bε(0) ⊆ D, let ρ > 0, and define D_{ε,ρ} ≜ {x1 ∈ Bε(0) : V(x1, x20) < α(ε)ρ}. Since V(·, ·) is continuous and V(0, x2) = 0, x2 ∈ ℝ^{n2}, it follows that D_{ε,ρ} is nonempty and there exists δ = δ(ε, ρ) > 0 such that V(x1, x20) < α(ε)ρ, x1 ∈ Bδ(0). Hence, Bδ(0) ⊆ D_{ε,ρ}. Next, it follows from Eq. (14) that V(x1(t), x2(t)) is a (positive) supermartingale [18, Lemma 5.4], and hence, for every x1(0) ∈ H_{n1}^{Bδ(0)} ⊆ H_{n1}^{D_{ε,ρ}}, it follows from Eq. (13) and the extended version of the Markov inequality for monotonically increasing functions [26, p. 193] that 
ℙ^{x0}( sup_{t ≥ 0} ||x1(t)|| ≥ ε ) ≤ sup_{t ≥ 0} E^{x0}[α(||x1(t)||)] / α(ε) ≤ sup_{t ≥ 0} E^{x0}[V(x1(t), x2(t))] / α(ε) ≤ E^{x0}[V(x1(0), x2(0))] / α(ε) ≤ ρ

which proves partial Lyapunov stability in probability with respect to x1 uniformly in x20.

To prove partial asymptotic stability in probability with respect to x1, note that it follows from Eqs. (13) and (14) that 
LV(x1, x2) ≤ −γ(||x1||) ≤ −γ∘β^{−1}(V(x1, x2)),  (x1, x2) ∈ D × ℝ^{n2}

Furthermore, it follows from partial Lyapunov stability in probability that Bε(0) × ℝ^{n2} is an invariant set with respect to the solutions of Eqs. (1) and (2) as ε → 0, and hence, using Corollary 4.2 of Ref. [27] with η(·) = γ∘β^{−1}(·), it follows that lim_{t→∞} γ∘β^{−1}(V(x1(t), x2(t))) = 0 a.s. Furthermore, using the properties of the class-K functions α(·), β(·), and γ(·), it follows that lim_{t→∞} V(x1(t), x2(t)) = 0 a.s., which yields lim_{t→∞} α(||x1(t)||) ≤ lim_{t→∞} V(x1(t), x2(t)) = 0 a.s. Hence, lim_{t→∞} ||x1(t)|| = 0 a.s. as x10 → 0, which proves partial asymptotic stability in probability with respect to x1 uniformly in x20.

(ii) Finally, for D = ℝ^{n1}, global asymptotic stability in probability with respect to x1 uniformly in x20 is a direct consequence of the radial unboundedness of V(·, ·), using standard arguments and the fact that α(·) and β(·) are class-K∞ functions.
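Theorem 2.1 can be illustrated numerically on a toy system (an assumption for illustration, not an example from the paper): for dx1 = −x1 dt + 0.3 x1 dw1 and dx2 = dw2, the function V(x1, x2) = x1² satisfies Eqs. (13) and (14) with α(s) = β(s) = s² and γ(s) = (2 − 0.09)s², so x1 is asymptotically stable in probability uniformly in x20, even though x2 is a Brownian motion and its initial condition is arbitrary.

```python
import numpy as np

def simulate(x10, x20, T=6.0, dt=1e-3, n_paths=500, seed=0):
    """Euler-Maruyama paths of dx1 = -x1 dt + 0.3*x1 dw1, dx2 = dw2
    (an illustrative system assumed for this sketch)."""
    rng = np.random.default_rng(seed)
    x1 = np.full(n_paths, float(x10))
    x2 = np.full(n_paths, float(x20))
    sup_x1 = np.abs(x1).copy()                      # running sup of ||x1||
    for _ in range(round(T / dt)):
        dw = rng.standard_normal((2, n_paths)) * np.sqrt(dt)
        x1 = x1 - x1 * dt + 0.3 * x1 * dw[0]
        x2 = x2 + dw[1]
        sup_x1 = np.maximum(sup_x1, np.abs(x1))
    return x1, x2, sup_x1

# x20 far from the origin: partial stability places no smallness demand on it.
x1T, x2T, sup_x1 = simulate(x10=0.1, x20=50.0)
```

Across sample paths, ||x1(t)|| stays small and decays while x2 wanders, matching Definition 2.2 (i) and (ii).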

Stochastic Optimal Partial-State Stabilization

In the first part of this section, we provide connections between Lyapunov functions and nonquadratic cost evaluation. Specifically, we consider the problem of evaluating a nonlinear-nonquadratic performance measure that depends on the solution of the stochastic nonlinear dynamical system given by Eqs. (1) and (2). In particular, we show that the nonlinear-nonquadratic performance measure 
J(x10, x20) ≜ E^{x0}[ ∫_0^∞ L(x1(t), x2(t)) dt ]
(15)

where L : ℝ^{n1} × ℝ^{n2} → ℝ is jointly continuous in x1 and x2, and x1(t) and x2(t), t ≥ 0, satisfy Eqs. (1) and (2), can be evaluated in a convenient form so long as Eqs. (1) and (2) are related to an underlying Lyapunov function that is positive definite and decrescent with respect to x1 and proves asymptotic stability in probability of Eqs. (1) and (2) with respect to x1 uniformly in x20.

Theorem 3.1. Consider the nonlinear stochastic dynamical system G given by Eqs. (1) and (2) with performance measure (15). Assume that there exist a two-times continuously differentiable function V : ℝ^{n1} × ℝ^{n2} → ℝ, class-K∞ functions α(·) and β(·), and a class-K function γ(·) such that, for all (x1, x2) ∈ ℝ^{n1} × ℝ^{n2}, 
α(||x1||) ≤ V(x1, x2) ≤ β(||x1||)
(16)
 
(∂V(x1, x2)/∂x1) f1(x1, x2) + (∂V(x1, x2)/∂x2) f2(x1, x2) + (1/2) tr D1^T(x1, x2) (∂²V(x1, x2)/∂x1²) D1(x1, x2) + (1/2) tr D2^T(x1, x2) (∂²V(x1, x2)/∂x2²) D2(x1, x2) ≤ −γ(||x1||)
(17)
 
L(x1, x2) + (∂V(x1, x2)/∂x1) f1(x1, x2) + (∂V(x1, x2)/∂x2) f2(x1, x2) + (1/2) tr D1^T(x1, x2) (∂²V(x1, x2)/∂x1²) D1(x1, x2) + (1/2) tr D2^T(x1, x2) (∂²V(x1, x2)/∂x2²) D2(x1, x2) = 0
(18)
Then, the nonlinear stochastic dynamical system G is globally asymptotically stable in probability with respect to x1 uniformly in x20 and, for all (x10, x20) ∈ ℝ^{n1} × ℝ^{n2}, 
J(x10, x20) = V(x10, x20)
(19)

Proof. Let x1(t) and x2(t), t ≥ t0, satisfy Eqs. (1) and (2). Then, Eqs. (16) and (17) are a restatement of Eqs. (13) and (14), and hence, it follows from Theorem 2.1 that the system G is globally asymptotically stable in probability with respect to x1 uniformly in x20. Consequently, ℙ^{x0}( lim_{t→∞} ||x1(t)|| = 0 ) = 1 holds for all initial conditions (x10, x20) ∈ ℝ^{n1} × ℝ^{n2}.

Next, using Itô’s (chain rule) formula, it follows that the stochastic differential of V(x1(t), x2(t)) along the system trajectories x1(t) and x2(t), t ≥ t0, is given by 
dV(x1(t), x2(t)) = [ (∂V(x1(t), x2(t))/∂x1) f1(x1(t), x2(t)) + (∂V(x1(t), x2(t))/∂x2) f2(x1(t), x2(t)) + (1/2) tr D1^T(x1(t), x2(t)) (∂²V(x1(t), x2(t))/∂x1²) D1(x1(t), x2(t)) + (1/2) tr D2^T(x1(t), x2(t)) (∂²V(x1(t), x2(t))/∂x2²) D2(x1(t), x2(t)) ] dt + (∂V(x(t))/∂x) D(x1(t), x2(t)) dw(t)
(20)
Hence, using Eq. (18), it follows that 
L(x1(t), x2(t)) dt + dV(x1(t), x2(t)) = [ L(x1(t), x2(t)) + (∂V(x1(t), x2(t))/∂x1) f1(x1(t), x2(t)) + (∂V(x1(t), x2(t))/∂x2) f2(x1(t), x2(t)) + (1/2) tr D1^T(x1(t), x2(t)) (∂²V(x1(t), x2(t))/∂x1²) D1(x1(t), x2(t)) + (1/2) tr D2^T(x1(t), x2(t)) (∂²V(x1(t), x2(t))/∂x2²) D2(x1(t), x2(t)) ] dt + (∂V(x(t))/∂x) D(x1(t), x2(t)) dw(t) = (∂V(x(t))/∂x) D(x1(t), x2(t)) dw(t)
(21)
Let {tn}_{n=0}^∞ be a monotonic sequence of positive numbers with tn → ∞ as n → ∞, let τm : Ω → [t0, ∞) be the first exit (stopping) time of the solution x1(t) and x2(t), t ≥ t0, from the set Bm(0) × ℝ^{n2}, and let τ∞ ≜ lim_{m→∞} τm. Now, integrating Eq. (21) over [t0, min{tn, τm}], where (n, m) ∈ ℤ₊ × ℤ₊, yields 
∫_{t0}^{min{tn,τm}} L(x1(t), x2(t)) dt = −∫_{t0}^{min{tn,τm}} dV(x1(t), x2(t)) + ∫_{t0}^{min{tn,τm}} (∂V(x(t))/∂x) D(x1(t), x2(t)) dw(t) = V(x1(t0), x2(t0)) − V(x1(min{tn,τm}), x2(min{tn,τm})) + ∫_{t0}^{min{tn,τm}} (∂V(x(t))/∂x) D(x1(t), x2(t)) dw(t)
(22)
Next, taking the expectation on both sides of Eq. (22) yields 
E^{x0}[ ∫_{t0}^{min{tn,τm}} L(x1(t), x2(t)) dt ] = E^{x0}[ V(x1(t0), x2(t0)) − V(x1(min{tn,τm}), x2(min{tn,τm})) + ∫_{t0}^{min{tn,τm}} (∂V(x(t))/∂x) D(x1(t), x2(t)) dw(t) ] = V(x10, x20) − E^{x0}[ V(x1(min{tn,τm}), x2(min{tn,τm})) ]
(23)
Now, noting that L(x1, x2) ≥ 0, (x1, x2) ∈ ℝ^{n1} × ℝ^{n2}, the sequence of random variables {f_{n,m}}_{n,m=0}^∞ ⊂ H_1, where f_{n,m} ≜ ∫_{t0}^{min{tn,τm}} L(x1(t), x2(t)) dt, is a pointwise nondecreasing sequence in n and m of non-negative F_t-measurable random variables on Ω. Moreover, defining the improper integral 
∫_{t0}^{∞} L(x1(t), x2(t)) dt
as the limit of a sequence of proper integrals, it follows from the Lebesgue monotone convergence theorem [28] that 
lim_{m→∞} lim_{n→∞} E^{x0}[ ∫_{t0}^{min{tn,τm}} L(x1(t), x2(t)) dt ] = lim_{m→∞} E^{x0}[ lim_{n→∞} ∫_{t0}^{min{tn,τm}} L(x1(t), x2(t)) dt ] = E^{x0}[ lim_{m→∞} ∫_{t0}^{τm} L(x1(t), x2(t)) dt ] = E^{x0}[ ∫_{t0}^{∞} L(x1(t), x2(t)) dt ] = J(x10, x20)
(24)
Next, since G is globally asymptotically stable in probability with respect to x1 uniformly in x20, V(·, ·) is continuous, and V(x1(t), x2(t)), t ≥ t0, is a positive supermartingale by Eq. (17) and Ref. [18, Lemma 5.4], it follows from Ref. [18, Theorem 5.1] that 
lim_{m→∞} lim_{n→∞} E^{x0}[ V(x1(min{tn,τm}), x2(min{tn,τm})) ] = lim_{m→∞} E^{x0}[ lim_{n→∞} V(x1(min{tn,τm}), x2(min{tn,τm})) ] = E^{x0}[ lim_{m→∞} lim_{n→∞} V(x1(min{tn,τm}), x2(min{tn,τm})) ]
(25)
Now, it follows from Eq. (16) that 
V(x10, x20) − E^{x0}[ lim_{m→∞} lim_{n→∞} β(||x1(min{tn,τm})||) ] ≤ V(x10, x20) − E^{x0}[ lim_{m→∞} lim_{n→∞} V(x1(min{tn,τm}), x2(min{tn,τm})) ] ≤ V(x10, x20) − E^{x0}[ lim_{m→∞} lim_{n→∞} α(||x1(min{tn,τm})||) ]
(26)
and hence, taking the limit as n → ∞ and m → ∞ on both sides of Eq. (23), using Eqs. (24) and (25), and using the continuity of α(·) and β(·), we obtain 
V(x10, x20) − E^{x0}[ β( lim_{m→∞} lim_{n→∞} ||x1(min{tn,τm})|| ) ] ≤ J(x10, x20) ≤ V(x10, x20) − E^{x0}[ α( lim_{m→∞} lim_{n→∞} ||x1(min{tn,τm})|| ) ]
(27)

Finally, using ℙ^{x0}( lim_{t→∞} ||x1(t)|| = 0 ) = 1 for all (x10, x20) ∈ ℝ^{n1} × ℝ^{n2}, Eq. (19) is a direct consequence of Eq. (27).
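The identity (19) can be spot-checked by Monte Carlo on a scalar example (an illustrative assumption, not an example from the paper): for dx = −x dt + σx dw with V(x) = x², Eq. (18) forces L(x) = (2 − σ²)x², and Theorem 3.1 predicts J(x0) = V(x0) = x0².

```python
import numpy as np

def mc_cost(x0, sigma=0.5, T=8.0, dt=1e-3, n_paths=2000, seed=0):
    """Monte Carlo estimate of J(x0) = E[int_0^inf (2 - sigma^2) x(t)^2 dt]
    for the assumed system dx = -x dt + sigma*x dw, truncated at time T
    (the tail is negligible since E[x(t)^2] decays like exp((sigma^2 - 2)t))."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, float(x0))
    J = np.zeros(n_paths)
    for _ in range(round(T / dt)):
        J += (2.0 - sigma**2) * x**2 * dt          # accumulate running cost L
        x += -x * dt + sigma * x * rng.standard_normal(n_paths) * np.sqrt(dt)
    return float(J.mean())

J_hat = mc_cost(1.0)   # Theorem 3.1 predicts V(x0) = x0**2 = 1.0
```

The sample average agrees with V(x0) up to time-truncation, discretization, and Monte Carlo error.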

The following corollary to Theorem 3.1 considers the nonautonomous stochastic dynamical system (10) with performance measure 
J(t0, x0) ≜ E^{x0}[ ∫_{t0}^{∞} L(t, x(t)) dt ]
(28)

where L : [t0, ∞) × ℝ^n → ℝ is jointly continuous in t and x, and x(t), t ≥ t0, satisfies Eq. (10).

Corollary 3.1. Consider the nonlinear time-varying stochastic dynamical system (10) with performance measure (28). Assume that there exist a two-times continuously differentiable function V : [t0, ∞) × ℝ^n → ℝ, class-K∞ functions α(·) and β(·), and a class-K function γ(·) such that, for all (t, x) ∈ [t0, ∞) × ℝ^n, 
α(||x||) ≤ V(t, x) ≤ β(||x||)
(29)
 
∂V(t, x)/∂t + (∂V(t, x)/∂x) f(t, x) + (1/2) tr D^T(t, x) (∂²V(t, x)/∂x²) D(t, x) ≤ −γ(||x||)
(30)
 
−∂V(t, x)/∂t = L(t, x) + (∂V(t, x)/∂x) f(t, x) + (1/2) tr D^T(t, x) (∂²V(t, x)/∂x²) D(t, x)
(31)

Then, the stochastic nonlinear dynamical system (10) is globally uniformly asymptotically stable in probability and J(t0, x0) = V(t0, x0) for all (t0, x0) ∈ [0, ∞) × ℝ^n.

Proof. The result is a direct consequence of Theorem 3.1 with n1 = n, n2 = 1, x1(t − t0) = x(t), x2(t − t0) = t, f1(x1, x2) = f(x2, x1) = f(t, x), f2(x1, x2) = 1, D1(x1, x2) = D(x2, x1) = D(t, x), D2(x1, x2) = 0, and V(x1, x2) = V(x2, x1) = V(t, x).

Next, we use the framework developed in Theorem 3.1 to obtain a characterization of stochastic optimal feedback controllers that guarantee closed-loop, partial-state stabilization in probability. Specifically, sufficient conditions for optimality are given in a form that corresponds to a steady-state version of the stochastic Hamilton–Jacobi–Bellman equation. To address the problem of characterizing partially stabilizing feedback controllers, consider the nonlinear controlled stochastic dynamical system 
dx1(t) = F1(x1(t), x2(t), u(t)) dt + D1(x1(t), x2(t), u(t)) dw(t),  x1(0) = x10 a.s.,  t ≥ 0
(32)
 
dx2(t) = F2(x1(t), x2(t), u(t)) dt + D2(x1(t), x2(t), u(t)) dw(t),  x2(0) = x20 a.s.
(33)

where, for every t ≥ 0, x1(t) ∈ H_{n1}, x2(t) ∈ H_{n2}, u(t) ∈ H_m, F1 : ℝ^{n1} × ℝ^{n2} × ℝ^m → ℝ^{n1}, F2 : ℝ^{n1} × ℝ^{n2} × ℝ^m → ℝ^{n2}, D1 : ℝ^{n1} × ℝ^{n2} × ℝ^m → ℝ^{n1×d}, D2 : ℝ^{n1} × ℝ^{n2} × ℝ^m → ℝ^{n2×d}, and F1(0, x2, 0) = 0 and D1(0, x2, 0) = 0 for every x2 ∈ ℝ^{n2}.

Here, we assume that u(·) satisfies sufficient regularity conditions such that Eqs. (32) and (33) have a unique solution forward in time. Specifically, we assume that the control process u(·) in Eqs. (32) and (33) is restricted to the class of admissible controls consisting of measurable functions u(·) adapted to the filtration {F_t}_{t≥0} such that u(t) ∈ H_m, t ≥ 0, and, for all t ≥ s, w(t) − w(s) is independent of u(τ), w(τ), τ ≤ s, and x(0) = [x1^T(0), x2^T(0)]^T, and hence, u(·) is nonanticipative. Furthermore, we assume that u(·) takes values in a compact, metrizable set U and that the uniform Lipschitz continuity and growth conditions (4) and (5) hold for the controlled drift and diffusion terms F(x1, x2, u) ≜ [F1^T(x1, x2, u), F2^T(x1, x2, u)]^T and D(x1, x2, u) ≜ [D1^T(x1, x2, u), D2^T(x1, x2, u)]^T uniformly in u. In this case, it follows from Theorem 2.2.4 of Ref. [29] that there exists a pathwise unique solution to Eqs. (32) and (33) in (Ω, {F_t}_{t≥0}, ℙ^{x0}).

A measurable function ϕ : ℝ^{n1} × ℝ^{n2} → ℝ^m satisfying ϕ(0, x2) = 0, x2 ∈ ℝ^{n2}, is called a control law. If u(t) = ϕ(x1(t), x2(t)), t ≥ 0, where ϕ(·, ·) is a control law and x1(t) and x2(t) satisfy Eqs. (32) and (33), then we call u(·) a feedback control law. Note that the feedback control law is an admissible control since ϕ(x1(t), x2(t)) ∈ H_m, t ≥ 0. Given a control law ϕ(·, ·) and a feedback control law u(t) = ϕ(x1(t), x2(t)), t ≥ 0, the closed-loop system (32) and (33) is given by
dx1(t) = F1(x1(t), x2(t), ϕ(x1(t), x2(t))) dt + D1(x1(t), x2(t), ϕ(x1(t), x2(t))) dw(t),  x1(0) = x10 a.s.,  t ≥ 0
(34)
 
dx2(t) = F2(x1(t), x2(t), ϕ(x1(t), x2(t))) dt + D2(x1(t), x2(t), ϕ(x1(t), x2(t))) dw(t),  x2(0) = x20 a.s.
(35)
Next, we present a main theorem for partial-state stabilization in probability characterizing feedback controllers that guarantee partial closed-loop stability in probability and minimize a nonlinear-nonquadratic performance functional. For the statement of this result, let L : ℝ^{n1} × ℝ^{n2} × ℝ^m → ℝ be jointly continuous in x1, x2, and u, and define the set of partial regulation controllers 
S(x1(0), x2(0)) ≜ { u(·) : u(·) is admissible and x1(·) given by Eq. (32) satisfies ℙ^{x0}( lim_{t→∞} ||x1(t)|| = 0 ) = 1 }

Note that restricting our minimization problem to u(·) ∈ S(x1(0), x2(0)), that is, to inputs whose corresponding solutions have an x1-component that converges to zero in probability, can be interpreted as incorporating a partial-state system detectability condition through the cost.

Theorem 3.2. Consider the nonlinear controlled stochastic dynamical systemGgiven by Eqs.(32)and(33)with performance functional 
J(x10, x20, u(·)) ≜ E^{x0}[ ∫_0^∞ L(x1(t), x2(t), u(t)) dt ]
(36)
where u(·) is an admissible control. Assume that there exist a two-times continuously differentiable function V : ℝ^{n1} × ℝ^{n2} → ℝ, class-K∞ functions α(·) and β(·), a class-K function γ(·), and a control law ϕ : ℝ^{n1} × ℝ^{n2} → ℝ^m such that, for all (x1, x2) ∈ ℝ^{n1} × ℝ^{n2},
α(||x1||) ≤ V(x1, x2) ≤ β(||x1||)
(37)
 
V′(x1, x2) F(x1, x2, ϕ(x1, x2)) + (1/2) tr D^T(x1, x2, ϕ(x1, x2)) V″(x1, x2) D(x1, x2, ϕ(x1, x2)) ≤ −γ(||x1||)
(38)
 
ϕ(0, x2) = 0,  x2 ∈ ℝ^{n2}
(39)
 
H(x1, x2, ϕ(x1, x2)) = 0
(40)
 
H(x1, x2, u) ≥ 0,  (x1, x2, u) ∈ ℝ^{n1} × ℝ^{n2} × ℝ^m
(41)
where 
H(x1, x2, u) ≜ L(x1, x2, u) + V′(x1, x2) F(x1, x2, u) + (1/2) tr D^T(x1, x2, u) V″(x1, x2) D(x1, x2, u)
(42)
Then, with the feedback control u = ϕ(x1, x2), the closed-loop system given by Eqs. (34) and (35) is globally asymptotically stable in probability with respect to x1 uniformly in x20 and
J(x10, x20, ϕ(x1(·), x2(·))) = V(x10, x20),  (x10, x20) ∈ ℝ^{n1} × ℝ^{n2}
(43)
In addition, if (x10, x20) ∈ ℝ^{n1} × ℝ^{n2}, then the feedback control u(·) = ϕ(x1(·), x2(·)) minimizes J(x10, x20, u(·)) in the sense that
J(x10, x20, ϕ(·, ·)) = min_{u(·) ∈ S(x1(0), x2(0))} J(x10, x20, u(·))
(44)
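Conditions (40) and (41) can be made concrete on a scalar linear-quadratic specialization (an illustrative assumption, not an example taken from the paper): for dx = (a x + b u) dt + σx dw with cost integrand L(x, u) = q x² + r u² and candidate V(x) = p x², the Hamiltonian (42) becomes H(x, u) = q x² + r u² + 2px(ax + bu) + σ²p x², which is minimized in u by ϕ(x) = −(bp/r)x; Eq. (40) then fixes p > 0 through the scalar Riccati equation p²b²/r − (2a + σ²)p − q = 0.

```python
import numpy as np

# a, b: plant; sigma: multiplicative noise; q, r: cost weights (all assumed)
a, b, sigma, q, r = -1.0, 1.0, 0.5, 1.0, 1.0

# Positive root of the scalar stochastic Riccati equation enforcing Eq. (40)
c = 2.0 * a + sigma**2
p = (c + np.sqrt(c**2 + 4.0 * q * b**2 / r)) * r / (2.0 * b**2)

def H(x, u):
    """Hamiltonian (42) with V = p*x**2: L + V'(x)F + (1/2) D^T V'' D."""
    return q * x**2 + r * u**2 + 2.0 * p * x * (a * x + b * u) + sigma**2 * p * x**2

phi = lambda x: -(b * p / r) * x    # minimizer of H in u: the optimal feedback
```

Along the feedback ϕ the Hamiltonian vanishes (condition (40)), while any perturbed control u = ϕ(x) + δ gives H(x, u) = rδ² ≥ 0 (condition (41)), since H is quadratic in u with Hessian 2r.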

Proof. Global asymptotic stability in probability with respect to x1 uniformly in x20 is a direct consequence of Eqs. (37) and (38) by applying Theorem 2.1 to the closed-loop system given by Eqs. (34) and (35). Furthermore, using Eq. (40), condition (43) is a restatement of Eq. (19) as applied to the closed-loop system.

Next, let (x10, x20) ∈ ℝ^{n1} × ℝ^{n2}, let u(·) ∈ S(x1(0), x2(0)), and let x1(t) and x2(t), t ≥ 0, be solutions of Eqs. (32) and (33). Then, using Itô's (chain rule) formula, the stochastic differential of V(x1(t), x2(t)) along the system trajectories (x1(t), x2(t)), t ≥ 0, is given by 
dV(x1(t), x2(t)) = LV(x1(t), x2(t)) dt + (∂V(x(t))/∂x) D(x1(t), x2(t), u(t)) dw(t)
(45)
Hence, using Eqs. (7) and (42) yields 
L(x1(t), x2(t), u(t)) dt = −dV(x1(t), x2(t)) + [ L(x1(t), x2(t), u(t)) + LV(x1(t), x2(t)) ] dt + (∂V(x(t))/∂x) D(x1(t), x2(t), u(t)) dw(t) = −dV(x1(t), x2(t)) + H(x1(t), x2(t), u(t)) dt + (∂V(x(t))/∂x) D(x1(t), x2(t), u(t)) dw(t)
(46)
Now, it follows from Eq. (37) that 
E^{x0}[ lim_{t→∞} α(||x1(t)||) ] ≤ E^{x0}[ lim_{t→∞} V(x1(t), x2(t)) ] ≤ E^{x0}[ lim_{t→∞} β(||x1(t)||) ]
(47)
Using the continuity of α(·) and β(·), and the fact that ℙ^{x0}( lim_{t→∞} ||x1(t)|| = 0 ) = 1 for all u(·) ∈ S(x1(0), x2(0)), it follows from Eq. (47) that
0=Ex0[α(limt||x1(t)||)]Ex0[limtV(x1(t),x2(t))]Ex0[β(limt||x1(t)||)]=0
(48)
Let $\{t_n\}_{n=0}^{\infty}$ be a monotonic sequence of positive numbers with $t_n\to\infty$ as $n\to\infty$, let $\tau_m:\Omega\to[0,\infty)$ be the first exit (stopping) time of the solution $x_1(t)$ and $x_2(t)$, $t\ge0$, from the set $\mathcal{B}_m(0)\times\mathbb{R}^{n_2}$, and let $\tau\triangleq\lim_{m\to\infty}\tau_m$. Now, integrating Eq. (46) over $[0,\min\{t_n,\tau_m\}]$, where $(n,m)\in\mathbb{Z}_+\times\mathbb{Z}_+$, yields
$$\begin{aligned}\int_0^{\min\{t_n,\tau_m\}}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t={}&-\int_0^{\min\{t_n,\tau_m\}}\mathrm{d}V(x_1(t),x_2(t))+\int_0^{\min\{t_n,\tau_m\}}H(x_1(t),x_2(t),u(t))\,\mathrm{d}t\\&+\int_0^{\min\{t_n,\tau_m\}}\frac{\partial V(x(t))}{\partial x}D(x_1(t),x_2(t),u(t))\,\mathrm{d}w(t)\\={}&V(x_1(0),x_2(0))-V(x_1(\min\{t_n,\tau_m\}),x_2(\min\{t_n,\tau_m\}))+\int_0^{\min\{t_n,\tau_m\}}H(x_1(t),x_2(t),u(t))\,\mathrm{d}t\\&+\int_0^{\min\{t_n,\tau_m\}}\frac{\partial V(x(t))}{\partial x}D(x_1(t),x_2(t),u(t))\,\mathrm{d}w(t)\end{aligned}$$
(49)
Next, taking the expectation on both sides of Eq. (49) and using Eq. (41) yields
$$\begin{aligned}\mathbb{E}^{x_0}\Big[\int_0^{\min\{t_n,\tau_m\}}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t\Big]={}&\mathbb{E}^{x_0}\Big[V(x_1(0),x_2(0))-V(x_1(\min\{t_n,\tau_m\}),x_2(\min\{t_n,\tau_m\}))+\int_0^{\min\{t_n,\tau_m\}}H(x_1(t),x_2(t),u(t))\,\mathrm{d}t\\&+\int_0^{\min\{t_n,\tau_m\}}\frac{\partial V(x(t))}{\partial x}D(x_1(t),x_2(t),u(t))\,\mathrm{d}w(t)\Big]\\={}&V(x_{10},x_{20})-\mathbb{E}^{x_0}\big[V(x_1(\min\{t_n,\tau_m\}),x_2(\min\{t_n,\tau_m\}))\big]+\mathbb{E}^{x_0}\Big[\int_0^{\min\{t_n,\tau_m\}}H(x_1(t),x_2(t),u(t))\,\mathrm{d}t\Big]\\\ge{}&V(x_{10},x_{20})-\mathbb{E}^{x_0}\big[V(x_1(\min\{t_n,\tau_m\}),x_2(\min\{t_n,\tau_m\}))\big]\end{aligned}$$
(50)

Now, noting that, for all $u(\cdot)\in\mathcal{S}(x_1(0),x_2(0))$, $\int_0^{\infty}|L(x_1(t),x_2(t),u(t))|\,\mathrm{d}t<\infty$ a.s., define the random variable $g\triangleq\sup_{t\ge0,\,m>0}\int_0^{\min\{t,\tau_m\}}|L(x_1(s),x_2(s),u(s))|\,\mathrm{d}s$. In this case, the sequence of $\mathcal{F}_t$-measurable random variables $\{f_{n,m}\}_{n,m=0}^{\infty}\subset\mathcal{H}_1$ on $\Omega$, where $f_{n,m}\triangleq\int_0^{\min\{t_n,\tau_m\}}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t$, satisfies $|f_{n,m}|\le g$ a.s.

Next, defining the improper integral $\int_0^{\infty}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t$ as the limit of a sequence of proper integrals, it follows from the dominated convergence theorem [28] that
$$\begin{aligned}\lim_{m\to\infty}\lim_{n\to\infty}\mathbb{E}^{x_0}\Big[\int_0^{\min\{t_n,\tau_m\}}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t\Big]&=\lim_{m\to\infty}\mathbb{E}^{x_0}\Big[\lim_{n\to\infty}\int_0^{\min\{t_n,\tau_m\}}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t\Big]\\&=\mathbb{E}^{x_0}\Big[\lim_{m\to\infty}\int_0^{\tau_m}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t\Big]\\&=\mathbb{E}^{x_0}\Big[\int_0^{\infty}L(x_1(t),x_2(t),u(t))\,\mathrm{d}t\Big]\\&=J(x_{10},x_{20},u(\cdot))\end{aligned}$$
(51)
Finally, using the fact that $u(\cdot)\in\mathcal{S}(x_1(0),x_2(0))$ and $V(\cdot,\cdot)$ is continuous, it follows that, for every $m>0$, $V(x_1(\min\{t_n,\tau_m\}),x_2(\min\{t_n,\tau_m\}))$ is bounded for all $\{t_n\}_{n=0}^{\infty}$. Thus, using the dominated convergence theorem, we obtain
$$\lim_{m\to\infty}\lim_{n\to\infty}\mathbb{E}^{x_0}\big[V(x_1(\min\{t_n,\tau_m\}),x_2(\min\{t_n,\tau_m\}))\big]=\mathbb{E}^{x_0}\Big[\lim_{m\to\infty}\lim_{n\to\infty}V(x_1(\min\{t_n,\tau_m\}),x_2(\min\{t_n,\tau_m\}))\Big]$$
(52)

Now, taking the limit as $n\to\infty$ and $m\to\infty$ on both sides of Eq. (50) and using the fact that $u(\cdot)\in\mathcal{S}(x_1(0),x_2(0))$, Eqs. (48), (51), and (52), and $J(x_{10},x_{20},\phi(x_1(\cdot),x_2(\cdot)))=V(x_{10},x_{20})$ yield Eq. (44).

Note that Eq. (40) is the steady-state, stochastic Hamilton–Jacobi–Bellman equation for the nonlinear controlled stochastic dynamical systems (32) and (33) with performance criterion (36). Furthermore, conditions (40) and (41) guarantee optimality with respect to the set of admissible partially asymptotically stabilizing in probability controllers $\mathcal{S}(x_1(0),x_2(0))$. However, it is important to note that an explicit characterization of $\mathcal{S}(x_1(0),x_2(0))$ is not required. In addition, the stochastic optimal asymptotically stabilizing in probability with respect to $x_1$ uniformly in $x_{20}$ feedback control law $u=\phi(x_1,x_2)$ is independent of the initial condition $(x_{10},x_{20})$ and is given by
$$\phi(x_1,x_2)=\arg\min_{u\in\mathcal{S}(x_1(0),x_2(0))}\Big[L(x_1,x_2,u)+V'(x_1,x_2)F(x_1,x_2,u)+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(x_1,x_2,u)V''(x_1,x_2)D(x_1,x_2,u)\Big]$$
(53)
Remark 3.1. Setting n1 = n and n2 = 0, the nonlinear controlled stochastic dynamical system given by Eqs. (32) and (33) reduces to 
$$\mathrm{d}x(t)=F(x(t),u(t))\,\mathrm{d}t+D(x(t),u(t))\,\mathrm{d}w(t),\quad x(0)=x_0\ \text{a.s.},\quad t\ge0$$
(54)

In this case, Eq. (37) implies that V(⋅) is positive definite with respect to x, and the conditions of Theorem 3.2 reduce to the conditions given in Chap. 4 of Ref. [17] characterizing the classical stochastic optimal control problem for time-invariant systems on an infinite interval.

Next, we specialize the results of Theorem 3.2 to nonlinear affine in the control stochastic dynamical systems of the form
$$\mathrm{d}x_1(t)=[f_1(x_1(t),x_2(t))+G_1(x_1(t),x_2(t))u(t)]\,\mathrm{d}t+D_1(x_1(t),x_2(t))\,\mathrm{d}w(t),\quad x_1(0)=x_{10}\ \text{a.s.},\quad t\ge0$$
(55)
$$\mathrm{d}x_2(t)=[f_2(x_1(t),x_2(t))+G_2(x_1(t),x_2(t))u(t)]\,\mathrm{d}t+D_2(x_1(t),x_2(t))\,\mathrm{d}w(t),\quad x_2(0)=x_{20}\ \text{a.s.}$$
(56)
where, for every $t\ge0$, $x_1(t)\in\mathcal{H}^{n_1}$, $x_2(t)\in\mathcal{H}^{n_2}$, and $u(t)\in\mathcal{H}^{m}$; $f_1:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}^{n_1}$, $f_2:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}^{n_2}$, $G_1:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}^{n_1\times m}$, $G_2:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}^{n_2\times m}$, $D_1:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}^{n_1\times d}$, and $D_2:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}^{n_2\times d}$ are such that $f_1(0,x_2)=0$ and $D_1(0,x_2)=0$ for all $x_2\in\mathbb{R}^{n_2}$; and $F(x_1,x_2,u)\triangleq[(f_1(x_1,x_2)+G_1(x_1,x_2)u)^{\mathrm T},(f_2(x_1,x_2)+G_2(x_1,x_2)u)^{\mathrm T}]^{\mathrm T}$ and $D(x_1,x_2)\triangleq[D_1^{\mathrm T}(x_1,x_2),D_2^{\mathrm T}(x_1,x_2)]^{\mathrm T}$ satisfy Eqs. (4) and (5) uniformly in $u$. Furthermore, we consider performance integrands $L(x_1,x_2,u)$ of the form
$$L(x_1,x_2,u)=L_1(x_1,x_2)+L_2(x_1,x_2)u+u^{\mathrm T}R_2(x_1,x_2)u,\quad(x_1,x_2,u)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\times\mathbb{R}^{m}$$
(57)
where $L_1:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}$, $L_2:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}^{1\times m}$, and $R_2(x_1,x_2)\ge N(x_1)>0$, $(x_1,x_2)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$, so that Eq. (36) becomes
$$J(x_{10},x_{20},u(\cdot))=\mathbb{E}^{x_0}\Big[\int_0^{\infty}[L_1(x_1(t),x_2(t))+L_2(x_1(t),x_2(t))u(t)+u^{\mathrm T}(t)R_2(x_1(t),x_2(t))u(t)]\,\mathrm{d}t\Big]$$
(58)
For the statement of the next result, define
$$f(x_1,x_2)\triangleq[f_1^{\mathrm T}(x_1,x_2),f_2^{\mathrm T}(x_1,x_2)]^{\mathrm T},\quad G(x_1,x_2)\triangleq[G_1^{\mathrm T}(x_1,x_2),G_2^{\mathrm T}(x_1,x_2)]^{\mathrm T},\quad D(x_1,x_2)\triangleq[D_1^{\mathrm T}(x_1,x_2),D_2^{\mathrm T}(x_1,x_2)]^{\mathrm T}$$
Corollary 3.2. Consider the controlled nonlinear affine stochastic dynamical systems (55) and (56) with performance measure (58). Assume that there exist a two-times continuously differentiable function $V:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}$, class $\mathcal{K}_\infty$ functions $\alpha(\cdot)$ and $\beta(\cdot)$, and a class $\mathcal{K}$ function $\gamma(\cdot)$ such that, for all $(x_1,x_2)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$,
$$\alpha(\|x_1\|)\le V(x_1,x_2)\le\beta(\|x_1\|)$$
(59)
$$V'(x_1,x_2)\Big[f(x_1,x_2)-\tfrac{1}{2}G(x_1,x_2)R_2^{-1}(x_1,x_2)L_2^{\mathrm T}(x_1,x_2)-\tfrac{1}{2}G(x_1,x_2)R_2^{-1}(x_1,x_2)G^{\mathrm T}(x_1,x_2)V'^{\mathrm T}(x_1,x_2)\Big]+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(x_1,x_2)V''(x_1,x_2)D(x_1,x_2)\le-\gamma(\|x_1\|)$$
(60)
$$L_2(0,x_2)=0$$
(61)
$$0=L_1(x_1,x_2)+V'(x_1,x_2)f(x_1,x_2)+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(x_1,x_2)V''(x_1,x_2)D(x_1,x_2)-\tfrac{1}{4}[V'(x_1,x_2)G(x_1,x_2)+L_2(x_1,x_2)]R_2^{-1}(x_1,x_2)[V'(x_1,x_2)G(x_1,x_2)+L_2(x_1,x_2)]^{\mathrm T}$$
(62)
Then, with the feedback control
$$u=\phi(x_1,x_2)=-\tfrac{1}{2}R_2^{-1}(x_1,x_2)[L_2(x_1,x_2)+V'(x_1,x_2)G(x_1,x_2)]^{\mathrm T}$$
(63)
the closed-loop system
$$\mathrm{d}x_1(t)=[f_1(x_1(t),x_2(t))+G_1(x_1(t),x_2(t))\phi(x_1(t),x_2(t))]\,\mathrm{d}t+D_1(x_1(t),x_2(t))\,\mathrm{d}w(t),\quad x_1(0)=x_{10}\ \text{a.s.},\quad t\ge0$$
(64)
$$\mathrm{d}x_2(t)=[f_2(x_1(t),x_2(t))+G_2(x_1(t),x_2(t))\phi(x_1(t),x_2(t))]\,\mathrm{d}t+D_2(x_1(t),x_2(t))\,\mathrm{d}w(t),\quad x_2(0)=x_{20}\ \text{a.s.}$$
(65)

is globally asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$, and the performance measure (58) is minimized in the sense of Eq. (44). Finally, Eq. (43) holds.

Proof. The result is a consequence of Theorem 3.2 with F(x1,x2,u)=f(x1,x2)+G(x1,x2)u and L(x1,x2,u)=L1(x1,x2)+L2(x1,x2)u+uTR2(x1,x2)u.

Finally, we use Theorem 3.2 to provide a unification between optimal partial-state stochastic stabilization and stochastic optimal control for nonlinear time-varying systems. Specifically, consider the nonlinear time-varying controlled stochastic dynamical system
$$\mathrm{d}x(t)=F(t,x(t),u(t))\,\mathrm{d}t+D(t,x(t),u(t))\,\mathrm{d}w(t),\quad x(t_0)=x_0\ \text{a.s.},\quad t\ge t_0$$
(66)
with performance measure
$$J(t_0,x_0,u(\cdot))\triangleq\mathbb{E}^{x_0}\Big[\int_{t_0}^{\infty}L(t,x(t),u(t))\,\mathrm{d}t\Big]$$
(67)
where, for every $t\ge t_0$, $x(t)\in\mathcal{H}^{n}$ and $u(t)\in\mathcal{H}^{m}$; $L:[t_0,\infty)\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}$, $F:[t_0,\infty)\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n}$, and $D:[t_0,\infty)\times\mathbb{R}^{n}\times\mathbb{R}^{m}\to\mathbb{R}^{n\times d}$ are jointly continuous in $t$, $x$, and $u$; $F(t,\cdot,u)$ and $D(t,\cdot,u)$ are Lipschitz continuous in $x$ for every $(t,u)\in[t_0,\infty)\times\mathbb{R}^{m}$; and $F(t,x,\cdot)$ and $D(t,x,\cdot)$ are Lipschitz continuous in $u$ for every $(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$. For the statement of the next result, define the set of regulation controllers
$$\mathcal{S}(t_0,x(t_0))\triangleq\Big\{u(\cdot):u(\cdot)\ \text{is admissible and}\ x(\cdot)\ \text{given by Eq. (66) satisfies}\ \mathbb{P}^{x_0}\Big(\lim_{t\to\infty}\|x(t)\|=0\Big)=1\Big\}$$
Corollary 3.3. Consider the nonlinear time-varying controlled stochastic dynamical system (66) with performance measure (67), where $u(\cdot)$ is an admissible control. Assume that there exist a two-times continuously differentiable function $V:[t_0,\infty)\times\mathbb{R}^{n}\to\mathbb{R}$, class $\mathcal{K}_\infty$ functions $\alpha(\cdot)$ and $\beta(\cdot)$, a class $\mathcal{K}$ function $\gamma(\cdot)$, and a control law $\phi:[t_0,\infty)\times\mathbb{R}^{n}\to\mathbb{R}^{m}$ such that, for all $(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$,
$$\alpha(\|x\|)\le V(t,x)\le\beta(\|x\|)$$
(68)
$$\frac{\partial V(t,x)}{\partial t}+\frac{\partial V(t,x)}{\partial x}F(t,x,\phi(t,x))+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(t,x,\phi(t,x))\frac{\partial^{2}V(t,x)}{\partial x^{2}}D(t,x,\phi(t,x))\le-\gamma(\|x\|)$$
(69)
$$\phi(t,0)=0$$
(70)
$$L(t,x,\phi(t,x))+\frac{\partial V(t,x)}{\partial t}+\frac{\partial V(t,x)}{\partial x}F(t,x,\phi(t,x))+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(t,x,\phi(t,x))\frac{\partial^{2}V(t,x)}{\partial x^{2}}D(t,x,\phi(t,x))=0$$
(71)
$$L(t,x,u)+\frac{\partial V(t,x)}{\partial t}+\frac{\partial V(t,x)}{\partial x}F(t,x,u)+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(t,x,u)\frac{\partial^{2}V(t,x)}{\partial x^{2}}D(t,x,u)\ge0,\quad(t,x,u)\in[t_0,\infty)\times\mathbb{R}^{n}\times\mathbb{R}^{m}$$
(72)
Then, with the feedback control $u=\phi(t,x)$, the closed-loop system given by Eq. (66) is globally uniformly asymptotically stable in probability and $J(t_0,x_0,\phi(\cdot,\cdot))=V(t_0,x_0)$ for all $(t_0,x_0)\in[0,\infty)\times\mathbb{R}^{n}$. In addition, if $(t_0,x_0)\in[0,\infty)\times\mathbb{R}^{n}$, then the feedback control $u(\cdot)=\phi(\cdot,x(\cdot))$ minimizes $J(t_0,x_0,u(\cdot))$ in the sense that
$$J(t_0,x_0,\phi(\cdot,\cdot))=\min_{u(\cdot)\in\mathcal{S}(t_0,x(t_0))}J(t_0,x_0,u(\cdot))$$
(73)

Proof. The proof is a direct consequence of Theorem 3.2 with $n_1=n$, $n_2=1$, $x_1(t-t_0)=x(t)$, $x_2(t-t_0)=t$, $F_1(x_1,x_2,u)=F(t,x,u)$, $F_2(x_1,x_2,u)=1$, $D_1(x_1,x_2,u)=D(t,x,u)$, $D_2(x_1,x_2,u)=0$, $\phi(x_1,x_2)=\phi(t,x)$, and $V(x_1,x_2)=V(t,x)$, where the identifications $x=x_1$ and $t=x_2$ are used.

Note that Eqs. (71) and (72) give the stochastic Hamilton–Jacobi–Bellman equation 
$$-\frac{\partial V(t,x)}{\partial t}=\min_{u\in\mathcal{S}(t_0,x(t_0))}\Big[L(t,x,u)+\frac{\partial V(t,x)}{\partial x}F(t,x,u)+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(t,x,u)\frac{\partial^{2}V(t,x)}{\partial x^{2}}D(t,x,u)\Big],\quad(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$$
(74)
which characterizes the optimal control 
$$\phi(t,x)=\arg\min_{u\in\mathcal{S}(t_0,x(t_0))}\Big[L(t,x,u)+\frac{\partial V(t,x)}{\partial x}F(t,x,u)+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(t,x,u)\frac{\partial^{2}V(t,x)}{\partial x^{2}}D(t,x,u)\Big]$$
(75)

for time-varying stochastic systems on a finite or infinite interval.

Inverse Optimal Stochastic Control

In this section, we construct state feedback controllers for nonlinear affine in the control stochastic dynamical systems that are predicated on an inverse optimal control problem [7–13]. In particular, as noted in the Introduction, to avoid the complexity in solving the steady-state, stochastic Hamilton–Jacobi–Bellman equation (62), we do not attempt to minimize a given cost functional, but rather, we parameterize a family of stabilizing controllers that minimize some derived cost functional that provides flexibility in specifying the control law. The performance integrand is shown to explicitly depend on the nonlinear system dynamics, the Lyapunov function of the closed-loop system, and the stabilizing feedback control law, wherein the coupling is introduced via the stochastic Hamilton–Jacobi–Bellman equation. Hence, by varying the parameters in the Lyapunov function and the performance integrand, the proposed framework can be used to characterize a class of globally partial-state stabilizing (in probability) controllers that can meet closed-loop system response constraints.

Theorem 4.1. Consider the nonlinear controlled affine stochastic dynamical systems (55) and (56) with performance measure (58). Assume there exist a two-times continuously differentiable function $V:\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}\to\mathbb{R}$, class $\mathcal{K}_\infty$ functions $\alpha(\cdot)$ and $\beta(\cdot)$, and a class $\mathcal{K}$ function $\gamma(\cdot)$ such that, for all $(x_1,x_2)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$,
$$\alpha(\|x_1\|)\le V(x_1,x_2)\le\beta(\|x_1\|)$$
(76)
$$V'(x_1,x_2)\Big[f(x_1,x_2)-\tfrac{1}{2}G(x_1,x_2)R_2^{-1}(x_1,x_2)L_2^{\mathrm T}(x_1,x_2)-\tfrac{1}{2}G(x_1,x_2)R_2^{-1}(x_1,x_2)G^{\mathrm T}(x_1,x_2)V'^{\mathrm T}(x_1,x_2)\Big]+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(x_1,x_2)V''(x_1,x_2)D(x_1,x_2)\le-\gamma(\|x_1\|)$$
(77)
$$L_2(0,x_2)=0$$
(78)
Then, with the feedback control
$$u=\phi(x_1,x_2)=-\tfrac{1}{2}R_2^{-1}(x_1,x_2)[L_2(x_1,x_2)+V'(x_1,x_2)G(x_1,x_2)]^{\mathrm T}$$
(79)
the closed-loop system given by Eqs. (64) and (65) is globally asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$ and the performance functional (58) with
$$L_1(x_1,x_2)=\phi^{\mathrm T}(x_1,x_2)R_2(x_1,x_2)\phi(x_1,x_2)-V'(x_1,x_2)f(x_1,x_2)-\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(x_1,x_2)V''(x_1,x_2)D(x_1,x_2)$$
(80)

is minimized in the sense of Eq.(44). Finally, Eq.(43)holds.

Proof. The proof is identical to the proof of Corollary 3.2.
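To make the inverse optimal construction concrete, the following sketch works through a hypothetical scalar example (our own illustration, not from the paper): for $\mathrm{d}x=(-ax+u)\,\mathrm{d}t+\sigma x\,\mathrm{d}w$ with an assumed Lyapunov function $V(x)=px^2$, $R_2=1$, and $L_2=0$, it computes the feedback (79) and the derived weight (80), and checks that the Hamilton–Jacobi–Bellman condition (62) then holds identically.

```python
import random

# Hypothetical scalar illustration of Theorem 4.1 (assumed data, not from
# the paper): dx = (-a*x + u) dt + sigma*x dw, V(x) = p*x^2, R2 = 1, L2 = 0.
a, sigma, p = 2.0, 0.5, 1.5   # chosen so that 2*a > sigma^2 and p > 0

def phi(x):
    # Eq. (79): u = -(1/2)*R2^{-1}*[L2 + V'(x)*G]^T with G = 1
    return -p * x

def L1(x):
    # Eq. (80): L1 = phi^T R2 phi - V'(x) f(x) - (1/2) D^T V'' D
    Vp, Vpp, f, D = 2.0 * p * x, 2.0 * p, -a * x, sigma * x
    return phi(x) ** 2 - Vp * f - 0.5 * D * Vpp * D

def hjb_residual(x):
    # Eq. (62): 0 = L1 + V' f + (1/2) D^T V'' D - (1/4) (V' G) R2^{-1} (V' G)
    Vp, Vpp, f, D = 2.0 * p * x, 2.0 * p, -a * x, sigma * x
    return L1(x) + Vp * f + 0.5 * D * Vpp * D - 0.25 * Vp ** 2

random.seed(0)
max_residual = max(abs(hjb_residual(random.uniform(-5.0, 5.0)))
                   for _ in range(100))
print(max_residual)   # ~0 (up to rounding): phi is optimal by construction
```

The point of the exercise is that the derived $L_1$ makes the HJB identity hold by construction, so no partial differential equation ever has to be solved.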

Next, we specialize Theorem 4.1 to linear time-varying stochastic systems controlled by nonlinear controllers that minimize a polynomial cost functional, generalizing the results of Refs. [1] and [3] to the stochastic setting. Specifically, consider the linear time-varying stochastic dynamical system
$$\mathrm{d}x(t)=[A(t)x(t)+B(t)u(t)]\,\mathrm{d}t+x(t)\sigma^{\mathrm T}(t)\,\mathrm{d}w(t),\quad x(t_0)=x_0\ \text{a.s.},\quad t\ge t_0$$
(81)
where, for all $t\ge t_0$, $x(t)\in\mathcal{H}^{n}$ and $u(t)\in\mathcal{H}^{m}$, and $\sigma:[t_0,\infty)\to\mathbb{R}^{d}$, $A:[t_0,\infty)\to\mathbb{R}^{n\times n}$, and $B:[t_0,\infty)\to\mathbb{R}^{n\times m}$ are continuous and uniformly bounded. For the following result, let $R_1:[t_0,\infty)\to\mathbb{R}^{n\times n}$, $R_2:[t_0,\infty)\to\mathbb{R}^{m\times m}$, and $\hat R_q:[t_0,\infty)\to\mathbb{R}^{n\times n}$, $q=2,\ldots,r$, where $r$ is a positive integer, be continuous, uniformly bounded, and positive-definite matrix functions; that is, there exist $\gamma$, $\mu$, $\hat\mu_q>0$, $q=2,\ldots,r$, such that $R_1(t)\ge\gamma I_n>0$, $R_2(t)\ge\mu I_m>0$, and $\hat R_q(t)\ge\hat\mu_q I_n>0$ for all $t\ge t_0$. Furthermore, we consider performance integrands in Eq. (67) of the form
$$L(t,x,u)=L_1(t,x)+L_2(t,x)u+u^{\mathrm T}R_2(t,x)u,\quad(t,x,u)\in[t_0,\infty)\times\mathbb{R}^{n}\times\mathbb{R}^{m}$$
(82)
where $L_1:[t_0,\infty)\times\mathbb{R}^{n}\to\mathbb{R}$, $L_2:[t_0,\infty)\times\mathbb{R}^{n}\to\mathbb{R}^{1\times m}$, and $R_2(t,x)\ge N(x)>0$, $(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$, so that Eq. (67) becomes
$$J(t_0,x_0,u(\cdot))=\mathbb{E}^{x_0}\Big[\int_{t_0}^{\infty}[L_1(t,x(t))+L_2(t,x(t))u(t)+u^{\mathrm T}(t)R_2(t,x(t))u(t)]\,\mathrm{d}t\Big]$$
(83)
Corollary 4.1. Consider the linear controlled time-varying stochastic dynamical system (81), where $u(\cdot)$ is admissible. Assume that there exist a uniformly bounded, continuously differentiable, positive-definite $P:[t_0,\infty)\to\mathbb{R}^{n\times n}$ and continuously differentiable, uniformly bounded, nonnegative-definite $M_q:[t_0,\infty)\to\mathbb{R}^{n\times n}$, $q=2,\ldots,r$, such that
$$-\dot P(t)=\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)^{\mathrm T}P(t)+P(t)\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)+R_1(t)-P(t)S(t)P(t),\quad\lim_{t_f\to\infty}P(t_f)=\bar P,\quad t\in[t_0,\infty)$$
(84)
and
$$-\dot M_q(t)=\Big(A(t)+\tfrac{1}{2}(2q-1)\|\sigma(t)\|^{2}I_n-S(t)P(t)\Big)^{\mathrm T}M_q(t)+M_q(t)\Big(A(t)+\tfrac{1}{2}(2q-1)\|\sigma(t)\|^{2}I_n-S(t)P(t)\Big)+\hat R_q(t),\quad\lim_{t_f\to\infty}M_q(t_f)=\bar M_q,\quad q=2,\ldots,r,\quad t\in[t_0,\infty)$$
(85)
where $S(t)\triangleq B(t)R_2^{-1}(t)B^{\mathrm T}(t)$, and $\bar P$ and $\bar M_q$ satisfy Eqs. (84) and (85), respectively. Then, the zero solution $x(t)\equiv0$ of the closed-loop system
$$\mathrm{d}x(t)=[A(t)x(t)+B(t)\phi(t,x)]\,\mathrm{d}t+x(t)\sigma^{\mathrm T}(t)\,\mathrm{d}w(t),\quad x(t_0)=x_0\ \text{a.s.},\quad t\ge t_0$$
(86)
is globally uniformly asymptotically stable in probability with feedback control
$$u=\phi(t,x)=-R_2^{-1}(t)B^{\mathrm T}(t)\Big(P(t)+\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big)x$$
(87)
and the performance functional (83) with $R_2(t,x)=R_2(t)$, $L_2(t,x)=0$, and
$$L_1(t,x)=x^{\mathrm T}\Big(R_1(t)+\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}\hat R_q(t)+\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]^{\mathrm T}S(t)\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]\Big)x$$
(88)
is minimized in the sense of Eq. (73). Finally,
$$J(t_0,x_0,\phi(\cdot,\cdot))=x_0^{\mathrm T}P(t_0)x_0+\sum_{q=2}^{r}\tfrac{1}{q}(x_0^{\mathrm T}M_q(t_0)x_0)^{q},\quad(t_0,x_0)\in[0,\infty)\times\mathbb{R}^{n}$$
(89)
Proof. The result is a consequence of Theorem 4.1 with $n_1=n$, $n_2=1$, $x_1(t-t_0)=x(t)$, $x_2(t-t_0)=t$, $f_1(x_1,x_2)=A(t)x$, $f_2(x_1,x_2)=1$, $G_1(x_1,x_2)=B(t)$, $G_2(x_1,x_2)=0$, $D_1(x_1,x_2)=x\sigma^{\mathrm T}(t)$, $D_2(x_1,x_2)=0$, $L_1(x_1,x_2)=L_1(t,x)$, where $L_1(t,x)$ is given by Eq. (88), $L_2(x_1,x_2)=0$, $R_2(x_1,x_2)=R_2(t)$, $V(x_1,x_2)=x^{\mathrm T}P(t)x+\sum_{q=2}^{r}\tfrac{1}{q}(x^{\mathrm T}M_q(t)x)^{q}$, $\alpha(\|x_1\|)=\alpha\|x\|^{2}$, $\beta(\|x_1\|)=\beta\|x\|^{2}+\sum_{q=2}^{r}\tfrac{1}{q}\hat\beta_q^{q}\|x\|^{2q}$, and $\gamma(\|x_1\|)=\gamma\|x\|^{2}+\sum_{q=2}^{r}\hat\sigma_q\hat\beta_q^{q-1}\|x\|^{2q}$, for some $\alpha$, $\beta$, $\gamma$, $\hat\beta_q$, $\hat\sigma_q>0$, $q=2,\ldots,r$, with the identifications $x=x_1$ and $t=x_2$. Specifically, since $P(\cdot)$ and $M_q(\cdot)$ are uniformly bounded and, respectively, positive and nonnegative definite, there exist constants $\alpha$, $\beta$, $\hat\beta_q>0$, $q=2,\ldots,r$, such that $\alpha I_n\le P(t)\le\beta I_n$ and $0\le M_q(t)\le\hat\beta_q I_n$, $t\ge t_0$, and hence
$$\alpha\|x\|^{2}\le V(t,x)\le\beta\|x\|^{2}+\sum_{q=2}^{r}\tfrac{1}{q}\hat\beta_q^{q}\|x\|^{2q},\quad(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$$
(90)

which verifies Eq. (76).

Next, Eq. (87) is a restatement of Eq. (79). Now, let $\phi(t,x)=\phi_1(t,x)+\phi_2(t,x)$, where
$$\phi_1(t,x)\triangleq-R_2^{-1}(t)B^{\mathrm T}(t)P(t)x$$
(91)
$$\phi_2(t,x)\triangleq-R_2^{-1}(t)B^{\mathrm T}(t)\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)x$$
(92)
Computing the infinitesimal generator LV(t,x) along the trajectories of the closed-loop system (86) gives 
$$\begin{aligned}\mathcal{L}V(t,x)={}&x^{\mathrm T}\big(\dot P(t)+P(t)A(t)+A^{\mathrm T}(t)P(t)\big)x+2x^{\mathrm T}P(t)B(t)\phi(t,x)+\|\sigma(t)\|^{2}x^{\mathrm T}P(t)x\\&+\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}\Big[x^{\mathrm T}\big(\dot M_q(t)+M_q(t)A(t)+A^{\mathrm T}(t)M_q(t)\big)x+2x^{\mathrm T}M_q(t)B(t)\phi(t,x)+(2q-1)\|\sigma(t)\|^{2}x^{\mathrm T}M_q(t)x\Big]\\={}&x^{\mathrm T}\Big(\dot P(t)+P(t)\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)+\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)^{\mathrm T}P(t)-P(t)S(t)P(t)\Big)x-x^{\mathrm T}P(t)S(t)P(t)x+2x^{\mathrm T}P(t)B(t)\phi_2(t,x)\\&+\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}\Big[x^{\mathrm T}\Big(\dot M_q(t)+M_q(t)\Big(A(t)+\tfrac{1}{2}(2q-1)\|\sigma(t)\|^{2}I_n-S(t)P(t)\Big)+\Big(A(t)+\tfrac{1}{2}(2q-1)\|\sigma(t)\|^{2}I_n-S(t)P(t)\Big)^{\mathrm T}M_q(t)\Big)x\\&+2x^{\mathrm T}M_q(t)B(t)\phi_2(t,x)\Big],\quad(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}\end{aligned}$$
(93)
Now, using Eqs. (84) and (85), Eq. (93) yields 
$$\begin{aligned}\mathcal{L}V(t,x)={}&-x^{\mathrm T}\Big(R_1(t)+\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}\hat R_q(t)\Big)x-x^{\mathrm T}P(t)S(t)P(t)x\\&-2x^{\mathrm T}\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]^{\mathrm T}S(t)\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]x-2x^{\mathrm T}P(t)S(t)\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)x\\\le{}&-x^{\mathrm T}R_1(t)x-x^{\mathrm T}\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}\hat R_q(t)x\\\le{}&-\gamma\|x\|^{2}-\sum_{q=2}^{r}(\hat\beta_q\|x\|^{2})^{q-1}\hat\sigma_q\|x\|^{2}\\={}&-\gamma\|x\|^{2}-\sum_{q=2}^{r}\hat\sigma_q\hat\beta_q^{q-1}\|x\|^{2q},\quad(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}\end{aligned}$$
(94)

and hence, Eq. (77) holds.

Finally, note that 
$$\phi^{\mathrm T}(t,x)R_2(t)\phi(t,x)=x^{\mathrm T}P(t)S(t)P(t)x+2x^{\mathrm T}P(t)S(t)\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)x+x^{\mathrm T}\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]^{\mathrm T}S(t)\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]x$$
(95)
which, using the first equality in Eq. (94), implies 
$$\begin{aligned}\mathcal{L}V(t,x)={}&-x^{\mathrm T}R_1(t)x-x^{\mathrm T}\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}\hat R_q(t)x-\phi^{\mathrm T}(t,x)R_2(t)\phi(t,x)-x^{\mathrm T}\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]^{\mathrm T}S(t)\Big[\sum_{q=2}^{r}(x^{\mathrm T}M_q(t)x)^{q-1}M_q(t)\Big]x\\={}&-L_1(t,x)-\phi^{\mathrm T}(t,x)R_2(t)\phi(t,x)\end{aligned}$$
(96)

where $L_1(t,x)$ is given by Eq. (88); thus, Eq. (80) is verified. The result now follows as a direct consequence of Theorem 4.1.
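For constant coefficients, the Riccati condition (84) can be exercised numerically. The sketch below (a hypothetical scalar example with assumed data $a$, $\sigma$, $b$, $r_1$, $r_2$, not the paper's) integrates the Riccati equation backward in time from a zero terminal condition and checks that the sweep approaches the steady-state solution of $0=2\bar a p+r_1-sp^{2}$, with $\bar a=a+\sigma^{2}/2$ accounting for the multiplicative noise:

```python
import math

# Scalar sketch of the Riccati condition (84) with assumed constant data:
# -pdot = 2*abar*p + r1 - s*p^2, abar = a + sigma^2/2 (noise-shifted drift),
# s = b^2/r2.  Purely illustrative, not the paper's example.
a, sigma, b, r1, r2 = -1.0, 0.8, 1.0, 2.0, 1.0
abar = a + 0.5 * sigma ** 2
s = b ** 2 / r2

# Positive root of the steady-state equation 0 = 2*abar*p + r1 - s*p^2
p_bar = (abar + math.sqrt(abar ** 2 + s * r1)) / s

# Integrate the Riccati ODE backward from the terminal condition p(tf) = 0
# (Euler in reverse time); the sweep should approach p_bar.
p, dt = 0.0, 1e-4
for _ in range(200000):          # 20 reverse-time units
    p += dt * (2.0 * abar * p + r1 - s * p * p)

gain = b / r2 * p                # feedback u = -gain*x, cf. Eq. (87) with r = 1
print(p_bar, p, gain)
```

With $r=1$ (no $M_q$ terms) the control law (87) reduces to the familiar linear-quadratic gain, which is what the last line evaluates.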

Finally, we specialize Theorem 4.1 to linear time-varying stochastic systems controlled by nonlinear controllers that minimize a multilinear cost functional. For the following result, define $x^{[k]}\triangleq x\otimes x\otimes\cdots\otimes x$ and $\oplus_k A\triangleq A\oplus A\oplus\cdots\oplus A$, with $x$ and $A$ appearing $k$ times, where $k$ is a positive integer. Furthermore, define $\mathcal{N}(k,n)\triangleq\{\Psi\in\mathbb{R}^{1\times n^{k}}:\Psi x^{[k]}\ge0,\ x\in\mathbb{R}^{n}\}$ and let $\hat P_q:[t_0,\infty)\to\mathbb{R}^{1\times n^{2q}}$ and $\hat R_{2q}:[t_0,\infty)\to\mathbb{R}^{1\times n^{2q}}$, $q=2,\ldots,r$, where $r$ is a positive integer, and $R_2:[t_0,\infty)\to\mathbb{R}^{m\times m}$ be continuous and uniformly bounded, with $\hat R_{2q}(t),\hat P_q(t)\in\mathcal{N}(2q,n)$ and $R_2(t)\ge\mu I_m>0$, for some $\mu>0$ and for all $t\ge t_0$.
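The Kronecker notation just introduced can be checked numerically; the sketch below (assumed 2×2 data, pure Python) verifies the identity $(\oplus_2 A)(x\otimes x)=(Ax)\otimes x+x\otimes(Ax)$, where $\oplus_2 A=A\otimes I+I\otimes A$, which is the mechanism behind the Kronecker-sum form of the differential equation for $\hat P_q$ below:

```python
# Numeric check (assumed 2x2 data) of the Kronecker identity
# (⊕_2 A) x^{[2]} = (A x)⊗x + x⊗(A x), with ⊕_2 A = A⊗I + I⊗A.
def kron(A, B):
    # Kronecker product of matrices stored as lists of rows
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

def add(M, N):
    return [[m + n for m, n in zip(rm, rn)] for rm, rn in zip(M, N)]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[1.0, 2.0], [0.5, -1.0]]    # example matrix (assumed)
I = [[1.0, 0.0], [0.0, 1.0]]
x = [0.3, -0.7]

oplus2_A = add(kron(A, I), kron(I, A))       # ⊕_2 A
x2 = [xi * xj for xi in x for xj in x]       # x^{[2]} = x ⊗ x
lhs = matvec(oplus2_A, x2)

Ax = matvec(A, x)
rhs = [ai * xj for ai in Ax for xj in x]     # (A x) ⊗ x
rhs = [r + t for r, t in zip(rhs, [xi * aj for xi in x for aj in Ax])]  # + x ⊗ (A x)
err = max(abs(l - r) for l, r in zip(lhs, rhs))
print(err)   # the two sides agree to rounding error
```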

Corollary 4.2. Consider the linear controlled time-varying stochastic dynamical system (81), where $u(\cdot)$ is admissible. Assume that there exist a continuously differentiable, uniformly bounded, positive-definite $P:[t_0,\infty)\to\mathbb{R}^{n\times n}$ and continuously differentiable, uniformly bounded $\hat P_q:[t_0,\infty)\to\mathbb{R}^{1\times n^{2q}}$, $q=2,\ldots,r$, with $\hat P_q(t)\in\mathcal{N}(2q,n)$, such that
$$-\dot P(t)=\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)^{\mathrm T}P(t)+P(t)\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)+R_1(t)-P(t)S(t)P(t),\quad\lim_{t_f\to\infty}P(t_f)=\bar P,\quad t\in[t_0,\infty)$$
(97)
and
$$-\dot{\hat P}_q(t)=\hat P_q(t)\Big[\oplus_{2q}\Big(A(t)+\tfrac{1}{2}(2q-1)\|\sigma(t)\|^{2}I_n-S(t)P(t)\Big)\Big]+\hat R_{2q}(t),\quad\lim_{t_f\to\infty}\hat P_q(t_f)=\bar{\hat P}_q,\quad q=2,\ldots,r,\quad t\in[t_0,\infty)$$
(98)
where $S(t)\triangleq B(t)R_2^{-1}(t)B^{\mathrm T}(t)$, and $\bar P$ and $\bar{\hat P}_q$ satisfy Eqs. (97) and (98), respectively. Then, the zero solution $x(t)\equiv0$ of the closed-loop system (86) is globally uniformly asymptotically stable in probability with the feedback control law
$$\phi(t,x)=-R_2^{-1}(t)B^{\mathrm T}(t)\Big(P(t)x+\tfrac{1}{2}g^{\mathrm T}(t,x)\Big)$$
(99)
where $g(t,x)\triangleq\sum_{q=2}^{r}\frac{\partial[\hat P_q(t)x^{[2q]}]}{\partial x}$, and the performance functional (83) with $R_2(t,x)=R_2(t)$, $L_2(t,x)=0$, and
$$L_1(t,x)=x^{\mathrm T}R_1(t)x+\sum_{q=2}^{r}\hat R_{2q}(t)x^{[2q]}+\tfrac{1}{4}g(t,x)S(t)g^{\mathrm T}(t,x)$$
(100)
is minimized in the sense of Eq. (73). Finally,
$$J(t_0,x_0,\phi(\cdot,\cdot))=x_0^{\mathrm T}P(t_0)x_0+\sum_{q=2}^{r}\hat P_q(t_0)x_0^{[2q]},\quad(t_0,x_0)\in[0,\infty)\times\mathbb{R}^{n}$$
(101)
Proof. The result is a consequence of Theorem 4.1 with $n_1=n$, $n_2=1$, $x_1(t-t_0)=x(t)$, $x_2(t-t_0)=t$, $f_1(x_1,x_2)=A(t)x$, $f_2(x_1,x_2)=1$, $G_1(x_1,x_2)=B(t)$, $G_2(x_1,x_2)=0$, $D_1(x_1,x_2)=x\sigma^{\mathrm T}(t)$, $D_2(x_1,x_2)=0$, $L_1(x_1,x_2)=L_1(t,x)$, where $L_1(t,x)$ is given by Eq. (100), $L_2(x_1,x_2)=0$, $R_2(x_1,x_2)=R_2(t)$, $V(x_1,x_2)=x^{\mathrm T}P(t)x+\sum_{q=2}^{r}\hat P_q(t)x^{[2q]}$, $\alpha(\|x_1\|)=\alpha\|x\|^{2}$, $\beta(\|x_1\|)=\beta\|x\|^{2}$, and $\gamma(\|x_1\|)=\gamma\|x\|^{2}$, for some $\alpha$, $\beta$, $\gamma>0$, with the identifications $x=x_1$ and $t=x_2$. Specifically, since $P(\cdot)$ is uniformly bounded and positive definite, there exist constants $\alpha$, $\beta>0$ such that $\alpha I_n\le P(t)\le\beta I_n$. In addition, since $\hat P_q(t)\in\mathcal{N}(2q,n)$, $q=2,\ldots,r$, for all $t\ge t_0$, it follows that
$$\alpha\|x\|^{2}\le V(t,x)\le\beta\|x\|^{2},\quad(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$$
(102)

which verifies Eq. (76).

Computing the infinitesimal generator LV(t,x) along the trajectories of the closed-loop system (86) gives 
$$\begin{aligned}\mathcal{L}V(t,x)={}&x^{\mathrm T}\big(\dot P(t)+P(t)A(t)+A^{\mathrm T}(t)P(t)\big)x+2x^{\mathrm T}P(t)B(t)\phi(t,x)+\tfrac{1}{2}\operatorname{tr}(x\sigma^{\mathrm T}(t))^{\mathrm T}2P(t)(x\sigma^{\mathrm T}(t))\\&+\sum_{q=2}^{r}\dot{\hat P}_q(t)x^{[2q]}+g(t,x)(A(t)x+B(t)\phi(t,x))+\tfrac{1}{2}\operatorname{tr}(x\sigma^{\mathrm T}(t))^{\mathrm T}g'(t,x)(x\sigma^{\mathrm T}(t))\\={}&x^{\mathrm T}\Big(\dot P(t)+P(t)\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)+\Big(A(t)+\tfrac{1}{2}\|\sigma(t)\|^{2}I_n\Big)^{\mathrm T}P(t)-P(t)S(t)P(t)\Big)x-x^{\mathrm T}P(t)S(t)P(t)x-x^{\mathrm T}P(t)S(t)g^{\mathrm T}(t,x)\\&+\sum_{q=2}^{r}\dot{\hat P}_q(t)x^{[2q]}+g(t,x)\Big[(A(t)-S(t)P(t))x-\tfrac{1}{2}S(t)g^{\mathrm T}(t,x)\Big]+\tfrac{1}{2}\operatorname{tr}(x\sigma^{\mathrm T}(t))^{\mathrm T}g'(t,x)(x\sigma^{\mathrm T}(t))\end{aligned}$$
(103)
for all $(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$. Next, noting that $g(t,x)=\sum_{q=2}^{r}\hat P_q(t)\,\partial x^{[2q]}/\partial x$, a direct term-by-term expansion of the Kronecker products, interchanging the order of summation and using the fact that $\partial x^{[2q]}/\partial x$ acts on a vector $v$ as $\sum_{i_q=1}^{2q}x\otimes\cdots\otimes v\otimes\cdots\otimes x$ (with $v$ in the $i_q$th entry), shows that
$$g(t,x)(A(t)-S(t)P(t))x+\tfrac{1}{2}\operatorname{tr}(x\sigma^{\mathrm T}(t))^{\mathrm T}g'(t,x)(x\sigma^{\mathrm T}(t))=\sum_{q=2}^{r}\hat P_q(t)\Big[\oplus_{2q}\Big(A(t)+\tfrac{1}{2}(2q-1)\|\sigma(t)\|^{2}I_n-S(t)P(t)\Big)\Big]x^{[2q]}$$
(104)
it follows from Eqs. (97), (98), and (104), that 
$$\begin{aligned}\mathcal{L}V(t,x)={}&-x^{\mathrm T}R_1(t)x-x^{\mathrm T}P(t)S(t)P(t)x-x^{\mathrm T}P(t)S(t)g^{\mathrm T}(t,x)\\&+\sum_{q=2}^{r}\Big(\dot{\hat P}_q(t)+\hat P_q(t)\Big[\oplus_{2q}\Big(A(t)+\tfrac{1}{2}(2q-1)\|\sigma(t)\|^{2}I_n-S(t)P(t)\Big)\Big]\Big)x^{[2q]}-\tfrac{1}{2}g(t,x)S(t)g^{\mathrm T}(t,x)\\={}&-x^{\mathrm T}R_1(t)x-x^{\mathrm T}P(t)S(t)P(t)x-x^{\mathrm T}P(t)S(t)g^{\mathrm T}(t,x)-\sum_{q=2}^{r}\hat R_{2q}(t)x^{[2q]}-\tfrac{1}{2}g(t,x)S(t)g^{\mathrm T}(t,x)\end{aligned}$$
(105)
Finally, note that 
$$\phi^{\mathrm T}(t,x)R_2(t)\phi(t,x)=\Big(x^{\mathrm T}P(t)+\tfrac{1}{2}g(t,x)\Big)S(t)\Big(P(t)x+\tfrac{1}{2}g^{\mathrm T}(t,x)\Big)=x^{\mathrm T}P(t)S(t)P(t)x+\tfrac{1}{4}g(t,x)S(t)g^{\mathrm T}(t,x)+x^{\mathrm T}P(t)S(t)g^{\mathrm T}(t,x)$$
(106)
which, using Eq. (105), implies that 
$$\mathcal{L}V(t,x)=-x^{\mathrm T}R_1(t)x-\sum_{q=2}^{r}\hat R_{2q}(t)x^{[2q]}-\tfrac{1}{4}g(t,x)S(t)g^{\mathrm T}(t,x)-\phi^{\mathrm T}(t,x)R_2(t)\phi(t,x)$$
(107)
for all $(t,x)\in[t_0,\infty)\times\mathbb{R}^{n}$, and hence, Eq. (77) holds with $\gamma(\|x\|)=\gamma\|x\|^{2}$. In addition, writing Eq. (107) as
$$\mathcal{L}V(t,x)=-L_1(t,x)-\phi^{\mathrm T}(t,x)R_2(t)\phi(t,x)$$
(108)

where $L_1(t,x)$ is given by Eq. (100); thus, Eq. (80) is verified. The result now follows as a direct consequence of Theorem 4.1.

Illustrative Numerical Examples

In this section, we provide two illustrative numerical examples to highlight the optimal and inverse optimal partial-state asymptotic stabilization framework developed in the paper.

Optimal Partial Stabilization of a Rigid Spacecraft.

Consider the rigid spacecraft with stochastic disturbances given by 
$$\mathrm{d}\omega_1(t)=[I_{23}\omega_2(t)\omega_3(t)-\alpha_1\omega_1(t)+u_1(t)]\,\mathrm{d}t+\sigma_1\omega_1(t)\,\mathrm{d}w(t),\quad\omega_1(0)=\omega_{10}\ \text{a.s.},\quad t\ge0$$
(109)
$$\mathrm{d}\omega_2(t)=[I_{31}\omega_3(t)\omega_1(t)-\alpha_2\omega_2(t)+u_2(t)]\,\mathrm{d}t+\sigma_2\omega_2(t)\,\mathrm{d}w(t),\quad\omega_2(0)=\omega_{20}\ \text{a.s.}$$
(110)
$$\mathrm{d}\omega_3(t)=I_{12}\omega_1(t)\omega_2(t)\,\mathrm{d}t+\sigma_3\omega_3(t)\,\mathrm{d}w(t),\quad\omega_3(0)=\omega_{30}\ \text{a.s.}$$
(111)
where $I_{23}\triangleq(I_2-I_3)/I_1$, $I_{31}\triangleq(I_3-I_1)/I_2$, and $I_{12}\triangleq(I_1-I_2)/I_3$; $I_1$, $I_2$, and $I_3$ are the principal moments of inertia of the spacecraft such that $I_1>I_2>I_3>0$; $\alpha_1\ge0$ and $\alpha_2\ge0$ reflect dissipation in the $\omega_1$ and $\omega_2$ coordinates of the spacecraft; $u_1$ and $u_2$ are the spacecraft control moments; and $w(t)$ is a standard Wiener process. Here, the state-dependent disturbances can be used to capture perturbations in atmospheric drag for low-altitude (i.e., <600 km) satellites from the Earth's residual atmosphere as well as J2 perturbations due to the nonspherical mass distribution of the Earth and its nonuniform mass density. For details, see Refs. [30,31]. For this example, we seek a state feedback controller $u=[u_1,u_2]^{\mathrm T}=\phi(x_1,x_2)$, where $x_1=[\omega_1,\omega_2]^{\mathrm T}$ and $x_2=\omega_3$, such that the performance measure
$$J(x_{10},x_{20},u(\cdot))=\mathbb{E}^{x_0}\Big[\int_0^{\infty}[x_1^{\mathrm T}(t)R_1x_1(t)+u^{\mathrm T}(t)u(t)]\,\mathrm{d}t\Big]$$
(112)

where $R_1>0$, is minimized in the sense of Eq. (44) and Eqs. (109)–(111) are globally asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$.

Note that Eqs. (109)–(111) with performance measure (112) can be cast in the form of Eqs. (55) and (56) with performance measure (58). In this case, Theorem 3.2 can be applied with $n_1=2$, $n_2=1$, $m=2$, $f(x_1,x_2)=\tilde f(x_1,x_2)-Ax_1$, where $\tilde f(x_1,x_2)\triangleq[I_{23}\omega_2\omega_3,\ I_{31}\omega_3\omega_1,\ I_{12}\omega_1\omega_2]^{\mathrm T}$ and $A\triangleq\begin{bmatrix}\alpha_1&0&0\\0&\alpha_2&0\end{bmatrix}^{\mathrm T}$, $G(x_1,x_2)=\begin{bmatrix}1&0&0\\0&1&0\end{bmatrix}^{\mathrm T}$, $D(x_1,x_2)=[\sigma_1\omega_1\ \ \sigma_2\omega_2\ \ \sigma_3\omega_3]^{\mathrm T}$, $L_1(x_1,x_2)=x_1^{\mathrm T}R_1x_1$, $L_2(x_1,x_2)=0$, and $R_2(x_1,x_2)=I_2$ to characterize the optimal partially stabilizing controller. Specifically, in this case, Eq. (62) reduces to
$$0=x_1^{\mathrm T}R_1x_1+V'(x_1,x_2)\tilde f(x_1,x_2)-V'(x_1,x_2)Ax_1+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(x_1,x_2)V''(x_1,x_2)D(x_1,x_2)-\tfrac{1}{4}V'(x_1,x_2)G(x_1,x_2)G^{\mathrm T}(x_1,x_2)V'^{\mathrm T}(x_1,x_2),\quad(x_1,x_2)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$$
(113)
Now, choosing V(x1,x2)=x1TPx1, where P > 0, it follows from Eq. (113) that 
Now, choosing $V(x_1,x_2)=x_1^{\mathrm T}Px_1$, where $P>0$, it follows from Eq. (113) that
$$0=x_1^{\mathrm T}R_1x_1+V'(x_1,x_2)\tilde f(x_1,x_2)-2x_1^{\mathrm T}PHx_1+x_1^{\mathrm T}\Sigma P\Sigma x_1-x_1^{\mathrm T}PPx_1$$
(114)
where $H\triangleq\begin{bmatrix}\alpha_1&0\\0&\alpha_2\end{bmatrix}$ and $\Sigma\triangleq\begin{bmatrix}\sigma_1&0\\0&\sigma_2\end{bmatrix}$, and $V'(x_1,x_2)\tilde f(x_1,x_2)=0$ only if $P=\rho J$, where $\rho>0$ and $J\triangleq\begin{bmatrix}-I_{31}&0\\0&I_{23}\end{bmatrix}$. In this case, Eq. (114) and $P=\rho J$ imply that
$$0=R_1-2\rho J\tilde H-\rho^{2}J^{2}$$
(115)

where $\tilde H\triangleq H-\tfrac{1}{2}\Sigma^{2}$. Hence, Eq. (59) holds with $\alpha(\|x_1\|)=\rho\lambda_{\min}(J)\|x_1\|^{2}$ and $\beta(\|x_1\|)=\rho\lambda_{\max}(J)\|x_1\|^{2}$, where $\lambda_{\min}(\cdot)$ and $\lambda_{\max}(\cdot)$ denote minimum and maximum eigenvalues, respectively, and Eq. (60) holds with $\gamma(\|x_1\|)=\lambda_{\min}(R_1)\|x_1\|^{2}$.

Since all of the conditions of Theorem 3.2 hold, it follows that the feedback control law (63) given by
$$\phi(x_1,x_2)=-\tfrac{1}{2}R_2^{-1}(x_1,x_2)G^{\mathrm T}(x_1,x_2)V'^{\mathrm T}(x_1,x_2)=-\rho Jx_1,\quad(x_1,x_2)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$$
(116)

guarantees that the stochastic dynamical system (109)–(111) is globally asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$ and $J(x_{10},x_{20},\phi(x_1(\cdot),x_2(\cdot)))=x_{10}^{\mathrm T}Px_{10}$ for all $(x_{10},x_{20})\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$.

Let $I_1=20\ \mathrm{kg\,m^2}$, $I_2=15\ \mathrm{kg\,m^2}$, $I_3=10\ \mathrm{kg\,m^2}$, $\omega_{10}=\pi/3\ \mathrm{Hz}$, $\omega_{20}=\pi/4\ \mathrm{Hz}$, $\omega_{30}=\pi/5\ \mathrm{Hz}$, $\alpha_1=1.1668\ \mathrm{Hz}$, $\alpha_2=0.2\ \mathrm{Hz}$, $\sigma_1=1$, $\sigma_2=0.4$, $\sigma_3=0.1$, and $R_1=\begin{bmatrix}5&0\\0&0.54\end{bmatrix}\mathrm{Hz}^2$. Figure 1 shows the sample average along with the standard deviation of the controlled system state versus time for 20 sample paths for $\rho=2.5\ \mathrm{Hz/(N\,m^2)}$. Note that $x_1(t)=[\omega_1(t),\omega_2(t)]^{\mathrm T}\to0$ a.s. as $t\to\infty$, whereas $x_2(t)=\omega_3(t)$ does not converge to zero. Figure 2 shows the sample average along with the standard deviation of the corresponding control signal versus time. Finally, $J(x_{10},x_{20},\phi(x_1(\cdot),x_2(\cdot)))=2.2132\ \mathrm{Hz}^3$.
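The closed-form conditions of this example can be spot-checked numerically. The sketch below is our own reconstruction of the example data (in particular $J=\mathrm{diag}[-I_{31},I_{23}]$ and $R_1=\mathrm{diag}[5,0.54]\ \mathrm{Hz}^2$ as read from the text): it verifies the algebraic condition (115) entrywise and runs one seeded Euler–Maruyama sample path of the closed loop (109)–(111) under $u=-\rho Jx_1$:

```python
import math, random

# Spot checks for the spacecraft example (reconstructed data: J = diag[-I31,
# I23], R1 = diag[5, 0.54] Hz^2, rho = 2.5 -- assumptions, see the text).
I1, I2, I3 = 20.0, 15.0, 10.0
I23, I31, I12 = (I2 - I3) / I1, (I3 - I1) / I2, (I1 - I2) / I3
a1, a2 = 1.1668, 0.2
s1, s2, s3 = 1.0, 0.4, 0.1
rho = 2.5
J11, J22 = -I31, I23
Ht1, Ht2 = a1 - 0.5 * s1 ** 2, a2 - 0.5 * s2 ** 2  # H_tilde = H - Sigma^2/2

# Eq. (115), 0 = R1 - 2*rho*J*H_tilde - rho^2*J^2, checked entrywise:
R1_11 = 2.0 * rho * J11 * Ht1 + rho ** 2 * J11 ** 2   # should be close to 5
R1_22 = 2.0 * rho * J22 * Ht2 + rho ** 2 * J22 ** 2   # should be close to 0.54

# Seeded Euler-Maruyama run of Eqs. (109)-(111) with u = -rho*J*x1
random.seed(1)
w1, w2, w3 = math.pi / 3, math.pi / 4, math.pi / 5
dt = 1.0e-3
for _ in range(5000):            # T = 5
    dw = random.gauss(0.0, math.sqrt(dt))   # single common Wiener increment
    u1, u2 = -rho * J11 * w1, -rho * J22 * w2
    w1, w2, w3 = (w1 + (I23 * w2 * w3 - a1 * w1 + u1) * dt + s1 * w1 * dw,
                  w2 + (I31 * w3 * w1 - a2 * w2 + u2) * dt + s2 * w2 * dw,
                  w3 + I12 * w1 * w2 * dt + s3 * w3 * dw)
final_norm = math.hypot(w1, w2)
print(R1_11, R1_22, final_norm)  # x1 = (w1, w2) decays; w3 need not vanish
```

The small residuals in the first two printed values reflect the rounded value of $\alpha_1$; the simulated $\|x_1\|$ collapses toward zero while $\omega_3$ merely stays bounded, matching the partial-stability claim.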

Thermoacoustic Combustion Model.

In this example, we consider control of thermoacoustic instabilities in combustion processes. Engineering applications involving steam and gas turbines and jet and ramjet engines for power generation and propulsion technology involve combustion processes. Due to the inherent coupling between several intricate physical phenomena in these processes involving acoustics, thermodynamics, fluid mechanics, and chemical kinetics, the dynamic behavior of combustion systems is characterized by highly complex nonlinear models [32,33]. The unstable dynamic coupling between heat release in combustion processes generated by reacting mixtures releasing chemical energy and unsteady motions in the combustor develop acoustic pressure and velocity oscillations that can severely affect operating conditions and system performance.

Consider the nonlinear stochastic dynamical system adopted from Refs. [3] and [32] given by 
$$\mathrm{d}q_1(t)=[-\alpha_1q_1(t)-\beta q_1(t)q_2(t)\cos q_3(t)+u(t)]\,\mathrm{d}t+\sigma_1q_1(t)\,\mathrm{d}w(t),\quad q_1(0)=q_{10}\ \text{a.s.},\quad t\ge0$$
(117)
$$\mathrm{d}q_2(t)=[-\alpha_2q_2(t)+\beta q_1^{2}(t)\cos q_3(t)+u(t)]\,\mathrm{d}t+\sigma_2q_2(t)\,\mathrm{d}w(t),\quad q_2(0)=q_{20}\ \text{a.s.}$$
(118)
$$\mathrm{d}q_3(t)=\Big[2\theta_1-\theta_2-\beta\Big(\frac{q_1^{2}(t)}{q_2(t)}-2q_2(t)\Big)\sin q_3(t)\Big]\,\mathrm{d}t+\sigma_3q_1(t)q_2(t)\,\mathrm{d}w(t),\quad q_3(0)=q_{30}\ \text{a.s.}$$
(119)

representing a time-averaged, two-mode thermoacoustic combustion model with state-dependent stochastic disturbances, where $\alpha_1>0$ and $\alpha_2>0$ represent decay constants; $\theta_1$ and $\theta_2$ represent frequency shift constants; $\beta=((\gamma+1)/8\gamma)\omega_1$, where $\gamma$ denotes the ratio of specific heats and $\omega_1$ is the frequency of the fundamental mode; $\sigma_1$, $\sigma_2$, and $\sigma_3$ are such that $\alpha_1>\tfrac{1}{2}\sigma_1^{2}$ and $\alpha_2>\tfrac{1}{2}\sigma_2^{2}$ and represent augmentation factors of the variance of the state-dependent stochastic disturbance; and $u$ is the control input signal. As shown in Refs. [32,33], only the first two states $q_1$ and $q_2$, representing the modal amplitudes of a two-mode thermoacoustic combustion model, are relevant in characterizing system instabilities, since the third state $q_3$ represents the phase difference between the two modes [34]. Hence, we require asymptotic stability of $q_1(t)$, $t\ge0$, and $q_2(t)$, $t\ge0$, which necessitates partial stabilization.

For this example, we seek a state feedback controller u=ϕ(x1,x2), where x1=[q1,q2]T and x2 = q3, such that the performance measure 
$$J(x_1(0),x_2(0),u(\cdot))=\mathbb{E}^{x_0}\Big[\int_0^{\infty}[x_1^{\mathrm T}(t)R_1x_1(t)+u^{2}(t)]\,\mathrm{d}t\Big]$$
(120)
where 
$$R_1=\rho\begin{bmatrix}2\alpha_1-\sigma_1^{2}+\rho&\rho\\\rho&2\alpha_2-\sigma_2^{2}+\rho\end{bmatrix},\quad\rho>0$$
(121)

is minimized in the sense of Eq. (44) and Eqs. (117)–(119) are globally asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$.

Note that Eqs. (117)–(119) with performance measure (120) can be cast in the form of Eqs. (55) and (56) with performance measure (58). In this case, Theorem 3.2 can be applied with $n_1=2$, $n_2=1$, $m=1$, $f(x_1,x_2)=[-\alpha_1q_1-\beta q_1q_2\cos q_3,\ -\alpha_2q_2+\beta q_1^{2}\cos q_3,\ 2\theta_1-\theta_2-\beta(q_1^{2}/q_2-2q_2)\sin q_3]^{\mathrm T}$, $G(x_1,x_2)=[1\ \ 1\ \ 0]^{\mathrm T}$, $D(x_1,x_2)=[\sigma_1q_1\ \ \sigma_2q_2\ \ \sigma_3q_1q_2]^{\mathrm T}$, $L_1(x_1,x_2)=x_1^{\mathrm T}R_1x_1$, $L_2(x_1,x_2)=0$, and $R_2(x_1,x_2)=1$ to characterize the optimal partially stabilizing controller. Specifically, Eq. (62) reduces to
$$0=x_1^{\mathrm T}R_1x_1+V'(x_1,x_2)f(x_1,x_2)+\tfrac{1}{2}\operatorname{tr}D^{\mathrm T}(x_1,x_2)V''(x_1,x_2)D(x_1,x_2)-\tfrac{1}{4}V'(x_1,x_2)G(x_1,x_2)G^{\mathrm T}(x_1,x_2)V'^{\mathrm T}(x_1,x_2),\quad(x_1,x_2)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$$
(122)

which implies that $V'(x_1,x_2)=2\rho[q_1,q_2,0]$. Furthermore, since $V(0,x_2)=0$, $x_2\in\mathbb{R}$, we have $V(x_1,x_2)=\rho x_1^{\mathrm T}x_1$, which is positive definite with respect to $x_1$, and hence, Eq. (59) holds.

Since all of the conditions of Theorem 3.2 hold, it follows that the feedback control (63) given by 
$$\phi(x_1,x_2)=-\tfrac{1}{2}R_2^{-1}(x_1,x_2)G^{\mathrm T}(x_1,x_2)V'^{\mathrm T}(x_1,x_2)=-\rho[1\ \ 1\ \ 0][q_1,q_2,0]^{\mathrm T}=-\rho[1\ \ 1]x_1,\quad(x_1,x_2)\in\mathbb{R}^{n_1}\times\mathbb{R}^{n_2}$$
(123)

guarantees that the dynamical system (117)–(119) is globally asymptotically stable in probability with respect to $x_1$ uniformly in $x_{20}$ and $J(x_{10},x_{20},\phi(x_1(\cdot),x_2(\cdot)))=\rho x_{10}^{\mathrm T}x_{10}$ for all $(x_{10},x_{20})\in\mathbb{R}^{2}\times\mathbb{R}$.

Let $\alpha_1=5\ \mathrm{Hz}$, $\alpha_2=45\ \mathrm{Hz}$, $\sigma_1=2$, $\sigma_2=5$, $\sigma_3=1$, $\gamma=1.4$, $\omega_1=1\ \mathrm{Hz}$, $\theta_1=4\ \mathrm{Hz}$, $\theta_2=32\ \mathrm{Hz}$, $\rho=1\ \mathrm{Hz}$, $q_{10}=4$, $q_{20}=2$, and $q_{30}=10$. Figure 3 shows the sample average along with the standard deviation of the controlled system state versus time, whereas Fig. 4 shows the sample average along with the standard deviation of the corresponding control signal versus time for 20 sample paths. Note that $x_1(t)=[q_1(t),q_2(t)]^{\mathrm T}\to0$ a.s. as $t\to\infty$, whereas $x_2(t)=q_3(t)$ is unstable. Finally, $J(x_1(0),x_2(0),\phi(x_1(\cdot),x_2(\cdot)))=20\ \mathrm{Hz}$.
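As a numerical sanity check on this example (using our reconstructed signs for Eqs. (117)–(119)), the sketch below verifies that $V(x_1,x_2)=\rho x_1^{\mathrm T}x_1$ satisfies the Hamilton–Jacobi–Bellman condition (122) with $R_1$ from Eq. (121) at randomly sampled states, and recovers the reported optimal cost $J=\rho x_{10}^{\mathrm T}x_{10}=20$:

```python
import math, random

# Sanity checks for the thermoacoustic example (signs of Eqs. (117)-(119)
# as reconstructed above): V(x1,x2) = rho*x1^T*x1 should satisfy the
# Hamilton-Jacobi-Bellman condition (122) with R1 from Eq. (121).
a1, a2, s1, s2 = 5.0, 45.0, 2.0, 5.0
rho, gam, om1 = 1.0, 1.4, 1.0
beta = (gam + 1.0) / (8.0 * gam) * om1

def hjb_residual(q1, q2, q3):
    f1 = -a1 * q1 - beta * q1 * q2 * math.cos(q3)
    f2 = -a2 * q2 + beta * q1 ** 2 * math.cos(q3)
    Vq1, Vq2 = 2.0 * rho * q1, 2.0 * rho * q2
    xRx = rho * ((2 * a1 - s1 ** 2 + rho) * q1 ** 2 + 2 * rho * q1 * q2
                 + (2 * a2 - s2 ** 2 + rho) * q2 ** 2)   # x1^T R1 x1, Eq. (121)
    trace = rho * (s1 ** 2 * q1 ** 2 + s2 ** 2 * q2 ** 2)  # (1/2) tr D^T V'' D
    quad = 0.25 * (Vq1 + Vq2) ** 2     # (1/4) V'G G^T V'^T with G = [1,1,0]^T
    return xRx + Vq1 * f1 + Vq2 * f2 + trace - quad

random.seed(2)
res = max(abs(hjb_residual(random.uniform(-3, 3), random.uniform(-3, 3),
                           random.uniform(-3, 3))) for _ in range(100))
q10, q20 = 4.0, 2.0
J = rho * (q10 ** 2 + q20 ** 2)
print(res, J)   # residual ~0; J = 20, the reported optimal cost
```

The cross terms $\pm\beta q_1^{2}q_2\cos q_3$ cancel in $V'f$, which is exactly why the quadratic $V$ solves Eq. (122); the residual check confirms this cancellation numerically.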

Conclusion

In this paper, an optimal control problem for partial-state stochastic stabilization is stated, and sufficient conditions are derived to characterize an optimal nonlinear feedback controller that guarantees asymptotic stability in probability of part of the closed-loop system state. Specifically, we utilized a steady-state stochastic Hamilton–Jacobi–Bellman framework to characterize optimal nonlinear feedback controllers with a notion of optimality that is directly related to a given Lyapunov function that is positive definite and decrescent with respect to part of the system state. This result was then used to address optimal linear and nonlinear regulation for linear and nonlinear time-varying stochastic systems with quadratic and nonlinear-nonquadratic performance measures. In addition, we developed inverse optimal feedback controllers for affine nonlinear systems and linear time-varying stochastic systems with polynomial and multilinear performance criteria. Extensions of this framework for addressing discrete-time systems with computation constraints as well as optimal adaptive controllers for stochastic dynamical systems are currently under development.

Acknowledgment

This work was supported in part by the Air Force Office of Scientific Research under Grant No. FA9550-16-1-0100.

References

1. L'Afflitto, A., Haddad, W. M., and Bakolas, E., 2016, "Partial-State Stabilization and Optimal Feedback Control," Int. J. Robust Nonlinear Control, 26(5), pp. 1026–1050.
2. Bernstein, D. S., 1993, "Nonquadratic Cost and Nonlinear Feedback Control," Int. J. Robust Nonlinear Control, 3(3), pp. 211–229.
3. Haddad, W. M., and Chellaboina, V., 2008, Nonlinear Dynamical Systems and Control: A Lyapunov-Based Approach, Princeton University Press, Princeton, NJ.
4. Lum, K.-Y., Bernstein, D. S., and Coppola, V. T., 1995, "Global Stabilization of the Spinning Top With Mass Imbalance," Dyn. Stab. Syst., 10(4), pp. 339–365.
5. Vorotnikov, V. I., 1998, Partial Stability and Control, Birkhäuser, Boston, MA.
6. Chellaboina, V., and Haddad, W. M., 2002, "A Unification Between Partial Stability and Stability Theory for Time-Varying Systems," IEEE Control Syst., 22(6), pp. 66–75.
7. Molinari, B., 1973, "The Stable Regulator Problem and Its Inverse," IEEE Trans. Autom. Control, 18(5), pp. 454–459.
8. Moylan, P. J., and Anderson, B., 1973, "Nonlinear Regulator Theory and an Inverse Optimal Control Problem," IEEE Trans. Autom. Control, 18(5), pp. 460–465.
9. Jacobson, D. H., 1977, Extensions of Linear-Quadratic Control Optimization and Matrix Theory, Academic Press, New York.
10. Jacobson, D. H., Martin, D. H., Pachter, M., and Geveci, T., 1980, Extensions of Linear-Quadratic Control Theory, Springer-Verlag, Berlin.
11. Freeman, R. A., and Kokotović, P. V., 1996, "Inverse Optimality in Robust Stabilization," SIAM J. Control Optim., 34(4), pp. 1365–1391.
12. Sepulchre, R., Jankovic, M., and Kokotovic, P., 1997, Constructive Nonlinear Control, Springer, London.
13. Deng, H., and Krstić, M., 1997, "Stochastic Nonlinear Stabilization—Part II: Inverse Optimality," Syst. Control Lett., 32(3), pp. 151–159.
14. Speyer, J., 1976, "A Nonlinear Control Law for a Stochastic Infinite Time Problem," IEEE Trans. Autom. Control, 21(4), pp. 560–564.
15. Bass, R., and Webber, R., 1966, "Optimal Nonlinear Feedback Control Derived From Quartic and Higher-Order Performance Criteria," IEEE Trans. Autom. Control, 11(3), pp. 448–454.
16. Rajpurohit, T., and Haddad, W. M., 2016, "Partial-State Stabilization and Optimal Feedback Control for Stochastic Dynamical Systems," American Control Conference (ACC), Boston, MA, July 6–8, pp. 6562–6567.
17. Kushner, H. J., 1967, Stochastic Stability and Control, Academic Press, New York.
18. Khasminskii, R. Z., 2012, Stochastic Stability of Differential Equations, Springer-Verlag, Berlin.
19. Kushner, H. J., 1971, Introduction to Stochastic Control, Holt, Rinehart and Winston, New York.
20. Arnold, L., 1974, Stochastic Differential Equations: Theory and Applications, Wiley Interscience, New York.
21. Sharov, V., 1978, "Stability and Stabilization of Stochastic Systems Vis-a-Vis Some of the Variables," Avtom. Telemekh., 11(1), pp. 63–71 (in Russian).
22. Øksendal, B., 1995, Stochastic Differential Equations: An Introduction With Applications, Springer-Verlag, Berlin.
23. Yamada, T., and Watanabe, S., 1971, "On the Uniqueness of Solutions of Stochastic Differential Equations," J. Math. Kyoto Univ., 11(1), pp. 155–167.
24. Watanabe, S., and Yamada, T., 1971, "On the Uniqueness of Solutions of Stochastic Differential Equations II," J. Math. Kyoto Univ., 11(3), pp. 553–563.
25. Meyn, S. P., and Tweedie, R. L., 1993, Markov Chains and Stochastic Stability, Springer-Verlag, London.
26. Folland, G. B., 1999, Real Analysis: Modern Techniques and Their Applications, Wiley Interscience, New York.
27. Mao, X., 1999, "Stochastic Versions of the LaSalle Theorem," J. Differ. Equations, 153(1), pp. 175–195.
28. Apostol, T. M., 1957, Mathematical Analysis, Addison-Wesley, Reading, MA.
29. Arapostathis, A., Borkar, V. S., and Ghosh, M. K., 2012, Ergodic Control of Diffusion Processes, Cambridge University Press, Cambridge, UK.
30. Curtis, H. D., 2014, Orbital Mechanics for Engineering Students, Elsevier, Oxford, UK.
31. Junkins, J., and Schaub, H., 2009, Analytical Mechanics of Space Systems, AIAA Education Series, Reston, VA.
32. Culick, F. E. C., 1976, "Nonlinear Behavior of Acoustic Waves in Combustion Chambers—I," Acta Astronaut., 3(9–10), pp. 715–734.
33. Paparizos, L. G., and Culick, F. E. C., 1989, "The Two-Mode Approximation to Nonlinear Acoustics in Combustion Chambers—I: Exact Solution for Second Order Acoustics," Combust. Sci. Technol., 65(1–3), pp. 39–65.
34. Yang, V., Kim, S. I., and Culick, F. E. C., 1987, "Third-Order Nonlinear Acoustic Waves and Triggering of Pressure Oscillations in Combustion Chambers—Part I: Longitudinal Modes," AIAA Paper No. 87-1873.