For linear dynamic systems with uncertain parameters, the design of controllers that drive a system from an initial condition to a desired final state, while satisfying state constraints during the transition, is a nontrivial problem. This paper presents a methodology to design a state constrained controller that is robust to time-invariant uncertain variables. Polynomial chaos (PC) expansion, a spectral expansion, is used to parameterize the uncertain variables, permitting the evolution of the uncertain states to be written as a polynomial function of the uncertain variables. The coefficients of the truncated PC expansion are determined using Galerkin projection, resulting in a set of deterministic equations. A transformation from the PC polynomial space to the Bernstein polynomial space permits the determination of bounds on the evolving states of interest. Linear programming (LP) is then used on the deterministic set of equations, with constraints on the bounds of the states, to determine the controller. Numerical examples illustrate the benefit of the proposed technique for the design of a rest-to-rest controller that satisfies deformation constraints and is robust to uncertainties in the stiffness coefficient of the benchmark spring-mass-damper system.
It is well known that the performance of open-loop control strategies for motion control of flexible structures is sensitive to imperfect models. This has prompted numerous researchers to investigate the problem of developing controllers that are robust to parametric uncertainties in the models [1–6]. One approach has been to design controllers for the worst-case scenario among all realizations, which corresponds to a minimax design and requires knowledge of the support of the uncertainties. A second strategy for dealing with parametric uncertainties in the model has been to reduce the sensitivity of the cost function in the proximity of the nominal model by forcing the local sensitivity to zero.
The aforementioned problem formulations have often considered minimizing the residual energy, i.e., the undesired energy in the system at the end of the maneuver. This is important in the rest-to-rest class of problems. In addition to the residual energy constraint, deflection constraints were considered by Singhose et al., who derived an analytical expression for the evolution of the deflection of a spring-mass system and forced its gradient to zero to identify the time instants corresponding to maximum or minimum deflections. Deflection sampling has also been used to constrain the permitted deflection in the design of the controller. Vossler and Singh addressed the design of optimal control profiles for rest-to-rest maneuvers of systems with uncertain parameters and state constraints. They formulated a linear programming (LP) problem whose cost function minimizes the maximum excursion of the states over the duration of the maneuver, termed minimax control. This, in conjunction with constraints on the terminal energy, resulted in a robust state and energy constrained controller. A uniform sampling of the uncertain space was used to solve the minimax problem. Monte Carlo (MC) realizations can also be used to sample the uncertain space. However, it is well known that MC methods suffer from slow convergence, with the convergence rate being inversely proportional to the square root of the number of sample points. This means that one needs to increase the number of samples by a factor of 100 to improve the accuracy by one decimal place. Furthermore, there is no guarantee that model parameters in the complement of the finite sample set will not violate the state and energy constraints. This paper considers applications where constraints have to be imposed on the states over the duration of the maneuver, in the presence of model parameter uncertainties.
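The square-root convergence rate quoted above is easy to verify empirically. The sketch below (sample and trial counts are illustrative choices, not values from the paper) estimates the standard error of a Monte Carlo mean of a uniform random variable and shows that multiplying the sample count by 100 shrinks the error by roughly a factor of 10:

```python
import numpy as np

rng = np.random.default_rng(0)

# The standard error of a Monte Carlo mean estimate scales as 1/sqrt(N):
# gaining one decimal place of accuracy requires ~100x more samples.
def mc_standard_error(n_samples, n_trials=2000):
    """Empirical std of the sample mean of U(0,1) draws over many trials."""
    means = rng.random((n_trials, n_samples)).mean(axis=1)
    return means.std()

for n in (100, 10_000):
    print(n, mc_standard_error(n))   # the second error is ~10x smaller
```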
The proposed approach does not depend on sampling to identify the realization(s), which correspond to the active constraints.
Building on the rich literature on the use of polynomial chaos (PC) in control [9–13], Nandi et al. present an approach that rewrites the polynomials associated with the specified distributions of the uncertain variables using Bernstein polynomials. The mapping permits the determination of the time evolution of the bounds on the uncertain states and is illustrated on a numerical example with one uncertain parameter. This paper generalizes that development and presents a systematic technique to generate tight bounds on the evolving states, which are required for the controller design. In addition, since the illustrative examples in the paper include uniformly distributed uncertain parameters, comparisons to interval analysis (IA) methods are also presented. It is observed that the bounds determined from IA are severely conservative, making the Bernstein formulation the more appropriate choice. An optimization problem is then posed to ensure that the state constraints are not violated for any realization of the uncertain variables. The proposed approach is attractive since it does not require Monte Carlo simulations to estimate the bounds on the states. Finally, an LP problem can be posed to determine a controller that is robust to the uncertain variables. Although the individual ideas of a PC expansion of stochastic states, the Bernstein formulation for bounding polynomials, and controller design via linear programming are not new, the novelty of this work lies in the seamless integration of the three frameworks, which has not been studied before.
The structure of the paper is as follows: In Sec. 2, an LP-based approach for the design of optimal control profiles that are robust to model uncertainties is presented. This motivates the need for a PC-based approach to state constrained control of uncertain systems. A review of PC is provided in Sec. 3 to characterize uncertainties and propagate them through the dynamical model. This is followed in Sec. 4 by the PC mapping using Bernstein polynomial basis functions, which illustrates how the bounds on the states are determined. Section 5 presents the bounds determined from IA and provides the motivation for using the Bernstein method over IA. Section 6 elaborates on the improvement of these bounds by splitting the Bernstein coefficients. Section 7 illustrates the robust controller design using LP and presents the numerical results for the benchmark floating oscillator problem.
where and represent the lower and upper bounds, respectively. The nonlinear optimization problem can be transformed into a convex problem in the discrete domain by posing it as a linear programming problem.
where is the initial condition of the states.
The objective of the problem is to determine a control input u, which drives the dynamical system from an initial state to a final state , where k = Nt is the final time step of Eq. (4), in a given finite interval of time. Thus, the known quantities of the problem are the initial time (T0), final time (Tf), initial conditions (), and final conditions ().
where ulb and uub are lower and upper bounds on the control inputs. The solution to the problem stated above yields the desired control input over all the time instants (k = [1: Nt]).
Singh proposed a linear programming-based approach for the design of optimal controllers that are robust to model parameter uncertainties. This was accomplished by uniformly sampling the compact support of the uncertainty. A minimax problem was then formulated to minimize the worst residual energy at the end of the maneuver over the sampled models. It was shown that the resulting solution closely approximated the solution generated by the nonlinear programming problem.
When state constraints are included in the controller design problem, a large number of additional constraints enter the linear programming problem, one for the satisfaction of the state constraint at every time instant. Since the accuracy of the optimal control improves as the sampling time decreases, one needs to trade off the computational cost of the additional state constraints against the improved representation of the optimal control. When one endeavors to solve the state-constrained optimal control problem in the presence of model uncertainties, the problem quickly becomes intractable as the number of uncertainties grows. To address this issue, a polynomial chaos-based approach is proposed in this paper.
Polynomial Chaos Expansion
Polynomial chaos was first introduced by Norbert Wiener in his 1938 article. In that work, he approximated a stochastic state following a Gaussian process by an infinite series expansion of orthogonal Hermite polynomials. Later, in 1947, Cameron and Martin proved that such an expansion always converges for stochastic processes with finite variance. Years later, this property was used by Ghanem and Spanos in their book to solve stochastic differential equations. Instead of an infinite expansion, they truncated the series to a finite number of terms, used Galerkin projection to formulate a set of deterministic equations, and finally solved these to obtain the coefficients of the original series expansion. Xiu and Karniadakis generalized the concept of PC to express a generic stochastic process as a series expansion of appropriate orthogonal polynomials (given by the Wiener–Askey scheme). This development was termed the generalized polynomial chaos (gPC) theory. This concept is used in the present work to approximate a stochastic process with uniformly distributed variables.
A detailed formulation of the PC expansion is presented in this section and illustrated on a benchmark problem.
where , , , and .
where is a complete set of multivariate orthogonal polynomials depending on the type of distribution of and is the time varying coefficient vector of the polynomials .
where xi is the ith state and xij is the coefficient of the jth polynomial belonging to the ith state.
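For uniformly distributed uncertain variables, the Wiener–Askey scheme prescribes Legendre polynomials as the orthogonal basis. As a minimal sanity check (the quadrature order below is an illustrative choice), the orthogonality relation E[P_i P_j] = δ_ij/(2j + 1) under the uniform measure on [-1, 1] can be verified numerically:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Orthogonality of the Legendre PC basis under xi ~ U(-1, 1):
#   E[P_i P_j] = (1/2) * integral of P_i P_j over [-1, 1] = delta_ij / (2j + 1)
def legendre_inner(i, j, n_quad=20):
    xi, w = L.leggauss(n_quad)            # Gauss-Legendre nodes and weights
    Pi = L.legval(xi, np.eye(i + 1)[i])   # P_i evaluated at the nodes
    Pj = L.legval(xi, np.eye(j + 1)[j])
    return 0.5 * np.sum(w * Pi * Pj)      # expectation under U(-1, 1)

print(legendre_inner(2, 3))   # ~0 (orthogonal)
print(legendre_inner(3, 3))   # 1/(2*3+1) = 1/7
```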
The essence of the PC expansion is to form a set of deterministic differential equations from the stochastic equation (13), whose solution allows the states to be approximated over time. These equations are formed by performing the Galerkin projection over each of the orthogonal basis functions (i.e., Ψk, where k = 0, 1,…, N). The solution to these equations yields the desired coefficients xij.
Equation (16) is a set of (N + 1) vector differential equations, with each vector having n elements. Hence, a total of (N + 1)n scalar equations need to be solved simultaneously. Since the equations are deterministic, they can be integrated numerically with standard methods.
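As a minimal sketch of this deterministic reduction, consider the scalar test equation ẋ = -kx with k = k0 + k1·ξ and ξ uniform on [-1, 1] (k0, k1, and the horizon are illustrative values, not the paper's system). Galerkin projection onto each Legendre basis function yields one coupled deterministic ODE per PC coefficient:

```python
import numpy as np
from numpy.polynomial import legendre as L
from scipy.integrate import solve_ivp

# Galerkin projection of xdot = -(k0 + k1*xi) x, xi ~ U(-1, 1), truncated at order N
N, k0, k1 = 5, 1.0, 0.3
xi, w = L.leggauss(2 * N + 2)
P = np.array([L.legval(xi, np.eye(N + 1)[j]) for j in range(N + 1)])  # P_j at nodes

norm = 0.5 * (w * P * P).sum(axis=1)               # <P_m, P_m> = 1/(2m+1)
E = 0.5 * np.einsum('q,jq,mq->jm', w * xi, P, P)   # <xi P_j, P_m> coupling terms

def rhs(t, x):
    # One deterministic ODE per PC coefficient x_m(t)
    return -(k0 * x + k1 * (E.T @ x) / norm)

x0 = np.zeros(N + 1); x0[0] = 1.0                  # deterministic initial condition
sol = solve_ivp(rhs, (0.0, 2.0), x0, rtol=1e-10, atol=1e-12)
mean_at_T = sol.y[0, -1]                           # E[x(T)] is the 0th coefficient
print(mean_at_T)
```

Here the exact mean, E[exp(-(k0 + k1·ξ)T)], is available in closed form, so the spectral accuracy of the truncated expansion can be checked directly.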
To clearly demonstrate the theory presented above, it is implemented on a simple two-mass spring-damper system (Fig. 1). The PC expansion is used to determine the stochastic states, and the results are compared with MC simulations.
The first and second bodies have masses m1 and m2 with displacements y1 and y2, respectively. The stiffness of the spring between them is k and the damping constant is c. The deterministic control input u(t) is applied to the first mass.
Similarly, Galerkin projections on the other equations from Eqs. (21) and (22) yield a total of ((N + 1)n = 24) differential equations. These equations are solved to determine all the coefficients of the PC expansion representation of the states.
Simulating the unforced system with nonuncertain system parameters c = 1, m1 = 5, m2 = 5, and initial conditions , Fig. 2 illustrates the time evolution of the mean of the position of the first mass evaluated using the PC expansion (truncated at fifth order, i.e., N = 5). It is plotted alongside the mean derived from 10,000 Monte Carlo simulations. The overlapping curves of the Monte Carlo mean and the mean determined from PC demonstrate the convergence of the PC expansion over the given interval of time. The inset graph shows the difference between the mean estimated from the Monte Carlo simulations and the mean estimated from the PC expansion, clearly illustrating the small difference between the two estimates. The higher-order moments can be compared similarly.
Since there are two independent variables, the basis functions (orthogonal polynomial functions) are now derived from the tensor product of univariate polynomial functions. In this case, since both uncertain variables have uniform distributions, the product is between Legendre polynomials. Numerical simulation of the gPC model for two uncertain variables confirmed the agreement with the mean estimated from 10,000 Monte Carlo runs.
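The tensor-product construction can be sketched as follows (truncation by total degree is one common convention; the paper's exact truncation rule may differ):

```python
import numpy as np
from numpy.polynomial import legendre as L
from itertools import product

# With two uniform uncertain variables (xi1, xi2), each multivariate basis
# function is a tensor product of univariate Legendre polynomials:
#   Psi_(a,b)(xi1, xi2) = P_a(xi1) * P_b(xi2).
# Truncating at total degree N keeps (N+1)(N+2)/2 basis functions.
def bivariate_legendre_basis(N):
    basis = []
    for a, b in product(range(N + 1), repeat=2):
        if a + b <= N:
            basis.append((a, b,
                          lambda x1, x2, a=a, b=b:
                          L.legval(x1, np.eye(a + 1)[a]) *
                          L.legval(x2, np.eye(b + 1)[b])))
    return basis

basis = bivariate_legendre_basis(3)
print(len(basis))   # (3+1)(3+2)/2 = 10
```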
Determination of Bernstein Bounds
In numerous fields of engineering, it is often desired to determine bounds on the range of a particular state or function. If the function of interest is, or can be well approximated by, a multivariate polynomial, Bernstein polynomials can be exploited to determine these bounds. In fact, algorithms have also been proposed to determine the exact range of multivariate polynomials using Bernstein expansions [12,22]. The work presented here, however, makes use of the bounding properties of Bernstein bases to estimate tight bounds on the range of the stochastic states and uses these bounds as constraints to design a robust controller.
In Sec. 3, it was shown that gPC permits a good approximation of the states of a stochastic process as a polynomial function of the random variables. The example in particular had Legendre polynomials as the basis functions. This section highlights a procedure to find bounds on those states and to improve them. Since such bounds only exist over a compact support, the procedure applies only when the variables have distributions with bounded support.
The bounds are determined via a basis transformation from the existing bases (of the PC expansion) to the Bernstein bases. The transformation allows the exploitation of Bernstein properties to establish bounds. Garloff shows that these bounds can be improved if the domain of the Bernstein polynomials is subdivided. This subdivision is performed using de Casteljau's algorithm.
Other popular forms include the Bernstein basis and sets of orthogonal polynomials (e.g., Legendre, Jacobi, Hermite). Polynomials whose orthogonality domain is finite (Legendre, shifted Legendre, Jacobi,…) are used as bases in the PC expansion of stochastic states with compact support. Hence, the first objective is a basis transformation from the orthogonal bases to the Bernstein bases.
Equations (32)–(34) refer to linear transformations and vary according to the number of random variables, the original basis functions, and the order of truncation. For example, Legendre–Bernstein and Jacobi–Bernstein transformation methods are provided by Farouki and Rababah, respectively.
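A hedged sketch of one such transformation: the degree-n Legendre-to-Bernstein change-of-basis matrix can be built numerically by collocating both bases and solving a small linear system (Farouki and Rababah give closed-form entries; the interpolation route below is an equivalent, simpler device for illustration):

```python
import numpy as np
from numpy.polynomial import legendre as L
from math import comb

def bernstein(i, n, u):
    """Bernstein basis polynomial B_{i,n} on [0, 1]."""
    return comb(n, i) * u**i * (1 - u)**(n - i)

def legendre_to_bernstein(n):
    """Matrix M with b = M @ c: c are Legendre coefficients on xi in [-1, 1],
    b are Bernstein coefficients of the same polynomial on u in [0, 1],
    under the affine map xi = 2u - 1."""
    u = np.linspace(0.0, 1.0, n + 1)   # n+1 distinct interpolation nodes
    B = np.array([[bernstein(i, n, uq) for i in range(n + 1)] for uq in u])
    P = np.array([[L.legval(2 * uq - 1, np.eye(j + 1)[j]) for j in range(n + 1)]
                  for uq in u])
    return np.linalg.solve(B, P)       # M such that B @ M = P

M = legendre_to_bernstein(5)
c = np.array([0.2, -0.1, 0.05, 0.0, 0.0, 0.0])   # example Legendre coefficients
b = M @ c
# The min/max Bernstein coefficients bound the polynomial's range on [-1, 1]
print(b.min(), b.max())
```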
Using the property mentioned in Eq. (36), at every instant in time, the maximum and minimum values of the state are obtained directly by observing the coefficients [b10,…, b15]T. The maximum values of the coefficients provide the upper bound while the minimum values provide the lower bound. Figure 3(a) shows that the state values determined from all the MC simulations of the model lie within the envelope provided by the bounds.
Slices taken from Fig. 3(a) at times t = 11, 17, and 29 are shown in Figs. 4(a)–4(c). These time instants were selected to show a variety of convex hulls. The stars denote the control points derived from Eq. (37). The convex hull determined from the set of control points forms a superset for all points lying on the black curve (the curve of the mass 1 position with varying k). Hence, a deterministic bound is now available for the states of a stochastic process.
Similar to the one-dimensional (1D) case (one uncertain variable), a transformation can be used to convert the multivariate Legendre coefficients to multivariate Bernstein coefficients. The definitive bounds again follow directly from the coefficients and are shown in Fig. 3(b).
Slices are once again taken, at times t = 17, 19, and 27. Since there are now two uncertain parameters, the variation of the state is shown as a surface plot. Using relation (37), new control points are generated. The convex hull determined from this set of control points is shown to envelope the entire surface of the states (Figs. 5(a)–5(c)), confirming the validity of the bounds.
Determination of Interval Analysis Bounds
Interval analysis, first introduced by Moore, represents any real number x by an interval bounded by a lower limit and an upper limit. All mathematical operations between such quantities are performed on their intervals; for example, if x ∈ [a, b] and y ∈ [c, d], then x + y ∈ [a + c, b + d]. Similar definitions exist for difference, multiplication, etc., with the guarantee that the output variable will lie within the resulting interval.
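A minimal interval-arithmetic sketch makes the containment guarantee concrete (exact real arithmetic is assumed; a production implementation would round outward):

```python
# Minimal interval arithmetic: every operation returns an interval that is
# guaranteed to contain all possible results of the pointwise operation.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __sub__(self, other):
        # subtraction flips the other interval's endpoints
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def __mul__(self, other):
        # the product range is spanned by the four endpoint products
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
y = Interval(-1, 3)
print(x + y)   # [0, 5]
print(x * y)   # [-2, 6]
```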
For linear time-invariant systems (the class of systems being studied), parametric uncertainties can be mapped to an interval system matrix [A], where every element of [A] is an interval variable. The solution to an unforced linear time-invariant system with nonuncertain initial conditions and an interval system matrix [A] is given by propagating the initial state through the interval matrix exponential e[A]t. This requires the evaluation of another interval matrix, e[A]t, at every instant t, which is typically approximated by a truncated Taylor series expansion together with a bound on the truncation error.
Interval analysis suffers from the major drawback of overestimating the true intervals. In fact, calculating the true bounds of the interval matrix exponential (e[A]t) is an NP-hard problem. Althoff et al. calculate the interval exponential by determining exact bounds on the first three terms of the Taylor series expansion and overestimating the rest of the series (method 1); the truncation error is also considered. An improvement to this method is obtained by rewriting the Taylor expansion in the Horner format of polynomials, accompanied by a scaling-and-squaring process (method 2).
Both these methods allow the determination of bounds in the continuous domain. An extension can easily be made to the discrete domain. The motivation for doing so lies in the fact that determining the matrix exponential at every instant in time can be computationally expensive. In the discrete domain, however, the matrix exponential needs to be evaluated only once. For a given sampling time T, the discrete state transition matrix (e[A]T) is calculated using methods similar to those of the continuous domain. Once the interval state transition matrix is known, the interval states are propagated using the rules of interval arithmetic for matrix–vector products.
The bounds can be further improved if the entire interval is subdivided and the results from the analysis of each separate smaller interval are combined .
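The benefit of subdivision comes from the dependency problem of interval arithmetic: evaluating an expression over one wide interval treats repeated occurrences of the same variable as independent. A small sketch with f(x) = x·x − x over [0, 1] (a standard illustrative example, not from the paper):

```python
# Naive interval evaluation of f(x) = x*x - x over [0, 1] treats the two
# occurrences of x as independent, inflating the bounds. Subdividing the
# interval and taking the union of the results tightens the enclosure.
def isub(a, b):   # interval subtraction
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):   # interval multiplication
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def f_interval(x):   # interval extension of f(x) = x^2 - x
    return isub(imul(x, x), x)

whole = f_interval((0.0, 1.0))
halves = [f_interval((0.0, 0.5)), f_interval((0.5, 1.0))]
union = (min(h[0] for h in halves), max(h[1] for h in halves))
print(whole)   # (-1.0, 1.0)
print(union)   # (-0.75, 0.5): tighter, though the true range is [-0.25, 0]
```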
Figure 6 shows a comparison of the nature of the bounds determined from each of these methods as well as those determined from the Bernstein coefficients. The simulation parameters are the same as in Sec. 3.2, with the stiffness interval k = [0.7, 1.3]. For the simulation, the stiffness interval was divided into four equal subintervals, [0.7, 0.85], [0.85, 1], [1, 1.15], and [1.15, 1.3], and the bounds from each subinterval were combined. Continuous method 1 and both discrete methods (sampling time T = 0.01) quickly become very conservative and thus are not shown beyond times t = 4 and t = 6, respectively. Continuous method 2 performs best among the IA techniques. However, the bounds from the Bernstein coefficients are far superior. Hence, the Bernstein bounds are adopted over the IA techniques for the subsequent work.
Splitting Bernstein Coefficients
Definite bounds were established from the extreme values of the Bernstein coefficients in Sec. 4. However, Figs. 4(a)–5(c) suggest visually that the bounds still have room for improvement. The existing bounds can be made tighter by splitting the Bernstein coefficients along each dimension.
Splitting of Bezier curves (parametric curves which are determined by Bernstein polynomials and lie within the convex hull of their control points) was first presented by the French engineer de Casteljau. His algorithm (famously known as de Casteljau's algorithm in computer graphics) is still used today to split the convex hull into segments. In our case, this reduces the conservative nature of the bounds.
Figure 4(c) (pertaining to example 1) shows a Bezier curve with its control points marked with asterisks. De Casteljau's algorithm is used to split this Bezier curve into two separate curves, each with its own set of control points. The split can be made at an arbitrary point.
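The splitting step can be sketched directly on a coefficient vector (the coefficients below are illustrative, not values from the paper's tables):

```python
import numpy as np

# De Casteljau's algorithm: repeated convex combination of adjacent control
# points at parameter s. The first points of each stage form the left curve's
# control polygon; the last points (reversed) form the right curve's.
def de_casteljau_split(coeffs, s=0.5):
    """Split Bernstein coefficients at s; returns (left, right) coefficient sets."""
    pts = np.asarray(coeffs, dtype=float)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = (1 - s) * pts[:-1] + s * pts[1:]   # one interpolation stage
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right)[::-1]

b = [0.1, 0.9, -0.2, 0.4]
left, right = de_casteljau_split(b)
lo_split = min(left.min(), right.min())
hi_split = max(left.max(), right.max())
print(min(b), max(b))       # unsplit bounds: -0.2 0.9
print(lo_split, hi_split)   # tighter enclosure of the same curve: 0.1 0.5
```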
The procedure for problems with one uncertain variable (1D) is explained prior to making the procedure more generic.
Hence, the lower and upper bounds for the mass 1 position at time t = 29 are 0.4948 and 0.5040, respectively.
The lower bound for the mass 1 position increases and the upper bound decreases, thus improving the definitive bound. In this particular case, the range of the bounds decreases by 54.13% when the coefficients are split, indicating a substantial improvement.
Splitting of Bernstein surfaces or higher-dimensional coefficients is slightly more involved than the 1D case. An explanation of the 2D case makes it simpler to understand the procedure for higher-dimensional cases. The following discussion deals with the splitting of the Bernstein surface represented in Fig. 5(c).
Since we now have a 2D Bernstein surface, the control points must be laid out in the form of a mesh grid (Table 3). Table 3 shows all the unsplit Bernstein coefficients at t = 27, with ξ1 (i.e., k) varying horizontally and ξ2 (i.e., c) varying vertically.
Hence, the lower and upper bounds of the state at time = 27 are 0.4779 and 0.5068, respectively, for unsplit control points.
Using de Casteljau's algorithm, the coefficients are first split along ξ1 = 0.5 (i.e., k = 1) to obtain the partially split control points. The coefficients so obtained are then split along ξ2 = 0.5 (i.e., c = 1) to obtain the completely split coefficients (Table 4).
Bounds are determined in the same way from Table 4. The lower bound found is 0.4922 and the upper bound is 0.5068. Again, there is an improvement in the bounds once the coefficients are split (the split bounds improve the range by 49.52%). The convex hull formed from the split control points is shown in Fig. 7(b), as opposed to the unsplit control points in Fig. 5(c). For higher-dimensional cases, the procedure for the 2D case is followed: the splitting is carried out separately along each dimension independently, and the bounds as well as the convex hull are determined after the split control points (i.e., the split Bernstein coefficients) have been calculated.
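The two-stage procedure can be sketched on a small coefficient grid (the grid values below are illustrative, not the paper's Table 3): de Casteljau's algorithm is applied along the rows (ξ1) and then along the columns (ξ2), producing four coefficient patches whose extremes bound the surface.

```python
import numpy as np

def split_1d(pts, s=0.5):
    """De Casteljau split of a 1D Bernstein coefficient vector at parameter s."""
    pts = np.asarray(pts, dtype=float)
    left, right = [pts[0]], [pts[-1]]
    while len(pts) > 1:
        pts = (1 - s) * pts[:-1] + s * pts[1:]
        left.append(pts[0]); right.append(pts[-1])
    return np.array(left), np.array(right)[::-1]

def split_2d(grid, s1=0.5, s2=0.5):
    """Split a Bernstein coefficient grid along both axes; returns 4 patches."""
    grid = np.asarray(grid, dtype=float)
    rows = [split_1d(row, s1) for row in grid]            # split along xi1
    halves1 = [np.array([r[0] for r in rows]), np.array([r[1] for r in rows])]
    patches = []
    for half in halves1:                                  # then along xi2
        cols = [split_1d(col, s2) for col in half.T]
        patches.append(np.array([c[0] for c in cols]).T)
        patches.append(np.array([c[1] for c in cols]).T)
    return patches

grid = np.array([[0.0, 1.0, 0.0],
                 [1.0, -1.0, 1.0],
                 [0.0, 1.0, 0.0]])   # illustrative 3x3 coefficient grid
patches = split_2d(grid)
lo = min(p.min() for p in patches)
hi = max(p.max() for p in patches)
print(lo, hi)   # tighter than the unsplit range [-1, 1]
```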
It should be noted that the splitting need not be performed along all dimensions. To substantially improve the bounds, however, splitting should be carried out along the dimensions that have the most influence over the states.
Moreover, throughout this procedure, splitting has been performed once, in all cases at ξ = 0.5. Splitting the coefficients more than once is possible and does improve the bounds. However, it greatly increases the computational requirements, considering that the computations have to be performed at every time instant. Splitting can also be done at values other than ξ = 0.5 and would yield different results. It would be ideal if the splitting could be done at an optimal point; however, determining the optimal point at every stage is again expensive, and a computational compromise is made.
Determination of definite bounds on the states of a stochastic process leads to an obvious application in controller design. The information from the Bernstein coefficients can now be used to satisfy state constraints in the design of a controller robust to the uncertainties in the model.
The controller is designed in a discrete-time setting of the system. An LP formulation is used (as the constraints can be reduced to linear equalities and inequalities) to solve for the desired control input. The LP problem formulation for linear dynamical systems, as elaborated by Singh, was discussed in Sec. 2. The extension of the LP approach that incorporates the Bernstein bounds is discussed next.
Linear Programming Problem for Stochastic Processes.
The LP formulation for a stochastic process has a framework similar to that of the deterministic problem. The primary difference lies in the fact that, even though the desired final states are known, the realized final states of a stochastic process remain uncertain.
To formulate the problem considering uncertainty, the residual energy of the final states is chosen as the cost to be minimized. The residual potential energy is a function only of the position states, and the residual kinetic energy is a function only of the velocity states. Hence, minimizing the residual energy drives the final state residues toward their minimum.
However, the residual energy is a quadratic term and does not fit the requirements of an LP problem. Hence, the residual energy constraint is approximated to conform to the LP structure.
where and are the desired final states.
where and r is the dimension of z.
The final states, and hence the residual states, are a linear function of the control input. The new set of states (y and ) defined by Eq. (49) are linear transformations of the residual states. Thus, a set of linear inequalities can be framed to solve the minimax problem.
where p is the number of uncertain models selected over the space of uncertainty on which the residue is to be minimized, ulb and uub are the bounds on the control, and the state constraints are the desired state restrictions during the control operation.
where Cs = ±1; r is the dimension of z, p is the number of models selected within the region of uncertainty, and k the time index of the discrete system. As each Cs can have two values (±1), the total number of residual energy constraints becomes 2r.
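The resulting minimax structure can be sketched as a standard epigraph LP: an auxiliary variable t bounds the absolute residual of every sampled model, and t is minimized. The matrices and bounds below are random illustrative stand-ins, not the paper's system matrices:

```python
import numpy as np
from scipy.optimize import linprog

# Minimax epigraph formulation over p sampled models:
#   min t   s.t.   -t <= A_p u - d_p <= t,   ulb <= u <= uub
rng = np.random.default_rng(1)
p, n_u = 4, 6
A = rng.standard_normal((p, n_u))   # one residual row per sampled model
d = rng.standard_normal(p)

# Decision vector z = [u, t]; the cost selects t only.
c = np.zeros(n_u + 1); c[-1] = 1.0
A_ub = np.vstack([np.hstack([A, -np.ones((p, 1))]),    #  A u - d <= t
                  np.hstack([-A, -np.ones((p, 1))])])  # -A u + d <= t
b_ub = np.concatenate([d, -d])
bounds = [(-1.0, 1.0)] * n_u + [(0.0, None)]           # ulb/uub on u, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
u_opt, t_opt = res.x[:n_u], res.x[-1]
print(t_opt)   # worst-case absolute residual across the sampled models
```

Each residual-energy sign combination Cs contributes one inequality row, which is why the constraint count grows as 2r in the formulation above.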
Linear Programming Problem With Bernstein Coefficients.
In Secs. 7.1.1 and 7.1.2, it was stated that the number of models used to form the constraints was p. Accurate representation of multivariate stochastic systems by sampling requires a large number of samples, and this number increases exponentially with the dimensionality of the uncertainty. Hence, the robustness of the controller is limited by the number of model samples chosen.
However, when the stochastic process has a Bernstein formulation, the control points of the Bernstein expansion provide absolute deterministic bounds on the states. Hence, a smart sampling method that takes the model samples to be the control points (which form the convex hull) always renders the control design valid.
The case of the unsplit control points is presented first, followed by that of the split control points. Once again, the same example is used to illustrate the theory.
For this example, it is once again assumed that the uncertainty in the model is one-dimensional. The uncertainty lies in the spring constant value given by Eq. (17).
For all simulations, Kh is used as the effective stiffness matrix with the value of kp being 0.05.
where and are vectors of dimension 6 having the Bernstein coefficients of the final residual positions of mass 1 and 2, respectively. Similarly, and hold the coefficients for their residual velocities.
Now, depending on the way in which the residual energy constraint is implemented (l1 or l∞), separate formulations of the LP problem can be made.
Note that Eq. (72) is a shorthand for eight constraints. Once again, these constraints can be expressed through an output matrix Cl1. The final problem can be posed in a similar fashion as the l∞ norm formulation described in Sec. 7.1.2.
On solving these LP problems, the resulting residual energy as a function of the uncertain stiffness can be calculated for both the l1 and l∞ formulations to illustrate the relative robustness of the solutions. Figure 8(a) shows the variation of the residual energy with stiffness for both norms; the worst-case costs are comparable. Figures 8(b) and 8(c) show the residual state distributions over the entire region of uncertainty.
Linear Programming Problem With Split Bernstein Coefficients.
Using the previously illustrated example, the methodology to apply the split Bernstein coefficients to the LP problem is shown in this section.
Initially, the order of the PC expansion chosen was N = 5. Hence, each state was represented by (N + 1 = 6) coefficients. Since there are four states in the chosen problem, the total number of coefficients was ((N + 1) × 4 = 24). When the Bernstein coefficients are split, the number of coefficients representing each state increases to (2N + 1 = 11), and the total number of coefficients becomes (11 × 4 = 44).
where and are split and unsplit coefficients of the same states, respectively. As the initial PC order was N = 5 in the problem, and is used to enforce the constraints previously discussed.
On solving the split LP problem, the control input (obtained from the l∞ formulation) is used to run MC simulations of the relative displacement of the masses (Fig. 9(a)), which is shown to lie within the constraints () as well as within the envelope of the Bernstein bounds.
Figure 8(a) shows the residual energy sensitivity, and Figs. 8(b) and 8(c) show the residual state distributions. Comparisons with the unsplit Bernstein control show that the split residual energy is consistently lower than its unsplit counterpart. Figures 9(b) and 9(c) compare the unsplit and split methods on the basis of residual energy sensitivity. The dotted curves correspond to the split method, while the solid lines correspond to the unsplit method. The different colors correspond to results obtained from different orders of the PC expansion.
Once again, it is evident that the split method yields lower residual energy values at the edges (i.e., the worst-case values). Also, as expected, the worst-case residual energy improves with an increase in the order of the PC expansion.
This paper proposes exploiting Bernstein polynomials to determine bounds on the states of uncertain dynamical systems. A two-mass spring system is used to illustrate the development of a set of deterministic differential equations after parameterizing the time-invariant model parameter uncertainties using a polynomial chaos series. The coefficients of the Bernstein polynomials define a convex hull, which permits the determination of the upper and lower bounds of the uncertain evolution of the system states. Splitting the Bernstein coefficients allows the development of a tighter convex hull. Finally, a rest-to-rest maneuver for a system with uncertain stiffness is used to illustrate the design of a state constrained controller that is robust to model parameter uncertainties.
National Science Foundation, Directorate for Engineering (CMMI 1537210).