For linear dynamic systems with uncertain parameters, the design of controllers which drive a system from an initial condition to a desired final state, while respecting state constraints during the transition, is a nontrivial problem. This paper presents a methodology to design a state constrained controller, which is robust to time invariant uncertain variables. Polynomial chaos (PC) expansion, a spectral expansion, is used to parameterize the uncertain variables, permitting the evolution of the uncertain states to be written as a polynomial function of the uncertain variables. The coefficients of the truncated PC expansion are determined using the Galerkin projection, resulting in a set of deterministic equations. A transformation of the PC polynomial space to the Bernstein polynomial space permits determination of bounds on the evolving states of interest. Linear programming (LP) is then used on the deterministic set of equations, with constraints on the bounds of the states, to determine the controller. Numerical examples illustrate the benefit of the proposed technique for the design of rest-to-rest controllers that are subject to deformation constraints and robust to uncertainties in the stiffness coefficient for the benchmark spring-mass-damper system.

## Introduction

It is well known that the performance of open-loop control strategies for motion control of flexible structures is sensitive to imperfect models. This has prompted numerous researchers to investigate the problem of developing controllers that are robust to parametric uncertainties in the models [1–6]. One approach has been to design controllers for the worst-case scenario among all realizations, which corresponds to a minimax design and requires knowledge of the support of the uncertainties. A second strategy for dealing with parametric uncertainties in the model has been to reduce the sensitivity of the cost function in the proximity of the nominal model by forcing the local sensitivity to zero.

The aforementioned problem formulations have often considered minimizing the residual energy, i.e., the undesired energy in the system at the end of the maneuver. This is important in the rest-to-rest class of problems. In addition to the residual energy constraint, deflection constraints have been considered by Singhose et al. [7], where an analytical expression for the evolution of the deflection of a spring-mass system is derived and its gradient is forced to zero to identify the time instants which correspond to maximum or minimum deflections. Deflection sampling is also used to constrain the permitted deflection in the design of the controller. Vossler and Singh [8] addressed the design of optimal control profiles for systems with uncertain parameters, for rest-to-rest maneuvers with state constraints. They formulated a linear programming (LP) problem whose cost function minimizes the maximum excursion of the states over the duration of the maneuver, called minimax control. This, in conjunction with constraints on the terminal energy, resulted in a robust state and energy constrained controller. A uniform sampling of the uncertain space was used to solve the minimax problem. Monte Carlo (MC) realizations can also be used to sample the uncertain space. However, it is well known that MC methods suffer from slow convergence, with the convergence rate being inversely proportional to the square root of the number of sample points. This means that one needs to increase the number of samples by 100 to improve the accuracy by one decimal place. Furthermore, there is no guarantee that model parameters which span the complement of the finite samples will not violate the state and energy constraints. This paper considers applications where constraints have to be imposed on states over the duration of the maneuver, in the presence of model parameter uncertainties.
The proposed approach does not depend on sampling to identify the realization(s), which correspond to the active constraints.

Building on the rich literature of the use of polynomial chaos (PC) in control [9–13], Nandi et al. [14] present an approach which rewrites the polynomials associated with the specified distributions of the uncertain variables using Bernstein polynomials. The mapping permits determination of the time evolution of the bounds on the uncertain states and is illustrated on a numerical example with one uncertain parameter. This paper generalizes the development in Ref. [14] and presents a systematic technique to generate the tight bounds on the evolving states which are required for the controller design. In addition, since the illustrative examples in the paper include uniformly distributed uncertain parameters, comparisons to interval analysis (IA) methods are also presented. It is observed that bounds determined from IA are severely conservative, making the Bernstein formulation a more appropriate choice. Then, an optimization problem is posed to ensure that the state constraints are not violated for all possible realizations of the uncertain variables. The proposed approach is attractive since it does not require Monte Carlo simulations to estimate the bounds on the states. Finally, an LP problem can be posed to determine a controller which is robust to the uncertain variables. Although the individual ideas of a PC expansion of stochastic states, the Bernstein formulation to bound polynomials, and control input design via linear programming are not new, the novelty of the work lies in the fluid integration of the three frameworks, which has not been studied before.

The structure of the paper is as follows: In Sec. 2, an LP-based approach for the design of optimal control profiles, which are robust to model uncertainties, is presented. This motivates the need for the development of a PC-based approach for state constrained control of uncertain systems. A review of PC is provided in Sec. 3 to characterize uncertainties and propagate them through the dynamical model. This is followed in Sec. 4 by the PC mapping using Bernstein polynomial basis functions, which illustrates how the bounds on the states are determined. Section 5 presents the bounds determined from IA and provides motivation for using the Bernstein method over IA. Section 6 elaborates on the improvement of these bounds by splitting the Bernstein coefficients. Section 7 illustrates the robust controller design using LP and the numerical results corresponding to the benchmark floating oscillator problem.

## Controller Design

This section describes the linear programming-based approach for robust optimal control, following the development in Ref. [15]. The problem is formulated for rest-to-rest maneuvers of vibratory systems in the presence of model uncertainties using the minimax approach [8], where the inclusion of state constraints into the design results in a much more challenging design problem. The resulting problem statement for state-limited control is
$\min_{u(t)} F = \left[\max_{\alpha} E(\alpha)\right]$
(1a)

$\text{subject to} \quad \dot{x}(t) = A_c x(t) + B_c u(t)$
(1b)

$x(t_0) = x_0; \quad x(t_f) = x_f$
(1c)

$u_{lb} \le u(t) \le u_{ub} \quad \forall\, t$
(1d)

$\text{state constraints} \quad \forall\, t$
(1e)
where $x(t) = [p^T, v^T]^T \in \mathbb{R}^n$, $u(t) \in \mathbb{R}^k$, and $u_{lb}$ and $u_{ub}$ are the lower and upper bounds on the control input, respectively. $E(\alpha)$ is the residual energy given by
$E(\alpha) = \frac{1}{2} v(\alpha)^T M v(\alpha) + \frac{1}{2} [p(\alpha) - p_{ref}]^T K(\alpha) [p(\alpha) - p_{ref}]$
(2)
where M is the mass matrix, $K(\alpha)$ is the uncertain stiffness matrix, and $p(\alpha)$ and $v(\alpha)$ are the position and velocity states corresponding to a realization of the uncertain parameter $\alpha$, respectively. $\alpha$ represents a finite parametric set of the uncertain variables that must lie within the domain of uncertainty specified by
$\alpha_{lb} \le \alpha \le \alpha_{ub}$
(3)

where $\alpha_{lb}$ and $\alpha_{ub}$ represent the lower and upper bounds, respectively. The nonlinear optimization problem can be transformed into a convex problem in the discrete domain by posing it as a linear programming problem [15].

Consider the discrete time realization of Eq. (1b)
$x(k+1)=Ax(k)+Bu(k)$
(4)
The recursive relation of Eq. (4) can be expressed as a linear expression of the initial conditions and the control input u as
$x(k+1) = A^k x(1) + \sum_{i=1}^{k} A^{k-i} B\, u(i)$
(5)

where $x(1)$ is the initial condition of the states.

The objective of the problem is to determine a control input u, which drives the dynamical system from an initial state $x(1)$ to a final state $x(N_t+1)$, where $k = N_t$ is the final iterative step of Eq. (4) in a given finite interval of time. Thus, the known quantities of the problem are the initial time ($T_0$), final time ($T_f$), initial conditions ($x(1)$), and final conditions ($x(N_t+1)$).

The LP problem is posed as a feasibility problem in this case with linear constraints. From Eq. (5), the final condition constraint is given by
$\begin{bmatrix} A^{N_t-1}B & A^{N_t-2}B & \dots & AB & B \end{bmatrix} \begin{bmatrix} u(1) \\ u(2) \\ \vdots \\ u(N_t) \end{bmatrix} = x(N_t+1) - A^{N_t} x(1)$
(6)
State constraints can be implemented by
$C A^k x(1) + \sum_{i=1}^{k} C A^{k-i} B\, u(i) \le \Phi \quad \forall\, k \in [1:N_t]$
(7)
where C is the output matrix and Φ represents a prespecified bound. The control constraints are implemented as
$u_{lb} \le u(k) \le u_{ub} \quad \forall\, k \in [1:N_t]$
(8)

where $u_{lb}$ and $u_{ub}$ are the lower and upper bounds on the control inputs. The solution to the problem stated above yields the desired control input at all the time instants ($k \in [1:N_t]$).
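As a concrete sketch of the feasibility LP built from Eqs. (6)–(8), the following snippet is our illustration: it assumes a simple discrete-time double integrator (not the paper's spring-mass-damper model), with an arbitrary horizon, sampling time, and control bounds.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discrete-time double integrator with sampling time T and an
# illustrative horizon; the paper's A, B would come from discretizing the
# spring-mass-damper dynamics of Eq. (1b).
T, Nt = 0.1, 50
A = np.array([[1.0, T], [0.0, 1.0]])
B = np.array([[0.5 * T**2], [T]])

x0 = np.array([0.0, 0.0])   # initial state x(1)
xf = np.array([1.0, 0.0])   # desired final state x(Nt + 1)

# Equality constraint of Eq. (6): [A^(Nt-1)B ... AB  B] u = xf - A^Nt x0
G = np.hstack([np.linalg.matrix_power(A, Nt - 1 - i) @ B for i in range(Nt)])
b_eq = xf - np.linalg.matrix_power(A, Nt) @ x0

# Feasibility LP: zero cost, with the control bounds of Eq. (8)
res = linprog(c=np.zeros(Nt), A_eq=G, b_eq=b_eq,
              bounds=[(-1.0, 1.0)] * Nt, method="highs")
u = res.x

# Simulate Eq. (4) forward to confirm the rest-to-rest transfer
x = x0.copy()
for k in range(Nt):
    x = A @ x + B[:, 0] * u[k]
```

With a zero cost vector the LP is a pure feasibility problem; state constraints of Eq. (7) would be appended as additional inequality rows on the same decision vector.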

Singh [16] proposed a linear programming-based approach for the design of optimal controllers that are robust to model parameter uncertainties. This was accomplished by uniformly sampling the compact support of the uncertainty. A minimax problem was then formulated to minimize the worst-case residual energy at the end of the maneuver over the sampled models. It was shown that the resulting solution closely approximated the solution generated by the nonlinear programming problem.

When state constraints are included in the controller design problem, a large number of additional constraints are included in the linear programming problem. These constraints correspond to the satisfaction of the state constraint at every time instant. Since the accuracy of the optimal control improves as the sampling time decreases, one needs to tradeoff the computational cost of the additional state constraints to the improved representation of the optimal control. When one endeavors to solve the state-constrained optimal control problem in the presence of model uncertainties, the problem quickly becomes intractable as the number of uncertainties grows. To address this issue, a polynomial chaos-based approach is proposed in this paper.

## Polynomial Chaos Expansion

Polynomial chaos was first introduced by Norbert Wiener in his 1938 article [17], in which he approximated a stochastic state following a Gaussian process by an infinite series expansion of orthogonal Hermite polynomials. Later, in 1947, Cameron and Martin [18] proved that such an expansion always converges for stochastic processes with a finite variance. Years later, this property was used by Ghanem and Spanos in their book [19] to solve stochastic differential equations. Instead of an infinite expansion, they truncated the series to a finite number of terms, used the Galerkin projection to formulate a set of deterministic equations, and finally solved these equations to obtain the coefficients of the original series expansion. Xiu and Karniadakis [20] generalized the concept of PC to express a generic stochastic process as a series expansion of appropriate orthogonal polynomials (given by the Wiener–Askey scheme). This development was termed the generalized polynomial chaos (gPC) theory, and it is the concept used in the present work to approximate a stochastic process with uniformly distributed variables.

A detailed formulation of the PC expansion is presented in this section and illustrated on a benchmark problem.

### Methodology.

Let a stochastic dynamical system be expressed in the form
$\dot{x}(t, \xi) = f(x(t), \xi, u(t)) \quad \text{and} \quad x(t_0, \xi) = x_0$
(9)

where $x \in \mathbb{R}^n$ is the state vector, $\xi \in \mathbb{R}^m$ is the vector of random variables with known probability distributions, $x_0 \in \mathbb{R}^n$ is the initial state (which may also be uncertain), and $u(t)$ is the control input.

From the theory of gPC, the states can be expressed as
$x(t, \xi) = \sum_{i=0}^{\infty} x_i(t) \Psi_i(\xi)$
(10)

where $\Psi_i(\xi)$ is a complete set of multivariate orthogonal polynomials depending on the type of distribution of $\xi$, and $x_i \in \mathbb{R}^n$ is the time-varying coefficient vector of the polynomial $\Psi_i(\xi)$.

The selection of the set of orthogonal polynomials for popular distributions is given by the Wiener–Askey scheme [20]. The expansion is typically truncated to a finite number of terms as an approximation of the expression. The number of terms to be included in the approximation is determined by the degree of accuracy desired: the larger the number of terms (N), the longer the time span over which the states are accurate [20]. Hence, Eq. (10) is rewritten as
$x(t, \xi) = \sum_{i=0}^{N} x_i(t) \Psi_i(\xi)$
(11)
which is a vector equation
$\underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}}_{x} = \underbrace{\begin{bmatrix} x_{10} \\ x_{20} \\ \vdots \\ x_{n0} \end{bmatrix}}_{x_0} \Psi_0 + \underbrace{\begin{bmatrix} x_{11} \\ x_{21} \\ \vdots \\ x_{n1} \end{bmatrix}}_{x_1} \Psi_1 + \dots + \underbrace{\begin{bmatrix} x_{1N} \\ x_{2N} \\ \vdots \\ x_{nN} \end{bmatrix}}_{x_N} \Psi_N$
(12)

where $x_i$ is the ith state and $x_{ij}$ is the coefficient of the jth polynomial belonging to the ith state.

The objective is to determine all the time varying coefficients, i.e., xij. The basis functions of the states, i.e., Ψi, are a set of orthogonal polynomials with respect to the probability density function (pdf) of the random variable. If a system has multivariate independent random variables, the basis functions are the multivariate polynomials derived from the tensor product of the univariate basis functions of each random variable. To continue with the PC formulation, Eq. (11) is substituted in Eq. (9) to get
$\sum_{i=0}^{N} \dot{x}_i(t) \Psi_i(\xi) = f\!\left(\sum_{i=0}^{N} x_i(t) \Psi_i(\xi), \xi, u(t)\right)$
(13)

The essence of the PC expansion is to form a set of deterministic differential equations from the stochastic equation (13), whose solution allows us to approximate the states over time. These equations are formed by taking the Galerkin projection of Eq. (13) onto each of the orthogonal basis functions (i.e., $\Psi_k$, for k = 0, 1, …, N). The solution to these equations yields the desired coefficients $x_{ij}$.

The kth deterministic differential equation obtained from the Galerkin projection is given by
$\left\langle \sum_{i=0}^{N} \dot{x}_i(t) \Psi_i(\xi), \Psi_k(\xi) \right\rangle = \left\langle f\!\left(\sum_{i=0}^{N} x_i(t) \Psi_i(\xi), \xi, u(t)\right), \Psi_k(\xi) \right\rangle$
(14)
The orthogonality of the polynomials permits writing the inner product as
$\langle \Psi_i(\xi), \Psi_j(\xi) \rangle = \int_{\Omega} \Psi_i(\xi) \Psi_j(\xi)\, \mathrm{pdf}(\xi)\, d\xi = c_i \delta_{ij}$
(15)
where $\delta_{ij}$ is the Kronecker delta function, $c_i$ is a coefficient whose value depends on the orthogonal polynomials, and $\Omega$ is the support of the distribution of the random variables. Now, Eq. (14) can be simplified to yield
$\dot{x}_k(t) = \frac{1}{c_k} \int_{\Omega} f\!\left(\sum_{i=0}^{N} x_i \Psi_i(\xi), \xi, u(t)\right) \Psi_k(\xi)\, \mathrm{pdf}(\xi)\, d\xi$
(16)
for k = 0, 1,…, N.

Equation (16) is a set of (N + 1) vector differential equations with the vectors having n elements each. Hence, a total of (N + 1)n scalar equations need to be solved simultaneously. Since the equations are deterministic, they can be easily numerically evaluated.
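As an illustration of how Eq. (16) yields a deterministic coefficient system, the sketch below uses a deliberately simplified scalar system of our own choosing, $\dot{x} = -(1 + 0.3\xi)x$ with $\xi \sim U(-1,1)$ (not the paper's two-mass model), assembles the Galerkin-projected ODEs with Legendre polynomials and Gauss–Legendre quadrature, and checks the mean against the exact solution.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss
from scipy.integrate import solve_ivp

# Galerkin-projection sketch for an assumed scalar test system:
# xdot = -(1 + 0.3 xi) x,  xi ~ U(-1, 1),  basis Psi_0 ... Psi_N (Legendre)
N = 5
nodes, weights = leggauss(20)   # Gauss-Legendre quadrature on [-1, 1]
pdf = 0.5                       # uniform density on [-1, 1]

def psi(i, xi):
    """Legendre polynomial of degree i evaluated at xi."""
    return legval(xi, [0.0] * i + [1.0])

# c_k = <Psi_k, Psi_k> and the projected dynamics <-(1 + 0.3 xi) Psi_i, Psi_k>
c = np.array([np.sum(weights * psi(k, nodes) ** 2 * pdf) for k in range(N + 1)])
L = np.array([[np.sum(weights * -(1.0 + 0.3 * nodes)
                      * psi(i, nodes) * psi(k, nodes) * pdf) / c[k]
               for i in range(N + 1)] for k in range(N + 1)])

# Deterministic coefficient ODEs of Eq. (16): xdot_k = sum_i L[k, i] x_i
x_init = np.zeros(N + 1)
x_init[0] = 1.0                 # deterministic initial condition x(0) = 1
sol = solve_ivp(lambda t, x: L @ x, (0.0, 2.0), x_init, rtol=1e-10, atol=1e-12)

# The zeroth coefficient is the mean; compare with the exact mean
mean_pc = sol.y[0, -1]
mean_exact = np.sum(weights * np.exp(-(1.0 + 0.3 * nodes) * 2.0) * pdf)
```

Because the uncertain parameter enters the dynamics linearly, the projected system is itself linear in the coefficients, mirroring the structure exploited in the paper.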

### Illustrative Example.

To clearly demonstrate the theory presented above, it is implemented on a simple two-mass spring-damper system (Fig. 1). The PC expansion is used to determine the stochastic states, and the results are compared with MC simulations.

The first and the second bodies have masses m1 and m2 with displacements y1 and y2, respectively. The coefficient of the spring between them is k and the damping constant is c. The deterministic control input u(t) is applied to the first mass.

Assuming that the spring constant is uncertain with a uniform distribution
$k=U(0.7,1.3)$
(17)
the Wiener–Askey scheme requires the orthogonal polynomials of the expansion to be Legendre polynomials, which are orthogonal on the domain [−1, 1]. Hence, the spring constant (k) is defined in terms of another random variable ($\xi$) with this domain, i.e., $\xi = U(-1, 1)$, and thus we have
$k=1+0.3ξ$
(18)
Truncating the PC expansion to fifth order, i.e., N = 5, the states are expanded as
$\begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} = \begin{bmatrix} y_{10} \\ y_{20} \\ y_{30} \\ y_{40} \end{bmatrix} \Psi_0 + \begin{bmatrix} y_{11} \\ y_{21} \\ y_{31} \\ y_{41} \end{bmatrix} \Psi_1 + \dots + \begin{bmatrix} y_{15} \\ y_{25} \\ y_{35} \\ y_{45} \end{bmatrix} \Psi_5$
(19)
and the dynamic system in terms of the uncertain variable is
$\begin{bmatrix} \dot{y}_1 \\ \dot{y}_2 \\ \dot{y}_3 \\ \dot{y}_4 \end{bmatrix} = \underbrace{\begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -\frac{1+0.3\xi}{m_1} & \frac{1+0.3\xi}{m_1} & -\frac{c}{m_1} & \frac{c}{m_1} \\ \frac{1+0.3\xi}{m_2} & -\frac{1+0.3\xi}{m_2} & \frac{c}{m_2} & -\frac{c}{m_2} \end{bmatrix}}_{A} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \end{bmatrix} + \underbrace{\begin{bmatrix} 0 \\ 0 \\ \frac{1}{m_1} \\ 0 \end{bmatrix}}_{B} u(t)$
(20)
Differentiating Eq. (19) with respect to time and equating it to Eq. (20), we get four equations
$\dot{y}_{10}\Psi_0 + \dot{y}_{11}\Psi_1 + \dots + \dot{y}_{15}\Psi_5 = A_{11}(y_{10}\Psi_0 + \dots + y_{15}\Psi_5) + A_{12}(y_{20}\Psi_0 + \dots + y_{25}\Psi_5) + \dots + A_{14}(y_{40}\Psi_0 + \dots + y_{45}\Psi_5) + B_{11}u(t)$
(21)

$⋮$

$\dot{y}_{40}\Psi_0 + \dot{y}_{41}\Psi_1 + \dots + \dot{y}_{45}\Psi_5 = A_{41}(y_{10}\Psi_0 + \dots + y_{15}\Psi_5) + A_{42}(y_{20}\Psi_0 + \dots + y_{25}\Psi_5) + \dots + A_{44}(y_{40}\Psi_0 + \dots + y_{45}\Psi_5) + B_{41}u(t)$
(22)
where Aij and Bij are the ith row-jth column elements of A and B, respectively. Taking the Galerkin projection on Eq. (21), we get (N + 1 = 6) deterministic equations
$\dot{y}_{10} \langle \Psi_0, \Psi_0 \rangle = f_{10}(y_{10}, \dots, y_{4N}, u)$
(23)

$⋮$
(24)

$\dot{y}_{15} \langle \Psi_5, \Psi_5 \rangle = f_{15}(y_{10}, \dots, y_{4N}, u)$
(25)

Similarly, Galerkin projections on the other equations from Eqs. (21) and (22) yield a total of ((N + 1)n = 24) differential equations. These equations are solved to determine all the coefficients of the PC expansion representation of the states.

Simulating the unforced system with the deterministic parameters c = 1, $m_1 = 5$, $m_2 = 5$, and initial conditions $y(0) = (1, 0, 0, 0)^T$, Fig. 2 illustrates the time evolution of the mean of the position of the first mass evaluated using the PC expansion (truncated to fifth order, i.e., N = 5), plotted alongside the mean derived from 10,000 Monte Carlo simulations. The overlapping curves for the Monte Carlo mean and the mean determined from PC demonstrate the convergence of the PC expansion over the given interval of time. The inset graph illustrates the difference between the mean estimated from the Monte Carlo simulations and the mean estimated from the PC expansion, clearly showing the small difference between the two estimates. The higher-order moments can be compared similarly.

To illustrate the development of the PC model for two uncertain variables, consider the mass spring damper system with the following uncertainties:
$k=U(0.7,1.3); c=U(0.8,1.2)$
(26)
Following the same argument as before, new random variables are defined as:
$\xi_1 = U(-1, 1); \quad \xi_2 = U(-1, 1)$
(27)
$k = 1 + 0.3\xi_1; \quad c = 1 + 0.2\xi_2$
(28)

Since there are two independent variables, the basis functions (orthogonal polynomial functions) are now derived from the tensor product of the univariate polynomial functions. In this case, since both uncertain variables have uniform distributions, the product is between Legendre polynomials. Numerical simulation of the gPC model for two uncertain variables confirmed the agreement with the mean estimated from 10,000 Monte Carlo runs.
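The tensor-product construction can be checked numerically. The sketch below (our illustration, with an arbitrary univariate degree d = 2) builds the bivariate Legendre basis $\Psi_{(i,j)}(\xi_1, \xi_2) = P_i(\xi_1) P_j(\xi_2)$ and verifies its orthogonality under the joint uniform pdf by quadrature.

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval, leggauss

# Tensor-product Legendre basis for two uniform variables, truncated at an
# illustrative univariate degree d = 2
d = 2

def legendre(i, x):
    return legval(x, [0.0] * i + [1.0])

# Multivariate basis indices: Psi_(i,j)(xi1, xi2) = P_i(xi1) * P_j(xi2)
basis = list(product(range(d + 1), repeat=2))

# Gram matrix <Psi_a, Psi_b> under the joint pdf (1/2)(1/2), by quadrature
nodes, weights = leggauss(10)
G = np.zeros((len(basis), len(basis)))
for a, (i, j) in enumerate(basis):
    for bb, (k, l) in enumerate(basis):
        G[a, bb] = sum(w1 * w2 * 0.25
                       * legendre(i, x1) * legendre(j, x2)
                       * legendre(k, x1) * legendre(l, x2)
                       for x1, w1 in zip(nodes, weights)
                       for x2, w2 in zip(nodes, weights))

# Off-diagonal entries vanish: the tensor-product basis is orthogonal
off_diag = G - np.diag(np.diag(G))
```

The diagonal entries are the normalization constants $c_i$ of Eq. (15), which for this basis equal $1/((2i+1)(2j+1))$.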

## Determination of Bernstein Bounds

In numerous fields of engineering, it is often desired to determine the bounds on the range of a particular state or function. If the particular function of interest is, or can be well approximated by a multivariate polynomial, Bernstein polynomials can be exploited to determine these bounds [21]. In fact, algorithms have also been proposed to determine the exact range of multivariate polynomials using Bernstein expansions [12,22]. The work presented here, however, makes use of the bounding properties of Bernstein bases to estimate tight bounds on the range of stochastic states and use these bounds as constraints to design a robust controller.

In Sec. 3, it was shown that gPC provides a good approximation of the states of a stochastic process as a polynomial function of the random variables; the example in particular used Legendre polynomials as the basis functions. This section highlights a procedure to find bounds on those states and to improve them. Since such bounds only exist over a compact support, the procedure applies only when the variables have distributions with bounded support.

Bounds' determination is done by a basis transformation from the existing bases (of PC expansion) to the Bernstein bases. The transformation allows the exploitation of Bernstein properties to establish bounds. Garloff [23] shows that these bounds can be improved if the domain of the Bernstein polynomials is subdivided. This subdivision to improve bounds is done using De Casteljau's algorithm. The algorithm can be found in Ref. [24].

### Methodology.

A polynomial function can be expressed in terms of other polynomial bases. The most common form used is the power bases, i.e.,
$1, \xi, \xi^2, \xi^3, \dots$
(29)

Other popular forms include the Bernstein basis and the sets of orthogonal polynomials (e.g., Legendre, Jacobi, Hermite). Polynomials whose orthogonality domain is finite (Legendre, shifted Legendre, Jacobi, …) are used as bases in the PC expansion of stochastic states with compact support. Hence, the first objective is a basis transformation from the orthogonal bases to the Bernstein bases.

A stochastic state expressed as a PC expansion
$x_j(t, \xi) = \sum_{i=0}^{N} x_{ji}(t) \Psi_i(\xi)$
(30)
where $x_j$ is the jth state of the model, $\Psi_i$ are the multivariate orthogonal bases (Legendre, Jacobi, …), and $x_{ji}(t)$ are the corresponding coefficients of the bases, now needs to be expressed as
$x_j(t, \xi) = \sum_{i=0}^{N} b_{ji}(t) B_i(\xi)$
(31)

where $B_i$ are the multivariate Bernstein polynomials with domain $\xi \in [0, 1]^m$ and $b_{ji}(t)$ are their coefficients. This linear transformation (from Eq. (30) to Eq. (31)) of the coefficients involves three stages.

The first stage requires $\Psi_i$ (which have an orthogonality domain [a, b]) to be transformed to orthogonal bases $\Psi'_i$ with orthogonality domain [0, 1], since the Bernstein polynomial property to be utilized holds only on that domain. Hence, the first transformation is given by
$x_j(t, \xi) = \sum_{i=0}^{N} x_{ji}(t) \Psi_i(\xi) \;\rightarrow\; x_j(t, \xi) = \sum_{i=0}^{N} x'_{ji}(t) \Psi'_i(\xi)$
(32)
The second stage requires the transformation from $\Psi'_i$ to the power bases $P_i$
$x_j(t, \xi) = \sum_{i=0}^{N} x'_{ji}(t) \Psi'_i(\xi) \;\rightarrow\; x_j(t, \xi) = \sum_{i=0}^{N} p_{ji}(t) P_i(\xi)$
(33)
The final transformation gives the desired polynomial expressed in terms of multivariate Bernstein bases
$x_j(t, \xi) = \sum_{i=0}^{N} p_{ji}(t) P_i(\xi) \;\rightarrow\; x_j(t, \xi) = \sum_{i=0}^{N} b_{ji}(t) B_i(\xi)$
(34)

Equations (32)–(34) are linear transformations that vary according to the number of random variables, the original basis functions, and the order of truncation. For example, Legendre–Bernstein and Jacobi–Bernstein transformation methods are provided by Farouki [25] and Rababah [26], respectively.
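A one-variable sketch of this three-stage chain is shown below. The helper names are ours, and the power-basis coefficients are recovered by interpolation rather than by the closed-form matrices of Refs. [25] and [26]; the sanity check confirms both representations describe the same polynomial.

```python
import numpy as np
from math import comb
from numpy.polynomial import Legendre

# Shifted Legendre -> power basis -> Bernstein basis on [0, 1], degree d = 5
d = 5

# Stages 1-2: power-basis coefficients of the shifted Legendre polynomials
# P_i(2x - 1), recovered exactly by interpolation at d + 1 points.
xs_fit = np.linspace(0.0, 1.0, d + 1)
V = np.vander(xs_fit, increasing=True)
T = np.column_stack([np.linalg.solve(V, Legendre.basis(i, domain=[0, 1])(xs_fit))
                     for i in range(d + 1)])

# Stage 3: power -> Bernstein map on [0, 1]: b_i = sum_{k<=i} C(i,k)/C(d,k) p_k
M = np.array([[comb(i, k) / comb(d, k) if k <= i else 0.0
               for k in range(d + 1)] for i in range(d + 1)])

def legendre_to_bernstein(x_coefs):
    """Map shifted-Legendre coefficients to Bernstein coefficients."""
    return M @ (T @ np.asarray(x_coefs))

def bernstein_eval(b, x):
    """Evaluate a degree-d Bernstein-form polynomial at x in [0, 1]."""
    return sum(b[i] * comb(d, i) * x**i * (1 - x)**(d - i) for i in range(d + 1))

# Sanity check with arbitrary illustrative coefficients
coefs = np.array([0.5, -0.2, 0.1, 0.0, 0.05, -0.01])
b = legendre_to_bernstein(coefs)
xs = np.linspace(0.0, 1.0, 7)
direct = sum(c * Legendre.basis(i, domain=[0, 1])(xs) for i, c in enumerate(coefs))
```

The composed matrix `M @ T` plays the role of the Legendre–Bernstein transformation matrix used in the illustrative example below.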

Multivariate Bernstein polynomial bases are derived from the tensor product of univariate Bernstein polynomials. Without loss of generality, the subscript j is dropped (to indicate any state member) for convenience from Eq. (31), and the equation is rewritten in terms of Bernstein polynomials as
$x(t, \xi) = \sum_{\xi_1, \xi_2, \dots, \xi_m = 0}^{d, d, \dots, d} b_{\xi_1, \xi_2, \dots, \xi_m}(t)\, B^d_{\xi_1, \xi_2, \dots, \xi_m}(\xi)$
(35)
where $b_{\xi_1, \xi_2, \dots, \xi_m}$ are the Bernstein coefficients, $B^d_{\xi_1, \xi_2, \dots, \xi_m}(\xi) = \prod_{i=1}^{m} B^d_{\xi_i}(\xi_i)$ are the m-dimensional Bernstein polynomials, and d is the degree of the univariate polynomials. Now, the range enclosing property of Bernstein polynomials over the box $\xi \in [0, 1]^m$
$x(t, \xi) \subseteq \left[ \min_{\xi_1, \xi_2, \dots, \xi_m = 0}^{d, d, \dots, d} b_{\xi_1, \xi_2, \dots, \xi_m},\; \max_{\xi_1, \xi_2, \dots, \xi_m = 0}^{d, d, \dots, d} b_{\xi_1, \xi_2, \dots, \xi_m} \right]$
(36)
is used to immediately obtain bounds on the state expressions.
Equation (36) can also be expressed as a convex hull property by defining control points
$\left\{ \begin{pmatrix} \xi \\ x(t, \xi) \end{pmatrix} : \xi \in [0, 1]^m \right\} \subseteq \mathrm{conv}\left\{ \begin{pmatrix} a + (b-a)\left(\frac{\xi_1}{d}, \frac{\xi_2}{d}, \dots, \frac{\xi_m}{d}\right) \\ b_{\xi_1, \xi_2, \dots, \xi_m}(t) \end{pmatrix} : \xi_1, \xi_2, \dots, \xi_m = 0, 0, \dots, 0 \text{ to } d, d, \dots, d \right\}$
(37)
where conv{M} denotes the convex hull of set M.
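The range-enclosing property of Eq. (36) can be verified numerically. The sketch below uses an arbitrary illustrative degree-5 polynomial in one variable, converts its power-basis coefficients to Bernstein form, and checks that all sampled values lie between the smallest and largest Bernstein coefficient.

```python
import numpy as np
from math import comb

# Numerical check of the range-enclosing property in one dimension, using
# random power-basis coefficients on [0, 1] (illustrative data only)
rng = np.random.default_rng(0)
d = 5
p = rng.standard_normal(d + 1)

# Power -> Bernstein: b_i = sum_{k<=i} C(i,k)/C(d,k) p_k
b = np.array([sum(comb(i, k) / comb(d, k) * p[k] for k in range(i + 1))
              for i in range(d + 1)])

# Every polynomial value on [0, 1] lies between min(b) and max(b), and the
# end coefficients interpolate the endpoint values
xs = np.linspace(0.0, 1.0, 1001)
vals = np.polyval(p[::-1], xs)
lo, hi = b.min(), b.max()
```

The endpoint interpolation (first and last Bernstein coefficients equal the polynomial values at 0 and 1) is why the bounds are exact at the corners of the box and conservative only in the interior.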

### Illustrative Example.

The solution to Eqs. (23)–(25) gives the PC expansion of the first state, i.e., the position of mass 1. Only this state is analyzed henceforth; the analysis for the other states can be done in a similar manner
$y_1 = y_{10}\Psi_0 + y_{11}\Psi_1 + \dots + y_{15}\Psi_5$
(38)
The Legendre coefficients $[y_{10}, \dots, y_{15}]^T$ are transformed to Bernstein coefficients $[b_{10}, \dots, b_{15}]^T$ using the methodology described in Ref. [25], where a transformation matrix is developed to transform multivariate Legendre polynomial coefficients to multivariate Bernstein ones based on Eqs. (32)–(34). Therefore, $y_1$ is expressed as
$y_1(t) = b_{10}(t) B^5_0(\xi) + \dots + b_{15}(t) B^5_5(\xi)$
(39)

Using the property stated in Eq. (36), at every instant in time the maximum and minimum values of the state are obtained directly by observing the coefficients $[b_{10}, \dots, b_{15}]^T$: the maximum value of the coefficients provides the upper bound, while the minimum value provides the lower bound. Figure 3(a) shows that the state values determined from all the MC simulations of the model lie within the envelope provided by the bounds.

Slices taken from Fig. 3(a) at times t = 11, 17, and 29 are shown in Figs. 4(a)–4(c). These time instants were selected to show a variety of convex hulls. The stars denote the control points derived from Eq. (37). The convex hull determined from the set of control points forms a superset for all points lying on the black curve (the curve of the mass 1 position with varying k). Hence, a deterministic bound is now available for the states of a stochastic process.

Similar to the one-dimensional (1D) case (one uncertain variable), a transformation can be used to convert the multivariate Legendre coefficients to multivariate Bernstein coefficients [25]. The definitive bounds once again drop out directly from the coefficients and are shown in Fig. 3(b).

Slices are once again taken, at times t = 17, 19, and 27. Since there are now two uncertain parameters, the variation of the state is shown as a surface plot. Using relation (37), new control points are generated. A convex hull is determined from the set of these control points and is shown to envelope the entire surface of the states (Figs. 5(a)–5(c)), thus proving the validity of the bounds.

## Determination of Interval Analysis Bounds

Interval analysis, first introduced by Moore [27], represents any real number x by an interval bounded by a lower limit and an upper limit (i.e., $x = [\underline{x}, \overline{x}]$). All mathematical operations between such quantities are done in terms of their intervals. For example, if $x = [\underline{x}, \overline{x}]$ and $y = [\underline{y}, \overline{y}]$, then $x + y = [\underline{x} + \underline{y}, \overline{x} + \overline{y}]$. Similar definitions exist for difference, multiplication, etc., with the guarantee that the output variable will lie within the resulting interval.
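A minimal sketch of Moore-style interval arithmetic (our illustration; a production implementation would also use outward rounding) shows both the rules above and the dependency problem, one source of the conservatism discussed in this section.

```python
from dataclasses import dataclass

# Minimal interval type with Moore-style arithmetic (outward rounding omitted)
@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # [x] + [y] = [x_lo + y_lo, x_hi + y_hi]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [x] - [y] = [x_lo - y_hi, x_hi - y_lo]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product interval is bounded by the endpoint products
        ps = (self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi)
        return Interval(min(ps), max(ps))

    def contains(self, value):
        return self.lo <= value <= self.hi

# Dependency problem: k - k evaluates to [-0.6, 0.6] rather than [0, 0],
# because interval arithmetic treats the two occurrences as independent
k = Interval(0.7, 1.3)
widened = k - k
```

Repeated occurrences of the same uncertain quantity are treated as independent, which is precisely why naive interval propagation of the state equations overestimates the true reachable set.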

For linear time invariant systems (the class of systems being studied), parametric uncertainties can be mapped to an interval system matrix [A], where every element of [A] is an interval variable. The solution to an unforced linear time invariant system with deterministic initial conditions and an interval system matrix [A] is given by $x(t) = e^{[A]t} x(0)$. This expression requires the evaluation of another interval matrix, $e^{[A]t}$, at every instant t, which is typically approximated by a truncated Taylor series expansion together with a bound on the truncation error [28].

Interval analysis suffers from the major drawback of overestimating the true intervals. In fact, calculating the true bounds of the interval matrix exponential $e^{[A]t}$ is an NP-hard problem [29]. Althoff et al. [28] calculate the exponential interval by determining exact bounds on the first three terms of the Taylor series expansion and overestimating the rest of the series (method 1); the truncation error is also considered. An improvement to this method is obtained by rewriting the Taylor expansion in the Horner format of polynomials accompanied by a scaling-and-squaring process (method 2) [29].

Both these methods determine bounds in the continuous domain, and an extension to the discrete domain can easily be made. The motivation for doing so is that determining the matrix exponential at every instant in time can be computationally expensive, whereas in the discrete domain the matrix exponential needs to be evaluated just once. For a given sampling time T, the discrete state transition matrix $e^{[A]T}$ is calculated using methods similar to those of the continuous domain. Once the interval state transition matrix is known, the interval states are propagated using the rules of the interval arithmetic matrix–vector product.

The bounds can be further improved if the entire interval is subdivided and the results from the analysis of each separate smaller interval are combined [30].

Figure 6 shows a comparison of the nature of bounds determined from each of these methods as well as the ones determined from Bernstein coefficients. Simulation parameters are the same as Sec. 3.2 with the stiffness interval k = [0.7, 1.3]. For the simulation, the stiffness interval was divided into four equal subintervals: [0.7, 0.85]; [0.85, 1]; [1, 1.15] and [1.15, 1.3] and bounds were combined from each interval. Continuous method 1 and both discrete methods (sampling time: T = 0.01) quickly become very conservative and thus have not been shown beyond times t = 4 and t = 6, respectively. Continuous method 2 performs the best among all the IA techniques. However, bounds from Bernstein coefficients are far superior. Hence, it makes sense to adopt the Bernstein bounds over the IA techniques for subsequent work.

## Splitting Bernstein Coefficients

Definite bounds were established from the extreme values of the Bernstein coefficients in Sec. 4. However, Figs. 4(a)–5(c) suggest that the bounds still have room for improvement. The existing bounds can be made tighter by splitting the Bernstein coefficients along each dimension.

Splitting of Bezier curves (parametric curves which are determined by Bernstein polynomials and lie within the convex hull of its control points) was first presented by French engineer de Casteljau [31]. His algorithm (famously known as de Casteljau's algorithm in computer graphics) is used to date to split the convex hull into segments. In our case, that reduces the conservative nature of the bounds.

Figure 4(c) (pertaining to example 1) shows a Bezier curve with its control points marked with asterisks. De Casteljau's algorithm is used to split this Bezier curve into two separate curves, each with its own set of control points. The split can be made at any arbitrary point $(\xi = a : 0 \le a \le 1)$.

The procedure for problems with one uncertain variable (1D) is explained prior to making the procedure more generic.

Consider the 1D control points $(k, b_{1i})$ from Fig. 4(c), i.e., the Bezier curve at t = 29 (Table 1).

The lower and upper bounds for this curve are given by
$\min(b_{1i}) = 0.4948, \quad \max(b_{1i}) = 0.5040$
(40)

Hence, the lower and upper bounds for the mass 1 position at t = 29 are 0.4948 and 0.5040, respectively.

On applying de Casteljau's algorithm, and splitting the curve at
$\xi = a = 0.5, \quad \text{i.e.,}\; k = 0.7 + (1.3 - 0.7)a = 1$
(41)
the new control points (Fig. 7(a)) obtained are listed in Table 2.
The lower and upper bounds for this curve are now given by
$\min(b_{1i}) = 0.4979 \quad \text{and} \quad \max(b_{1i}) = 0.5021$
(42)

The lower bound for the mass 1 position increases and the upper bound decreases, thus improving the definitive bound. In this particular case, the range of the bounds sees a decrease of 54.13% when the coefficients are split, indicating a substantial improvement.
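The 1D splitting step can be sketched as follows, using illustrative coefficients of our own rather than the values of Table 1. Each split coefficient is a convex combination of the originals, which is why the combined bounds can only tighten.

```python
import numpy as np
from math import comb

def de_casteljau_split(b, a=0.5):
    """Split Bernstein coefficients b at xi = a into the coefficients of the
    left and right subcurves (standard de Casteljau recursion)."""
    b = np.asarray(b, dtype=float)
    left, right = [b[0]], [b[-1]]
    while len(b) > 1:
        b = (1.0 - a) * b[:-1] + a * b[1:]
        left.append(b[0])
        right.append(b[-1])
    return np.array(left), np.array(right)[::-1]

def bernstein_eval(b, x):
    """Evaluate a Bernstein-form polynomial at x in [0, 1]."""
    d = len(b) - 1
    return sum(b[i] * comb(d, i) * x**i * (1 - x)**(d - i) for i in range(d + 1))

# Illustrative degree-5 coefficients (not the values of Table 1)
b = np.array([0.50, 0.56, 0.43, 0.52, 0.47, 0.51])
left, right = de_casteljau_split(b, 0.5)

# Combined bounds from the split coefficients are at least as tight
lo0, hi0 = b.min(), b.max()
lo1 = min(left.min(), right.min())
hi1 = max(left.max(), right.max())
```

The left subcurve reproduces the original polynomial on $[0, a]$ and the right subcurve on $[a, 1]$, so no part of the range is lost by splitting.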

Splitting of Bernstein surfaces or higher dimensional coefficients is slightly more involved than the 1D case. An explanation of the 2D case makes it simpler to understand the procedure for higher dimensional cases. The following discussion deals with the splitting of the Bernstein surface represented in Fig. 5(c).

Since we now have a Bernstein surface (2D), the control points must be laid out in the form of a mesh grid. Table 3 shows all the unsplit Bernstein coefficients at t = 27, with $\xi_1$ (i.e., k) varying horizontally and $\xi_2$ (i.e., c) varying vertically.

The bounds of the state are calculated by observing the maximum and the minimum value of the coefficients (from Table 3) where
$\min(b_{\xi_1, \xi_2}) = 0.4779 \quad \text{and} \quad \max(b_{\xi_1, \xi_2}) = 0.5068$
(43)

Hence, the lower and upper bounds of the state at time = 27 are 0.4779 and 0.5068, respectively, for unsplit control points.

Using De Casteljau's algorithm, the coefficients are split along ξ1 = 0.5 (i.e., k = 1) first to get the partially split control points. The coefficients so obtained are now further split; however, this time along ξ2 = 0.5 (i.e., c = 1) to obtain completely split coefficients (Table 4).

Bounds are determined in the same way from Table 4. The lower bound found is 0.4922 and the upper bound is 0.5068. Again, we see that there is an improvement in the bounds once the coefficients are split (Split bounds improve the range by 49.52%). The convex hull formed from the split control points is shown in Fig. 7(b) as opposed to unsplit control points in Fig. 5(c). For higher dimensional cases, the procedure for the 2D case is followed. The splitting is carried out separately along each dimension independently. The bounds as well as the convex hull are determined after the split control points (i.e., split Bernstein coefficients) have been calculated.

It is not necessary to split along all the dimensions. To substantially improve the bounds, however, the splitting should be carried out along the dimensions that have the most influence on the states.

Moreover, throughout this procedure, the splitting has been performed only once, and always at ξ = 0.5. Splitting the coefficients more than once is possible and does improve the bounds; however, it greatly increases the computational cost, since the computations must be performed at all times. Splitting can also be done at values other than ξ = 0.5 and would yield different results. Ideally, the split would be made at an optimal point, but determining that point at every stage is again expensive, so a computational compromise is made.

## Controller Design

Determination of definite bounds on the states of a stochastic process leads to an obvious application in controller design. The information from the Bernstein coefficients can now be used to satisfy state constraints in the design of a controller robust to the uncertainties in the model.

The controller is designed in a discrete time setting of the system. An LP formulation is used to solve for the desired control input, as the constraints can be reduced to linear equalities and inequalities. The LP problem formulation for linear dynamical systems, as elaborated by Singh [15], was discussed in Sec. 2. The extension of the LP approach to incorporate the Bernstein bounds is discussed next.

### Linear Programming Problem for Stochastic Processes.

The LP formulation for a stochastic process has a framework similar to that of the deterministic problem. The primary difference is that, although the desired final states are known, the final state values actually attained by a stochastic process remain uncertain.

To formulate the problem under uncertainty, the residual energy of the final states is chosen as the cost to be minimized. The residual potential energy is a function of the position states only, and the residual kinetic energy is a function of the velocity states only. Hence, minimizing the residual energy drives the final state residuals toward their minimum.

However, the residual energy is a quadratic term and does not fit the requirements of an LP problem. Hence, the residual energy constraint is approximated to conform to LP rules.

For linear systems
$M\ddot{z}+C\dot{z}+Kz=Du$
(44)
where M, C, and K are mass, damping, and stiffness matrices and D is the control influence matrix; the energy (E) at any time instant is given by
$E=\frac{1}{2}\dot{z}^T M\dot{z}+\frac{1}{2}z^T Kz=\frac{1}{2}(M^{1/2}\dot{z})^T(M^{1/2}\dot{z})+\frac{1}{2}(K^{1/2}z)^T(K^{1/2}z)$
(45)
which is the sum of the kinetic and potential energies. It can also be viewed in terms of the l2 norm of the vector of square roots of the kinetic and potential energy terms. To make the residual energy constraint compatible with an LP problem, a linear approximation of Eq. (45) is used by replacing the l2 norm with an l1 or an l∞ norm (which essentially circumscribes or inscribes the l2 hypersphere with bounding hyperplanes).
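The inscribing/circumscribing relationship relies on the standard norm ordering $\|v\|_\infty \leqslant \|v\|_2 \leqslant \|v\|_1$; a quick numerical check on an arbitrary (illustrative) vector shows it:

```python
import numpy as np

# arbitrary vector of square-root energy terms (illustrative values only)
v = np.array([3.0, 4.0])
l1 = np.abs(v).sum()       # l1 norm: circumscribes the l2 ball
l2 = np.linalg.norm(v)     # l2 norm: the true quadratic energy measure
linf = np.abs(v).max()     # l-infinity norm: inscribes the l2 ball
assert linf <= l2 <= l1    # 4 <= 5 <= 7
```

Bounding either linear norm therefore bounds the quadratic energy up to a known dimension-dependent factor, which is what makes the LP relaxation meaningful.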
Linear systems (i.e., Eq. (44)) can be written in state space as
$\underbrace{\begin{Bmatrix}\dot{z}\\ \ddot{z}\end{Bmatrix}}_{\dot{Z}}=\underbrace{\begin{bmatrix}0 & I\\ -M^{-1}K & -M^{-1}C\end{bmatrix}}_{A'}\underbrace{\begin{Bmatrix}z\\ \dot{z}\end{Bmatrix}}_{Z}+\underbrace{\begin{bmatrix}0\\ M^{-1}D\end{bmatrix}}_{B'}u$
(46)
After discretization, the model is given as
$Z(k+1)=AZ(k)+Bu(k)$
(47)
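The discrete model of Eq. (47) can be obtained by the standard zero-order-hold construction via the augmented matrix exponential. A sketch follows; the continuous-time matrices below are illustrative placeholders, not the example's matrices:

```python
import numpy as np
from scipy.linalg import expm

# illustrative continuous-time pair (stand-ins for A', B' of Eq. (46))
Ac = np.array([[0.0, 1.0],
               [-1.0, -0.2]])
Bc = np.array([[0.0],
               [1.0]])
dt = 0.1  # sampling period

# zero-order hold: expm([[Ac, Bc], [0, 0]] * dt) = [[A, B], [0, I]]
n, m = Ac.shape[0], Bc.shape[1]
aug = np.zeros((n + m, n + m))
aug[:n, :n], aug[:n, n:] = Ac, Bc
Phi = expm(aug * dt)
A, B = Phi[:n, :n], Phi[:n, n:]
# now Z(k+1) = A @ Z(k) + B * u(k), as in Eq. (47)
```

The top-left block of the augmented exponential is exactly $e^{A'\Delta t}$, and the top-right block is the held-input integral, so no numerical quadrature is needed.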
The terminal time error states are defined as
$X=\begin{Bmatrix}x(N_t+1)\\ \dot{x}(N_t+1)\end{Bmatrix}=\begin{Bmatrix}z(N_t+1)-z_f\\ \dot{z}(N_t+1)-\dot{z}_f\end{Bmatrix}$
(48)

where $zf$ and $żf$ are the desired final states.

A new set of states (at the final time) is defined (to be used later to implement the residual energy constraint) as
$\begin{Bmatrix}y(N_t+1)\\ \dot{y}(N_t+1)\end{Bmatrix}=\begin{Bmatrix}Kx(N_t+1)\\ M\dot{x}(N_t+1)\end{Bmatrix}$
(49)

where $y=[y_1,y_2,\ldots,y_r]^T$, $\dot{y}=[\dot{y}_1,\dot{y}_2,\ldots,\dot{y}_r]^T$, and r is the dimension of z.

#### l∞ Formulation.

The final states, and hence the residual states, are linear functions of the control input. The new states (y and $\dot{y}$) defined by Eq. (49) are linear transformations of the residual states. Thus, a set of linear inequalities can be framed to solve the minimax problem.

The final problem can be posed as

$\begin{aligned}&\underset{u,f}{\text{minimize}} \quad f\\ &\text{subject to} \quad -f \leqslant y_s^{(i)}(N_t+1) \leqslant f, \;\; -f \leqslant \dot{y}_s^{(i)}(N_t+1) \leqslant f \quad \text{for } s=1,\ldots,r,\; i=1,2,\ldots,p\\ &\phantom{\text{subject to}} \quad u_{lb} \leqslant u(k) \leqslant u_{ub} \quad \forall k=1,2,\ldots,N_t\\ &\phantom{\text{subject to}} \quad \text{state constraints} \quad \forall k=1,2,\ldots,N_t\end{aligned}$

where p is the number of uncertain models selected over the space of uncertainty on which the residue is to be minimized, $u_{lb}$ and $u_{ub}$ are the bounds on the control, and the state constraints are the desired state restrictions during the control operation.

#### l1 Formulation.

The l1 problem can be formulated as
$\begin{aligned}&\underset{u,f}{\text{minimize}} \quad f\\ &\text{subject to} \quad \sum_{s=1}^{r} C_s y_s^{(i)}(N_t+1) \leqslant f \quad \text{for } i=1,2,\ldots,p\\ &\phantom{\text{subject to}} \quad u_{lb} \leqslant u(k) \leqslant u_{ub} \quad \forall k=1,2,\ldots,N_t\\ &\phantom{\text{subject to}} \quad \text{state constraints} \quad \forall k=1,2,\ldots,N_t\end{aligned}$

where $C_s = \pm 1$; r is the dimension of z, p is the number of models selected within the region of uncertainty, and k is the time index of the discrete system. As each $C_s$ can take two values (±1), the total number of residual energy constraints becomes $2^r$.
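The enumeration of the sign choices $C_s$ can be generated mechanically; a small illustrative sketch (the variable names are assumptions, not from the paper):

```python
import itertools
import numpy as np

r = 4  # number of summed terms in the example that follows
# one row of weights per sign pattern (C_1, ..., C_r), each C_s = +/-1
C = np.array(list(itertools.product([1, -1], repeat=r)))
# each row C[j] yields one linear constraint  sum_s C[j, s] * y_s <= f
assert C.shape == (2**r, r)
```

Because the pattern and its negation give the two sides of the same two-sided bound, only half of these rows are independent two-sided constraints, which is consistent with the eight constraints counted for the four-term example later.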

### Linear Programming Problem With Bernstein Coefficients.

In Secs. 7.1.1 and 7.1.2, it was stated that the number of models used to form the constraints was p. Accurate representation of multivariate stochastic systems by sampling requires a large number of samples, and this number grows exponentially with the dimensionality of the uncertainty [32]. Hence, the robustness of the controller is limited by the number of model samples chosen.

However, when the stochastic process has a Bernstein formulation, the control points of the Bernstein expansion provide absolute deterministic bounds on the states. Hence, a smart sampling method that takes the model samples to be the control points (which form the convex hull) guarantees the validity of the control design over the entire uncertainty domain.

The case of unsplit control points is presented first, followed by the split control points case. Once again, the same example is used to illustrate the theory.

#### Illustrative Example.

In this section, the formulation and solution of the LP problem are shown. Considering the model from Fig. 1, the dynamics are given by the matrices
$M=\begin{bmatrix}5 & 0\\ 0 & 5\end{bmatrix}, \quad C=\begin{bmatrix}1 & -1\\ -1 & 1\end{bmatrix}, \quad K=\begin{bmatrix}k & -k\\ -k & k\end{bmatrix}$
(50)

For this example, it is once again assumed that the uncertainty in the model is one-dimensional. The uncertainty lies in the spring constant value given by Eq. (17).

To calculate the l1 or l∞ form of the residual energy, the square root of the K matrix is required. Thus, a pseudo spring (with spring constant kp) is attached to the first mass to make the stiffness matrix positive definite, as required when defining a residual energy cost function for the linear programming problem. Therefore
$K_h=\begin{bmatrix}k+k_p & -k\\ -k & k\end{bmatrix}$
(51)

For all simulations, Kh is used as the effective stiffness matrix with the value of kp being 0.05.

As seen previously, a discrete-time setting is necessary to formulate the LP problem. The whole procedure for the PC expansion is followed to obtain the Legendre states in discrete form. After formulations similar to Eqs. (19)–(22), followed by a Galerkin projection, a set of deterministic equations is derived, similar to Eqs. (23)–(25)
$\dot{z}_{10}\langle\Psi_0,\Psi_0\rangle=f_{10}(z_{10},\ldots,z_{45},u)$
(52)

$⋮$

$\dot{z}_{45}\langle\Psi_5,\Psi_5\rangle=f_{45}(z_{10},\ldots,z_{45},u)$
(53)
Equations (52) and (53) are discretized to yield
$Z(k+1)=AZ(k)+Bu(k)=A^{k}Z(1)+\sum_{i=1}^{k}A^{k-i}Bu(i)$
(54)
where $Z=[z10,…,z45]T$ are the Legendre coefficients. Therefore
$Z(T_f)=Z(N_t+1)=A^{N_t}Z(1)+\sum_{i=1}^{N_t}A^{N_t-i}Bu(i)$
(55)
The transformation from Legendre states to Bernstein states is linear, and the transformation matrix is denoted by $M_N$. Therefore, the terminal states in Bernstein form are
$Z_B(T_f)=M_N Z(N_t+1)=M_N A^{N_t}Z(1)+\sum_{i=1}^{N_t}M_N A^{N_t-i}Bu(i)$
(56)

$\text{or} \quad Z_B(T_f)=M_N A^{N_t}Z(1)+\begin{bmatrix}M_N A^{N_t-1}B & \ldots & M_N B\end{bmatrix}\begin{bmatrix}u(1)\\ \vdots\\ u(N_t)\end{bmatrix}$
(57)
where $Z_B$ is a vector of dimension (n(N + 1) = 24) containing all the Bernstein coefficients of the Bernstein basis functions obtained under the influence of the input u(k). The objective of the example is to find a control input that drives the system from its initial states ([0, 0, 0, 0]T) to its final states ([1, 1, 0, 0]T), while constraining the input to remain within the domain [−1, 1] and the relative displacement of the masses to remain within 0.2. From the desired final states, the desired Bernstein coefficients at the final time can be easily determined as
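The affine dependence of the terminal Bernstein coefficients on the control sequence in Eq. (57) can be assembled directly. A sketch follows; the names `A`, `B`, `MN` mirror the text, but the numeric values are illustrative placeholders only:

```python
import numpy as np

def terminal_map(A, B, MN, Z1, Nt):
    """Return (c, G) such that Z_B(Tf) = c + G @ [u(1), ..., u(Nt)]."""
    c = MN @ np.linalg.matrix_power(A, Nt) @ Z1
    G = np.hstack([MN @ np.linalg.matrix_power(A, Nt - i) @ B
                   for i in range(1, Nt + 1)])
    return c, G

# illustrative scalar system: Z(k+1) = 0.5 Z(k) + u(k), Z(1) = 1
A, B, MN = np.array([[0.5]]), np.array([[1.0]]), np.eye(1)
c, G = terminal_map(A, B, MN, np.array([1.0]), Nt=3)
# c = 0.5**3, and the columns of G are A**2 B, A B, B
```

Because the terminal coefficients are affine in the stacked control vector, every constraint written on them is a linear constraint on u, which is what permits the LP formulation below.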
$Z_{B_d}(T_f)=[\underbrace{1,\ldots,1}_{12},\underbrace{0,\ldots,0}_{12}]^T$
(58)
Thus, the residual states are
$X(T_f)=[x_1(T_f)\;\; x_2(T_f)\;\; \dot{x}_1(T_f)\;\; \dot{x}_2(T_f)]^T=Z_B(T_f)-Z_{B_d}(T_f)$
(59)

where $x1$ and $x2$ are vectors of dimension 6 having the Bernstein coefficients of the final residual positions of mass 1 and 2, respectively. Similarly, $ẋ1$ and $ẋ2$ hold the coefficients for their residual velocities.

Now, depending on the way in which the residual energy constraint is implemented (l1 or l∞), separate formulations of the LP problem can be made.

Defining new states similarly to Eq. (49)
$y_1^{(i)}=K_h(1,1)^{(i)}x_1^{(i)}+K_h(1,2)^{(i)}x_2^{(i)}$
(60)

$y_2^{(i)}=K_h(2,1)^{(i)}x_1^{(i)}+K_h(2,2)^{(i)}x_2^{(i)}$
(61)

$y_3^{(i)}=M(1,1)\dot{x}_1^{(i)}+M(1,2)\dot{x}_2^{(i)}$
(62)

$y_4^{(i)}=M(2,1)\dot{x}_1^{(i)}+M(2,2)\dot{x}_2^{(i)}$
(63)
where i = 0, 1,…, p are the sample models and $x_a^{(i)}$ is the ith component of $x_a$. In this framework, as stated earlier, the samples are selected to be the control points of the convex hull enveloping the uncertain region. Therefore, p = N = 5. $K_h(r,c)^{(i)}$ refers to the rth row and cth column of the matrix $K_h$ defined for k(i), where k(i) is given by
$k(i)=0.7+(1.3-0.7)\frac{i}{N}$
(64)
l∞ Formulation. For an l∞ formulation, the nature of the constraints is given in Sec. 7.1.1. Since there are four states in the model, the constraint equations are
$-f \leqslant y_1^{(i)} \leqslant f; \quad -f \leqslant y_2^{(i)} \leqslant f$
(65)

$-f \leqslant y_3^{(i)} \leqslant f; \quad -f \leqslant y_4^{(i)} \leqslant f$
(66)
These constraints can be enforced with the help of an output matrix $C_{l\infty}$. The constraints can therefore be written in matrix form as
$\pm\underbrace{\begin{bmatrix}C_{l\infty}M_N A^{N_t-1}B & \ldots & C_{l\infty}M_N B & \mp 1_{24,1}\end{bmatrix}}_{H_{\pm}}\underbrace{\begin{bmatrix}u(1)\\ \vdots\\ u(N_t)\\ f\end{bmatrix}}_{\tilde{z}}\leqslant \pm h_1$
(67)
where $h1$ is the vector of constant terms derived from Eqs. (65) and (66). Hence, the final problem can be stated as
$\begin{aligned}&\text{minimize} \quad f=\tilde{c}^T\tilde{z}\\ &\text{subject to} \quad H\tilde{z} \leqslant h\\ &\phantom{\text{subject to}} \quad \tilde{z}_{min} \leqslant \tilde{z} \leqslant \tilde{z}_{max}\\ &\phantom{\text{subject to}} \quad |\text{relative mass position}| \leqslant 0.2\end{aligned}$
(68)
where
$\tilde{c}^T=[0\;\ldots\;0\;\;1], \quad \tilde{z}=[u(1)\;\ldots\;u(N_t)\;\;f]^T$
(69)

$H=\begin{bmatrix}H_+\\ H_-\end{bmatrix}, \quad h=\begin{bmatrix}h_1\\ -h_1\end{bmatrix}$
(70)

$\begin{bmatrix}u_{min}(1)\\ \vdots\\ u_{min}(N_t)\\ 0\end{bmatrix} \leqslant \tilde{z} \leqslant \begin{bmatrix}u_{max}(1)\\ \vdots\\ u_{max}(N_t)\\ \infty\end{bmatrix}$
(71)
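The l∞ problem of Eqs. (68)–(71) is a standard LP. The following is a minimal sketch using `scipy.optimize.linprog`; the matrix `G` and offset `r` below are randomly generated stand-ins for the $C_{l\infty}$-weighted terminal maps, and the in-maneuver state constraints are omitted for brevity:

```python
import numpy as np
from scipy.optimize import linprog

# hypothetical residual map: y = G u + r stacked over the sampled models
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 6))   # 4 residual outputs, 6 control samples
r = rng.normal(size=4)

# decision vector z = [u(1..6), f]; minimize f
c = np.zeros(7)
c[-1] = 1.0
# -f <= G u + r <= f  becomes  [ G, -1] z <= -r  and  [-G, -1] z <= r
A_ub = np.block([[G, -np.ones((4, 1))],
                 [-G, -np.ones((4, 1))]])
b_ub = np.concatenate([-r, r])
bounds = [(-1, 1)] * 6 + [(0, None)]  # control bounds and f >= 0
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
u, f = res.x[:6], res.x[-1]
# the optimal f bounds the worst residual over all sampled models
assert np.max(np.abs(G @ u + r)) <= f + 1e-6
```

The problem is always feasible here (u = 0 with f = max|r| satisfies every constraint), so the solver's reported optimum is the minimax residual over the sampled models.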
l1 Formulation. Constraints for an l1 norm formulation (similar to Sec. 7.1.2) can be stated as
$-f \leqslant y_1^{(i)} \pm y_2^{(i)} \pm y_3^{(i)} \pm y_4^{(i)} \leqslant f$
(72)

Note that Eq. (72) is shorthand for eight constraints. Once again, these constraints can be expressed through an output matrix $C_{l1}$. The final problem can then be posed in a similar fashion to the l∞ norm formulation above.

On solving these LP problems, the resulting residual energy as a function of the uncertain stiffness can be calculated for both the l1 and l∞ solutions to illustrate their relative robustness. Figure 8(a) shows the variation of residual energy with stiffness for both norms; the worst-case costs are comparable. Figures 8(b) and 8(c) show the residual state distributions over the entire region of uncertainty.

### Linear Programming Problem With Split Bernstein Coefficients.

Using the previously illustrated example, the methodology to apply the split Bernstein coefficients to the LP problem is shown in this section.

Initially, the order of the PC expansion chosen was (N = 5). Hence, each state was represented by (N + 1 = 6) coefficients. Since there are four states in the chosen problem, the total number of coefficients was ((N + 1) × 4 = 24). When the Bernstein coefficients are split, the number of coefficients representing each state increases to (2N + 1 = 11), and the total number of coefficients becomes (11 × 4 = 44).

Following the procedure described previously, the desired set of Bernstein coefficients at the final time (Tf) are now given by:
$Z_{B_d}(T_f)=[\underbrace{1,\ldots,1}_{22},\underbrace{0,\ldots,0}_{22}]^T$
(73)

where $ZBd(Tf)∈ℝ44$.

Consider a matrix $M_B(a)$, which, when multiplied with a vector containing the unsplit Bernstein coefficients of all the states, gives a vector containing all the split Bernstein coefficients (split at the point a)
$B_s=M_B(a)\,B_{us}$
(74)

where $B_s$ and $B_{us}$ are the split and unsplit coefficients of the same states, respectively. As the initial PC order was N = 5, $M_B\in\mathbb{R}^{4(2N+1)\times 4(N+1)}$, and it is used to enforce the constraints previously discussed.
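The operator $M_B$ can be assembled as a block-diagonal stack of the one-dimensional De Casteljau subdivision matrix, one block per state. A sketch, assuming a midpoint split (a = 0.5) and an assumed helper name `split_matrix`:

```python
import numpy as np

def split_matrix(n, t=0.5):
    """(2n+1) x (n+1) matrix mapping degree-n Bezier control points to
    the concatenated left/right split points (midpoint shared once)."""
    rows, P = [], np.eye(n + 1)
    while True:
        rows.append(P)               # level-j De Casteljau coefficients
        if P.shape[0] == 1:
            break
        P = (1 - t) * P[:-1] + t * P[1:]
    M = np.zeros((2 * n + 1, n + 1))
    for j, Pj in enumerate(rows):
        M[j] = Pj[0]                 # left-subcurve control points
        M[2 * n - j] = Pj[-1]        # right-subcurve control points
    return M

N, n_states = 5, 4
# block-diagonal M_B acting on the stacked coefficients of all 4 states
MB = np.kron(np.eye(n_states), split_matrix(N))
assert MB.shape == (n_states * (2 * N + 1), n_states * (N + 1))  # (44, 24)
```

Since the same split is applied to every state's coefficient block, the Kronecker product with the identity gives the required 44 × 24 block-diagonal structure directly.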

The terminal states in unsplit Bernstein form (as given by Eq. (56)) can now be rewritten in split Bernstein form as
$Z_B(T_f)=M_B M_N A^{N_t}Z(1)+\begin{bmatrix}M_B M_N A^{N_t-1}B & \ldots & M_B M_N B\end{bmatrix}\begin{bmatrix}u(1)\\ \vdots\\ u(N_t)\end{bmatrix}$
(75)

where $ZB(Tf)∈ℝ44$.

The residual states, thus, become
$X(T_f)=[x_1(T_f)\;\; x_2(T_f)\;\; \dot{x}_1(T_f)\;\; \dot{x}_2(T_f)]^T=Z_B(T_f)-Z_{B_d}(T_f)$
(76)
where $x1$ and $x2$ are vectors of dimension 11 having the Bernstein coefficients of the final residual positions of mass 1 and 2, respectively. Similarly, $ẋ1$ and $ẋ2$ hold the coefficients for their velocities.
New states are once again defined by Eqs. (60)–(63). As the number of control points has increased on splitting the Bernstein coefficients, the number of samples increases to p = 2N + 1 = 11. All the Bezier curves (i.e., the residual state curves with respect to stiffness) were split at their midpoints (i.e., at k = 1 or ξ = 0.5). Hence, k(i) for this scheme of implementation is defined as
$k(i)=0.7+(1.3-0.7)\frac{i}{2N+1} \quad \forall i=0,1,\ldots,p=11$
(77)
The rest of the formulation remains almost identical. Considering $C_l$ as the output matrix (depending on the l1 or the l∞ formulation), the inequality constraints are given by
$\pm\underbrace{\begin{bmatrix}C_l M_B M_N A^{N_t-1}B & \ldots & C_l M_B M_N B & \mp 1\end{bmatrix}}_{H_{\pm}}\underbrace{\begin{bmatrix}u(1)\\ \vdots\\ u(N_t)\\ f\end{bmatrix}}_{\tilde{z}}\leqslant \pm h_1$
(78)

where $h_1$ is the vector of constant terms derived from the constraint equations. The final problem can be summarized by Eqs. (68)–(71).

On solving the split LP problem, the control input (obtained from the l∞ formulation) is used to run Monte Carlo (MC) simulations of the relative displacement of the masses (Fig. 9(a)), which is shown to lie within the constraints ($\leqslant 0.2$) as well as within the envelope of Bernstein bounds.

Figure 8(a) shows the residual energy sensitivity, and Figs. 8(b) and 8(c) show the residual state distributions. Comparisons with the unsplit Bernstein control show that the split residual energy is consistently lower than its unsplit counterpart. Figures 9(b) and 9(c) compare the unsplit and split methods in terms of residual energy sensitivity: the dotted curves correspond to the split method, the solid lines to the unsplit method, and the different colors correspond to different orders of the PC expansion.

Once again, it is evident that the split method yields lower residual energy at the edges (i.e., the worst-case values). Also, as expected, the worst-case residual energy improves with increasing order of the PC expansion.

## Conclusion

This paper proposes exploiting Bernstein polynomials to determine bounds on the states of uncertain dynamical systems. A two-mass spring system is used to illustrate the development of a set of deterministic differential equations after parameterizing the time-invariant model parameter uncertainties using a polynomial chaos series. The coefficients of the Bernstein polynomials define a convex hull, which permits determination of the upper and lower bounds of the uncertain evolution of the system states. Splitting the Bernstein coefficients yields a tighter convex hull. Finally, a rest-to-rest maneuver for a system with uncertain stiffness is used to illustrate the design of a state constrained controller that is robust to model parameter uncertainties.

## Funding Data

• National Science Foundation, Directorate for Engineering (CMMI 1537210).

## References

1. Singh, T., and Vadali, S. R., 1993, "Robust Time-Delay Control," ASME J. Dyn. Syst. Meas. Control, 115(2), pp. 303–306.
2. Singer, N. C., and Seering, W. P., 1990, "Preshaping Command Inputs to Reduce System Vibrations," ASME J. Dyn. Syst. Meas. Control, 112(1), pp. 76–82.
3. Singh, T., and Vadali, S. R., 1994, "Robust Time-Optimal Control: Frequency Domain Approach," J. Guid. Control Dyn., 17(2), pp. 346–353.
4. Singh, T., 2002, "Minimax Design of Robust Controllers for Flexible Structures," J. Guid. Control Dyn., 25(5), pp. 868–875.
5. Liu, Q., and Wie, B., 1992, "Robust Time-Optimal Control of Uncertain Flexible Spacecraft," J. Guid. Control Dyn., 15(3), pp. 597–604.
6. Liu, S., Garimella, P., and Yao, B., 2005, "Adaptive Robust Precision Motion Control of a Flexible System With Unmatched Model Uncertainties," American Control Conference (ACC), Portland, OR, June 8–10, pp. 750–755.
7. Singhose, W., Banerjee, A. K., and Seering, W. P., 1997, "Slewing Flexible Spacecraft With Deflection Limiting Input Shaping," AIAA J. Guid. Control Dyn., 20(2), pp. 291–298.
8. Vossler, M., and Singh, T., 2010, "Minimax State/Residual-Energy Constrained Shapers for Flexible Structures: Linear Programming Approach," ASME J. Dyn. Syst. Meas. Control, 132(3), p. 031010.
9. Bhattacharya, R., 2015, "A Polynomial Chaos Framework for Designing Linear Parameter Varying Control Systems," American Control Conference (ACC), Chicago, IL, July 1–3, pp. 409–414.
10. Kim, K.-K., and Braatz, R., 2012, "Generalized Polynomial Chaos Expansion Approaches to Approximate Stochastic Receding Horizon Control With Applications to Probabilistic Collision Checking and Avoidance," IEEE Conference on Control Applications (CCA), Dubrovnik, Croatia, Oct. 3–5, pp. 350–355.
11. Singh, T., Singla, P., and Konda, U., 2010, "Polynomial Chaos Based Design of Robust Input Shapers," ASME J. Dyn. Syst. Meas. Control, 132(5), p. 051010.
12. Terejanu, G., Singla, P., Singh, T., and Scott, P., 2010, "Approximate Propagation of Both Epistemic and Aleatory Uncertainty Through Dynamic Systems," 13th International Conference on Information Fusion (ICIF), Edinburgh, UK, July 26–29, pp. 1–8.
13. Monti, A., Ponci, F., and Lovett, T., 2005, "A Polynomial Chaos Theory Approach to Uncertainty in Electrical Engineering," 13th International Conference on Intelligent Systems Application to Power Systems (ISAP), Arlington, VA, Nov. 6–10, pp. 7–10.
14. Nandi, S., Migeon, V., Singh, T., and Singla, P., 2016, "State Constrained Controller Design for Uncertain Linear Systems Using Polynomial Chaos," American Control Conference (ACC), Boston, MA, July 6–8, pp. 2005–2010.
15. Singh, T., 2009, Optimal Reference Shaping for Dynamical Systems: Theory and Applications, CRC Press, Boca Raton, FL.
16. Singh, T., 2008, "Minimax Input Shaper Design Using Linear Programming," ASME J. Dyn. Syst. Meas. Control, 130(5), p. 051010.
17. Wiener, N., 1938, "The Homogeneous Chaos," Am. J. Math., 60(4), pp. 897–936.
18. Cameron, R. H., and Martin, W. T., 1947, "The Orthogonal Development of Non-Linear Functionals in Series of Fourier-Hermite Functionals," Ann. Math., 48(2), pp. 385–392.
19. Ghanem, R. G., and Spanos, P. D., 1991, Stochastic Finite Elements: A Spectral Approach, Springer, New York.
20. Xiu, D., and Karniadakis, G. E., 2002, "The Wiener–Askey Polynomial Chaos for Stochastic Differential Equations," SIAM J. Sci. Comput., 24(2), pp. 619–644.
21. Garloff, J., 2003, "The Bernstein Expansion and Its Applications," 25(27), pp. 80–85. http://www-home.htwg-konstanz.de/~garloff/BeExAppl.doc
22. Ray, S., and Nataraj, P., 2009, "An Efficient Algorithm for Range Computation of Polynomials Using the Bernstein Form," J. Global Optim., 45(3), pp. 403–426.
23. Garloff, J., 1993, "The Bernstein Algorithm," Interval Comput., 2(6), pp. 154–168. https://interval.louisiana.edu/reliable-computing-journal/1993/interval-computations-1993-2-pp-154-168.pdf
24. Berg, M. D., Cheong, O., Kreveld, M. V., and Overmars, M., 2008, Computational Geometry: Algorithms and Applications, 3rd ed., Springer-Verlag TELOS, Santa Clara, CA.
25. Farouki, R., 2000, "Legendre-Bernstein Basis Transformations," J. Comput. Appl. Math., 119(1–2), pp. 145–160.
26. Rababah, A., 2004, "Jacobi-Bernstein Basis Transformation," Comput. Methods Appl. Math., 4(2), pp. 206–214. https://pdfs.semanticscholar.org/d12c/9d3e0d9d78f0097da7d23e3cb18ff51e65af.pdf
27. Moore, R. E., 1966, Interval Analysis, Vol. 4, Prentice Hall, Englewood Cliffs, NJ.
28. Althoff, M., Stursberg, O., and Buss, M., 2007, "Reachability Analysis of Linear Systems With Uncertain Parameters and Inputs," 46th IEEE Conference on Decision and Control (CDC), New Orleans, LA, Dec. 12–14, pp. 726–732.
29. Goldsztejn, A., 2009, "On the Exponentiation of Interval Matrices," preprint arXiv:0908.3954. https://arxiv.org/abs/0908.3954
30. Stolfi, J., and De Figueiredo, L., 2003, "An Introduction to Affine Arithmetic," Trends Appl. Comput. Math., 4(3), pp. 297–312. http://www.sbmac.org.br/tema/seletas/docs/v4_3/101_01summary.pdf
31. De Casteljau, P., 1959, "Outillages méthodes calcul," André Citroën Automobiles SA, Paris, France.
32. Stroud, A. H., and Secrest, D., 1966,