Abstract

In this work, we consider the problem of learning a reduced-order model of a high-dimensional stochastic nonlinear system with control inputs from noisy data. In particular, we develop a hybrid parametric/nonparametric model that learns the "average" linear dynamics in the data using dynamic mode decomposition with control (DMDc) and the nonlinearities and model uncertainties using Gaussian process (GP) regression. We also extend total least-squares dynamic mode decomposition (tlsDMD) to systems with control inputs (tlsDMDc) and use it as a noise-aware, linear-only baseline. The proposed approach is compared with existing methods, such as DMDc-only and GP-only models, in two tasks: controlling the stochastic nonlinear Stuart–Landau equation and predicting the flowfield induced by a jet-like body force field in a turbulent boundary layer using data from large-scale numerical simulations.

1 Introduction

Control of high-dimensional nonlinear dynamical systems, such as turbulent flows, which are expensive to model due to their large state spaces, has led to the need for approximate models that are computationally tractable and amenable to standard control algorithms. Simulations and experiments of such complex and uncertain dynamical systems typically yield large spatiotemporal data that can be used to learn tractable reduced-order models suitable for control.

Data-driven model reduction of systems with high-dimensional state spaces has typically been performed using proper orthogonal decomposition (POD) [1]. POD computes the singular value decomposition (SVD) of snapshots of the high-dimensional state (e.g., a flowfield) and identifies an optimal subspace containing the most energetic modes, onto which the high-dimensional state is projected [2]. For systems with control inputs, POD variants, such as balanced proper orthogonal decomposition [3], have also been proposed, although balanced proper orthogonal decomposition requires knowledge of the underlying equations of the system. Dynamic mode decomposition (DMD) [4,5] is an attractive alternative for modeling the spatiotemporal evolution of data, such as fluid flows, from time-resolved snapshots of the high-dimensional system. Extensions of DMD include variants for systems with control inputs [6,7], as well as noise-aware [8,9] and sparsity-promoting [10] variants, among others.

Dynamic mode decomposition-based approaches often fail to capture the nonlinearities in the underlying dynamics due to the assumption of linearity of the DMD models. For that reason, Koopman operator theory—which is viewed as a generalization of DMD—has been extensively explored both in the fluid dynamics [11] and the controls community [12,13], with DMD-based [14,15], kernel-based [16], and deep learning-based [17,18] approximations of the Koopman operator proposed. Similar modeling efforts have also led to new methods such as sparse identification of nonlinear dynamics [19] and operator inference for model reduction [20].

Gaussian process (GP) regression [21] is a type of nonparametric regression model describing distributions over functions conditioned on the training data. GPs are well suited to learning arbitrary nonlinear stochastic dynamics due to their flexibility and inherent ability to provide uncertainty estimates capturing both process and measurement noise, as well as model uncertainties. GP regression and its scalable variants [22–25] have been successfully used for a number of dynamical system modeling [26,27] and control tasks [28–30]. In the context of high-dimensional systems, GP regression has been used to model the POD coefficients of systems with varying parameters [31–34], as well as the reduced-order dynamics after POD- or autoencoder-based order reduction [35,36].

In this brief, we propose a hybrid dynamic mode decomposition with control (DMDc) + GP model for learning the nonlinearities and model uncertainties in the reduced-order data that DMDc alone fails to capture. In addition, total least-squares DMD [9] is extended to systems with control inputs (tlsDMDc) to fairly compare the proposed method with a noise-aware, linear-only DMD-based approach that accounts for control inputs.

In Sec. 2, we first introduce DMDc and extend total least-squares DMD [9] to systems with control inputs. Then, we introduce the general framework of Gaussian process regression and the process of training a DMDc + GP reduced-order model. In Sec. 3, we demonstrate the advantages of the proposed models on two tasks: modeling and controlling the nonlinear stochastic Stuart–Landau equation from noisy data and predicting the wall-normal velocity field induced by a jet-like body force field in a turbulent boundary layer.

2 Method

2.1 Dynamic Mode Decomposition With Control.

Assume that the unknown stochastic nonlinear and, possibly, high-dimensional system we want to model has discrete-time dynamics of the form

$$ y(k+1) = f\big(y(k), u(k), w(k)\big) \qquad (1) $$

where $y \in \mathbb{R}^{n_y}$ is the state, $u \in \mathbb{R}^{n_u}$ is the control input, $w \in \mathbb{R}^{n_y}$ is independent and identically distributed Gaussian white noise, and $f(\cdot)$ is the nonlinear operator that propagates the state $y(k)$ by one time-step.

Our goal is to identify a reduced-order model of the underlying dynamics when Eq. (1) is unknown and only a limited amount of noisy experimental or numerical data is available, as is typical in fluid dynamics applications.

In particular, given a set of $p$ training data tuples $\{(y^{(i)}, u^{(i)}, f(y^{(i)}, u^{(i)}, w^{(i)}))\}_{i=1}^{p}$, where the $y^{(i)}$, $i = 1, \dots, p$, are not necessarily sequential, the data can be arranged as

$$ Y = \big[\, y^{(1)} \;\; y^{(2)} \;\; \cdots \;\; y^{(p)} \,\big] \in \mathbb{R}^{n_y \times p} \qquad (2a) $$
$$ Y' = \big[\, f(y^{(1)}, u^{(1)}, w^{(1)}) \;\; \cdots \;\; f(y^{(p)}, u^{(p)}, w^{(p)}) \,\big] \in \mathbb{R}^{n_y \times p} \qquad (2b) $$
$$ \Upsilon = \big[\, u^{(1)} \;\; u^{(2)} \;\; \cdots \;\; u^{(p)} \,\big] \in \mathbb{R}^{n_u \times p} \qquad (2c) $$

2.1.1 Model Order Reduction.

A common way to reduce the dimensionality of the data when $n_y \gg 1$ is to project the high-dimensional state $y(k)$ onto the POD modes given by the SVD of the data matrix $Y = U \Sigma V^T$, where the columns of $U \in \mathbb{R}^{n_y \times p}$ are the orthonormal eigenvectors of $Y Y^T$ (the POD modes), arranged by their energy content, i.e., their singular values; the columns of $V \in \mathbb{R}^{p \times p}$ are the orthonormal eigenvectors of $Y^T Y$; and $\Sigma \in \mathbb{R}^{p \times p}$ is the diagonal matrix containing the singular values of $Y$ arranged by their magnitude.

Projecting the high-dimensional snapshots $y(k)$ onto the range space of the matrix $U_{POD} \in \mathbb{R}^{n_y \times n_x}$ formed by the first (most energetic) $n_x$ POD modes, corresponding to the $n_x$ largest singular values of $Y$, is a common choice in model reduction: it focuses the modeling effort on the most important modes of the high-dimensional system and ignores the least energetic (and, typically, noisy) ones. The high-dimensional state can then be approximated as

$$ y(k) \approx U_{POD}\, x(k) \qquad (3) $$

where $x(k) \in \mathbb{R}^{n_x}$ is the amplitude vector of the POD modes at time-step $k$. In general, $x(k)$ is approximated in the least-squares sense as $x(k) = U_{POD}^T y(k)$. The snapshot matrices (2a) and (2b) are also reduced as

$$ X = U_{POD}^T Y, \qquad X' = U_{POD}^T Y' $$

where $X, X'$ are the POD mode amplitude matrices for the training data (2a) and (2b).
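To make the reduction step concrete, here is a minimal NumPy sketch (the function name and arguments are illustrative, not part of the paper):

```python
import numpy as np

def pod_reduce(Y, Y_prime, n_x):
    """Project the snapshot matrices (2a) and (2b) onto the first n_x POD modes."""
    # Thin SVD of the snapshot matrix: Y = U @ diag(S) @ Vt
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    U_pod = U[:, :n_x]             # n_x most energetic POD modes
    X = U_pod.T @ Y                # POD mode amplitudes of Y (Eq. (3))
    X_prime = U_pod.T @ Y_prime    # POD mode amplitudes of Y'
    return U_pod, X, X_prime
```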

2.1.2 Dynamic Mode Decomposition With Control.

In order to capture the linear part of the controlled dynamical system, the POD mode amplitudes $x(k)$ are modeled as a discrete-time linear state-space system of the form

$$ x(k+1) = A\, x(k) + B\, u(k) + e\big(x(k), u(k), w(k)\big) \qquad (4) $$

where $A \in \mathbb{R}^{n_x \times n_x}$ and $B \in \mathbb{R}^{n_x \times n_u}$ are the state and control transition matrices, and $e(\cdot)$ is the linearization error from the unmodeled nonlinear and process noise part of the dynamics. The linear part of the dynamics can be computed by solving the following least-squares minimization problem:

$$ \min_{A, B}\; \left\| X' - A X - B \Upsilon \right\|_F \qquad (5) $$

whose minimizer is given by

$$ [\, A \;\; B \,] = X' \begin{bmatrix} X \\ \Upsilon \end{bmatrix}^{\dagger} \qquad (6) $$

where $\dagger$ denotes the Moore–Penrose inverse. Now, the linear part of the dynamics can be modeled by setting $e(\cdot) = 0$ in Eq. (4), which is the approach that DMDc follows [6]. The full-dimensional snapshot $y(k)$ at time-step $k$ can then be approximated from Eq. (3).
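The least-squares solution in Eq. (6) amounts to a single pseudoinverse; a minimal sketch, with Ups standing in for the input snapshot matrix $\Upsilon$:

```python
import numpy as np

def dmdc(X, X_prime, Ups):
    """DMDc: solve [A  B] = X' [X; Upsilon]^dagger (Eq. (6))."""
    Omega = np.vstack([X, Ups])           # stacked data matrix [X; Upsilon]
    AB = X_prime @ np.linalg.pinv(Omega)  # Moore-Penrose pseudoinverse
    n_x = X.shape[0]
    return AB[:, :n_x], AB[:, n_x:]       # split into A and B
```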

2.1.3 Total Least-Squares Dynamic Mode Decomposition With Control.

In Ref. [9], total least-squares dynamic mode decomposition (tlsDMD) was proposed in order to account for the presence of measurement and process noise in the data. Following Ref. [9], we start by assuming that the snapshots (2a) and (2b) can be decomposed into a mean and a noise part as

$$ Y = \bar{Y} + E_Y, \qquad Y' = \bar{Y}' + E_{Y'} $$

where $\bar{Y}, \bar{Y}'$ are the mean snapshots, and $E_Y, E_{Y'}$ are the noise and modeling errors $e(\cdot)$ stacked similarly to Eqs. (2a) and (2b). After projection onto the POD modes, we get

$$ X = \bar{X} + E_X, \qquad X' = \bar{X}' + E_{X'} $$

According to Ref. [9], the least-squares minimization approach of Eq. (5) in DMD (and, consequently, DMDc) accounts only for the noise $E_{X'}$ in the plus-one time-step data $X'$, leading to a bias in the estimate of the dynamics that depends on $E_X$. Alternatively, one can use total least-squares DMD to account for noise in both matrices $X$ and $X'$. The approximation of the dynamics can be expressed as

$$ \bar{X}' = A \bar{X} + B \Upsilon \qquad (7) $$

and the error in both components can be minimized simultaneously by solving the least-squares minimization problem

$$ \min_{A,\, B,\, E_X,\, E_{X'}}\; \left\| \begin{bmatrix} E_X \\ E_{X'} \end{bmatrix} \right\|_F \quad \text{subject to Eq. (7)} \qquad (8) $$

Equation (7) can be reformulated as

$$ \begin{bmatrix} \bar{X} \\ \Upsilon \\ \bar{X}' \end{bmatrix} = \begin{bmatrix} I \\ [\, A \;\; B \,] \end{bmatrix} \begin{bmatrix} \bar{X} \\ \Upsilon \end{bmatrix} \qquad (9) $$

and the solution to Eq. (8) can be computed using the truncated SVD

$$ \begin{bmatrix} X \\ \Upsilon \\ X' \end{bmatrix} \approx \begin{bmatrix} U_1 \\ U_2 \end{bmatrix} \Sigma V^T $$

where only the first $n_x + n_u$ singular values are kept, with $U_1 \in \mathbb{R}^{(n_x + n_u) \times (n_x + n_u)}$ and $U_2 \in \mathbb{R}^{n_x \times (n_x + n_u)}$, leading to an unbiased estimate of $A$ and $B$

$$ [\, A \;\; B \,] = U_2 U_1^{-1} \qquad (10) $$

The above is an extension of tlsDMD to systems with control inputs (tlsDMDc).
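A sketch of this computation, under our reading of Eqs. (9) and (10), is:

```python
import numpy as np

def tls_dmdc(X, X_prime, Ups):
    """tlsDMDc: truncated SVD of [X; Upsilon; X'] (Eqs. (9) and (10))."""
    n_x, n_u = X.shape[0], Ups.shape[0]
    Z = np.vstack([X, Ups, X_prime])      # augmented data matrix
    U, _, _ = np.linalg.svd(Z, full_matrices=False)
    U_trunc = U[:, :n_x + n_u]            # keep first n_x + n_u singular vectors
    U1 = U_trunc[:n_x + n_u, :]           # block acting on [x; u]
    U2 = U_trunc[n_x + n_u:, :]           # block producing x'
    AB = U2 @ np.linalg.inv(U1)           # [A  B] = U2 U1^{-1} (Eq. (10))
    return AB[:, :n_x], AB[:, n_x:]
```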

2.2 Gaussian Process Regression.

Once we model the linear dynamics of the unknown system with DMDc or tlsDMDc and obtain the matrices $A$ and $B$, we can then learn the nonlinear and process noise terms $e(\cdot)$ of the dynamics (4) that DMDc (and tlsDMDc) fails to capture. With $A$ and $B$ known, we can estimate the linear approximation error of each snapshot as

$$ e\big(x^{(i)}, u^{(i)}, w^{(i)}\big) = x'^{(i)} - A\, x^{(i)} - B\, u^{(i)}, \quad i = 1, \dots, p \qquad (11) $$

where $x^{(i)}$, $x'^{(i)}$, and $u^{(i)}$ denote the $i$th columns of $X$, $X'$, and $\Upsilon$, respectively.

While DMDc and tlsDMDc assume that the error term $e(\cdot)$ in Eq. (4) is zero, here we attempt to model the error with GP regression trained on the data in Eq. (11). In particular, we seek to model the error term $e(x(k), u(k), w(k))$ in Eq. (4) with a nonparametric Gaussian process regression model as follows.

2.2.1 Exact Gaussian Process Inference.

We start by considering a single component of the vector function $e(\cdot)$ which, for clarity, we will also refer to as $e(\cdot)$. The input to this function at time-step $k$ is the concatenation of the state $x(k)$ and control input $u(k)$ at that time-step, i.e., $z(k) = [x^T(k)\;\; u^T(k)]^T \in \mathbb{R}^{n_z}$, with $n_z = n_x + n_u$. If we have noisy observations $\varepsilon_i$ of the unknown scalar-valued function $e(\cdot): \mathbb{R}^{n_z} \to \mathbb{R}$ at known inputs $z_i$, $Z = \{z_i\}_{i=1}^{p}$, we collect these observations in a vector $\varepsilon \in \mathbb{R}^p$.

Let the measurement likelihood $p(\varepsilon_i \mid e(z_i))$ be zero-mean Gaussian and let $e$ be the (unknown) vector containing the values of $e(\cdot)$ at the points $Z$. We introduce a Gaussian prior $e(z) \sim \mathcal{N}(m(z), k(z, z'))$, where $m(\cdot): \mathbb{R}^{n_z} \to \mathbb{R}$ is the mean function (typically chosen to be the zero function) and $k(\cdot, \cdot): \mathbb{R}^{n_z} \times \mathbb{R}^{n_z} \to \mathbb{R}$ is the kernel function (typically, a squared exponential) that measures the closeness between two input points and specifies the smoothness and continuity properties of the underlying unknown function $e(\cdot)$.

The prior over the entire vector $e$ can now be written as

$$ p(e) = \mathcal{N}\big(e \mid m(Z), k(Z, Z)\big) \qquad (12) $$

with the mean vector defined as $[m(Z)]_i = m(z_i)$ and the covariance matrix as $[k(Z, Z)]_{ij} = k(z_i, z_j)$.

The joint density of the vectors $\varepsilon$ (known) and $e$ (unknown) is

$$ p(\varepsilon, e \mid Z) = p(\varepsilon \mid e, Z)\, p(e \mid Z) \qquad (13) $$

With Gaussian likelihood $p(\varepsilon \mid e, Z) = \mathcal{N}(\varepsilon \mid e, \sigma_\varepsilon^2 I)$, the marginal likelihood

$$ p(\varepsilon \mid Z) = \int p(\varepsilon \mid e, Z)\, p(e \mid Z)\, \mathrm{d}e = \mathcal{N}\big(\varepsilon \mid m(Z), k(Z, Z) + \sigma_\varepsilon^2 I\big) \qquad (14) $$

is analytically computed, and the hyperparameters $\Theta = \{\theta_m, \theta_k, \sigma_\varepsilon\}$ that define the Gaussian process mean, kernel, and likelihood functions can be found by minimizing the negative log-likelihood of the training data

$$ \Theta^* = \operatorname*{arg\,min}_{\Theta}\; \big(-\log p(\varepsilon \mid Z, \Theta)\big) \qquad (15) $$

Prediction of $\varepsilon^*$ at a new state-input pair $z^*$ is done by conditioning on the training data

$$ p(\varepsilon^* \mid z^*, Z, \varepsilon) = \mathcal{N}(\varepsilon^* \mid \mu^*, \Sigma^*) $$

where

$$ \mu^* = m(z^*) + k(z^*, Z)\,\big[ k(Z, Z) + \sigma_\varepsilon^2 I \big]^{-1} \big( \varepsilon - m(Z) \big) $$
$$ \Sigma^* = k(z^*, z^*) - k(z^*, Z)\,\big[ k(Z, Z) + \sigma_\varepsilon^2 I \big]^{-1} k(Z, z^*) + \sigma_\varepsilon^2 $$
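A minimal zero-mean sketch of this conditioning step, with a squared exponential kernel and fixed (untrained) hyperparameters for brevity, might look as follows; all names are illustrative:

```python
import numpy as np

def sq_exp_kernel(Z1, Z2, ell=1.0, sf=1.0):
    """Squared exponential kernel k(z, z') = sf^2 exp(-||z - z'||^2 / (2 ell^2))."""
    d2 = ((Z1[:, None, :] - Z2[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_predict(Z, eps, Z_star, sigma_eps=0.1, ell=1.0, sf=1.0):
    """Posterior mean and variance of a scalar e(.) at test inputs Z_star."""
    K = sq_exp_kernel(Z, Z, ell, sf) + sigma_eps**2 * np.eye(len(Z))
    K_s = sq_exp_kernel(Z_star, Z, ell, sf)
    mean = K_s @ np.linalg.solve(K, eps)              # mu* (zero prior mean)
    v = np.linalg.solve(K, K_s.T)
    var = (sq_exp_kernel(Z_star, Z_star, ell, sf).diagonal()
           - (K_s * v.T).sum(axis=1) + sigma_eps**2)  # Sigma*, with noise term
    return mean, var
```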

2.2.2 Multiple Outputs.

So far, the error $e_i$ we have considered has been a scalar. In the general case, the error in Eq. (4) will be a vector $e^{(i)} \in \mathbb{R}^{n_x}$. We can train the hyperparameters of an exact GP using the estimated error snapshots in Eq. (11), defining the matrix $E$ whose $i$th row contains the function values $e(z_i)$. The latent functions are now $e_d(\cdot): \mathbb{R}^{n_z} \to \mathbb{R}$, for $d = 1, \dots, n_x$, and an independent exact GP is learned for each component of the error vector $e(\cdot)$ in Eq. (4). Alternatively, one could train a multitask GP regression model [37] to learn similarities in the outputs of the GP, or use scalable variational GP regression [25] to decouple the inference cost from the size of the training data.
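Under the independent-GP choice, this amounts to fitting one scalar GP per column of $E$, for instance by reusing the hypothetical gp_predict sketched in Sec. 2.2.1:

```python
import numpy as np

# One independent GP per error component e_d(.), d = 1, ..., n_x;
# E[:, d] holds the noisy observations of the d-th component at the inputs Z.
preds = [gp_predict(Z, E[:, d], Z_star) for d in range(E.shape[1])]
means = np.stack([m for m, _ in preds], axis=1)      # posterior means
variances = np.stack([v for _, v in preds], axis=1)  # posterior variances
```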

2.3 Dynamic Mode Decomposition With Gaussian Process Correction.

In the hybrid DMDc + GP method, the error term e(·) in Eq. (4) is modeled with GP regression. The training steps are:

  1. Perform model order reduction using POD (optional).

  2. Compute A and B matrices to capture the “average” linear dynamics in Eq. (4) with DMDc.

  3. Train an exact GP to learn the nonlinear and noisy terms e(·) in Eq. (4).

One could use either DMDc or tlsDMDc for learning the linear part of the dynamics. However, as demonstrated in the numerical experiments, both choices perform similarly, with DMDc + GP having a slight advantage over tlsDMDc + GP. The use of GP regression for correcting the DMDc predictions offers a number of advantages, such as flexibility and uncertainty awareness. In particular, if we evaluate the dynamics away from the training dataset, the DMDc model will take over, while the uncertainty of the GP inference will increase, indicating less confidence in the GP predictions. A one-step prediction of the hybrid model is sketched below.
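The following sketch, with illustrative names, combines the DMDc matrices with the per-component GP posterior means; gps would be a list of callables z -> (mean, var), e.g., partial applications of the hypothetical gp_predict from Sec. 2.2.1:

```python
import numpy as np

def dmdc_gp_step(x, u, A, B, gps):
    """One-step hybrid prediction: Eq. (4) with e(.) replaced by the GP mean."""
    z_star = np.concatenate([x, u])[None, :]             # test input z* = [x; u]
    e_mean = np.array([gp(z_star)[0][0] for gp in gps])  # per-component GP means
    return A @ x + B @ u + e_mean
```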

3 Numerical Experiments

3.1 Stuart–Landau Equation.

We start by demonstrating the proposed tlsDMDc and DMDcGP methods on the stochastic Stuart–Landau equation (considered to be a proxy for the flow oscillations behind a cylinder [38]), which is a nonlinear system with discrete-time dynamics in polar coordinates given by

$$ r(k+1) = r(k) + \big( \mu\, r(k) - r(k)^3 + u_r(k) \big)\, \mathrm{d}t + w_r(k) $$
$$ \theta(k+1) = \theta(k) + \big( \gamma - \beta\, r(k)^2 + u_\theta(k) \big)\, \mathrm{d}t + w_\theta(k) $$

where $r(k)$ is the radius and $\theta(k)$ the angle at time-step $k$, $u_r(k)$ and $u_\theta(k)$ are the control inputs, and $w_r(k), w_\theta(k) \sim \mathcal{N}(0, \sigma)$ are independent and identically distributed Gaussian noise terms. The parameters in this experiment are $\mathrm{d}t = 0.01$, $\mu = 0.1$, $\beta = 1$, and $\gamma = 1$, while $\sigma$ varies from 0 to 0.1.

3.1.1 Collecting Data.

The state is encoded as $x(k) = [r(k)\;\; \sin\theta(k)\;\; \cos\theta(k)]^T \in \mathbb{R}^3$ in order to capture the periodic behavior of the angle $\theta(k)$. We assume that we have perfect state measurements and that the only noise source is the process noise. We collect data for 13,000 time-steps and use the first 10,000 time-steps for training and the remaining 3000 for testing. For the training split of the data, the control inputs are periodic with increasing frequency, in order to excite the different modes of the system. For the test part, the inputs are again periodic with increasing, but lower, frequency. The train/test split is shown in Fig. 1. The GP uses a squared exponential kernel and is trained on every fifth data point, in order to keep the inference cost low.

Fig. 1: Stuart–Landau equation. Train/test data split of the control inputs and states.

3.1.2 Model Evaluation.

We train five different models (DMDc, tlsDMDc, DMDcGP, tlsDMDcGP, and GP-only) on the training split of the data and evaluate them on the test split. In order to test the predictive performance of the models on systems with noise, we evaluate each model in three different noise settings: $\sigma = 0$ (no noise), $\sigma = 0.05$ (low noise), and $\sigma = 0.1$ (high noise). Evaluation is performed as follows: first, we select 64 uniformly distributed states from the test split as initial conditions; then, we simulate the state dynamics forward for $N = 256$ time-steps; finally, we compute the average $L_2$ norm of the state prediction error as a percentage of the actual state, i.e.,

$$ e_{L_2} = \frac{1}{N} \sum_{k=1}^{N} \frac{\| \hat{x}(k) - x(k) \|_2}{\| x(k) \|_2} \times 100\% \qquad (16) $$

where $\hat{x}(k)$ denotes the model prediction.
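A direct implementation of this metric, under our reading of Eq. (16), is:

```python
import numpy as np

def l2_error_pct(X_pred, X_true):
    """Average relative L2 state prediction error over an N-step horizon, in %."""
    num = np.linalg.norm(X_pred - X_true, axis=1)   # ||x_hat(k) - x(k)||_2
    den = np.linalg.norm(X_true, axis=1)            # ||x(k)||_2
    return 100.0 * np.mean(num / den)
```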

The results are shown in Table 1. First, we notice that tlsDMDc performs better than DMDc at all noise levels. Second, we see a large improvement in the prediction errors when a GP is introduced, with DMDcGP performing slightly better than tlsDMDcGP at all noise levels. Third, the GP-only model (without a linear DMDc or tlsDMDc part) performs well in the presence of noise, with the caveat that when the GP approaches a region of the state and input space that is far from the training data, its predictions collapse to zero, throwing off the subsequent predictions (as seen in the zero-noise case, where the error is significantly larger). For demonstration purposes, we also experiment with a more challenging 3000 time-step prediction using tlsDMDc and DMDcGP, as shown in Fig. 2.

Fig. 2: Stuart–Landau equation. Long-term state predictions of 3000 time-steps on the test dataset. Exact: noisy line, predicted: smooth line. (a) tlsDMDc and (b) DMDcGP.
Table 1

Stuart–Landau equation: $e_{L_2}$ (%) for a prediction horizon of $N = 256$, averaged over 100 initial conditions. The minimum error per noise level is marked with an asterisk.

σ       DMDc    tlsDMDc    DMDcGP    tlsDMDcGP    GP
0       19.2    6.6        0.3*      0.3*         52.6
0.05    19.5    6.5        2.5*      2.6          13.5
0.1     19.8    6.6        5.1*      7.0          6.3

3.1.3 Model Predictive Control.

We further test the use of the learned models on a model predictive control task, where the mean state is required to follow a given trajectory $\{x_{des}(0), \dots, x_{des}(T)\}$ in the presence of process noise. We formulate the following optimal control problem:

$$ \min_{u_0, \dots, u_{N-1}}\; \sum_{k=0}^{N-1} \Big[ \big( x_k - x_{des}(k) \big)^T Q \big( x_k - x_{des}(k) \big) + u_k^T R\, u_k \Big] $$
$$ \text{s.t.} \quad x_{k+1} = (A + \Delta A_k)\, x_k + (B + \Delta B_k)\, u_k + d_k, \qquad -0.2 \le u_k \le 0.2 $$

where the terms $\Delta A_k, \Delta B_k$, and $d_k$ result from the linearization of the Gaussian process [30] at time-step $k$, the inputs are constrained to be between $-0.2$ and $0.2$, and the initial condition $x_0$ at each solution of the optimal control problem is an exact measurement of the state (alternatively, it could be a state estimate derived from Kalman filtering a noisy partial state measurement). The actuation and state costs are chosen to be $R = 10^{-3} I_{2 \times 2}$ and $Q = I_{3 \times 3}$, respectively, the receding horizon is $N = 50$, and the optimal control problem is solved at each time-step as a quadratic program with inequality constraints [39].
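For illustration, a condensed sketch of one receding-horizon step using the cvxpy modeling package (an assumption on our part; the paper solves the QP as in Ref. [39], and the names and the linearization inputs dA, dB, d are hypothetical) could read:

```python
import cvxpy as cp
import numpy as np

def mpc_step(x0, x_des, A, B, dA, dB, d, N=50, u_max=0.2):
    """Solve the trajectory-tracking QP over the horizon; return the first input."""
    n_x, n_u = A.shape[0], B.shape[1]
    Q, R = np.eye(n_x), 1e-3 * np.eye(n_u)             # state and actuation costs
    x = cp.Variable((n_x, N + 1))
    u = cp.Variable((n_u, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - x_des[k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == (A + dA[k]) @ x[:, k]
                 + (B + dB[k]) @ u[:, k] + d[k],       # GP-linearized dynamics
                 cp.abs(u[:, k]) <= u_max]             # input constraints
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u[:, 0].value                               # apply only the first input
```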

We run the model predictive controller for 2000 time-steps, with the desired trajectory shown in Fig. 3. The control task is executed with high noise ($\sigma = 0.1$) in order to demonstrate the robustness of the learned DMDcGP model. Using the tlsDMDc model leads to large tracking errors (Fig. 3(a)), whereas with DMDcGP the state $x(k)$ settles close to the desired trajectory after a few time-steps (Fig. 3(b)).

Fig. 3: Stuart–Landau equation. Trajectory tracking model predictive control with noise $\sigma = 0.1$, using the learned tlsDMDc and DMDcGP models. Rows 1 and 2: optimal input; rows 3 to 6: state $x(k)$ and desired state $x_{des}$. (a) tlsDMDc and (b) DMDcGP.

3.2 Near-Wall Jet in a Turbulent Boundary Layer.

Next, we test the proposed tlsDMDc and DMDcGP methods on a more challenging model order reduction task for a high-dimensional system. In particular, we want to model the wall-normal velocity field induced by a jet in a turbulent boundary layer. Such a model is useful for model-based turbulent flow control tasks, where the cost of simulations is prohibitively large for real-time control (e.g., 36,000 CPU hours to collect the present dataset) [40,41]. We perform large eddy simulations of a turbulent boundary layer at a Reynolds number based on the momentum thickness of about $Re_\theta = 2000$. The near-wall jet is modeled as a body force with a Gaussian distribution in space, a 45 deg pitch angle toward the wall and in the direction of the flow, and a magnitude that is controlled by a scalar control input, $u(k) \in [0, 1]$. We perform a set of ten different large eddy simulations with pulsed inputs, as shown in Fig. 4, and ensemble-average the data collected from these simulations in order to minimize the effect of the background turbulence on the data and focus on the mean effect that the jet has on the flow. The high-dimensional state is the wall-normal velocity on a grid of size $61 \times 16 \times 15$ around the force field, yielding a high-dimensional state $y$ of size $n_y$ = 14,640.

Fig. 4: Jet in a turbulent boundary layer. Control input for generating the training and test datasets.

3.2.1 Training.

First, we reduce the dimensionality of the data by projecting the high-dimensional states $y$ onto the first $n_x = 5$ POD modes, which have been observed to capture the main flow structures in the data. Then, we model the POD mode amplitudes using both tlsDMDc and DMDcGP, computed on the training split shown in Fig. 4.

3.2.2 Prediction.

The task here is to predict the flowfield over the next 250 time-steps for a pulse input that lasts a different amount of time (800 time-steps) than the pulses used in the training split (400 and 1200 time-steps). This problem is particularly challenging since the dynamics are transient. Looking at the POD mode amplitudes predicted by the tlsDMDc model in Fig. 5, we notice that although the predicted amplitudes for the dominant modes closely track their corresponding best POD approximation, the less dominant modes are not well approximated. The latter leads to tlsDMDc underestimating both the downwash (flow toward the wall) and the upwash (flow away from the wall) downstream of the domain (Fig. 6(a)). On the contrary, DMDcGP not only approximates the amplitudes of these weaker modes better, but its predicted flowfield (Fig. 6(b)) closely matches both the best POD approximation (Fig. 6(c)) and the exact (ensemble-averaged) flowfield (Fig. 6(d)). The mean state prediction errors (in the $L_2$-norm sense) are given in Table 2.

Fig. 5: Jet in a turbulent boundary layer. Reduced-order state predictions versus the projection of the exact flowfield onto the POD modes.

Fig. 6: Jet in a turbulent boundary layer. Wall-normal velocity field induced by the jet after a pulse input. Isosurfaces of flow moving toward (inner isosurfaces) and away (outer isosurfaces) from the wall. The DMDcGP approach follows the evolution of the average jet pulse more closely than tlsDMDc. (a) tlsDMDc, (b) DMDcGP, (c) best POD approximation, and (d) exact.
Table 2

Jet in a turbulent boundary layer: $e_{L_2}$ (%) for a prediction horizon of $N = 250$ in the test split. The minimum error is marked with an asterisk.

Model          DMDc    tlsDMDc    DMDcGP    tlsDMDcGP    GP
$e_{L_2}$ (%)  32.4    26.7       6.3*      8.1          8.5

4 Conclusion

We presented an extension of the noise-aware total least-squares dynamic mode decomposition to systems with control inputs, as well as a hybrid approach that combines dynamic mode decomposition with control and Gaussian process regression to learn reduced-order models of high-dimensional stochastic nonlinear systems. Both approaches were shown to yield improved results in prediction and control tasks over existing methods. Future work will leverage these hybrid models in flow control applications, such as those in Refs. [40] and [41].

Acknowledgment

The authors would like to acknowledge support by the National Science Foundation Award Nos. 2129494 and 2052811 and the Texas Advanced Computing Center. Alexandros Tsolovikos acknowledges support by the A. Onassis Foundation Scholarship.

Funding Data

  • Division of Chemical, Bioengineering, Environmental, and Transport Systems (Award No. 2129494; Funder ID: 10.13039/100000146).

  • Division of Civil, Mechanical and Manufacturing Innovation (Award No. 2052811; Funder ID: 10.13039/100000147).

References

1. Lumley, J. L., 1970, Stochastic Tools in Turbulence, Elsevier Science, New York.
2. Sirovich, L., 1987, "Turbulence and the Dynamics of Coherent Structures. I. Coherent Structures," Q. Appl. Math., 45(3), pp. 561–571. 10.1090/qam/910462
3. Willcox, K., and Peraire, J., 2002, "Balanced Model Reduction Via the Proper Orthogonal Decomposition," AIAA J., 40(11), pp. 2323–2330. 10.2514/2.1570
4. Rowley, C. W., Mezić, I., Bagheri, S., Schlatter, P., and Henningson, D. S., 2009, "Spectral Analysis of Nonlinear Flows," J. Fluid Mech., 641, pp. 115–127. 10.1017/S0022112009992059
5. Schmid, P. J., 2010, "Dynamic Mode Decomposition of Numerical and Experimental Data," J. Fluid Mech., 656, pp. 5–28. 10.1017/S0022112010001217
6. Proctor, J. L., Brunton, S. L., and Kutz, J. N., 2016, "Dynamic Mode Decomposition With Control," SIAM J. Appl. Dyn. Syst., 15(1), pp. 142–161. 10.1137/15M1013857
7. Tsolovikos, A., Bakolas, E., Suryanarayanan, S., and Goldstein, D., 2021, "Estimation and Control of Fluid Flows Using Sparsity-Promoting Dynamic Mode Decomposition," IEEE Control Syst. Lett., 5(4), pp. 1145–1150. 10.1109/LCSYS.2020.3015776
8. Hemati, M. S., Rowley, C. W., Deem, E. A., and Cattafesta, L. N., 2017, "De-Biasing the Dynamic Mode Decomposition for Applied Koopman Spectral Analysis of Noisy Datasets," Theor. Comput. Fluid Dyn., 31(4), pp. 349–368. 10.1007/s00162-017-0432-2
9. Dawson, S., Hemati, M. S., Williams, M. O., and Rowley, C. W., 2016, "Characterizing and Correcting for the Effect of Sensor Noise in the Dynamic Mode Decomposition," Exp. Fluids, 57(3), pp. 1–19. 10.1007/s00348-016-2127-7
10. Jovanović, M. R., Schmid, P. J., and Nichols, J. W., 2014, "Sparsity-Promoting Dynamic Mode Decomposition," Phys. Fluids, 26(2), p. 024103. 10.1063/1.4863670
11. Mezić, I., 2013, "Analysis of Fluid Flows Via Spectral Properties of the Koopman Operator," Annu. Rev. Fluid Mech., 45, pp. 357–378. 10.1146/annurev-fluid-011212-140652
12. Korda, M., and Mezić, I., 2018, "Linear Predictors for Nonlinear Dynamical Systems: Koopman Operator Meets Model Predictive Control," Automatica, 93, pp. 149–160. 10.1016/j.automatica.2018.03.046
13. Abraham, I., and Murphey, T. D., 2019, "Active Learning of Dynamics for Data-Driven Control Using Koopman Operators," IEEE Trans. Rob., 35(5), pp. 1071–1083. 10.1109/TRO.2019.2923880
14. Williams, M. O., Hemati, M. S., Dawson, S. T., Kevrekidis, I. G., and Rowley, C. W., 2016, "Extending Data-Driven Koopman Analysis to Actuated Systems," IFAC-PapersOnLine, 49(18), pp. 704–709. 10.1016/j.ifacol.2016.10.248
15. Li, Q., Dietrich, F., Bollt, E. M., and Kevrekidis, I. G., 2017, "Extended Dynamic Mode Decomposition With Dictionary Learning: A Data-Driven Adaptive Spectral Decomposition of the Koopman Operator," Chaos: Interdiscip. J. Nonlinear Sci., 27(10), p. 103111. 10.1063/1.4993854
16. Williams, M. O., Rowley, C. W., and Kevrekidis, I. G., 2015, "A Kernel-Based Approach to Data-Driven Koopman Spectral Analysis," J. Comput. Dyn., 2(2), pp. 247–252. 10.3934/jcd.2015005
17. Lusch, B., Kutz, J. N., and Brunton, S. L., 2018, "Deep Learning for Universal Linear Embeddings of Nonlinear Dynamics," Nat. Commun., 9(1), pp. 1–10. 10.1038/s41467-018-07210-0
18. Yeung, E., Kundu, S., and Hodas, N., 2019, "Learning Deep Neural Network Representations for Koopman Operators of Nonlinear Dynamical Systems," 2019 American Control Conference, Philadelphia, PA, July 10–12, pp. 4832–4839. 10.23919/ACC.2019.8815339
19. Brunton, S. L., Proctor, J. L., and Kutz, J. N., 2016, "Discovering Governing Equations From Data by Sparse Identification of Nonlinear Dynamical Systems," Proc. Natl. Acad. Sci., 113(15), pp. 3932–3937. 10.1073/pnas.1517384113
20. Benner, P., Goyal, P., Kramer, B., Peherstorfer, B., and Willcox, K., 2020, "Operator Inference for Non-Intrusive Model Reduction of Systems With Non-Polynomial Nonlinear Terms," Comput. Methods Appl. Mech. Eng., 372, p. 113433. 10.1016/j.cma.2020.113433
21. Rasmussen, C. E., and Williams, C. K. I., 2006, Gaussian Processes for Machine Learning, The MIT Press, Cambridge, MA.
22. Quinonero-Candela, J., and Rasmussen, C. E., 2005, "A Unifying View of Sparse Approximate Gaussian Process Regression," J. Mach. Learn. Res., 6, pp. 1939–1959. 10.5555/1046920.1194909
23. Titsias, M., 2009, "Variational Learning of Inducing Variables in Sparse Gaussian Processes," Artificial Intelligence and Statistics, Clearwater Beach, FL, Apr. 16–18, pp. 567–574. https://proceedings.mlr.press/v5/titsias09a.html
24. Hoffman, M. D., Blei, D. M., Wang, C., and Paisley, J., 2013, "Stochastic Variational Inference," J. Mach. Learn. Res., 14(5), pp. 1303–1347. 10.48550/arXiv.1206.7051
25. Hensman, J., Fusi, N., and Lawrence, N. D., 2013, "Gaussian Processes for Big Data," Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, Bellevue, WA, Aug. 11–15, pp. 282–290. https://auai.org/uai2013/prints/papers/244.pdf
26. Grimes, D. B., Chalodhorn, R., and Rao, R. P., 2006, "Dynamic Imitation in a Humanoid Robot Through Nonparametric Probabilistic Inference," Robotics: Science and Systems, Philadelphia, PA, Aug. 16–19, pp. 199–206. 10.15607/RSS.2006.II.026
27. Ko, J., Klein, D. J., Fox, D., and Haehnel, D., 2007, "Gaussian Processes and Reinforcement Learning for Identification and Control of an Autonomous Blimp," Proceedings 2007 IEEE International Conference on Robotics and Automation, Rome, Italy, Apr. 10–14, pp. 742–747. 10.1109/ROBOT.2007.363075
28. Pan, Y., and Theodorou, E. A., 2015, "Data-Driven Differential Dynamic Programming Using Gaussian Processes," 2015 American Control Conference (ACC), Chicago, IL, July 1–3, pp. 4467–4472.
29. Hewing, L., Kabzan, J., and Zeilinger, M. N., 2020, "Cautious Model Predictive Control Using Gaussian Process Regression," IEEE Trans. Control Syst. Technol., 28(6), pp. 2736–2743. 10.1109/TCST.2019.2949757
30. Tsolovikos, A., and Bakolas, E., 2021, "Cautious Nonlinear Covariance Steering Using Variational Gaussian Process Predictive Models," IFAC-PapersOnLine, 54(20), pp. 59–64. 10.1016/j.ifacol.2021.11.153
31. Xiao, M., Breitkopf, P., Filomeno Coelho, R., Knopf-Lenoir, C., Sidorkiewicz, M., and Villon, P., 2010, "Model Reduction by CPOD and Kriging," Struct. Multidiscip. Optim., 41(4), pp. 555–574. 10.1007/s00158-009-0434-9
32. Guo, M., and Hesthaven, J. S., 2018, "Reduced Order Modeling for Nonlinear Structural Analysis Using Gaussian Process Regression," Comput. Methods Appl. Mech. Eng., 341, pp. 807–826. 10.1016/j.cma.2018.07.017
33. Chang, Y.-H., Zhang, L., Wang, X., Yeh, S.-T., Mak, S., Sung, C.-L., Jeff Wu, C., and Yang, V., 2019, "Kernel-Smoothed Proper Orthogonal Decomposition-Based Emulation for Spatiotemporally Evolving Flow Dynamics Prediction," AIAA J., 57(12), pp. 5269–5280. 10.2514/1.J057803
34. Ortali, G., Demo, N., and Rozza, G., 2022, "A Gaussian Process Regression Approach Within a Data-Driven POD Framework for Engineering Problems in Fluid Dynamics," Math. Eng., 4(3), pp. 1–16. 10.3934/mine.2022021
35. Masuda, A., Susuki, Y., Martínez-Ramón, M., Mammoli, A., and Ishigame, A., 2019, "Application of Gaussian Process Regression to Koopman Mode Decomposition for Noisy Dynamic Data," e-print arXiv:1911.01143. 10.48550/arXiv.1911.01143
36. Maulik, R., Botsas, T., Ramachandra, N., Mason, L. R., and Pan, I., 2021, "Latent-Space Time Evolution of Non-Intrusive Reduced-Order Models Using Gaussian Process Emulation," Phys. D: Nonlinear Phenom., 416, p. 132797. 10.1016/j.physd.2020.132797
37. Bonilla, E. V., Chai, K., and Williams, C., 2007, "Multi-Task Gaussian Process Prediction," Advances in Neural Information Processing Systems, Vancouver, BC, Dec. 3–4. https://papers.nips.cc/paper_files/paper/2007/file/66368270ffd51418ec58bd793f2d9b1b-Paper.pdf
38. Noack, B. R., Afanasiev, K., Morzyński, M., Tadmor, G., and Thiele, F., 2003, "A Hierarchy of Low-Dimensional Models for the Transient and Post-Transient Cylinder Wake," J. Fluid Mech., 497, pp. 335–363. 10.1017/S0022112003006694
39. Borrelli, F., Bemporad, A., and Morari, M., 2017, Predictive Control for Linear and Hybrid Systems, Cambridge University Press, Cambridge, UK.
40. Tsolovikos, A., Suryanarayanan, S., Bakolas, E., and Goldstein, D., 2021, "Model Predictive Control of Material Volumes With Application to Vortical Structures," AIAA J., 59(10), pp. 4057–4070. 10.2514/1.J060413
41. Tsolovikos, A., Jariwala, A., Suryanarayanan, S., Bakolas, E., and Goldstein, D., 2023, "Separation Delay in Turbulent Boundary Layers Via Model Predictive Control of Large-Scale Motions," Phys. Fluids, 35(11), p. 115118. 10.1063/5.0169138