Abstract

A probabilistic approach is needed to address systems with uncertainties arising in natural processes and engineering applications. For computational convenience, however, the stochastic effects are often ignored, and numerical integration routines for stochastic dynamical systems remain rudimentary compared to their deterministic counterparts. In this work, the authors present a method to carry out stochastic simulations by using methods developed for the deterministic case. Thereby, the well-developed numerical integration routines for deterministic systems become available for studies of stochastic systems. The convergence of the developed method is shown and the method's performance is demonstrated through illustrative examples.

1 Introduction

Natural processes are inevitably uncertain, and systems in engineering commonly have uncertainties. To capture this stochasticity, one needs a probabilistic approach. Thus, stochastic models are widely used in physics (e.g., Gardiner [1]), engineering (e.g., Wirsching et al. [2]), and biology (e.g., Wilkinson [3]). Uncertain parameters are classically related to manufacturing imperfections, finite measurement resolution, or incomplete data, whereas random loads arise in complex environments due to wind, ocean waves, seismic excitations, or road roughness. However, for computational convenience, stochasticity is often ignored in studies of the associated systems. Thus, a variety of methods exist to simulate deterministic systems, while the same is not true for carrying out simulations of stochastic systems. In particular, for high-dimensional systems, such methods are still in their infancy.

The response of stochastic dynamical systems is fully prescribed by the time evolution of the probability density function (PDF). If the stochasticity is generated by a Wiener process, then the PDF is governed by the Fokker–Planck equation, a partial differential equation (e.g., Risken [4]). Exact solutions have been obtained by, for example, Soize [5], Caughey [6], and Lin and Cai [7]. These results require a balance between dissipative terms and white noise perturbations. This balance, however, is hardly met in realistic systems (Lin and Cai [7]).

In the absence of exact solutions to the Fokker–Planck equation, it can be solved with numerical methods. In principle, space and/or time can be discretized to obtain a finite-dimensional dynamical system. Examples include methods based on finite differences (e.g., Pichler et al. [8]), the path integral method (e.g., Wehner and Wolfer [9] or Yu et al. [10]), and a finite element discretization formulated by Spencer and Bergman [11]. However, Masud and Bergman [12] point out that the computational burden is so significant that the applications of those methods are limited to two- or three-dimensional systems. In fact, the memory requirements of such methods generally grow exponentially with the dimension of the considered dynamical system.

The most common and effective approach to approximating solutions of stochastic differential equations is the use of discrete-time approximations (cf. Kloeden and Platen [13]). With this approach, the computational costs and memory requirements grow only polynomially with the number of dimensions. However, as remarked by Rüemelin [14], numerical time integration schemes derived to approximate solutions of deterministic ordinary differential equations do not carry over to the stochastic setting in a straightforward manner. An underlying reason is that additional terms arise when solutions to stochastic differential equations are expanded in a Taylor series. These terms lead to the so-called Wagner-Platen formula (cf. Wagner and Platen [15] or Milstein [16]). Therefore, numerical integration of stochastic differential equations requires fundamentally different algorithms.

Schurz, through his chapter in Ref. [17], Kloeden and Platen [13], and the introductory treatment by Platen [18] offer extensive lists of available integration schemes. The most prominent and basic ones are the Euler–Maruyama scheme and Milstein's method (e.g., Milstein Ref. [16]). Higher order schemes include stochastic Runge–Kutta schemes, in which a predictor-corrector approach similar to the deterministic equivalent is employed to enhance numerical stability (Schurz in Ref. [17]). Moreover, Boyce [19] developed a scheme with adaptive step size control. Milstein and Tret'yakov [20,21] present integration methods for stochastic differential equations with small noise terms. Such a setting is especially relevant for engineering applications, wherein the stochastic part can often be considered as small or weak. However, the computational expense of such higher-order schemes can be significant (e.g., Schurz in Ref. [17]), so much so that these costs outweigh their benefits (cf. Mannella [22]).

In terms of the current state of the art, the available numerical methods to solve the Fokker–Plank equation suffer from the curse of dimensionality, whereas the discrete-time approximations of stochastic differential equations are often limited to the stochastic equivalent of the forward Euler scheme. However, in many applications, accurate and effective approximations of solutions to the deterministic part already require intricate numerical integration routines employing, for example, adaptive step-size control (cf. Gear [23]), predictor-corrector schemes (cf. de-Jalon and Bayo [24]), and customized methods such as Newmark's method for structural dynamics (cf. Géradin and Rixen [25]). Unfortunately, these well-developed routines are not applicable in a stochastic setting.

Here, the authors propose an algorithm to extend any deterministic numerical integrator to a stochastic setting. To that end, they proceed as follows. In many applications, especially in engineering, the noise intensity is small and one can perform a parameter expansion (cf. Sec. 3). In this setting, the authors formulate a small noise integrator (SNI), which relies on an appropriate resampling of distributions along sample paths (cf. Sec. 4.1). Within this algorithm, any deterministic integrator can be used to approximate the sample paths of stochastic dynamical systems. After proving the strong convergence of the SNI, the algorithm's performance is demonstrated by considering various nonlinear mechanical systems with up to one hundred degrees-of-freedom in Sec. 4.2. Furthermore, in Sec. 5.1, a Gaussian kernel is presented to rigorously deduce a smooth distribution from a finite number of samples, and the corresponding performance is shown for nonlinear mechanical systems (cf. Sec. 5.2).

2 System Setting

The authors consider the general dynamical system
$\dot{x} = f(x,t) + \sigma\, b(x,t), \qquad 0 < \sigma \ll 1, \quad x \in \mathbb{R}^N$
(1)
where the vector $x$ denotes the states and the integer $N$ is the number of degrees-of-freedom. The parameter $\sigma$ in Eq. (1) is small and scales the stochastic excitation defined in differential form as
$db(x,t) = B(x,t)\, dW = \sum_{m=1}^{M} B_m(x,t)\, dW_m, \qquad B: \mathbb{R}^N \times \mathbb{R} \to \mathbb{R}^{N \times M}$
(2)

where $W_1, W_2, \ldots, W_M$ denote uncorrelated one-dimensional standard Wiener processes (e.g., Kloeden and Platen [13]). The $m$-th column of the matrix $B(x,t)$ prescribes the direction in which the $m$-th Wiener process acts. The directions can generally depend on the states, as indicated in Eq. (2). Such multiplicative noise commonly arises, for example, in micro-electromechanical devices (MEMS, cf. Vig and Yonkee [26]). The parameter $\sigma$ can be considered representative of small uncertainties that arise in various settings, including high-precision manufacturing, measurements, and environmental modeling.

The time dependence of the deterministic part f(x,t) and the stochastic part B(x,t) is general. Thus, system (1) includes autonomous (i.e., time-invariant) systems as well as systems with time-periodic, quasi-periodic, and even more general, possibly aperiodic, time dependence. Moreover, it is assumed that the matrix B(x,t) and the vector f(x,t) are at least twice continuously differentiable with respect to the spatial coordinate x.

This study is motivated by, but not limited to, the general nonlinear mechanical system
$M\ddot{q} + C\dot{q} + Kq + S(q,\dot{q},t) = \sigma\, g(q,\dot{q},t), \qquad 0 < \sigma \ll 1, \quad q \in \mathbb{R}^{N^*}, \qquad dg(q,\dot{q},t) = \sum_{m=1}^{M} g_m(q,\dot{q},t)\, dW_m$
(3)
where $q$ is the vector of coordinates and the integer $N^*$ denotes the number of degrees-of-freedom. The mass matrix $M \in \mathbb{R}^{N^* \times N^*}$ is assumed to be invertible, whereas the damping matrix $C \in \mathbb{R}^{N^* \times N^*}$ and the stiffness matrix $K \in \mathbb{R}^{N^* \times N^*}$ are assumed to be general matrices. In many engineering applications, however, these matrices are assumed to be positive definite (cf. Géradin and Rixen [25] or Balachandran and Magrab [27]). The vector $S(q,\dot{q},t)$ includes nonlinear terms depending on positions and velocities as well as time-varying terms such as external and/or parametric excitation. Introducing the state-space coordinate $x := [q, \dot{q}]^\top$, the notation
$A := \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix} \in \mathbb{R}^{2N^* \times 2N^*}, \qquad G(q,\dot{q},t) := \begin{bmatrix} 0 \\ -M^{-1} S(q,\dot{q},t) \end{bmatrix} \in \mathbb{R}^{2N^*}$
(4)
and defining the columns of the matrix B as
$B_m(q,\dot{q},t) := \begin{bmatrix} 0 \\ M^{-1} g_m(q,\dot{q},t) \end{bmatrix}, \qquad m = 1, \ldots, M$
(5)

the mechanical system (3) can be reformulated in the form of Eq. (1) with f(x,t)=Ax+G(x,t).
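The assembly of the state-space quantities in Eqs. (4) and (5) can be sketched in a few lines. The following is a minimal numpy sketch, assuming a single-oscillator example with $m = 1$, $c = 0.02$, $k = 1$; the function names are illustrative choices of this sketch and do not appear in the paper.

```python
import numpy as np

def state_space_matrices(M, C, K):
    """Assemble the matrix A of Eq. (4) so that x = [q, q_dot]
    satisfies the first-order form of Eq. (1)."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ C]])
    return A, Minv

def noise_column(Minv, g_m):
    """Assemble one noise column B_m = [0, M^{-1} g_m] of Eq. (5)."""
    return np.concatenate([np.zeros_like(g_m), Minv @ g_m])

# Single oscillator: m = 1, c = 0.02, k = 1, unit noise on the velocity
A, Minv = state_space_matrices(np.eye(1), np.array([[0.02]]),
                               np.array([[1.0]]))
B1 = noise_column(Minv, np.array([1.0]))    # B1 = [0, 1]
```

For larger arrays of oscillators, the same two calls apply unchanged once $M$, $C$, and $K$ are assembled.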

As customary with the treatment of stochastic differential equations, for the forthcoming development, the stochastic dynamical system is most conveniently formulated as the differential equivalent of system (1); that is
$dx = f(x,t)\, dt + \sigma B(x,t)\, dW, \qquad x(t = t_0) = x_0, \quad 0 < \sigma \ll 1$
(6)

where x0 denotes a deterministic initial condition.

3 Small Noise Expansion

Similar to approximate solutions obtained through perturbation analysis in the deterministic case (e.g., Verhulst [28] or Nayfeh [29]), approximate solutions to system (6) can be obtained through an expansion in the small parameter σ. The convergence of such a series can be guaranteed by the following result due to Blagoveshchenskii [30]:

Theorem 3.1. Assume that $f(x,t)$ and $B(x,t)$ and their partial derivatives with respect to the coordinate $x$ up to order two are bounded and Lipschitz continuous for $t_0 < t < t_1$. Then, there exists $\sigma_0 > 0$ such that
$x(t) = x_0(t) + \sigma x_1(t) + \sigma^2 r(t,\sigma), \qquad \text{for all } 0 \le \sigma \le \sigma_0$
(7)
holds, whereby the remainder $r(t,\sigma)$ is bounded in the mean-square sense; that is
$\mathbb{E}\Big[\sup_{0 \le \sigma \le \sigma_0,\; t_0 < t < t_1} |r(t,\sigma)|^2\Big] \le K < \infty$
(8)

Proof. This is a restatement of a theorem by Blagoveshchenskii [30] for the current setting of the paper. Related versions have also been stated by Freidlin and Wentzell [31] and Kaszás and Haller [32]. ◻

Remark 3.1. Polynomial nonlinearities appearing in common nonlinear oscillator prototypes such as Duffing's equation (cf. Kovacic and Brennan [33]) or the van der Pol oscillator (e.g., Guckenheimer and Holmes [34]) are neither bounded nor globally Lipschitz continuous. These nonlinearities generally grow unbounded for infinite states. Hence, Theorem 3.1 is not applicable in those cases. On the other hand, realistic systems do not allow for arbitrarily large states. Thus, Eq. (1) is necessarily only valid for states inside a bounded domain; that is, the model's domain of validity. Thus, for example, one can follow an argument of Breunung [35] and modify Eq. (1) outside its domain of validity such that the requirements of Theorem 3.1 are met.

After substituting the expansion (7) into Eq. (6) and equating terms of equal order in $\sigma$, the result is the following hierarchy of equations for the first two orders
$\mathcal{O}(1): \quad dx_0 = f(x_0,t)\, dt, \qquad x_0(t_0) = x_0$
(9)
$\mathcal{O}(\sigma): \quad dx_1 = \nabla_x f(x_0,t)\, x_1\, dt + B(x_0,t)\, dW, \qquad x_1(t_0) = 0$
(10)
The zeroth-order Eq. (9) is simply the deterministic limit of Eq. (6) ($\sigma \to 0$). It can be solved or approximated by standard methods available for deterministic systems, such as numerical time integration. Moreover, the stochastic process $x_0 + \sigma x_1$ is Gaussian (cf. Blagoveshchenskii [30] or Freidlin and Wentzell [31]) and the linear stochastic differential Eq. (10) can be solved in closed form. To this end, the flow map, that is, the map taking the initial condition $x_0$ at time $t = t_0$ to its position at time $t$, is denoted by $F_{t_0}^{t}(x_0)$. The gradient of $F_{t_0}^{t}(x_0)$ with respect to the initial condition is denoted by $DF_{t_0}^{t}(x_0)$. The linearized flow map $DF_{t_0}^{t}(x_0)$ is also the fundamental solution of the deterministic part of the first-order Eq. (10), i.e., it satisfies
$\frac{d}{dt} DF_{t_0}^{t}(x_0) = \nabla_x f(x_0(t), t)\, DF_{t_0}^{t}(x_0), \qquad DF_{t_0}^{t_0}(x_0) = I$
(11)
With this notation, the first two moments of the stochastic process (10) are given by the explicit formulae
$\mathbb{E}[x_1(t)] = 0, \qquad \mathbb{E}\big[x_1(t)\, x_1(t)^\top\big] = \int_{t_0}^{t} DF_{t_0}^{s}(x_0)\, B(x_0,s)\, \big[DF_{t_0}^{s}(x_0)\, B(x_0,s)\big]^\top\, ds =: \Sigma(t; x_0, t_0)$
(12)

which have been obtained by van Kampen [36]. Caughey [6] and Risken [4] present solutions for the special case of a constant Jacobian $\nabla_x f(x_0,t) = A_1$ in Eq. (10).

In the following, the probability density of the stochastic process (10) is obtained. The matrix $\Sigma(t; x_0, t_0)$ defined in Eq. (12) is positive semidefinite. Thus, it admits the decomposition $\Sigma(t; x_0, t_0) = U \Lambda U^\top$, where the matrix $U$ is orthogonal and the matrix $\Lambda$ is diagonal, containing the $L \le N$ positive eigenvalues of $\Sigma(t; x_0, t_0)$. The pseudo-inverse of $\Lambda$ is defined elementwise by $\tilde{\Lambda}_{jl}^{-1} := \delta_{jl} / \Lambda_{ll}$ for $1 \le j, l \le L$ and $\tilde{\Lambda}_{jl}^{-1} := 0$ otherwise. With this notation, the pseudo-inverse of $\Sigma(t; x_0, t_0)$ is given by
$\tilde{\Sigma}(t; x_0, t_0)^{-1} := U \tilde{\Lambda}^{-1} U^\top$
(13)
Moreover, the pseudo-determinant of $\Sigma(t; x_0, t_0)$ is defined as
$|\Sigma(t; x_0, t_0)| := \begin{cases} \Lambda_{11} \Lambda_{22} \cdots \Lambda_{LL}, & L \ge 1 \\ 1, & \text{otherwise} \end{cases}$
(14)
With the definition of the pseudo-inverse (13) and determinant (14), the time varying probability density of the stochastic process (10) is given by
$p_1(x, t; x_0, t_0) = \frac{1}{\sqrt{(2\pi)^L\, |\Sigma(t; x_0, t_0)|}} \exp\left(-\frac{1}{2}\, x^\top \tilde{\Sigma}(t; x_0, t_0)^{-1}\, x\right)$
(15)

which generally depends on the zeroth-order solution $x_0$.
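The pseudo-inverse (13) and pseudo-determinant (14) follow directly from an eigendecomposition. The following is a minimal numpy sketch under the stated assumption that $\Sigma$ is symmetric positive semidefinite; the function name and tolerance are illustrative choices of this sketch.

```python
import numpy as np

def pseudo_inverse_and_pdet(Sigma, tol=1e-12):
    """Pseudo-inverse (13) and pseudo-determinant (14) of a symmetric
    positive-semidefinite matrix via Sigma = U diag(lam) U^T."""
    lam, U = np.linalg.eigh(Sigma)
    pos = lam > tol                       # the L positive eigenvalues
    lam_inv = np.zeros_like(lam)
    lam_inv[pos] = 1.0 / lam[pos]         # invert only positive eigenvalues
    Sigma_pinv = U @ np.diag(lam_inv) @ U.T
    pdet = float(np.prod(lam[pos])) if pos.any() else 1.0
    return Sigma_pinv, pdet

# Rank-deficient example: variance only along the first coordinate (L = 1)
Sigma = np.diag([2.0, 0.0])
Sigma_pinv, pdet = pseudo_inverse_and_pdet(Sigma)   # pdet = 2
```

The rank-deficient example corresponds to a degenerate Gaussian (15) concentrated on a one-dimensional subspace.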

In summary, the solutions to system (6) can be approximated as
$x(t) = F_{t_0}^{t}(x_0) + \sigma x_1(t; x_0, t_0) + \mathcal{O}(\sigma^2), \qquad x_1(t; x_0, t_0) \sim p_1(x, t; x_0, t_0)$
(16)
where the symbol ∼ indicates that the random variable x1(t;x0,t0) has the distribution p1(x,t;x0,t0) (cf. Eq. (15)). Moreover, the probability density function of x0+σx1; that is, the first order approximation of solutions to system (6), is given by
$p_\sigma(x, t; x_0, t_0) = \frac{1}{\sigma^L \sqrt{(2\pi)^L\, |\Sigma(t; x_0, t_0)|}} \exp\left(-\frac{1}{2\sigma^2}\big(x - F_{t_0}^{t}(x_0)\big)^\top \tilde{\Sigma}(t; x_0, t_0)^{-1} \big(x - F_{t_0}^{t}(x_0)\big)\right)$
(17)

Thus, the PDF can be approximated by a Gaussian distribution, whereby the mean is determined by the solution of the deterministic limit $F_{t_0}^{t}(x_0)$ and the variance is given in Eq. (12). The authors emphasize that both the mean and the variance are defined through purely deterministic equations and, hence, can be computed without relying on stochastic integration methods.

Before continuing with the development, it is revealing to evaluate the validity of approximation (16) based on Theorem 3.1. To this end, the classical Duffing equation
$\ddot{q} + c\dot{q} + kq + \kappa q^3 = a \sin(\Omega t) + \sigma f, \qquad df = dW$
(18)
with the dimensionless parameters
c=0.02,k=1,κ=0.5,a=0.1
(19)

is considered. First, the deterministic limit ($\sigma \to 0$) is analyzed. Varying the excitation frequency $\Omega$ in the vicinity of the natural frequency, the frequency response curve is computed with the numerical continuation package coco [37] and shown in Fig. 1(a). For the forcing frequency $\Omega = 1.2$, two stable periodic responses exist for system (18). After selecting the initial condition $x_0$ to be at the orbit with higher amplitude, the variances (12) are computed and depicted in Fig. 1(b). For comparison, $10^4$ approximations of solutions to the stochastic differential Eq. (18) are calculated with the Euler–Maruyama scheme (e.g., Kloeden and Platen [13]) with the noise intensity $\sigma = 0.01$. Subsequently, the variances are calculated from the Euler–Maruyama samples.

Fig. 1
Deterministic and stochastic solutions to Duffing's Eq. (18) with parameters (19): (a) frequency response of the deterministic limit of system (18) and (b) comparison of the computed variances (12) with variances computed from $10^4$ Euler–Maruyama approximations of Eq. (18) ($\sigma = 0.01$).

From Fig. 1(b), it is evident that the approximation (16) is only accurate for about a quarter of the period T. After half a period, the computed variances (12) differ significantly from the results obtained through Monte Carlo sampling. This discrepancy is not an artifact of the obtained numerical approximation, as increasing the sample size or decreasing the step size in the Euler–Maruyama approximation does not alter Fig. 1(b). Furthermore, Fig. 1(b) is not in contradiction with Theorem 3.1, according to which, for any final time t1 there exists some σ0 such that the series (7) converges and the remainder is bounded in the mean square sense (cf. estimate (8)). It can be said that Fig. 1(b) solely indicates that for t1=T and σ=0.01, the series (7) either does not converge or the remainder is unacceptably large. However, for t1=T/4, the approximation (16) is found to be acceptable. This observation is in line with experience from deterministic dynamical systems. In this setting, Nayfeh [29] has noted that the straightforward expansions in the form of Eq. (7) are often limited to small time intervals and more sophisticated methods need to be employed to obtain solutions valid over larger time intervals.

4 Approximation of Sample Paths

In the previous section, it was demonstrated that the explicit formula (16) works well for small enough times, but the formula's accuracy deteriorates significantly for longer time spans. To address this shortcoming, the small noise integrator (SNI) is proposed in Sec. 4.1. Subsequently, the integrator's performance is evaluated on nonlinear mechanical systems with up to one hundred degrees-of-freedom.

4.1 Small Noise Integrator.

To overcome the small time horizon of the approximation (16), Algorithm 1 is proposed next. After fixing a time-step $\tau$ and providing an initial condition $x_0$ at $t_0$, the distribution (15) for $t = \tau$ is computed, and this distribution is sampled to generate the new initial condition $x_\tau$ according to Eq. (16). Then, the algorithm is restarted with the initial condition $x_\tau$ to generate a new sample $x_{2\tau}$. By repeating these two steps, samples $x_{k\tau}$ are generated at later time instances.

Algorithm 1: Small noise integrator (SNI)

Result: $x(K\tau; x_0, t_0)$
Set time-step $\tau$ and provide initial condition $x_0$;
$k = 1$;
while $k \le K$ do
  Solve Eq. (9) to obtain the deterministic solution $F_{(k-1)\tau}^{k\tau}(x_{k-1})$;
  Compute the variance (12);
  Sample the distribution $p_1(x, k\tau; x_{k-1}, (k-1)\tau)$ (cf. Eq. (15)) to obtain $x_1(k\tau; x_{k-1}, (k-1)\tau)$;
  $x_k = F_{(k-1)\tau}^{k\tau}(x_{k-1}) + \sigma x_1(k\tau; x_{k-1}, (k-1)\tau)$;
  $k = k + 1$;
end
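One step of Algorithm 1 can be sketched compactly. The following Python sketch, assuming numpy and scipy, applies the algorithm to Duffing's Eq. (18) with parameters (19): the deterministic state and the linearized flow map (Eq. (11)) are propagated together with solve_ivp, the variance (12) is assembled by the trapezoidal rule, and a Gaussian increment is sampled. The grid size, tolerances, and initial condition are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Duffing parameters (19) with Omega = 1.2; noise acts on the velocity,
# i.e., the constant direction B = [0, 1]^T as in Eq. (18)
c, k, kappa, a, Om, sigma = 0.02, 1.0, 0.5, 0.1, 1.2, 0.01
B = np.array([0.0, 1.0])

def f(t, x):
    """Deterministic drift of Duffing's Eq. (18) in first-order form."""
    return np.array([x[1],
                     -c*x[1] - k*x[0] - kappa*x[0]**3 + a*np.sin(Om*t)])

def jac(t, x):
    """Jacobian of f with respect to x, needed for Eq. (11)."""
    return np.array([[0.0, 1.0], [-k - 3.0*kappa*x[0]**2, -c]])

def sni_step(x, t, tau, rng, n_grid=51):
    """One step of Algorithm 1: deterministic flow map plus a Gaussian
    increment sampled from the variance (12)."""
    def rhs(s, z):                    # augmented system: x0 and DF
        xs, Phi = z[:2], z[2:].reshape(2, 2)
        return np.hstack([f(s, xs), (jac(s, xs) @ Phi).ravel()])
    ts = np.linspace(t, t + tau, n_grid)
    sol = solve_ivp(rhs, (t, t + tau), np.hstack([x, np.eye(2).ravel()]),
                    t_eval=ts, rtol=1e-8, atol=1e-10)
    V = sol.y[2:, :].T.reshape(-1, 2, 2) @ B     # DF(s) B along the step
    outer = V[:, :, None] * V[:, None, :]        # integrand of Eq. (12)
    Sigma = np.sum(0.5*(outer[1:] + outer[:-1])
                   * np.diff(ts)[:, None, None], axis=0)
    return sol.y[:2, -1] + sigma*rng.multivariate_normal(np.zeros(2), Sigma)

rng = np.random.default_rng(0)
T = 2.0*np.pi/Om
x, t = np.array([1.0, 0.0]), 0.0
for _ in range(5):                    # one forcing period with tau = T/5
    x = sni_step(x, t, T/5.0, rng)
    t += T/5.0
```

Here the stiff-integrator choice (ode45-like Runge–Kutta inside solve_ivp) is interchangeable: any deterministic integrator can replace it, which is the point of the algorithm.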

The SNI is motivated by the fact that the small noise expansion is accurate only for a limited time span. Thus, by selecting τ to be within this time span, one can ensure that the approximation (16) is accurate. Repeated sampling removes the limited time horizon of the small noise expansion and the following Lemma guarantees the accuracy of the SNI 1 for longer time spans:

Lemma 4.1. Assume that the flow map of the deterministic limit (9) is approximated with an accuracy of at least $\mathcal{O}(\tau)$. Then the SNI 1 converges strongly to the sample paths of the stochastic system (6) with order $\mathcal{O}(\tau^{1/2})$; that is, the following holds
$\mathbb{E}\big[|x(k\tau) - x_k^{SNI}|\big] \le C\sqrt{\tau}, \qquad 0 \le k \le K$
(20)

where $x(k\tau)$ denotes a solution to system (6), $x_k^{SNI}$ is an approximation obtained by the SNI 1, and $C$ is a finite constant.

Proof. The above Lemma 4.1 is proven by showing that for system (6) the SNI 1 converges to the Euler–Maruyama approximation for a small enough time-step τ. Then, the convergence follows from well-established convergence results (e.g., Kloeden and Platen [13] or Schurz's chapter in Ref. [17]). The details are presented in Appendix  A. ▪

Remark 4.1. Notably, the convergence result of Lemma 4.1 does not depend on the size of the noise intensity σ. More specifically, in Appendix  A only a parameter expansion in the time-step τ is performed and no restrictions on the noise intensity σ are imposed. For small enough time-step τ, the SNI 1 resembles the Euler–Maruyama scheme, which is valid for arbitrarily large σ. Hence, the SNI 1 will also converge for large σ.

Remark 4.2. As utilized in the Euler–Maruyama scheme and also shown by Risken [4], solutions to Eq. (6) can be approximated by a Gaussian process on small time scales regardless of the size of the noise intensity. With the SNI 1 one arrives at a similar conclusion and exploits the convergence of the expansion (7) for small noise intensities. Within the SNI 1 a non-Gaussian distribution arises on longer time scales when multiple sample paths are computed.

If the directions along which the Gaussian white noise acts are constant; that is, $B_m(x,t) = B_m$ for $m = 1, \ldots, M < N$, one can reduce the computational cost of obtaining the variance (12). Instead of computing the linearized flow map $DF_{t_0}^{t}(x_0)$, which requires $N$ integrations of the equations of variation, the new variable $V(t) := DF_{t_0}^{t}(x_0) B$ is introduced. Then, the variance (12) can be reformulated as
$\Sigma(t; x_0, t_0) = \int_{t_0}^{t} V(s)\, V(s)^\top\, ds$
(21)
and differentiation of $V(t)$ yields
$\dot{V} = \frac{d}{dt}\big(DF_{t_0}^{t}(x_0)\, B\big) = \frac{d}{dt}\big(DF_{t_0}^{t}(x_0)\big)\, B = \nabla_x f(x_0(t), t)\, DF_{t_0}^{t}(x_0)\, B = \nabla_x f(x_0(t), t)\, V, \qquad V(t_0) = B$
(22)

Thus, it suffices to compute the matrix $V$, which requires only $M$ integrations of the linearized flow.
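The reduction above can be sketched for a case with a constant Jacobian, where Eq. (22) becomes a linear ODE for the single column $V$. The damped-oscillator matrix below is an illustrative test case of this sketch, not an example from the paper; numpy and scipy are assumed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Constant Jacobian A and a single constant noise column B, so Eq. (22)
# reduces to V' = A V, V(t0) = B: one column (M = 1) instead of the
# full N x N linearized flow map (N = 2).
A = np.array([[0.0, 1.0], [-1.0, -0.02]])
B = np.array([0.0, 1.0])

ts = np.linspace(0.0, 1.0, 201)
sol = solve_ivp(lambda t, v: A @ v, (ts[0], ts[-1]), B, t_eval=ts,
                rtol=1e-9, atol=1e-12)
V = sol.y.T                          # V(s) on the grid, shape (201, 2)

# Sigma(t) = integral of V(s) V(s)^T ds, Eq. (21), via trapezoidal rule
outer = V[:, :, None] * V[:, None, :]
Sigma = np.sum(0.5*(outer[1:] + outer[:-1])
               * np.diff(ts)[:, None, None], axis=0)
```

For $M \ll N$, integrating $V$ column-by-column in this way is where the computational savings over the full equations of variation arise.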

The fact that, for small times, solutions of system (6) can be approximated by a Gaussian process is well-known (see, e.g., Risken [4] or Freidlin and Wentzell [31]) and has also been exploited in the construction of solution schemes that repeatedly resample an arising Gaussian distribution in a similar manner as the SNI 1. For example, Sun and Hsu [38] introduce a short-time Gaussian approximation for the cell mapping method, and Yu et al. [10] and Wehner and Wolfer [39] use a Gaussian transition probability density function in the formulation of a path integral solution to the Fokker–Planck equation. In these schemes, however, the mean and variance of the Gaussian approximation are obtained differently from the SNI 1. Wehner and Wolfer [39] utilize the short time propagator also used in the Euler–Maruyama scheme, and Sun and Hsu [38] and Yu et al. [10] rely on moment equations with heuristic closure schemes. In contrast, the SNI 1 utilizes the rigorously deduced mean and variance (16) of the arising Gaussian processes.

For further understanding, the SNI 1 can be compared with the classical Euler–Maruyama scheme, which, applied to system (6), yields
$x_k^{e} = x_{k-1}^{e} + \tau f\big(x_{k-1}^{e}, (k-1)\tau\big) + \sigma \sqrt{\tau}\, B\big(x_{k-1}^{e}, (k-1)\tau\big)\, \Delta W$
(23)

where $\Delta W$ denotes the increments of the $M$-dimensional normalized Wiener process. In this setting, the SNI 1 replaces the forward Euler step of the deterministic part in the Euler–Maruyama scheme ($x_{k-1}^{e} + \tau f(x_{k-1}^{e}, (k-1)\tau)$) with the deterministic flow map $F_{(k-1)\tau}^{k\tau}(x_{k-1})$ at each time-step. Moreover, instead of sampling the normal distribution with zero mean and variance $\tau B(x_{k-1}^{e}, (k-1)\tau)\, [B(x_{k-1}^{e}, (k-1)\tau)]^\top$, the variance (12) is sampled for the stochastic increment in the SNI 1 at each time-step.
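For reference, the Euler–Maruyama scheme (23) can be sketched as follows, assuming numpy; the Ornstein–Uhlenbeck test case at the end is an illustrative choice of this sketch, not an example from the paper.

```python
import numpy as np

def euler_maruyama(f, Bfun, x0, t0, t1, n_steps, sigma, rng):
    """Euler-Maruyama scheme of Eq. (23): normalized Wiener increments
    Delta W ~ N(0, I) scaled by sqrt(tau)."""
    tau = (t1 - t0) / n_steps
    x, t = np.asarray(x0, dtype=float), t0
    for _ in range(n_steps):
        dW = rng.standard_normal(Bfun(x, t).shape[1])
        x = x + tau*f(x, t) + sigma*np.sqrt(tau)*(Bfun(x, t) @ dW)
        t += tau
    return x

# Ornstein-Uhlenbeck test case dx = -x dt + sigma dW: the mean decays
# toward exp(-t), so a single path stays close to it for small sigma
rng = np.random.default_rng(1)
x = euler_maruyama(lambda x, t: -x, lambda x, t: np.eye(1),
                   np.array([1.0]), 0.0, 1.0, 1000, 0.01, rng)
```

Each Euler–Maruyama step freezes both the drift and the noise directions at the left endpoint, which is precisely what the SNI relaxes by propagating the full deterministic flow over the step.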

Although the order of convergence of the SNI is only one-half, which is also the order of the classical Euler–Maruyama scheme, the SNI 1 has inherent advantages, making it an appealing alternative to existing techniques for solving Eq. (6) numerically. First, the SNI requires only sampling of normal distributions; hence, it does not require sampling of intricate distributions as some competing methods do (e.g., Schurz's chapter in Ref. [17]). Second, the SNI allows for the use of deterministic integration routines. More specifically, the flow map $F_{t_0}^{t}(x_0)$ can be approximated by any deterministic numerical integrator. This is especially appealing in applications where accurate and effective approximations of solutions to the deterministic limit of Eq. (6) already require sophisticated numerical integration routines. For example, the forward Euler approximation of the deterministic flow map in the Euler–Maruyama scheme or Milstein's scheme (e.g., Kloeden and Platen [13]) can be unstable even for simple linear equations (see, e.g., Butcher [40]).

Most importantly, the SNI's performance does not derive only from the limit of an asymptotically small time-step; the algorithm is also accurate for large time-steps, provided that the noise terms are sufficiently small. Thus, it is anticipated that the time-step $\tau$ can be chosen considerably larger than in the available numerical integration routines for stochastic differential equations.

Overall, a possible increase in speed and accuracy gained by using a deterministic integrator for system (1) is counteracted by the necessity of computing the linearized flow map $DF_{t_0}^{t}(x_0)$ and the variance (12). In special cases, this burden can be reduced by computing the matrix $V$ instead (cf. Eq. (22)). These observations lead to the conclusion that the SNI 1 is superior to other stochastic integration methods whenever the deterministic limit is challenging enough that the speed-up gained from deterministic numerical integration routines outweighs the computational effort of computing the linearized flow map $DF_{t_0}^{t}(x_0)$ and the variance (12). In the next section, it is shown through several examples that this can be the case.

4.2 Numerical Investigations.

As a benchmarking study, the SNI 1 is compared to Euler–Maruyama (EM) approximations of solutions to Eq. (6) (e.g., Kloeden and Platen [13]). Although many higher-order schemes are available (cf. Schurz's overview in Ref. [17]), as reported by Mannella [22], the performance gain can be insignificant relative to the additional computational expense. Thus, for a first comparison, the widely used Euler–Maruyama method is adopted. Within the SNI 1, the following numerical routines are used. Equations (9) and (10), and in turn Eq. (22), are solved using Matlab's ode45 routine. Moreover, the integral needed to compute the variance (12), or Eq. (21), respectively, is approximated by the trapezoidal rule.

For an initial demonstration of the accuracy of the SNI, the Duffing system (18) with parameter values (19) and excitation frequency $\Omega = 1.2$ is investigated for two different choices of the time-step $\tau$. For the first choice, $\tau = T/5$ is selected, and subsequently, for the second choice, it is increased to $T$ for comparison. The results of the SNI are shown in Fig. 2. While the distribution obtained with the SNI 1 with time-step $\tau = T/5$ shown in Fig. 2(a) matches the distribution from the EM-approximations, the SNI does not yield an accurate approximation for the time-step $\tau = T$ (cf. Fig. 2(b)). The accuracy can be quantified by computing the first two statistical moments for all three distributions. In Fig. 2(a), the mean and standard deviation of the EM-approximation and the SNI agree up to an accuracy of $10^{-3}$, whereas the standard deviations differ by more than 0.01 in Fig. 2(b). With the SNI 1, one computes the $10^3$ samples shown in Fig. 2 about 20 times faster than with the Euler–Maruyama approximation.

Fig. 2
Deterministic and stochastic solutions to Duffing's Eq. (18) with parameters (19): (a) SNI-algorithm 1 with τ = T/5 and (b) SNI-algorithm 1 with τ = T

In Fig. 3(a), the authors show the convergence of the SNI relative to the step size $\tau$. The errors in the first two statistical moments, that is, the mean and variance, computed with the SNI 1 are compared to those from an Euler–Maruyama approximation with step size $\tau = T/(2 \cdot 10^5)$. For the selected step sizes, the mean computed with the SNI 1 remains accurate with an error of the order of $10^{-3}$. This accuracy is also recognizable in Fig. 2. For both step sizes $\tau = T$ and $\tau = T/5$, the sample populations are centered at the same phase space location, indicating that both distributions have a similar mean. However, the accuracy of the sample variance computed with the SNI 1 increases significantly between $\tau = T/2$ and $\tau = T/3$. This observation is also discernible in Fig. 2. The sample population computed with the SNI 1 for the step size $\tau = T$ spans an evidently larger area than the converged sample distribution (cf. Fig. 2(b)). The same numerical convergence analysis is also performed for the Euler–Maruyama scheme and shown in Fig. 3(b). A converged result from the Euler–Maruyama scheme requires a significantly higher number of time steps per period compared to that for the SNI 1.

Fig. 3
Convergence analysis for the Duffing Eq. (18) with parameters (19): (a) convergence of the SNI 1 for varying step size τ and (b) convergence of the Euler–Maruyama approximation for varying step size τ
The performance of the SNI 1 is further demonstrated in a series of numerical experiments on an array of N* coupled oscillators shown in Fig. 4. The equation of motion of the nth mass is
$m_n \ddot{q}_n + c_n \dot{q}_n + k_n q_n + \kappa_n q_n^3 + s_{n-1}(q_n - q_{n-1}) + s_n(q_n - q_{n+1}) = f_n, \qquad f_n = a_n \sin(\Omega t) + \sigma b_n, \quad db_n = g_n\, dW_1, \quad n = 1, \ldots, N^*$
(24)
where $q_0 \equiv q_{N^*+1} \equiv 0$ and $s_0 = s_{N^*} = 0$ hold. The springs with stiffnesses $s_n$ induce coupling between the individual degrees-of-freedom of system (24). The quantity $a_n$ is the amplitude of the deterministic harmonic component of the $n$th forcing, and the quantity $g_n$ is the level of the noise term $W_1$ in the $n$th coordinate. It is noted that the oscillator array (24) with a single mass ($N^* = 1$) yields the Duffing Eq. (18) investigated in Figs. 2 and 3. Next, identical parameters are chosen for each oscillator, with the following dimensionless values
$m_n = 1, \quad k_n = 1, \quad s_n = 0.1, \quad c_n = 0.02, \quad \kappa_n = 0.5, \quad a_n = 0.1, \quad g_n = 1, \qquad n = 1, \ldots, N^*$
(25)

and the noise intensity is set to σ=0.01.
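The deterministic drift of the oscillator array (24) with the uniform parameters (25) can be sketched as follows, assuming numpy. The boundary conventions $q_0 = q_{N^*+1} = 0$ and $s_0 = s_{N^*} = 0$ are implemented by zeroing the coupling forces at the free ends; the function name and $N^* = 10$ are illustrative choices of this sketch.

```python
import numpy as np

# Uniform parameter values (25) with forcing frequency Omega = 1.2
N_star = 10
m, k, s, c, kappa, a, Om = 1.0, 1.0, 0.1, 0.02, 0.5, 0.1, 1.2

def drift(t, x):
    """Deterministic part of system (24) in first-order form,
    x = [q_1..q_N*, qdot_1..qdot_N*]."""
    q, qd = x[:N_star], x[N_star:]
    left = np.zeros(N_star)
    right = np.zeros(N_star)
    left[1:] = s*(q[1:] - q[:-1])     # s_{n-1}(q_n - q_{n-1}), s_0 = 0
    right[:-1] = s*(q[:-1] - q[1:])   # s_n(q_n - q_{n+1}), s_{N*} = 0
    qdd = (a*np.sin(Om*t) - c*qd - k*q - kappa*q**3 - left - right) / m
    return np.concatenate([qd, qdd])
```

With $N^* = 1$, the coupling terms vanish and the drift reduces to the Duffing Eq. (18), consistent with the remark above.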

Fig. 4
Oscillator array with cubic nonlinearities κ and linear coupling sn

The excitation frequency $\Omega$ is set to 1.2 and, for each oscillator, the same initial condition $x_0$ shown in Fig. 1(a) is selected. Then, the sample paths of the oscillator array (24) are approximated for 1, 10, and 100 forcing periods. To ensure that the Euler approximation yields the correct deterministic limit, the step size of the EM-approximation is decreased until it matches the solution of Matlab's ode45 with an error less than $10^{-3}$. For the SNI 1, $\tau = T/5$ is selected and Matlab's ode45 algorithm is used to solve Eqs. (9) and (22). For both approximations, $10^3$ sample paths are obtained and the first two statistical moments are compared. It is observed that in all cases there is an agreement with an accuracy of $10^{-3}$. The run-time comparison of the EM-approximation and the SNI 1 is shown in Table 1.

Table 1

Run-time comparison for the SNI 1 with τ=T/5 and the Euler–Maruyama approximation for the oscillator array (24)

| # dof (N*) | t = T: EM steps per T | t = T: Speed-up | t = 10T: EM steps per T | t = 10T: Speed-up | t = 100T: EM steps per T | t = 100T: Speed-up |
| 1 | 4×10⁴ | 21 | 6×10⁴ | 32 | 1×10⁵ | 47 |
| 2 | 6×10⁴ | 32 | 8×10⁴ | 45 | 2×10⁵ | 95 |
| 5 | 8×10⁴ | 37 | 1×10⁵ | 52 | 4×10⁵ | 194 |
| 10 | 1×10⁵ | 46 | 2×10⁵ | 195 | 6×10⁵ | 259 |
| 20 | 2×10⁵ | 72 | 3×10⁵ | 115 | 6×10⁵ | 175ᵃ |
| 50 | 3×10⁵ | 46 | 4×10⁵ | 45 | 8×10⁵ | 102ᵃ |
| 100 | 4×10⁵ | 21 | 6×10⁵ | 32 | 1×10⁶ | 62ᵃ |

The computations have been performed using MATLAB 2020a installed on a Windows PC with Intel Xeon CPU E5-2687 W @ 3.1 GHz and 64 GB RAM.

The sample size is $10^3$ and the convergence of the two methods is ensured by computing the first two statistical moments. The excitation frequency $\Omega$ is 1.2.

ᵃ Due to the excessive computation time of the Euler–Maruyama scheme (more than one day), the run-time for the full sample size is estimated by extrapolating the run-time for one sample.

Overall, from Table 1, it can be discerned that a significant speed-up of the SNI 1 is obtained in comparison to the classical Euler–Maruyama scheme. In all cases, the speed-up is at least one order of magnitude. Especially large is the speed-up for oscillator arrays with 5 to 50 oscillators. Moreover, the efficiency of the SNI 1 increases with the time span, which makes it an appealing choice for long time horizons.

To further demonstrate the versatility of the SNI 1, the excitation frequency is increased to Ω = 1.9 and, for each oscillator, the initial condition is selected to be on the upper stable branch depicted in Fig. 1(a). At this frequency, MATLAB's ode45 algorithm computes the stable periodic orbit efficiently, whereas the step size of the primitive EM-approximation needs to be decreased significantly to converge to a stable periodic orbit of the deterministic limit. Proceeding as previously described for the stochastic simulations, the SNI 1 yields a significant computational gain compared to the Euler–Maruyama scheme (cf. Table 2).

Table 2

Run-time comparison for the SNI 1 with τ=T/5 and the Euler–Maruyama approximation for the oscillator array (24)

|            | Ω = 1.9          |          | Random parameters  |          | Random parameters   |          |
|            | Final time t = T |          | Final time t = T   |          | Final time t = 10T  |          |
|------------|------------------|----------|--------------------|----------|---------------------|----------|
| # dof (N*) | EM: Steps per T  | Speed-up | EM: Steps per T    | Speed-up | EM: Steps per T     | Speed-up |
| 1          | 2×10⁵            | 87       | 6×10⁴              | 21       | 3×10⁵               | 89       |
| 2          | 3×10⁵            | 130      | 2×10⁴              | 16       | 2×10⁵               | 161      |
| 5          | 5×10⁵            | 156      | 2×10⁴              | 13       | 2×10⁵               | 163      |
| 10         | 8×10⁵            | 303      | 5×10⁴              | 27       | 5×10⁵               | 270      |
| 20         | 1×10⁶            | 257      | 5×10⁵              | 60       | 4×10⁶               | 488ᵃ     |
| 50         | 2×10⁶            | 270      | 5×10⁶              | 22       | 1×10⁷               | 250ᵃ     |
| 100        | 3×10⁶            | 153      | 5×10⁵              | 7        | 5×10⁶               | 201ᵃ     |

The computations have been performed using MATLAB 2020a installed on a Windows PC with an Intel Xeon CPU E5-2687W @ 3.1 GHz and 64 GB RAM.

(i) Ω = 1.9 with the uniform parameters (25) and (ii) random parameters (cf. also Appendix B). The sample size is 10³ and the convergence of the two methods is ensured by computing the first two statistical moments.

ᵃDue to the excessive computation time of the Euler–Maruyama scheme (more than one day), the run-time for the full sample size is estimated by extrapolating the run-time for one sample.

To verify that the performance of the SNI 1 is independent of the specific parameter values (25), the parameter values of system (24) are assigned randomly as described in Appendix B. Also for this numerical experiment, the SNI 1 is found to outperform the Euler–Maruyama approximation (cf. Table 2). Overall, from Table 2, one can discern a speed-up of about two orders of magnitude. Moreover, for longer simulation times, an even higher speed-up of the SNI 1 is expected, since the small step size required for the EM-approximation increases the computational burden excessively.

Although the SNI 1 relies on repeated sampling of Gaussian distributions, the resulting sample distribution is not necessarily Gaussian. To demonstrate this, the oscillator array (24) with two masses (N* = 2) and a coupling spring stiffness kc = 0.025 is considered. For these parameter values, four stable periodic orbits exist. For one of these orbits, both masses oscillate with a high amplitude (i.e., the high amplitude orbit), whereas along the low amplitude orbit, the displacements of both masses are low. Moreover, two localized modes exist. Along one of the localized modes, the first mass oscillates considerably, whereas the amplitude of the second mass is low. For the other localized mode, the energy distribution is reversed; that is, the amplitude of the first mass is low and the second mass oscillates with a high amplitude. Such energy localization phenomena have been previously investigated by, for example, Sievers and Takeno [41], Vakakis and Cetinkaya [42], and Dick et al. [43]. Recently, there has been interest in studying the effects of noise on such energy localization (e.g., Perkins et al. [44] and Balachandran et al. [45]). Selecting the high amplitude orbit as an initial condition, 10³ sample paths for 100 periods have been computed with the SNI 1. The sample locations are shown in Fig. 5. The projections show that some samples remained in the neighborhood of the high amplitude orbit, whereas other realizations have escaped toward the low amplitude periodic orbit. Overall, the distribution shown in Fig. 5 is clearly non-Gaussian.

Fig. 5
Samples of the oscillator array (24) with localized modes after 100 periods

5 Approximation of the Probability Density Function

A straightforward method to compute probability densities is to discretize the state space into volumes, compute sample paths (e.g., with the SNI 1 or the EM-approximation), and then count the number of realizations within each volume. The result of this Monte Carlo approach is an invariably nonsmooth approximation of the probability density function, and a high number of samples is required to represent the PDF accurately. To overcome these shortcomings, a more effective method is proposed in Sec. 5.1. This approach relies on the rigorously deduced Gaussian kernel (12) to smoothly approximate the probability density function. The performance of the proposed method is then examined through examples in Sec. 5.2.
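The bin-counting Monte Carlo estimate described above can be sketched as follows; this is a one-dimensional illustration, with standard normal draws standing in for the computed sample paths:

```python
import numpy as np

def mc_density(samples, edges):
    """Histogram estimate of a 1D PDF: count realizations per volume (bin)
    and normalize so that the estimate integrates to one."""
    counts, _ = np.histogram(samples, bins=edges)
    widths = np.diff(edges)
    return counts / (len(samples) * widths)

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 10_000)   # stand-in for sample-path endpoints
edges = np.linspace(-4.0, 4.0, 41)       # state-space discretization (40 bins)
pdf_hat = mc_density(samples, edges)     # piecewise constant, hence nonsmooth
```

The estimate is piecewise constant on the bins, which is why a large sample size is needed before it resembles a smooth density.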

5.1 Gaussian Kernel Approximation.

To approximate the probability density function of system (6) at t = Kτ, it is assumed that R realizations x_r(Kτ) = x_r(Kτ; x₀, t₀) of the stochastic process (6) have been computed. At time t = (K−1)τ, the sample locations are given by x_r((K−1)τ) = x_r((K−1)τ; x₀, t₀). Now advancing the sample population from time t = (K−1)τ to the final time t = Kτ, each individual sample generates the Gaussian distribution
p_r(x, Kτ; x_r((K−1)τ), (K−1)τ) := p_σ(x, Kτ; x_r((K−1)τ), (K−1)τ)
(26)
where the distribution pσ is defined in Eq. (17). Averaging the distributions (26) over the samples yields
p(x, Kτ; x₀, t₀) ≈ (1/R) Σ_{r=1}^{R} p_r(x, Kτ; x_r((K−1)τ), (K−1)τ)
(27)

that is, a Gaussian kernel approximation (GKA) to the probability density at the final time. Equation (27) approximates the probability density p(x, Kτ; x₀, t₀) by a sum of Gaussian distributions centered at F_{(K−1)τ}^{Kτ}(x_r((K−1)τ)) with variance (12), which generally differ for each realization r. Hence, it can also be viewed as a kernel smoothing method (e.g., Friedman et al. [46]). Compared to the usual ad hoc chosen kernel distributions, the kernel in Eq. (27) is rigorously derived from the Gaussian kernel (12). Moreover, since Eq. (27) is the standard Monte Carlo estimator, the corresponding convergence can be guaranteed under appropriate assumptions; that is, small enough τ and small enough σ.
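Equation (27) is an average of R Gaussian densities, one centered at each propagated sample. A minimal sketch of its evaluation, with generic sample locations and a shared covariance standing in for the propagated means and the variance (12):

```python
import numpy as np

def gka_density(x, means, cov):
    """Evaluate the Gaussian kernel approximation: the average of R
    Gaussians N(mean_r, cov) at the query point x."""
    d = means.shape[1]
    cov_inv = np.linalg.inv(cov)
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    diff = means - x                                     # shape (R, d)
    exponents = -0.5 * np.einsum('ri,ij,rj->r', diff, cov_inv, diff)
    return norm * np.exp(exponents).mean()

rng = np.random.default_rng(2)
means = rng.normal(size=(1000, 2))   # stand-in for propagated sample locations
cov = 0.01 * np.eye(2)               # stand-in for the variance (12)
p0 = gka_density(np.zeros(2), means, cov)
```

Unlike the histogram count, the resulting estimate is smooth in x, since it is a finite mixture of Gaussians.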

5.2 Numerical Investigations.

In the following, the time-varying probability density function of the stochastic process (6) is computed with the Gaussian kernel approximation (27) and compared with the probability density obtained by using straightforward Monte Carlo simulations, wherein the state space is discretized into volumes and the number of realizations in each volume is counted.

For a first comparison, the single degree-of-freedom oscillator (18) with the parameters (19) is reinvestigated. The initial condition is depicted in Fig. 1(a) and the solutions of the stochastic process (18) are approximated via the SNI 1 for one period. In Fig. 6, the authors show the obtained probability density functions for various sample sizes. For 10⁵ samples, the probability density functions of both methods, Monte Carlo and GKA (27), match very well. Reducing the sample size of the GKA to only 10³ samples still yields a PDF that matches the PDF obtained at convergence with 10⁵ samples. However, with a reduced number of samples, the Monte Carlo method deviates significantly from the converged results.

Fig. 6
Comparisons of computed probability density function with the proposed Gaussian kernel approximation (27) and Monte Carlo sampling for different sample sizes for Duffing's Eq. (18) with parameters (19)

To quantify the superior convergence of the GKA, the L₂-error relative to Monte Carlo simulations with 10⁶ samples is visualized for both approximations and various sample sizes. The relative error of the GKA shown in Fig. 7 is about two orders of magnitude smaller than that of the Monte Carlo approximation. Only for the largest sample size (Nₛ = 10⁵) is the error of the GKA one order of magnitude less than that of the Monte Carlo simulations. However, the error of the Monte Carlo simulations with 10⁵ samples is still larger than the error of the GKA with only 10³ samples. Thus, in this example, the sample size of the GKA can be reduced by two orders of magnitude compared to the Monte Carlo method.

Fig. 7
Relative error of Gaussian kernel approximation (27) and Monte Carlo approximation for Duffing's Eq. (18) with parameters (19)

Before proceeding with the high dimensional oscillator array (24), it is insightful to emphasize the enormous sample size needed for the Monte Carlo simulations to obtain a converged PDF. For the one degree-of-freedom oscillator considered in Figs. 6 and 7, about 10⁵ samples were necessary to compute a converged PDF with the Monte Carlo method. Thus, an estimated 10^(5N*) samples would be required to obtain a converged PDF for an N*-oscillator array (24). Even for three oscillators in system (24), the computational effort necessary to generate such a high number of samples is overwhelming.⁴
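The sample-size and storage scaling above can be checked with quick arithmetic (cf. footnote 4); the byte count assumes one position and one velocity per oscillator stored in double precision:

```python
# Back-of-the-envelope check of the Monte Carlo sample requirement:
# roughly 10^(5 N*) samples for an N*-oscillator array (24).
N_star = 3
n_samples = 10 ** (5 * N_star)                       # 10^15 samples for N* = 3
bytes_per_sample = N_star * 2 * 8                    # position + velocity, float64
petabytes = n_samples * bytes_per_sample / 10 ** 15  # 1 PB = 10^15 bytes
```

This reproduces the 48 petabyte figure of footnote 4 and makes the exponential growth with N* explicit.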

To verify whether one obtains the correct results with the GKA-approximation (27) in higher dimensions, a perturbation scheme is employed. For a weak coupling spring stiffness s_n, the time evolution of the oscillator array (24) can be estimated with N* uncoupled oscillators; that is, the uncoupled limit s_n → 0. More precisely, it is assumed that the coupling spring stiffness s_n is of order O(σ²). Then, for the uniform parameters (25), each oscillator's q_j and q̇_j are identical up to order O(σ²). Thus, in this limit, it is sufficient to simulate a single degree-of-freedom oscillator to draw inferences about the full oscillator array (24).

In Fig. 8, the authors depict the PDF of the oscillator array (24) with three masses (N* = 3) along the two-dimensional plane where the positions and velocities of the second and third mass are at their sample means; that is, q₂, q₃, q̇₂, and q̇₃ are fixed at their mean values. Both approximations match with a relative L₂-error of less than 0.01. It is emphasized that, for the Monte Carlo simulations, approximate symmetries of system (24) are explicitly exploited, whereas no such reduction technique has been employed for the GKA.⁵ Similar to the single degree-of-freedom case, the GKA (27) converges with a significantly lower number of samples than the Monte Carlo sampling.

Fig. 8
Comparison of the computed probability density function via the proposed Gaussian kernel approximation (GKA) (27) and Monte Carlo sampling at the uncoupled limit for the oscillator array (24) with parameters (25) and a sample size of 105

Finally, the two degree-of-freedom system with localized modes from Sec. 4.2 is reinvestigated to verify whether samples escaped from the high amplitude orbit to the other stable periodic orbits. The two-dimensional projection shown in Fig. 5 invariably ignores the other directions, and hence such projections are of limited use for inferring an escape from the high amplitude orbit. Accordingly, measuring the distances of samples to the periodic orbits is more appropriate. For each periodic orbit, the distance is a random variable whose PDF can be approximated with the GKA. The resulting four PDFs are shown in Fig. 9 for 10⁵ samples. The peak in Fig. 9 is due to the initial condition, whose distance to the localized modes and the low amplitude orbit is about 2.7. The distances to the high amplitude orbit and the low amplitude orbit (blue and red lines in Fig. 9) confirm the impression from Fig. 5 that some realizations do not stay close to the high amplitude orbit and escape toward the low energy orbit.
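The distance-based diagnostic can be sketched as follows; the circular "orbit," the noise level, and the kernel bandwidth are illustrative stand-ins, not the orbits or quantities of Fig. 9:

```python
import numpy as np

def distance_to_orbit(sample, orbit_points):
    """Distance of one sample to a periodic orbit, approximated as the
    minimum distance to a discretization of the orbit."""
    return np.min(np.linalg.norm(orbit_points - sample, axis=1))

def kernel_pdf(x_grid, distances, h):
    """One-dimensional Gaussian kernel estimate of the distance PDF."""
    z = (x_grid[:, None] - distances[None, :]) / h
    return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

# toy data: a discretized circular orbit and samples scattered around it
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
orbit = np.stack([np.cos(theta), np.sin(theta)], axis=1)
rng = np.random.default_rng(3)
samples = orbit[rng.integers(0, 200, 500)] + 0.05 * rng.standard_normal((500, 2))
d = np.array([distance_to_orbit(s, orbit) for s in samples])
pdf = kernel_pdf(np.linspace(0.0, 0.3, 50), d, 0.01)
```

The scalar distance collapses the full state space onto one coordinate per orbit, which is what makes the escape visible where a two-dimensional projection is inconclusive.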

Fig. 9
Distances of samples to the four stable periodic orbits of the oscillator array (24) with localized modes after 100 periods

6 Conclusions

In this work, the small noise expansion (cf. Eq. (7)) is exploited to propose a method to approximate (i) the sample paths of the stochastic dynamical system (1) and (ii) the associated time-varying probability density function. With the formulated small noise integrator 1 (SNI 1), one removes the limited time horizon of the small noise expansion by appropriately resampling the Gaussian distribution (12) along sample paths. The SNI 1 only requires solutions of deterministic differential equations, which means that deterministic integration routines can be used in the stochastic setting. Additionally, the authors have proven convergence of the proposed algorithm, which notably also holds for arbitrarily large noise intensities.

The computational benefit of the proposed SNI is examined with a series of coupled oscillator arrays with up to one hundred degrees-of-freedom, including a randomly parameterized array. Compared to the standard Euler–Maruyama scheme, a speed-up of two orders of magnitude is observed for a wide range of investigated systems. Especially for longer simulation times, the computational gain of the SNI is significant.

Moreover, the Gaussian kernel approximation (27) yields a justified approach to recover a smooth probability density function from samples, without relying on ad hoc kernel choices or interpolation. With this method, the necessary sample size to accurately approximate the probability density function can be reduced drastically. For a single degree-of-freedom oscillator, the number of samples can be reduced by orders of magnitude compared to established Monte Carlo methods. For a three degree-of-freedom system, the GKA (27) is found to yield an accurate PDF, while a computation with Monte Carlo methods is simply infeasible with reasonable computational resources. Thus, the GKA (27) opens up a new horizon to compute PDFs for higher dimensional systems beyond the current limitation to two or three dimensions.

While the SNI 1 extends the validity of the straightforward expansion (7) to longer time intervals, it does not yield a steady-state distribution, which can be of interest in applications. The underlying theory (3.1), unfortunately, is fundamentally restricted to finite time intervals. Thus, an extension of the underlying theory as well as the computational algorithms is desirable to compute a statistical steady-state, if it exists.

The probability density function, a time-varying scalar quantity in a usually high dimensional space, is often difficult to understand, visualize, or access. In many applications, questions about the quantities of interest, for example, the exceedance probability, escape times, or frequency of occurrence, can in principle be answered by computing the PDF, but this might not always be the most efficient approach. Thus, it would be of interest to tailor the presented method to such specific contexts.

Acknowledgment

The authors gratefully acknowledge the support of the National Science Foundation through Grant No. CMMI 1760366 and the associated data science supplements. Additionally, T.B. is thankful to Lautaro Clienti and Abdulrahman Alofi for encouraging discussions throughout this work.

Funding Data

  • National Science Foundation (Division of Civil, Mechanical and Manufacturing Innovation) (Grant No.: CMMI 1760366; Funder ID: 10.13039/100000147).

Data Availability Statement

The data that support the findings of this study are openly available online.⁶

Conflict of Interest

The authors have no conflicts to disclose.

Footnotes

2

The MATLAB-provided Euler–Maruyama approximation is not utilized. Instead, a self-written code, available at github.com/tbreunung/SNI, is employed. For the cases tested, the self-written code outperforms the MATLAB-provided code. Moreover, various optimizations of the Euler–Maruyama scheme summarized by Higham [47] (especially vectorization) have been tested, and a version was selected that seemed optimal in terms of speed and memory requirements for the purposes of this study.

3

It is noted that, for the investigated systems, i.e., the first-order equivalents of systems (18) and (24), the matrix B (cf. Eq. (6)) does not depend on the coordinates. Hence, the Euler–Maruyama scheme is equivalent to Milstein's scheme (e.g., Kloeden and Platen [13]). Thus, it converges to the sample paths of system (24) with order one.

4

To give an idea of the computational effort, the following is mentioned: the storage required to store 10¹⁵ samples with double precision floating point numbers is 48 petabytes (= 10¹⁵ · 3 · 2 · 8 bytes). With the current advances in high-performance computing and data storage systems, such simulations may not be impossible, but they require a serious commitment in computing power and data storage capabilities, which is beyond the scope of this paper.

5

Without using the approximate symmetries, not a single realization out of the 10⁵ samples ended up in the discretized state space volume used to compute the results for Fig. 8. Hence, the Monte Carlo approximation of the PDF is identically zero.

Appendix A: Convergence Proof of the SNI 1

In the following, it is shown that the SNI 1 has the same order of convergence as the Euler–Maruyama scheme. The Euler–Maruyama approximation to system (10) with step size τ is given by
x_k^e = x_{k−1}^e + τ f(x_{k−1}^e, (k−1)τ) + σ√τ B(x_{k−1}^e, (k−1)τ) ΔW
(A1)
where ΔW denotes the increments of the m-dimensional normalized Wiener process; that is, they are independent and identically normally distributed random variables with zero mean and variance one. First, the variance (12) is expanded for small τ, yielding
Σ(kτ; x_{k−1}, (k−1)τ) = ∫_{(k−1)τ}^{kτ} DF_{(k−1)τ}^{s}(x_{k−1}) B(F_{(k−1)τ}^{s}(x_{k−1}), s) [DF_{(k−1)τ}^{s}(x_{k−1}) B(F_{(k−1)τ}^{s}(x_{k−1}), s)]^⊤ ds
= τ DF_{(k−1)τ}^{(k−1)τ}(x_{k−1}) B(F_{(k−1)τ}^{(k−1)τ}(x_{k−1}), (k−1)τ) [DF_{(k−1)τ}^{(k−1)τ}(x_{k−1}) B(F_{(k−1)τ}^{(k−1)τ}(x_{k−1}), (k−1)τ)]^⊤ + O(τ²)
= τ B(x_{k−1}, (k−1)τ) [B(x_{k−1}, (k−1)τ)]^⊤ + O(τ²)
(A2)
where the integral is approximated by the value of the integrand at time s = (k−1)τ. Equation (A2) reveals that the variance of the distribution (15) is τ B(x_{k−1}, (k−1)τ) B(x_{k−1}, (k−1)τ)^⊤. Sampling the normal distribution with variance τ B(x_{k−1}, (k−1)τ) B(x_{k−1}, (k−1)τ)^⊤ is equivalent to scaling the M columns of √τ B(x_{k−1}, (k−1)τ) by samples drawn from the standard normal distribution. Thus, one time-step of the SNI can be written as
x_k^{SNI} = F̃_{(k−1)τ}^{kτ}(x_{k−1}^{SNI}) + σ√τ B(x_{k−1}^{SNI}, (k−1)τ) ΔW + O(τ)
(A3)
where F̃_{(k−1)τ}^{kτ} denotes an appropriate approximation to the flow map of the deterministic system (9). The error of the SNI has the following upper bound
E[|x(kτ) − x_k^{SNI}|] ≤ E[|x(kτ) − x_k^e|] + E[|x_k^e − x_k^{SNI}|]
≤ E[|x(kτ) − x_k^e|] + |x_{k−1}^e + τ f(x_{k−1}^e, (k−1)τ) − F̃_{(k−1)τ}^{kτ}(x_{k−1}^{SNI})| + O(τ)
(A4)
wherein Eqs. (A1) and (A3) have been used. By adding and subtracting the flow map F_{(k−1)τ}^{kτ}(x((k−1)τ)), the arising error can be split into three parts
E[|x(kτ) − x_k^{SNI}|] ≤ E[|x(kτ) − x_k^e|]   (error of the stochastic Euler approximation)
+ |F̃_{(k−1)τ}^{kτ}(x_{k−1}^{SNI}) − F_{(k−1)τ}^{kτ}(x((k−1)τ))|   (approximation error of the deterministic flow map)
+ |F_{(k−1)τ}^{kτ}(x((k−1)τ)) − (x_{k−1}^e + τ f(x_{k−1}^e, (k−1)τ))|   (error of the deterministic explicit Euler scheme)
+ O(τ)
≤ Cτ
(A5)

where the following results are used: (i) the convergence of the stochastic Euler–Maruyama approximation (A1) to the solutions of Eq. (10) (e.g., Kloeden and Platen [13]), (ii) the assumption that the approximated flow map F̃_{(k−1)τ}^{kτ} is O(τ) close to the true flow map F_{(k−1)τ}^{kτ} (cf. assumption in Lemma 4.1), and (iii) the convergence of the deterministic forward Euler scheme. The required estimate for Lemma 4.1 is given in Eq. (A4).
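The column-scaling argument above (sampling N(0, τBB⊤) by scaling the columns of √τ B with standard normal draws) can be verified numerically; a minimal sketch with an arbitrary constant matrix B standing in for B(x_{k−1}, (k−1)τ):

```python
import numpy as np

# Check: scaling the columns of sqrt(tau)*B by standard normal samples
# yields draws whose covariance is tau * B @ B.T (cf. Eq. (A2)).
rng = np.random.default_rng(4)
tau = 0.01
B = np.array([[1.0, 0.5],
              [0.0, 2.0]])          # illustrative constant noise matrix
R = 200_000                          # number of Monte Carlo draws
dW = rng.standard_normal((2, R))     # unit-variance Wiener increments
draws = np.sqrt(tau) * B @ dW        # shape (2, R)
emp_cov = draws @ draws.T / R        # empirical covariance of the draws
target = tau * B @ B.T
```

Up to Monte Carlo error, the empirical covariance matches τBB⊤, which is the equivalence the SNI step relies on.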

Appendix B: Random Parameter Selection for Oscillator Array (24)

To assign values for the parameters of the oscillator array (24), either the normal distribution N(μ, σ_N) with mean μ and variance σ_N or the uniform distribution U[a, b], with a denoting the minimum and b the maximum value, is sampled. The parameter values are drawn from the following distributions:
m_n ∼ U[0.5, 2],   k_n ∼ U[0.5, 2],   s_n ∼ U[0.05, 0.2],   c_n ∼ U[0.01, 0.03],
κ_n ∼ U[0, 0.1],   f_j ∼ N(0, 0.1),   g_j ∼ N(0, 1),   Ω ∼ U[1, 2]
(A6)

The distributions (A6) are selected such that the linear unforced limit of the oscillator array (24) (κ_n → 0 and σ → 0) is a weakly damped oscillator customary in the structural dynamics literature (cf. Géradin and Rixen [25]). The parameter values for the numerical experiments presented in Table 2 are available online.⁶
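A minimal sketch of this sampling with NumPy's random generator; the dictionary keys are hypothetical names, and N(0, 0.1) is interpreted as variance 0.1, as stated above:

```python
import numpy as np

def draw_parameters(N_star, rng):
    """Draw one random parameter set for an N*-oscillator array per (A6)."""
    return {
        'm':     rng.uniform(0.5, 2.0, N_star),
        'k':     rng.uniform(0.5, 2.0, N_star),
        's':     rng.uniform(0.05, 0.2, N_star),
        'c':     rng.uniform(0.01, 0.03, N_star),
        'kappa': rng.uniform(0.0, 0.1, N_star),
        'f':     rng.normal(0.0, np.sqrt(0.1), N_star),  # N(0, 0.1): variance 0.1
        'g':     rng.normal(0.0, 1.0, N_star),
        'Omega': rng.uniform(1.0, 2.0),                  # one frequency per draw
    }

params = draw_parameters(10, np.random.default_rng(5))
```

Each call produces one randomly parameterized array, matching the setup of the "Random parameters" columns in Table 2.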

References

1. Gardiner, C. W., 2004, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences (Springer Series in Synergetics 13), 3rd ed., Springer, Berlin.
2. Wirsching, P. H., Paez, T. L., and Ortiz, K., 2006, Random Vibrations: Theory and Practice, Dover Publications, Mineola, NY.
3. Wilkinson, D. J., 2009, "Stochastic Modelling for Quantitative Description of Heterogeneous Biological Systems," Nat. Rev. Genet., 10(2), pp. 122–133. 10.1038/nrg2509
4. Risken, H., 1996, "Fokker-Planck Equation," The Fokker-Planck Equation, Springer, Berlin, Heidelberg, pp. 63–95.
5. Soize, C., 1994, The Fokker-Planck Equation for Stochastic Dynamical Systems and Its Explicit Steady State Solutions, Vol. 17, World Scientific, Singapore.
6. Caughey, T., 1971, "Nonlinear Theory of Random Vibrations," Advances in Applied Mechanics, Vol. 11, Elsevier, Amsterdam, The Netherlands, pp. 209–253.
7. Lin, Y., and Cai, G., 2004, Probabilistic Structural Dynamics: Advanced Theory and Applications (McGraw-Hill Engineering Reference), McGraw-Hill, New York.
8. Pichler, L., Masud, A., and Bergman, L. A., 2013, "Numerical Solution of the Fokker–Planck Equation by Finite Difference and Finite Element Methods: A Comparative Study," Computational Methods in Stochastic Dynamics, Springer, Berlin, Heidelberg, pp. 69–85.
9. Wehner, M. F., and Wolfer, W., 1983, "Numerical Evaluation of Path-Integral Solutions to Fokker-Planck Equations," Phys. Rev. A, 27(5), pp. 2663–2670. 10.1103/PhysRevA.27.2663
10. Yu, J., Cai, G., and Lin, Y., 1997, "A New Path Integration Procedure Based on Gauss-Legendre Scheme," Int. J. Non-Linear Mech., 32(4), pp. 759–768. 10.1016/S0020-7462(96)00096-0
11. Spencer, B., and Bergman, L., 1993, "On the Numerical Solution of the Fokker-Planck Equation for Nonlinear Stochastic Systems," Nonlinear Dyn., 4(4), pp. 357–372. 10.1007/BF00120671
12. Masud, A., and Bergman, L., 2005, "Solution of the Four Dimensional Fokker-Planck Equation: Still a Challenge," ICOSSAR 2005, Rotterdam, pp. 1911–1916.
13. Kloeden, P. E., and Platen, E., 1999, Numerical Solution of Stochastic Differential Equations (Applications of Mathematics 23), 3rd corrected printing, Springer, Berlin.
14. Rüemelin, W., 1982, "Numerical Treatment of Stochastic Differential Equations," SIAM J. Numer. Anal., 19(3), pp. 604–613. 10.1137/0719041
15. Wagner, W., and Platen, E., 1978, Approximation of Ito Integral Equations, Akademie der Wissenschaften der DDR, Zentralinstitut für Mathematik und Mechanik, Berlin.
16. Milstein, G. N., 1994, Numerical Integration of Stochastic Differential Equations, Vol. 313, Springer Science & Business Media, Dordrecht, The Netherlands.
17. Kannan, D., and Lakshmikantham, V., 2001, Handbook of Stochastic Analysis and Applications, CRC Press, New York.
18. Platen, E., 1999, "An Introduction to Numerical Methods for Stochastic Differential Equations," Acta Numerica, 8, pp. 197–246. 10.1017/S0962492900002920
19. Boyce, W. E., 1978, "Approximate Solution of Random Ordinary Differential Equations," Adv. Appl. Probability, 10(1), pp. 172–184. 10.2307/1426724
20. Milstein, G. N., and Tret'yakov, M. V., 1997, "Numerical Methods in the Weak Sense for Stochastic Differential Equations With Small Noise," SIAM J. Numer. Anal., 34(6), pp. 2142–2167. 10.1137/S0036142996278967
21. Milstein, G. N., and Tretyakov, M. V., 2013, Stochastic Numerics for Mathematical Physics, Springer Science & Business Media, Berlin, Heidelberg.
22. Mannella, R., 2002, "Integration of Stochastic Differential Equations on a Computer," Int. J. Mod. Phys. C, 13(9), pp. 1177–1194. 10.1142/S0129183102004042
23. Gear, C. W., 1971, Numerical Initial Value Problems in Ordinary Differential Equations (Series in Automatic Computation), Prentice-Hall, Englewood Cliffs, NJ.
24. De Jalon, J. G., and Bayo, E., 2012, Kinematic and Dynamic Simulation of Multibody Systems: The Real-Time Challenge, Springer Science & Business Media, New York.
25. Géradin, M., and Rixen, D. J., 2014, Mechanical Vibrations: Theory and Application to Structural Dynamics, Wiley, Hoboken, NJ.
26. Vig, J. R., and Kim, Y., 1999, "Noise in Microelectromechanical System Resonators," IEEE Trans. Ultrason., Ferroelectr., Freq. Control, 46(6), pp. 1558–1565. 10.1109/58.808881
27. Balachandran, B., and Magrab, E. B., 2018, Vibrations, Cambridge University Press, Cambridge, UK.
28. Verhulst, F., 2005, Methods and Applications of Singular Perturbations, Springer, New York.
29. Nayfeh, A. H., 2008, Perturbation Methods, Wiley, Weinheim, Germany.
30. Blagoveshchenskii, Y. N., 1962, "Diffusion Processes Depending on a Small Parameter," Theory Probab. Its Appl., 7(2), pp. 130–146. 10.1137/1107013
31. Freidlin, M. I., and Wentzell, A. D., 1998, Random Perturbations of Dynamical Systems, Springer, Heidelberg, pp. 15–43.
32. Kaszás, B., and Haller, G., 2020, "Universal Upper Estimate for Prediction Errors Under Moderate Model Uncertainty," Chaos: Interdiscip. J. Nonlinear Sci., 30(11), p. 113144. 10.1063/5.0021665
33. Kovacic, I., and Brennan, M. J., 2011, The Duffing Equation: Nonlinear Oscillators and Their Behaviour, Wiley, Hoboken, NJ.
34. Guckenheimer, J., and Holmes, P., 2002, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields (Applied Mathematical Sciences 42), Corr. 7th printing, Springer, New York.
35. Breunung, T., 2021, "Existence of Quasi-Periodic Responses in Quasi-Periodically Forced Nonlinear Mechanical Systems," Nonlinear Dyn., 105(3), pp. 1977–2004. 10.1007/s11071-021-06665-z
36. Van Kampen, N. G., 1976, "The Expansion of the Master Equation," Adv. Chem. Phys., 34, pp. 245–309. 10.1002/9780470142530.ch5
37. Dankowicz, H., and Schilder, F., 2013, Recipes for Continuation, SIAM, Philadelphia, PA.
38. Sun, J. Q., and Hsu, C. S., 1990, "The Generalized Cell Mapping Method in Nonlinear Random Vibration Based Upon Short-Time Gaussian Approximation," ASME J. Appl. Mech., 57(4), pp. 1018–1025. 10.1115/1.2897620
39. Wehner, M., and Wolfer, W., 1987, "Numerical Evaluation of Path-Integral Solutions to Fokker-Planck Equations. III. Time and Functionally Dependent Coefficients," Phys. Rev. A, 35(4), pp. 1795–1801. 10.1103/PhysRevA.35.1795
40. Butcher, J. C., 2016, Numerical Methods for Ordinary Differential Equations, Wiley, Hoboken, NJ.
41. Sievers, A., and Takeno, S., 1988, "Intrinsic Localized Modes in Anharmonic Crystals," Phys. Rev. Lett., 61(8), pp. 970–973. 10.1103/PhysRevLett.61.970
42. Vakakis, A. F., and Cetinkaya, C., 1993, "Mode Localization in a Class of Multidegree-of-Freedom Nonlinear Systems With Cyclic Symmetry," SIAM J. Appl. Math., 53(1), pp. 265–282. 10.1137/0153016
43. Dick, A., Balachandran, B., and Mote, C., 2008, "Intrinsic Localized Modes in Microresonator Arrays and Their Relationship to Nonlinear Vibration Modes," Nonlinear Dyn., 54(1–2), pp. 13–29. 10.1007/s11071-007-9288-0
44. Perkins, E., Kimura, M., Hikihara, T., and Balachandran, B., 2016, "Effects of Noise on Symmetric Intrinsic Localized Modes," Nonlinear Dyn., 85(1), pp. 333–341. 10.1007/s11071-016-2688-2
45. Balachandran, B., Breunung, T., Acar, G., Abdulrahman, A., and Yorke, J., 2022, "Dynamics of Circular Oscillator Arrays Subjected to Noise," Nonlinear Dyn., 108(1), pp. 1–14. 10.1007/s11071-021-07165-w
46. Friedman, J., Hastie, T., Tibshirani, R., et al., 2001, The Elements of Statistical Learning (Springer Series in Statistics), Vol. 1, Springer, New York.
47. Higham, D. J., 2001, "An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations," SIAM Rev., 43(3), pp. 525–546. 10.1137/S0036144500378302