Abstract
This paper studies a sparse reconstruction-based approach to learn time–space fractional differential equations (FDEs), i.e., to identify parameter values and, in particular, the order of the fractional derivatives. The approach uses a generalized Taylor series expansion to generate, in every iteration, a feature matrix, which is used to learn the fractional orders of both temporal and spatial derivatives by minimizing the least absolute shrinkage and selection operator (LASSO) objective using the differential evolution (DE) algorithm. To verify the robustness of the method, numerical results for the time–space fractional diffusion equation, wave equation, and Burgers' equation at different noise levels in the data are presented. Finally, the methodology is applied to a realistic example in which the fractional differential equation underlying published experimental data from an in vitro cell culture assay is learned.
1 Introduction
Fractional differential equations (FDEs) involve fractional derivatives. In contrast to traditional differential equations, which model the rate of change of a system using integer-order derivatives, FDEs generalize this concept. Fractional-order derivatives can provide more accurate descriptions of physical phenomena that exhibit nonlocal and non-Markovian behavior [1,2]. For example, many natural phenomena, such as the diffusion of particles in porous media [3], exhibit subdiffusive or superdiffusive behavior that cannot be accurately captured by traditional integer-order differential equations. FDEs can provide more accurate descriptions of these phenomena, which is useful in a range of scientific and engineering applications. Additionally, FDEs have been used in the modeling of complex systems such as viscoelastic materials [4], non-Newtonian fluids, and fractal structures, among others [5,6]. Multiterm fractional differential equations involve more than one fractional derivative. These equations are commonly used to model complex systems that exhibit multiple time scales and memory effects, such as in mechanics [7]. The Basset equation [8] and the Bagley–Torvik equation [7] are examples of multiterm fractional differential equations. Discovering (“learning”) these equations requires a deep understanding of the theory, the experimental data, and the underlying phenomena associated with the dynamics being modeled.
Data-driven discovery of differential equations is a groundbreaking approach that leverages computational techniques and large datasets to uncover hidden mathematical relationships governing complex systems. In the field of learning differential equations, two main approaches are commonly employed: learning with known form and learning with unknown form. For differential equations of known form, Brunton et al. [9] explored nonlinear systems through sparse identification; however, this approach becomes computationally expensive in higher dimensions. To address this challenge, researchers have turned to physics-informed neural networks, which have been used to learn the parameters of differential equations of known form, although training these networks significantly increases the computational cost. More recently, the work of Ren et al. [10] has focused on learning differential equations of unknown form. This approach allows the underlying form of a differential equation to be discovered directly from data, providing a more flexible and exploratory framework for modeling complex systems.
Motivated by the wide-ranging applications of FDEs in diverse areas, researchers have begun to investigate methods for learning these types of equations as well. Data-driven discovery of FDEs poses a challenge due to the complexity of the integral operators defining fractional-order derivatives. Therefore, selecting an appropriate machine learning technique is crucial. In the learning-with-unknown-form approach, the objective is to recover differential equations when their specific form or structure is not known beforehand; the emphasis is on exploration, without relying on predetermined mathematical expressions. Schaeffer [11] employed a sparse reconstruction technique, utilizing a Taylor expansion, to learn partial differential equations (PDEs) via numerical optimization, specifically the Douglas–Rachford algorithm. However, applying this algorithm to time- and space-fractional derivatives is challenging due to the variability of the feature matrix and the resulting nonlinearity of the system of equations. To address this issue, Singh et al. [12] introduced the use of differential evolution (DE) for minimizing the least absolute shrinkage and selection operator (LASSO) objective in order to learn time-fractional partial differential equations (fPDEs). Their method effectively learned the temporal Caputo fractional order and the underlying fPDE from given data, even in the presence of different levels of noise. This marked the first application of differential evolution to learning time-fractional partial differential equations.
Differential evolution has gained widespread usage in solving optimization problems across various domains, including image processing [13], energy conservation [14], text classification [15], and electromagnetics [16]. Building upon the work of Singh et al. [12], we extend the application of differential evolution to time–space fractional differential equations, where the aim is to learn the specific structure of the FDE. In related research on learning FDEs of known form, Gulian et al. [17] employed Gaussian process regression to learn space-fractional differential equations; Pang et al. [18] utilized physics-informed neural networks to learn time–space fractional differential equations, with a specific focus on fractional advection–diffusion equations; and Rudy et al. [19] addressed learning equations using finite differences and polynomial approximation. The latter approach, however, performed poorly on noisy data, and its use of neural networks was restricted to specific types of fPDEs. Notably, there is currently no published result on learning time–space fractional differential equations directly from given data when the form of the FDE is not known. This highlights the gap in existing research and motivates the investigation into learning time–space fractional differential equations using differential evolution. By leveraging the power of this optimization algorithm, we aim to overcome the limitations and address the challenges associated with learning these complex equations from data.
In this study, the research conducted by Singh et al. [12] is extended to the time–space fractional diffusion equation, the wave equation, the fractional Burgers' equation, and an underlying fractional differential equation associated with measured data. One unique aspect of the approach is the dynamic nature of the feature matrix, which changes after each iteration. Differential evolution, together with the sparse reconstruction-based approach, proves particularly effective in handling these updates of the feature matrix.
This paper is structured as follows: Sec. 2.1 provides an overview of the sparse reconstruction technique applied to time–space fractional differential equations, utilizing the Taylor series approach and a feature matrix that is regenerated in every iteration. Sec. 2.2 discusses the application of differential evolution, along with modifications made to the algorithm. Section 3 presents the numerical experiments and simulation results, showcasing the effectiveness of the proposed approach through various examples. Section 4 offers concluding remarks and outlines future directions for research.
2 Methods
2.1 Sparse Reconstruction of Time–Space Fractional Differential Equations.
where λ is the regularization parameter of the l1 regularization. Note that l1 regularization promotes sparsity because it imposes a constant preference for smaller parameter values: the derivative of the regularization term with respect to each parameter has constant magnitude, independent of the parameter's size. In that sense, feature selection is built into the l1 regularization technique. In l2 regularization, on the other hand, that preference for smaller parameter values itself becomes small as a parameter approaches zero; l2 regularization therefore drives parameter values close to 0 but not exactly to 0.
When dealing with time-fractional differential equations, the feature matrix depends only on t, but here it also depends on the space-fractional orders, which adds complexity. Differential evolution deals with this efficiently by generating new space-fractional orders and updating the feature matrix after each iteration, as illustrated in the sketch below.
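To make this concrete, the following minimal Python sketch evaluates the LASSO objective (2.6) and rebuilds the feature matrix for a proposed pair of space-fractional orders. The helper `frac_deriv` and the particular candidate columns are illustrative placeholders, not the exact dictionary used in this work.

```python
import numpy as np

def lasso_objective(a, Phi, b, lam):
    """LASSO value ||Phi a - b||_2^2 + lam * ||a||_1 for coefficients a."""
    residual = Phi @ a - b
    return residual @ residual + lam * np.sum(np.abs(a))

def build_feature_matrix(u, x, beta, gamma, frac_deriv):
    """Rebuild the dictionary for the current space-fractional orders.

    `frac_deriv(u, x, order)` is a placeholder for any routine returning
    the space-fractional derivative of the gridded solution `u`. Because
    the columns below depend on (beta, gamma), the matrix must be
    regenerated whenever DE proposes new orders.
    """
    cols = [
        u.ravel(),                              # candidate term u
        frac_deriv(u, x, beta).ravel(),         # d^beta u / dx^beta
        frac_deriv(u, x, gamma).ravel(),        # d^gamma u / dx^gamma
        (u * frac_deriv(u, x, beta)).ravel(),   # a nonlinear candidate
    ]
    return np.column_stack(cols)
```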
2.2 Differential Evolution.
To find minimizers of (2.6), DE is used. DE is an iterative optimization algorithm developed by Storn and Price [20] to solve nonlinear optimization problems. It has a unique advantage over other optimization methods, such as gradient descent, in that it does not require the objective function to be differentiable, which vastly facilitates the numerical treatment of (2.6). The algorithm is used to find minimizers (α, β, γ, a) of the optimization problem (2.6) as follows:
Initially, the population matrix P^(0) ∈ R^(N×M) (R^(N×M) denotes the space of N × M matrices with real entries) is assembled by sampling N = 50 parameter sets (α_i, β_i, γ_i, a_i); depending on the form of the candidate equation, M = 12 or M = 13. Each parameter is sampled from a uniform distribution over its own admissible interval. This differs from traditional DE, where typically the same admissible interval is used for all parameters. Let p_i^(g) ∈ R^(1×M) denote the parameter set (α_i, β_i, γ_i, a_i), i.e., the p_i^(g) are the rows of the population matrix P^(g), where g denotes the generation.
Then, the following steps are iterated until a stopping condition is reached:
- The standard mutation and crossover steps of DE are applied. In the mutation step, for each parameter set p_i^(g), termed the target vector, three other parameter sets of the same generation g are selected at random. The component-wise difference of the first two, multiplied by a scalar weight, is added to the third, which yields the so-called donor vector v_i^(g). In every iteration, the weight (scaling factor) is sampled afresh from a uniform distribution over a prescribed interval. Every component of the donor vector is then compared with the bounds of its admissible interval: if a component of v_i^(g) is smaller than its lower bound, it is replaced by that lower bound; analogously, any component larger than its upper bound is replaced by the upper bound.
- In the crossover step, a trial vector u_i^(g) is obtained for each target vector in the population matrix. For this step, let r_j (j = 1, …, M) be sampled from the uniform distribution on the unit interval, let J_r be an integer sampled from the uniform distribution of integers between 1 and M, and let C_p be the recombination probability. Then, the j-th component of the trial vector is given by

u_{i,j}^(g) = v_{i,j}^(g) if r_j ≤ C_p or j = J_r, and u_{i,j}^(g) = p_{i,j}^(g) otherwise,

so that at least one component (the J_r-th) is always inherited from the donor vector.
- For every target vector p_i^(g) and every trial vector u_i^(g), the feature matrix is recalculated using the exact solution at the points (t_j, x_i). This differs from the case of time-fractional differential equations, where the feature matrix does not change between iterations owing to the absence of space-fractional derivatives.
- Using the LASSO operator (2.6), the error values of all target vectors and all trial vectors are calculated. Based on these error values, the parameter sets of the next generation are selected: if the error of the trial vector does not exceed that of its target vector, the trial vector replaces the target vector in generation g + 1; otherwise, the target vector is retained.
- The next iteration is started by returning to the first step unless a stopping condition is reached. In the case of noise-free data, the iterations are stopped as soon as the error value of at least one parameter set falls below a given threshold; with noisy data, the procedure is stopped after 1000 iterations.
From the last generation g, the parameter set with the lowest approximation error is selected. It contains the fractional orders and coefficients determining the shape of the learned equation. By following these steps, the DE algorithm effectively learns the desired fractional orders and coefficients and recovers the original form of the equation. Figure 1 summarizes the above procedure, and a minimal code sketch of the loop follows.
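The following Python sketch captures the loop described above. It assumes per-parameter bounds `lower`/`upper`, a scaling factor sampled from [0.5, 1] (the sampling interval used in this work is not restated here), and a recombination probability `cp`; it is a schematic of the procedure rather than the exact implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def differential_evolution(objective, lower, upper, n_pop=50, cp=0.5,
                           max_iter=1000, tol=1e-8):
    """DE loop: initialization, mutation, crossover, selection, stopping."""
    M = len(lower)
    # Initialization: each parameter drawn from its own admissible interval.
    pop = lower + (upper - lower) * rng.uniform(size=(n_pop, M))
    errs = np.array([objective(p) for p in pop])

    for _ in range(max_iter):
        for i in range(n_pop):
            # Mutation: donor = p_r3 + F * (p_r1 - p_r2), with F resampled
            # every time (assumed interval [0.5, 1]).
            r1, r2, r3 = rng.choice(
                [j for j in range(n_pop) if j != i], size=3, replace=False)
            F = rng.uniform(0.5, 1.0)
            donor = pop[r3] + F * (pop[r1] - pop[r2])
            donor = np.clip(donor, lower, upper)  # project onto the bounds

            # Binomial crossover: index Jr guarantees at least one
            # component is inherited from the donor.
            Jr = rng.integers(M)
            mask = rng.uniform(size=M) <= cp
            mask[Jr] = True
            trial = np.where(mask, donor, pop[i])

            # Selection: keep the trial if it does not increase the error.
            e_trial = objective(trial)
            if e_trial <= errs[i]:
                pop[i], errs[i] = trial, e_trial

        if errs.min() < tol:  # noise-free stopping criterion
            break

    best = errs.argmin()
    return pop[best], errs[best]
```

Here `objective` would evaluate the LASSO value (2.6) after rebuilding the feature matrix for the fractional orders contained in the parameter vector, as in the sketch of Sec. 2.1.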
3 Results
To illustrate the effectiveness of the proposed method (discussed in Secs. 2.1 and 2.2), numerical examples are presented. For the first example, an equation with spatial derivatives of Riemann–Liouville (RL) type is considered, while the second example involves the Riesz fractional derivative in space. By considering different types of spatial fractional derivatives in these two examples, the robustness of the proposed method across various noise levels is assessed.
The third example is more intricate, since the equation includes nonlinear terms; it is used to explore a larger number of spatial fractional orders than in the preceding examples. In the last examples, data describing the Fisher–KPP equation are considered, and the underlying differential equation is learned using the approach described in Secs. 2.1 and 2.2. Note that in the first three numerical test cases the regularization parameter is λ = 100, while in the last two examples a different value of the regularization parameter is chosen. All simulations are conducted using MATLAB on a 12th Gen Intel(R) Core(TM) i5-1235U processor (1.30 GHz) with 16 GB RAM.
3.1 Time–Space Fractional Diffusion Equation.
where the coefficients a_i are calculated as in Eq. (2.4). The space-fractional derivative terms in Eq. (3.2) are RL fractional derivatives of the exact solution u of Eq. (3.1). In the numerical discretization on the uniform grid (t_j, x_i), the noisy values u(x_i, t_j) + σξ_i are calculated, where ξ_i is normally distributed with mean 0 and variance 1. This represents random noise both in observing discrete values of the solution on the grid (x_i, t_j) and in numerically evaluating the fractional derivatives used to set up the feature matrix. As is evident from Sec. 2.1, the feature matrix changes in every iteration. To test the method, the exact solution is used to generate data points on a uniform grid of 200 × 200 points. Applying the method to the generated data at different noise levels, the algorithm succeeds in determining the values of the time- and space-fractional orders, as well as the form of the diffusion equation represented by the coefficients a. Table 1 provides a summary of the learned values at different noise levels. At all noise levels, the learning algorithm correctly identifies the form of (3.1), namely, a time derivative of order 0.5 and a spatial derivative of order 1.5.
| σ (noise) | α | β | γ | a (rounded to five decimal places) |
|---|---|---|---|---|
| 0 | 0.49018 | 1.50170 | 1.50170 | |
| 0.05 | 0.45720 | 1.52330 | 1.52330 | |
| 0.10 | 0.45699 | 1.52350 | 1.52350 | |
The obtained results illustrate the effectiveness and robustness of the method for learning the time–space fractional diffusion equation, even in the presence of noise, where the noise is incorporated on the left-hand side of (3.2). A sketch of a generic numerical evaluation of RL derivatives is given below.
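For completeness, the following minimal sketch shows how a left-sided RL derivative can be approximated numerically on a uniform grid via the Grünwald–Letnikov (GL) sum. This is an assumed stand-in for illustration, since in this example the derivatives of the known exact solution are available; noisy data enter simply by perturbing `u` before the evaluation.

```python
import numpy as np

def gl_weights(order, n):
    """GL weights w_k = (-1)^k * C(order, k) via the standard recurrence
    w_0 = 1, w_k = w_{k-1} * (k - 1 - order) / k."""
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - order) / k
    return w

def rl_derivative_left(u, h, order):
    """Left-sided RL derivative of the grid function `u` (spacing `h`),
    approximated by the GL sum, which converges to the RL derivative for
    sufficiently smooth functions vanishing at the left boundary."""
    n = len(u)
    w = gl_weights(order, n)
    du = np.empty(n)
    for i in range(n):
        # Sum over u[i], u[i-1], ..., u[0] with weights w[0..i].
        du[i] = np.dot(w[: i + 1], u[i::-1]) / h**order
    return du
```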
3.2 Time–Space Fractional Wave Equation.
where t represents time and x the space coordinate, and the constant c represents the wave speed. The right-hand side of the equation includes the term g(x, t), a nonhomogeneous forcing function whose specific form depends on the particular problem or system under consideration. Applications include symmetry breaking [24] and anomalous transport [25].
Similar to the approach in Sec. 3.1, the aim is to learn the structure of the PDE within the class of Eq. (2.4), where the space-fractional derivatives in the feature matrix are now interpreted as Riesz derivatives. The precise Riesz-type fractional derivatives of the exact solution of Eq. (3.4), as required in Eq. (2.4), are available in closed form and are used to generate the feature matrix in every iteration of the differential evolution algorithm.
| σ (noise) | α | β | γ | a (rounded to five decimal places) |
|---|---|---|---|---|
| 0 | 1.45780 | 1.50220 | 1.50220 | |
| 0.05 | 1.40290 | 1.50620 | 1.50620 | |
| 0.10 | 1.40440 | 1.50620 | 1.50620 | |
Figure 2 shows contour plots of the exact solution and of the solution of the learned time–space fractional wave equation (3.5). This numerical example demonstrates that the algorithm is effective at learning the symmetric Riesz fractional derivative in addition to nonsymmetric fractional derivatives of Riemann–Liouville and Caputo type. A short sketch of the symmetric construction of the Riesz derivative follows.
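Assuming the standard definition of the Riesz derivative as the weighted symmetric combination of left- and right-sided RL derivatives, a minimal sketch reusing `gl_weights` and `rl_derivative_left` from the previous code block reads:

```python
import numpy as np

def riesz_derivative(u, h, order):
    """Riesz derivative for 1 < order < 2 as
    -(left RL + right RL) / (2 cos(pi * order / 2)); the right-sided
    derivative is obtained by applying the left-sided routine to the
    reversed grid function."""
    left = rl_derivative_left(u, h, order)
    right = rl_derivative_left(u[::-1], h, order)[::-1]
    return -(left + right) / (2.0 * np.cos(np.pi * order / 2.0))
```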
3.3 Time–Space Fractional Burgers' Equation.
Here, the equation incorporates fractional derivatives with respect to both time t and space x. The parameters ν and λ represent the coefficients of diffusion and flux, respectively, and the function g(x, t) denotes a source or forcing term. The time–space fractional Burgers' equation finds applications in various fields involving the dynamics of fractional diffusion processes.
For a suitably chosen forcing term g(x, t), an exact solution of (3.7) is available and is used to generate the data.
| σ (noise) | α | β | γ | a (rounded to five decimal places) |
|---|---|---|---|---|
| 0 | 0.49985 | 0.51122 | 1.48790 | |
| 0.05 | 0.45320 | 0.52300 | 1.46299 | [0,0,0,0.10089,0.03764,0,0,0.95299,0,0] |
| 0.10 | 0.45098 | 0.52486 | 1.46010 | [0,0,0,0.10120,0.03781,0,0,0.95105,0,0] |
The obtained results strongly indicate the robustness of the method for learning even nonlinear time–space fractional equations.
3.4 Time–Space Fractional Fisher–Kolmogorov–Petrovsky–Piskunov Equation.
The solution of Eq. (3.10) is referred to as the “learned simulated solution.” Table 4 summarizes the learned parameters of the Fisher–KPP equation for different levels of noise. The identified fractional orders α and γ are very close to the classical orders 1 and 2, although the algorithm searches over a range of fractional values. The learned value of β is close to 1, indicating that a fractional derivative of order β with respect to x is admitted; since the coefficient of u_x is 0, however, this derivative plays no role. The values of D and r are correctly identified, and the noninvolved terms have coefficient 0 when rounded to two decimal places. Figure 3 shows that the simulated solution of (3.9) and the solution of the Fisher–KPP equation with the learned parameters (the learned simulated solution) are very close. These results demonstrate that the learning method is very accurate in determining the Fisher–KPP equation (3.9), even when the exact solution is not known and only a discrete solution on the grid is provided.
| σ (noise) | α | β | γ | a (rounded to five decimal places) |
|---|---|---|---|---|
| 0 | 0.99999 | 0.99999 | 1.99999 | |
| 0.05 | 0.96453 | 0.97775 | 1.96988 | [0,0.09980,0,0.01698,−0.10776,0,0,0,0,0] |
| 0.10 | 0.96200 | 0.97639 | 1.96380 | [0,0.09899,0,0.01652,−0.10834,0,0,0,0,0] |
3.5 Learning a Fractional Fisher–Kolmogorov–Petrovsky–Piskunov Equation From Data.
Equation (3.11) is numerically simulated using the finite difference method, where the spatial fractional derivative is approximated using the fractional trapezoidal method due to its higher order of accuracy compared to the L2 scheme [31]. The (classical) time derivative is approximated using the forward difference approach. As initial and boundary values, we use the values given by the smoothened data. The resulting solution of Eq. (3.11) is what we call the “learned solution from the smoothened data,” shown in Fig. 4. It compares well with the data, and we therefore conclude that the algorithm is successful in learning the equation underlying this phenomenon. A schematic of the time-stepping is sketched below.
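As an illustration of this simulation step, the sketch below marches an assumed fractional Fisher–KPP form, u_t = D ∂^β u/∂|x|^β + r u(1 − u), forward in time with the forward-difference scheme and re-imposes the boundary values taken from the smoothened data. The matrix `A_beta` stands in for the fractional trapezoidal discretization of the spatial derivative (any consistent discretization, e.g., the GL-based Riesz routine above, could be substituted), so this is a sketch rather than the exact implementation.

```python
import numpy as np

def step_forward(u, A_beta, D, r, dt):
    """One explicit (forward-difference) time step of the assumed
    fractional Fisher-KPP model."""
    return u + dt * (D * (A_beta @ u) + r * u * (1.0 - u))

def simulate(u0, A_beta, D, r, dt, bc):
    """March the initial profile `u0` (taken from the smoothened data)
    forward in time. `bc` has shape (n_steps, 2) and holds the left/right
    boundary values from the smoothened data at each time level."""
    u = u0.copy()
    for left, right in bc:
        u = step_forward(u, A_beta, D, r, dt)
        u[0], u[-1] = left, right  # Dirichlet values from the data
    return u
```

Being explicit, this scheme requires a sufficiently small time step for stability.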
4 Discussion and Conclusion
In this study, we have proposed a novel approach for learning time–space fractional differential equations using a combination of sparse reconstruction and differential evolution. The objective of the research is to accurately capture the dynamics of time–space fractional diffusion, wave, and Burgers' equations, both in the presence and absence of noise. Through numerical experiments, the effectiveness and accuracy of the proposed approach are demonstrated. The sparse reconstruction technique efficiently extracts relevant features from the given data, while differential evolution drives the optimization that determines the parameters of the fractional differential equations. The learned models accurately capture the intricate dynamics of time–space fractional systems, even in the presence of noisy input data. A real-world example is also included, and the underlying differential equation is learned. The solution to this equation approximates the data well, although we do not minimize the error itself but rather the residual of the underlying equation evaluated at the (smoothened) data. The advantage of this approach is that the forward problem never has to be solved numerically during learning. This suggests that the proposed approach is well suited for real-world scenarios where data may be subject to noise and uncertainty.
Looking ahead, there are several promising directions for future research. First, the approach can be extended to higher dimensions, allowing for the modeling and analysis of complex systems in multiple spatial dimensions. Second, the integration of additional constraints and regularization techniques could further enhance the accuracy and stability of the learned models. Third, the applicability of the methodology can be extended to other types of real-world data. Fourth, the concept of learning the structure of fractional models underlying given data should also be evaluated through classical error minimization in combination with inverse-problem regularization techniques. Finally, in the presence of high noise in real-world data, proper smoothing techniques need to be implemented in both dimensions to calculate the time- and space-fractional orders of the derivatives, a notable challenge that we wish to overcome in the near future.
In conclusion, the research demonstrates the effectiveness of the sparse reconstruction and differential evolution approach in learning time–space fractional differential equations. The ability to accurately learn and analyze time–space fractional diffusion, wave, and Burgers' equations opens up new avenues for applications in diverse fields, such as physics, engineering, and finance. The findings presented in this study contribute to the field of fractional calculus and provide a solid foundation for further advancements in the understanding and modeling of fractional systems.
Funding Data
Department of Science and Technology, India (Grant No. SR/FST/MS-1/2019/45; Funder ID: 10.13039/501100001409).
Data Availability Statement
The authors attest that all data for this study are included in the paper.