In the present study, a general probabilistic design framework is developed for cyclic fatigue life prediction of metallic hardware using methods that address uncertainty in both experimental data and computational models. The methodology involves: (i) fatigue tests conducted on coupons of Ti6Al4V material, (ii) continuum damage mechanics (CDM) based material constitutive models to simulate the cyclic fatigue behavior of the material, (iii) variance-based global sensitivity analysis, (iv) a Bayesian framework for model calibration and uncertainty quantification, and (v) computational life prediction and probabilistic design decision making under uncertainty. The outcomes of computational analyses using the experimental data demonstrate the feasibility of the probabilistic design methods for model calibration in the presence of incomplete and noisy data. Moreover, the probabilistic design methods enable assessment of the reliability of the fatigue life predicted by computational models.

## Introduction

Recent advances in computational simulation of materials and mechanical systems have increased the adoption of simulation-based engineering design, which relies on computational models to predict component performance. Such an approach allows for decision-making prior to the availability of physical prototypes and provides insights into phenomena for which experimental observations are unavailable due to measurement limitations. Despite these benefits, a natural question arises regarding the reliability of such decision-making as a means of predicting physical reality. This is an important question, as there is uncertainty in every phase of the design process of engineering systems [1,2], including manufacturing imprecision, variation of material properties, and incomplete or missing experimental measurements. Such uncertainties can have significant impacts on design performance; dealing with them is one of the emerging challenges in product development and the main bottleneck in simulation-based engineering design. Conventional (deterministic) design methods are inadequate to cope with uncertainties and result in invalid decision-making and designs with unknown reliability or unknown risk. Probabilistic engineering design, on the other hand, relies on probabilistic and statistical methods to assess reliability within a design criterion. This allows probabilistic risk assessment to be built into the design process and provides means for the quantification of uncertainties and their mitigation at the design stage. In this regard, probabilistic design methods are essential to modern engineering, enabling the solution of complex engineering problems in the face of uncertainty.

The prediction of the life of mechanical components experiencing cyclic loading, such as turbine engine components, is of vital importance to aircraft propulsion systems and their mission capability. Fatigue cracking and failure of turbine engine components are typically initiated in areas of high mechanical stress. Surface treatments, such as laser peening, enable engineering of protective prestress into these high stress areas, and have been proven to be of great benefit in extending component lifetimes and mitigating the safety-related risk of high cycle fatigue. Continuum damage mechanics (CDM) [3,4] provides an effective approach for characterizing fatigue damage evolution and enables the prediction of component life under various conditions, such as initial damage or surface treatments like shot peening or laser peening. Despite recent developments in fatigue models [5,6], in many cases the fatigue life predicted by computational models is orders of magnitude different from test or field observations. Such discrepancy stems from the probabilistic nature of fatigue along with uncertainties introduced in geometry, loading, boundary conditions, material properties, and the modeling itself. One source of uncertainty is the dependency of the initiation and growth of a fatigue crack on the randomness of mechanical properties and the material's microstructure [7–9]. Moreover, a large number of tests are required to establish the full spectrum of fatigue strength of a material. Acquiring such experimental data is costly, and the data might not always be accessible. Furthermore, the available experimental measurements might be contaminated with noise introduced by experimental devices or material variability. On the other hand, continuum damage mechanics computational models are intrinsically based on simplifying assumptions about physical reality and are incapable of entirely characterizing the complex fatigue phenomena. Hence, addressing such uncertainties is crucial for fatigue life prediction at the design stage; it enables reliability and risk assessment of the predicted lifetime and opens the potential for reduced maintenance and replacement costs of modern engine components, enhanced safety, and increased performance.

A large number of investigators have addressed stochastic fatigue phenomena in the literature [10,11]. Such efforts involve developing probabilistic techniques to model fatigue crack propagation under constant or random loading [12,13], as well as studying the randomness of the fatigue damage accumulation process caused by the random distribution of stress amplitude, number of cycles at a stress amplitude, and fatigue resistance of the material [14–17]. Naderi et al. [18] simulated fatigue damage and life scatter of metallic components using a continuum elastoplastic damage model. The progressive fatigue failure was simulated by applying random properties to the finite element domain. Bahloul et al. [19] used a probabilistic approach to predict the fatigue life of a cracked structure repaired by a high interference fit bushing. In the context of a probabilistic approach, Zhu et al. [20] studied the influence of material variability on the multiaxial low cycle fatigue of notched components by using the Chaboche plasticity model and the Fatemi–Socie criterion. Kwon et al. [21] estimated the fatigue life below the constant amplitude fatigue threshold of steel bridges using a probabilistic method on the basis of a bilinear stress-life approach. Zhu et al. [22,23] developed a Bayesian framework for probabilistic fatigue life prediction and uncertainty modeling of aircraft turbine disk alloys. They also quantified the uncertainty of material properties and the model uncertainty resulting from choices among different deterministic models in the low cycle fatigue regime. Recently, Babuska et al. [24] proposed a systematic approach for model calibration, model selection, and model ranking with reference to stress-life data drawn from fatigue experiments performed on 7075-T6 aluminum alloys. The stress-life models are fitted to the data by maximum likelihood methods, and life distributions of the alloy specimens are estimated. The models are also compared and ranked by classical measures of fit based on information criteria. Moreover, a Bayesian approach is considered that provides posterior distributions of the model parameters.

The present study develops a computationally feasible stochastic methodology for the prediction and design of material components experiencing cyclic fatigue. In this regard, we select a nonlinear continuum damage theory for fatigue analysis with Ti6Al4V as the subject material of interest. Global sensitivity analysis is conducted to determine the important model parameters influencing fatigue life prediction. To provide information for model calibration, fatigue tests are performed on coupons of Ti6Al4V material at different stress levels. In order to factor the inherent uncertainties in the data and the model inadequacy into a statistical analysis for the calibration of the system, a Bayesian framework is also developed for defining, updating, and quantifying uncertainties in the model, the experimental data, and the target quantities of interest. The calibrated model is then utilized for computational life prediction of hardware under uncertainty. In this regard, an illustrative inverse design scenario is considered in which surface treatment is a design variable. The optimal design variable and its reliability for a target performance (lifetime) expected from the component are assessed.

This paper is structured as follows: A summary of the fatigue damage model and laser peening modeling is presented in Sec. 2. Section 3 describes the computational methods employed for uncertainty treatment. This involves the global sensitivity analysis performed on the model as well as a Bayesian framework for statistical calibration of the model against experimental data with quantified uncertainties. In Sec. 4, the results of the sensitivity analysis, the experimental results used in the statistical analyses, and the statistical calibration of the fatigue model are presented. Computational prediction and design decision making under uncertainty, along with an illustrative design example, are also discussed in this section. A summary and concluding remarks are given in Sec. 5.

## Fatigue Analysis Using Continuum Damage Mechanics

In the CDM approach, the mechanical behavior of damaged material is described by a set of constitutive and damage evolution equations. Plasticity and damage are tightly coupled in the CDM formulation. At any instant for a given load increment, the plasticity increment depends on the current damage state and, in turn, the damage increment depends on the current plasticity state. For the case of high cycle fatigue, damage is always very localized and occurs at a scale (microscale) much smaller than the scale of the representative volume element (RVE) (mesoscale), where the state of the material remains essentially elastic. Damage in metal often initiates in localized grains and grain boundaries or at inclusions, where stress concentration builds up and causes plastic flow in a few localized grains; thus, plasticity and damage occur in a very localized region inside the RVE, but there is no overall plasticity at the RVE scale. To account for the plasticity and damage interaction in the CDM model for the case of high cycle fatigue loading, a two-scale model first proposed by Lemaitre [25] is followed in this paper.

The stress at the mesoscale is denoted by $\sigma_{ij}$, the total strain by $\epsilon_{ij}$, the elastic strain by $\epsilon_{ij}^{e}$, and the plastic strain by $\epsilon_{ij}^{p}$. The values at the microscale are denoted by a superscript $\mu$. A finite element analysis is performed to obtain the stress $\sigma_{ij}$ at a location of interest due to remote loading, and the corresponding total strain $\epsilon_{ij}$ and elastic strain $\epsilon_{ij}^{e}$ are computed. For high cycle fatigue loading, the plastic strain $\epsilon_{ij}^{p}$ is normally zero due to remote loading. If the material is subjected to laser or shot peening, then the resulting plastic strain $\epsilon_{ij}^{p}$ is nonzero at the mesoscale. Next, a scale transition law based on the localization laws of Eshelby and Kroner is utilized to derive the corresponding micro elastic strain $\epsilon_{ij}^{\mu e}$ and micro plastic strain $\epsilon_{ij}^{\mu p}$ from the strains at the meso level. The Eshelby–Kroner localization laws [26,27] are written as
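The localization equations themselves do not survive in this excerpt. For reference, in the standard two-scale formulation of Lemaitre [25] they take the following form (quoted here as an assumption about the missing equations, not necessarily the exact variant used in this work):

```latex
\epsilon^{\mu}_{D,ij} = \epsilon_{D,ij} + b\left(\epsilon^{\mu p}_{D,ij} - \epsilon^{p}_{D,ij}\right),
\qquad
\epsilon^{\mu}_{H} = a\,\epsilon_{H},
\qquad
a = \frac{1+\nu}{3(1-\nu)}, \quad
b = \frac{2(4-5\nu)}{15(1-\nu)}
```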

where the subscript $D$ refers to the deviatoric part and the subscript $H$ to the hydrostatic part. The constants in the localization laws are given in terms of the elastic constants (Young's modulus $E$ and Poisson's ratio $\nu$) of the material. In the micro-scale constitutive and damage evolution equations, $C_y$ is the plastic modulus; and the damage strength $S$, damage exponent $s$, and critical damage $D_c$ are material parameters. The quantity $Y$ is the energy release rate term and is given in terms of the stress components as
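The expression for $Y$ is likewise missing from this excerpt; the standard Lemaitre form (stated here for reference, as an assumption about the missing equation) is

```latex
Y = \frac{\sigma_{eq}^{2}\, R_{\nu}}{2E\,(1-D)^{2}},
\qquad
R_{\nu} = \frac{2}{3}(1+\nu) + 3(1-2\nu)\left(\frac{\sigma_{H}}{\sigma_{eq}}\right)^{2}
```

where $\sigma_{eq}$ is the von Mises equivalent stress and $\sigma_{H}$ the hydrostatic stress, both evaluated at the microscale.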

The meso stress $\sigma_{ij}$ is obtained by determining the local stress at the point of interest due to a reference remote load, from finite element computation. Next, for a given remote load history, the local stresses are scaled appropriately, and the local stress history is obtained. The residual strain $\epsilon_{ij}^{p}$ (due to laser peening or another technique) is a constant value obtained from a separate analysis. Each load cycle is broken up into a number of load increments, and the meso strain increment is obtained as $\Delta\epsilon = \epsilon_{n+1} - \epsilon_{n}$. The incremental micro strains are computed from the meso strain increments by assuming that they are completely elastic. If the new state remains within the yield surface, then the assumption of purely elastic micro strains is acceptable and we move to the next load increment. On the other hand, if there is plastic yielding, then the incremental micro plastic strain is computed and the incremental micro elastic strain is corrected so that the material state returns to the yield surface. The incremental damage is calculated from the incremental micro plastic strain and added to the accumulated damage, and the process continues until the total damage crosses a threshold value, which indicates the initiation of a crack. The number of cycles for the damage to reach the critical value is the fatigue crack initiation life of the component under high cycle fatigue loading. The details of obtaining the incremental micro plastic strain and incremental micro damage can be found in Ref. [25].
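The incremental procedure just described can be sketched as a simple loop. The sketch below is only an illustrative uniaxial caricature of the two-scale scheme, not the authors' implementation: it assumes a sinusoidal fully reversed stress path, linear kinematic hardening with an assumed modulus `C`, and a Lemaitre-type damage increment $\Delta D = (Y/S)^{s}\,\Delta p$; all numerical values in the usage note are invented.

```python
import numpy as np

def fatigue_life_two_scale(stress_amp, E, sigma_y, C, S, s, D_c,
                           n_incr=40, max_cycles=100_000):
    """Cycles to reach the critical damage D_c under fully reversed stress.

    Per increment: elastic predictor (yield check), plastic correction back
    to the yield surface via linear kinematic hardening, then a Lemaitre-type
    damage increment dD = (Y/S)**s * dp.  Uniaxial caricature only.
    """
    D, X = 0.0, 0.0                                  # damage, backstress
    t = np.linspace(0.0, 2.0 * np.pi, n_incr + 1)
    sigma_path = stress_amp * np.sin(t)              # one fully reversed cycle
    for cycle in range(1, max_cycles + 1):
        for sigma in sigma_path[1:]:
            f = abs(sigma - X) - sigma_y             # elastic predictor: yield check
            if f > 0.0:
                dp = f / C                           # plastic strain increment
                X += np.sign(sigma - X) * C * dp     # return state to the yield surface
                Y_rel = sigma**2 / (2.0 * E * (1.0 - D)**2)   # energy release rate
                D += (Y_rel / S)**s * dp             # accumulate damage
                if D >= D_c:
                    return cycle                     # crack initiation life
    return max_cycles
```

With Ti6Al4V-like elastic constants (e.g., $E \approx 110$ GPa expressed in MPa) the sketch predicts shorter lives at higher stress amplitudes, which is the qualitative trend the full two-scale model captures.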

## Uncertainty Treatments and Probabilistic Design

The treatment of uncertainty in predictive modeling involves three distinct processes: (1) the statistical inverse process, in which the probability densities of the random model parameters and of the modeling errors in the theoretical structure of the computational model are estimated using measurement data; (2) the statistical forward process, involving propagation of input uncertainties through the computational model to quantify the uncertainties in the quantities of interest (QoIs); and (3) ultimately, using the stochastic model with quantified uncertainties due to both modeling and measurement errors for control or design decision making under uncertainty.

In this section, we describe the methods and techniques we have employed for uncertainty treatment in cyclic fatigue life prediction. These involve the propagation of uncertainty through the forward model and solution process using Monte Carlo methods, global parameter sensitivity analysis, and Bayesian methods for model calibration with quantified uncertainty.

### Forward Uncertainty Propagation and Sensitivity Analysis.

In order to illustrate our proposed probabilistic design, we use the notion of *black-box model*. In this setting, the continuum damage constitutive relation presented in Sec. 2 along with boundary and initial conditions is cast into an abstract form of the computational (forward) model. In a black-box (input–output) model of a system, the underlying character or physics of the relations involved as well as numerical solutions are hidden. This notion is used in this section to lay down a general framework of uncertainty quantification in computational models. It is shown later that the nonintrusive probabilistic methods require only the input and output of the model for forward and inverse uncertainty assessment. In this regard, the probabilistic design method presented in this section is general and can be implemented for any material model.

As shown in Fig. 1, the inputs of a computational model can be categorized into model parameters (vector $\theta $) and design variables (vector $\xi $). Model parameters are the inputs that cannot be controlled, while design variables are the ones that can be controlled and changed to improve the design. Given these inputs, the computational model evaluates the output or response variable $Y(\theta ,\xi )$. The goal of constructing and solving the computational model is to compute specific QoIs that can be determined from the model output *Y*.

For cyclic fatigue life prediction, the model parameters can be the geometry of the component and the material parameters of the continuum damage model (indicating strength, microstructure, and in-service damage), and the design variables are the residual stress (which can be controlled through surface treatments), the operational load, and the usage cycles. The output response in this case is the damage variable, which is used to assess and predict the QoI: the life of the hardware under fatigue damage.

The presence of inevitable uncertainties in modeling and experimental data results in the deviation of computational predictions from physical reality. The main sources of uncertainty are the variability of input parameters, such as manufacturing imprecision in geometry, variation of material properties, and noise in the experimental measurements used for damage model calibration and validation [28–30]. The variability of these factors cannot be reduced in an industrial setting, although it causes response variation and may result in catastrophic events when the product fails. Accounting for such uncertainties in fatigue life prediction involves two main challenges: (1) modeling the effect of uncertainty in input parameters and (2) assessing the effect of uncertainty on the life prediction of the hardware. In order to cope with these uncertainties, we make use of probability theory. In this setting, model parameters and design variables are not deterministic; they are random variables characterized by probability density functions (PDFs). The forward model therefore becomes a stochastic problem: with $\theta $ and $\xi $ characterized by PDFs, the response variable *Y* is itself a random variable. The second challenge involves developing techniques to propagate these uncertainties through models and provide a statistical solution for life prediction. Commonly used methods to propagate these random variables through models include the large family of Monte Carlo methods. Monte Carlo methods lead to a collection of independent realizations of the forward model to solve. One of the main reasons that these methods are so popular in practice is that they are completely nonintrusive, requiring only an existing deterministic solver. The convergence of the basic Monte Carlo method is guaranteed under very weak assumptions and is independent of the dimension of the parameter space. The probability distribution of *Y* obtained from Monte Carlo simulation indicates the sensitivity of the response to the variation of the inputs and the level of confidence in the computational prediction.
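As a concrete illustration of this nonintrusive workflow, the sketch below propagates uniform input uncertainty through a black-box model by plain Monte Carlo. The `life_model` function is a hypothetical closed-form stand-in for the damage solver, and the parameter ranges are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def life_model(theta):
    """Hypothetical stand-in black-box forward model: maps damage-model
    parameters (S, s) to a fatigue life in cycles via a toy power law."""
    S, s = theta
    return 1.0e4 * S**1.5 / s

# Characterize the uncertain inputs by PDFs (uniform, with invented ranges)
N = 5000
S_samples = rng.uniform(0.5, 2.0, N)
s_samples = rng.uniform(1.0, 3.0, N)

# Nonintrusive propagation: one deterministic solve per realization
lives = np.array([life_model(th) for th in zip(S_samples, s_samples)])

# Summarize the induced distribution of the QoI
print(f"mean life = {lives.mean():.0f} cycles, "
      f"95% interval = [{np.percentile(lives, 2.5):.0f}, "
      f"{np.percentile(lives, 97.5):.0f}]")
```

Because the solver is called only through its input–output interface, the same driver works unchanged for the actual continuum damage model.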

The core computational kernel of stochastic algorithms based on the Monte Carlo is the repeated evaluation of the forward model with different parameters. In the context of fatigue damage life prediction models that are described by a system of partial differential equations, it is usually the solution of the deterministic forward model, the map between a single realization of the stochastic parameter and a quantity of interest in the output of a model that dominates the overall cost of solving the statistical problem.

In the context of forward propagation of uncertainty, parameter sensitivity analysis can be considered one of the most useful techniques in probabilistic design. It is the study of how changes in the input factors of a model, qualitatively and quantitatively, affect the variation and uncertainty in the model outputs. As shown in Sec. 4, such analysis determines the most influential model parameters, reduces the computational cost of statistical calibration, and aids in robust design of the system. A qualitative way to visualize the sensitivity of parameters is to produce scatter plots [31], which are clouds of points indicating the model output versus each parameter, constructed by random variations in all parameters. The parameters with significant contribution to the output are the ones with a distinct pattern in the scatter-plot cloud. Among the different quantitative techniques for sensitivity analysis of model output, variance-based methods [32–34] have been proven to be effective and well suited for practical applications. In these methods, the sensitivity of the output to an input variable is measured by the amount of (conditional) variance in the output caused by that specific input.

#### Variance-Based Sensitivity Analysis.

Consider a model $Y = f(\theta_1, \theta_2, \ldots, \theta_k)$ with $k$ uncertain input factors $\theta_i$, in which $Y$ is the model output and $f$ is a square integrable function. Using the Hoeffding decomposition of $f$ [33,34] and conditional expectations of the model, $E(Y|\theta_i)$, one can derive the following decomposition for the output variance, $V(Y)$:

$$V(Y) = \sum_{i} V_{\theta_i}\big(E_{\theta_{\sim i}}(Y|\theta_i)\big) + \sum_{i}\sum_{j>i} V_{\theta_i \theta_j}\big(E_{\theta_{\sim ij}}(Y|\theta_i,\theta_j)\big) + \cdots \tag{5}$$

In Eq. (5), $\theta_i$ is the $i$th factor, $\theta_{\sim i}$ indicates the matrix of all factors except $\theta_i$, $V_{\theta_i}(E_{\theta_{\sim i}}(Y|\theta_i))$ and $V_{\theta_j}(E_{\theta_{\sim j}}(Y|\theta_j))$ are called the first-order effects, and $V_{\theta_i\theta_j}(E_{\theta_{\sim ij}}(Y|\theta_i,\theta_j))$ is the joint effect of $(\theta_i; \theta_j)$ on $Y$. In $V_{\theta_i}(E_{\theta_{\sim i}}(Y|\theta_i))$, the internal expectation represents the mean of $Y$ taken over all possible values of $\theta_{\sim i}$ while $\theta_i$ is fixed, and the outer variance is taken over all possible values of $\theta_i$. Therefore, $V_{\theta_i}(E_{\theta_{\sim i}}(Y|\theta_i))$ is the expected variance reduction when $\theta_i$ is fixed.

The first-order effect, however, does not account for the interactions between $\theta_i$ and $\theta_j$. Decomposing the variance by conditioning with respect to all factors but $\theta_i$ leads to

$$V(Y) = E_{\theta_{\sim i}}\big(V_{\theta_i}(Y|\theta_{\sim i})\big) + V_{\theta_{\sim i}}\big(E_{\theta_i}(Y|\theta_{\sim i})\big) \tag{6}$$

where $E_{\theta_{\sim i}}(V_{\theta_i}(Y|\theta_{\sim i}))$ is the residual variance of $Y$ that would remain if all factors but $\theta_i$ were fixed, and $V_{\theta_{\sim i}}(E_{\theta_i}(Y|\theta_{\sim i}))$ denotes the expected reduction in variance if all factors other than $\theta_i$ are fixed. The total effect

$$S_{T_i} = \frac{E_{\theta_{\sim i}}\big(V_{\theta_i}(Y|\theta_{\sim i})\big)}{V(Y)} = 1 - \frac{V_{\theta_{\sim i}}\big(E_{\theta_i}(Y|\theta_{\sim i})\big)}{V(Y)} \tag{7}$$

measures the contribution of factor $\theta_i$ to the output variation. Small values of the total effect index imply that $\theta_i$ can be fixed at any value within its range of variability (uncertainty) without appreciably affecting the output.

### Statistical Inverse Theory.

Predictive computational modeling and simulation-based design require experimental data to be integrated into computational models in order to calibrate the models and assess their validity. However, the measurement data (e.g., fatigue tests) involve material variability and might be incomplete and contaminated with noise, i.e., error in the data. Moreover, the computational model (e.g., the continuum damage model) is constructed based on modeling assumptions and is an imperfect characterization of reality, i.e., error in the model. The problem of overriding importance is to characterize in a worthwhile way all of these uncertainties, to track their propagation through the solution, and to ultimately determine and quantify the uncertainty in the target QoIs. Deterministic inverse methods for model calibration, based on optimization techniques, are limited in characterizing the uncertainties in the data and the computational model. In this regard, we employ Bayesian approaches to statistical inverse analysis, such as the ones described in Refs. [37–41]. The main hypothesis of this theory is that of subjective probability; the parameters $\theta $, the observational data **D**, and the theoretical model are not deterministic; they are random variables or processes characterized by PDFs.

The statistical calibration problem consists of inferring the model parameters $\theta$ from observational data **D** obtained from a set of experiments. Within the context of fatigue life prediction, the calibration process refers to the identification of the damage model parameters from sets of fatigue tests. The solution to the statistical calibration problem, following the Bayesian approach, is the posterior PDF $\pi_{post}(\theta |\mathbf{D})$ for the model parameters, updated from a prior PDF $\pi_{prior}(\theta )$ as (see, e.g., Ref. [42])

$$\pi_{post}(\theta|\mathbf{D}) \propto \pi_{like}(\mathbf{D}|\theta)\,\pi_{prior}(\theta)$$

Here, $\pi_{prior}$ represents initial knowledge or information we may have about the parameters $\theta $ before observing the data. In structural mechanics, and from an engineering perspective, one might simply assume an appropriate range of parameters from prior experiments and analyses on similar materials and systems. The likelihood PDF $\pi_{like}(\mathbf{D}|\theta )$ encapsulates assumptions about the discrepancy between the values **D** that are measured and the values that can be computed with the computational model $Y(\theta )$. One common approach to defining a likelihood function is based on an additive total error (data noise and model inadequacy) hypothesis. Assuming the total error to be Gaussian random variables of zero mean, the form of the likelihood can be postulated as

$$\pi_{like}(\mathbf{D}|\theta) = \frac{1}{\sqrt{(2\pi)^{n}\det\mathbf{C}}}\,\exp\!\left(-\frac{1}{2}\big(\mathbf{D}-Y(\theta)\big)^{T}\mathbf{C}^{-1}\big(\mathbf{D}-Y(\theta)\big)\right)$$

where **C** is a covariance matrix, **D** denotes experimental data, and $Y(\theta )$ is the model output.
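The posterior defined by this prior and likelihood is typically explored by Markov chain Monte Carlo. The sketch below is a minimal random-walk Metropolis sampler under the stated Gaussian total-error hypothesis; the `model` function is a hypothetical linear stand-in for the damage solver, and the data are synthetic (assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the forward model Y(theta); NOT the CDM model.
def model(theta, x):
    S, s = theta
    return S * x + s

# Synthetic "experimental" data with additive Gaussian noise
x_obs = np.linspace(0.0, 1.0, 10)
theta_true = np.array([2.0, 0.5])
sigma_noise = 0.1
data = model(theta_true, x_obs) + rng.normal(0.0, sigma_noise, x_obs.size)

lo, hi = np.zeros(2), np.full(2, 5.0)          # uniform prior bounds

def log_posterior(theta):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                          # zero prior density outside the box
    r = data - model(theta, x_obs)              # additive Gaussian total error
    return -0.5 * np.dot(r, r) / sigma_noise**2 # C = sigma_noise**2 * I

# Random-walk Metropolis sampling of the posterior
n_steps, step = 20000, 0.05
chain = np.empty((n_steps, 2))
theta = np.array([1.0, 1.0])
lp = log_posterior(theta)
for i in range(n_steps):
    prop = theta + step * rng.normal(size=2)
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain[i] = theta

posterior = chain[n_steps // 2:]                # discard burn-in
print("posterior mean:", posterior.mean(axis=0))
```

The resulting chain approximates $\pi_{post}(\theta|\mathbf{D})$; its marginal histograms play the role of the calibrated parameter PDFs used later for prediction.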

## Results and Discussions

The uncertainty treatment methods described in Sec. 3 are employed to integrate the experimental data and the computational fatigue damage model in order to assess the reliability of the computational prediction and the corresponding design decision making. Due to the high computational cost of Bayesian inference, model sensitivity analysis is performed first in order to discard irrelevant parameters from the statistical calibration. Bayesian calibration is then conducted to statistically infer the most important parameters of the model using the information provided by the fatigue test data. Ultimately, the calibrated model with quantified uncertainty is employed to make predictions about the fatigue strength of the material under the defined stress loading. The process of reliability assessment of the computational model prediction is depicted in Fig. 2.

### Sensitivity Analysis of Fatigue Damage Model.

In order to determine the relative level of importance of each parameter with respect to the others, quantitative sensitivity analysis is conducted on the fatigue damage model. For the sensitivity scenario, a specimen without surface treatment and under completely reversed stress is considered. The amplitude of the alternating applied stress is assumed to be 50 ksi, and the number of load cycles before failure is considered as the model output *Y*. With respect to this observable, the scatterplots for some of the damage model parameters are shown in Figs. 3 and 4. These plots were obtained by sampling the parameters from given distributions and computing the model output at each sample. The initial distributions of the parameters are assumed to be uniform, in the range of values reported for Ti6Al4V material in the literature [18,25,43–46], and are presented in Table 1.

The qualitative sensitivity ranking obtained from the scatter plots is then confirmed by computing the total sensitivity indices (7). In the vast majority of applications, it is not trivial to calculate the total-effect sensitivity indices analytically. Moreover, the basic Monte Carlo method for estimating the conditional variances requires multidimensional integrals over the input factor space, which is computationally intractable. Saltelli [31,35,47,48] proposed a Monte Carlo scheme that minimizes the computational cost of estimating these multidimensional integrals. The method is a generalization of the original approach proposed by Sobol' [33,34] and can be summarized by the following procedure:

- Create two sets of *N* independent samples of the *k* parameters and store them in matrices **A** and **B**.
- Generate a matrix $\mathbf{D}_i$ established by all columns of **A** except the *i*th column, which is taken from **B**.
- Evaluate the model outputs for all the matrices: $Y_A = f(\mathbf{A})$, $Y_B = f(\mathbf{B})$, $Y_{D_i} = f(\mathbf{D}_i)$.
- The total-effect indices can then be estimated using the following estimator [48]:

$$S_{T_i} \approx 1 - \frac{\frac{1}{N}\sum_{j=1}^{N} Y_A^{(j)} Y_{D_i}^{(j)} - \left(\frac{1}{N}\sum_{j=1}^{N} Y_A^{(j)}\right)^{2}}{\frac{1}{N}\sum_{j=1}^{N} Y_A^{(j)} Y_A^{(j)} - \left(\frac{1}{N}\sum_{j=1}^{N} Y_A^{(j)}\right)^{2}} \tag{11}$$
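The steps above translate almost line by line into code. The sketch below implements the A/B/$\mathbf{D}_i$ scheme and estimator (11), and checks it on a toy additive function whose total-effect indices are known analytically (the toy function and sample sizes are assumptions, not the fatigue model):

```python
import numpy as np

def total_effect_indices(f, sampler, k, N, rng):
    """Estimate total-effect indices S_Ti via the A/B/D_i matrix scheme.

    f: vectorized model mapping an (N, k) sample matrix to N outputs.
    sampler(N, rng): returns an (N, k) matrix of independent input samples.
    """
    A, B = sampler(N, rng), sampler(N, rng)
    yA = f(A)
    f0_sq = yA.mean() ** 2
    denom = np.mean(yA * yA) - f0_sq           # estimate of V(Y)
    ST = np.empty(k)
    for i in range(k):
        Di = A.copy()
        Di[:, i] = B[:, i]                     # column i from B, the rest from A
        yDi = f(Di)
        ST[i] = 1.0 - (np.mean(yA * yDi) - f0_sq) / denom   # estimator (11)
    return ST

# Check on a toy additive model Y = x1 + 2*x2, x ~ U(0,1)^2, where the exact
# total-effect indices are 1/5 and 4/5 (partial variances 1/12 and 4/12).
rng = np.random.default_rng(0)
ST = total_effect_indices(lambda X: X[:, 0] + 2.0 * X[:, 1],
                          lambda N, r: r.uniform(size=(N, 2)),
                          k=2, N=200_000, rng=rng)
print(ST)
```

The cost is $N(k+1)$ model evaluations for the total-effect indices of all $k$ factors, which is why the scheme is preferred over naive double-loop Monte Carlo.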

The method described above is implemented to estimate the total sensitivity indices of the fatigue model parameters, and the results are shown in Fig. 5. To compute the sensitivity indices in this figure, 1000 Monte Carlo samples of the parameters are used, based upon the observed rate of convergence of the indices with respect to the number of samples. Helton and Davis [49] have shown that using a Latin hypercube sample [50] increases the accuracy of the sensitivity indices.

It can be seen qualitatively from the scatterplots of Figs. 3 and 4, and quantitatively from the estimated total sensitivity indices of Fig. 5, that the parameters *S* and *s* are the most important contributors to the fatigue life: they have the highest values of $S_T$ and exhibit distinct patterns in the model output clouds of the scatterplots. The other parameters do not show a significant effect on the variation of the fatigue life of the material.

To verify the results of the global sensitivity analyses, the fatigue life is computed considering three sets of parameters: a *first* in which all the parameters are assumed to be random variables sampled from the distributions in Table 1, a *second* in which the most influential parameters *S* and *s* are assumed to be deterministic, and a *third* in which the less influential parameters *E* and *ν* are treated as deterministic values. In the two latter cases, the deterministic parameters are excluded from sampling and are fixed to the mean values of the corresponding distributions in Table 1.

The kernel density estimate (KDE) for each case is presented in Fig. 6. The minimal difference between the KDEs due to including or excluding (*E*, *ν*) as random variables indicates the insignificant effect of these parameters on fatigue life of the material. However, considering (*S*, *s*) as deterministic values results in major changes in KDE confirming the importance of these parameters in fatigue life prediction and consequently hardware design.
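This fix-and-compare verification can be reproduced in a few lines. The sketch below uses a hypothetical toy model whose output is dominated by two of its three inputs (an assumption standing in for the fatigue model), and a small hand-rolled Gaussian KDE in place of a plotting library:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 4000

def toy_model(S, s, E):
    # hypothetical stand-in: output dominated by (S, s); E barely matters
    return S**2 + 10.0 * s + 0.01 * E

def kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (samples.size * bandwidth * np.sqrt(2.0 * np.pi))

# case 1: all inputs sampled from their (invented) distributions
Y_all = toy_model(rng.uniform(1, 3, N), rng.uniform(0, 1, N), rng.uniform(100, 120, N))
# case 2: the influential inputs (S, s) fixed at their mean values
Y_fix = toy_model(2.0, 0.5, rng.uniform(100, 120, N))

grid = np.linspace(Y_all.min(), Y_all.max(), 400)
dens_all = kde(Y_all, grid, 0.3)
dens_fix = kde(Y_fix, grid, 0.3)

# fixing the dominant inputs collapses the output density to a narrow spike
print(f"std, all random: {Y_all.std():.2f}; std, (S, s) fixed: {Y_fix.std():.3f}")
```

As in Fig. 6, fixing the influential inputs changes the output KDE drastically, whereas fixing the unimportant one (here `E`) would leave it essentially unchanged.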

Considering the computational cost of solving the statistical inverse problem, as well as the curse of dimensionality, global sensitivity analysis can aid in determining the parameters that can be treated as deterministic during statistical model calibration, greatly reducing the computational effort in the calibration stage. Given the sensitivity results obtained from the fatigue model, we consider only *S* and *s* as stochastic parameters in the course of the statistical calibration of the damage model against the fatigue test measurements, while the other parameters are assumed to be deterministic. Deciding which parameters to exclude from the calibration process is, of course, a subjective decision made by the designer for a particular problem.

### Experimental Observations From Fatigue Tests.

In order to provide data for calibrating the fatigue damage model under uncertainty, fatigue tests were performed on machined airfoil coupons. Four-point bend coupons of 7 in length, 1.326 in height, and 0.75 in base thickness with an airfoil contour were designed with the geometry shown in Fig. 7(a). Ten coupons were fabricated from the same Ti6Al4V plate material (AMS 4928V, flat bar stock), post-annealed for 2 h at 1300 °F. Curtiss-Wright has built and tested similar four-point bend coupons for jet engine development applications and had confidence in the reliability of this design. Curtiss-Wright Metal Improvement Company carried out the fatigue tests.

For fatigue testing, the coupons were loaded into a four-point bend fixture on an Instron 20 kip test rig as shown in Fig. 7(b). Stress loading was provided from the actuator and lower load points using 6 in roller spacing and reacted above by the two rollers with 1.75 in spacing. Consequently, the region of the bottom edge of the coupon along the center 1.75 in span received a uniform stress loading during test. Because this test concept creates a very uniform stress along this edge, it allows accurate calibration of the stress loading.

An un-notched coupon was affixed with strain gauges as shown in Fig. 7(b) and installed on the Instron 20-kip fatigue test rig. Known loads were slowly applied, and the strain developed was compared to that predicted from the known modulus of elasticity of the material and the finite element analysis of the coupon design. The strain gauge data matched the finite element prediction at about the 98.5% level, in line with what has typically been seen previously for this kind of test.

Because this research effort is focused on computational prediction of cyclic fatigue lifetime, which includes crack initiation and crack growth rates, it was decided to initiate a deterministic starting flaw in the coupon leading edge so that the damage initiation and propagation is confined to a local region and it can be tracked easily. To this end, a notch representing a *K _{t}* stress riser was generated in the curved edge of the test coupons using electro discharge machining. The notch was cut to 0.010 in wide by 0.010 in deep. With a load of 3645 lb on the four-point bend setup, the smooth stress outside of the notch was predicted to be 35 ksi. The elastically predicted stress for a 0.010″ radius notch is 99 ksi, as shown in the analysis line-out of Fig. 8 with the notch at the upper left corner of the model with two planes of symmetry.

Curtiss-Wright used their archived fatigue test data from similar Ti6Al4V coupons to propose an initial test run at 35 ksi stress loading to achieve a targeted life of 200,000 cycles. Table 2 summarizes the experimental data from the five tested specimens, which are used for fatigue model calibration.

### Bayesian Calibration of Fatigue Model.

Conducting the statistical calibration of the damage model against the fatigue test observations requires modeling decisions in the form of the likelihood function, the model parameters that are to be treated as random variables (instead of deterministic), and the prior PDFs to be used for those random parameters. Following the results of the global sensitivity analysis presented in Sec. 4.1, the statistical calibration is performed on the two model parameters most influential on fatigue life prediction, *S* and *s*, which control damage growth, while the others are fixed to the mean values reported in the literature for Ti6Al4V.
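To make the role of *S* and *s* concrete, the sketch below integrates a generic Lemaitre-type per-cycle damage law. This is a schematic stand-in, not the constitutive relation used in the study: the damage-driver value `Y`, the critical damage, and all numbers are hypothetical, chosen only to show how the two parameters shift the predicted life.

```python
# Schematic Lemaitre-type damage law (NOT the paper's exact model):
#   dD/dN = (Y / S)**s / (1 - D)
# where D is the damage variable, Y a cyclic strain-energy release measure,
# and (S, s) the growth parameters being calibrated.  All values hypothetical.

def cycles_to_failure(Y, S, s, D_crit=1.0, N_max=10_000_000):
    """Integrate the damage law cycle by cycle until D reaches D_crit."""
    D, N = 0.0, 0
    while D < D_crit and N < N_max:
        D += (Y / S) ** s / (1.0 - D)   # per-cycle damage increment
        N += 1
    return N

# A larger S (more damage resistance) lengthens the predicted life:
print(cycles_to_failure(Y=0.05, S=2.0, s=2.0))
print(cycles_to_failure(Y=0.05, S=3.0, s=2.0))
```

Because the life prediction depends on (*S*, *s*) through a power law inside an integral, small changes in these parameters produce large changes in *N*_{f}, which is consistent with their dominance in the global sensitivity analysis.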

Although the fatigue damage forward model considered here can be computed in a few minutes, when set within a statistical inversion framework (where the forward problem may be run hundreds of thousands of times) the required computational effort is not trivial. Thus, in order to keep the focus on the feasibility of the probabilistic design approach, we limit ourselves to calibrating only two parameters. Uniform prior PDFs are assumed for the random parameters $\theta =(S,s)$ over appropriate ranges (Table 1). The form of the likelihood function reflects the way the discrepancy between the quantities computed with the material constitutive relation and the reference data is modeled. As indicated previously, the reference data **D** provided by the fatigue tests consist of the measured numbers of cycles before specimen failure under cyclic loading. Such data are contaminated by measurement noise and specimen variability (i.e., noise in the data). Moreover, fatigue test results are available at only three stress conditions, since experimental investigation of the full stress-life spectrum is not feasible (i.e., incomplete data). On the other hand, the continuum damage model is imperfect and is a simplified characterization of reality; for example, the current computational model does not address microstructural evolution such as the density of voids and inclusions (i.e., error in the model).

Modeling this total discrepancy as additive Gaussian noise with variance $\sigma ^{2}$ yields a likelihood of the form

$\pi (\mathbf{D}|\theta )\propto \exp \left(-\frac{1}{2\sigma ^{2}}\sum _{i=1}^{n}\left(D_{i}-N_{f,i}(\theta )\right)^{2}\right)$

where *n* is the number of experimental data points.
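A minimal sketch of such a likelihood is given below, assuming additive Gaussian noise. The stand-in forward prediction, the noise scale `sigma`, and the choice to take residuals in log10 life (a common transform for fatigue data, since lives span orders of magnitude) are all illustrative assumptions, not the study's exact formulation.

```python
import math

# Sketch of a Gaussian log-likelihood for cycles-to-failure data.
# model_Nf is a placeholder for the CDM forward-model prediction, and
# sigma lumps measurement noise, specimen variability, and model
# inadequacy into a single scale parameter (all values hypothetical).

def log_likelihood(data_Nf, model_Nf, sigma):
    """Gaussian log-likelihood with residuals taken on log10(N_f)."""
    n = len(data_Nf)
    ll = -0.5 * n * math.log(2.0 * math.pi * sigma ** 2)
    for d in data_Nf:
        r = math.log10(d) - math.log10(model_Nf)  # residual in log-life
        ll -= 0.5 * (r / sigma) ** 2
    return ll

# Five hypothetical observations around a model prediction of 2e5 cycles:
obs = [1.8e5, 2.1e5, 1.5e5, 2.4e5, 2.0e5]
print(log_likelihood(obs, 2.0e5, sigma=0.1))
```

A parameter set whose predicted life is closer to the observed lives receives a higher log-likelihood, which is what drives the posterior toward data-consistent (*S*, *s*) values.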

The *parallel tempering* algorithm [52] is employed for statistical calibration of the fatigue model. This algorithm improves the chances of exploring all existing modes of the posterior PDF by sampling increasingly difficult intermediate distributions, accumulating information from one intermediate distribution to the next, until the target posterior distribution is better sampled. Possible intermediate distributions are given by

$\pi _{\ell }(\theta |\mathbf{D})\propto \pi (\mathbf{D}|\theta )^{\alpha _{\ell }}\,\pi (\theta ),\quad \ell =0,1,\ldots ,L$

for some *L* > 0 and a sequence 0 = *α*_{0} < *α*_{1} < … < *α*_{L} = 1, where *α* = 0 and *α* = 1 correspond to the prior and posterior distributions, respectively. Therefore, as *ℓ* increases, the distribution transitions from the initial prior to the (eventually multimodal) posterior [52]. Such sampling algorithms can greatly benefit from parallel computing.
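The tempering idea can be illustrated on a toy problem. The sketch below samples a deliberately bimodal one-dimensional target through a ladder of tempered distributions with adjacent-chain swaps; the target, the *α* ladder, and all tuning values are artificial and unrelated to the fatigue posterior.

```python
import math, random

# Toy parallel-tempering sketch: chains sample pi_l ∝ prior * likelihood**alpha_l
# and occasionally swap states between adjacent temperatures, letting the
# alpha = 1 chain cross the deep valley between the two likelihood modes.

random.seed(0)

def log_like(x):
    # bimodal "likelihood": Gaussian bumps at x = -2 and x = +2 (sd 0.5)
    return math.log(math.exp(-0.5 * ((x + 2) / 0.5) ** 2)
                    + math.exp(-0.5 * ((x - 2) / 0.5) ** 2))

def log_prior(x):
    return 0.0 if -5.0 <= x <= 5.0 else -math.inf  # uniform prior on [-5, 5]

def log_target(x, a):
    return log_prior(x) + a * log_like(x)

alphas = [0.0, 0.1, 0.3, 0.6, 1.0]          # 0 = prior, 1 = full posterior
chains = [random.uniform(-5, 5) for _ in alphas]
samples = []

for it in range(20000):
    # Metropolis update within each tempered chain
    for i, a in enumerate(alphas):
        prop = chains[i] + random.gauss(0.0, 0.5)
        if math.log(random.random()) < log_target(prop, a) - log_target(chains[i], a):
            chains[i] = prop
    # propose a state swap between one random adjacent pair of temperatures
    i = random.randrange(len(alphas) - 1)
    d = (alphas[i + 1] - alphas[i]) * (log_like(chains[i]) - log_like(chains[i + 1]))
    if math.log(random.random()) < d:
        chains[i], chains[i + 1] = chains[i + 1], chains[i]
    if it > 2000:                            # discard burn-in
        samples.append(chains[-1])           # keep only the alpha = 1 chain

# Both modes should be visited despite the low-probability valley between them:
frac_left = sum(s < 0 for s in samples) / len(samples)
print(f"fraction of samples in left mode: {frac_left:.2f}")
```

A plain single-chain Metropolis sampler started in one mode would rarely cross to the other; the hot (*α* ≈ 0) chains roam freely and hand well-mixed states up the ladder, which is the benefit the text describes.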

Figure 9 shows the computed posterior marginal kernel density estimates of the parameters (*S*, *s*) obtained by calibrating the fatigue model against the five experimental measurements (Table 2). The wide distributions obtained for the inferred parameters indicate a large amount of uncertainty in both the experiment (noise and incompleteness of the data) and the way the fatigue damage is modeled (constitutive model inadequacy, modeling assumptions, assessment of the stress at the notch). A deterministic parameter calibration is also conducted, based on least-squares regression using a direct search method for multidimensional unconstrained minimization, the Nelder–Mead simplex algorithm [53]. The deterministic calibration of the fatigue model resulted in *S* = 2.2413 and *s* = 2.2451. Despite the low computational cost of evaluating these deterministic values, such parameters might result in incorrect fatigue life predictions, since they carry no information regarding the uncertainty in the model prediction.
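A deterministic Nelder–Mead least-squares calibration of this kind can be sketched as follows. The surrogate life model, test data, and starting point below are all hypothetical; only the fitting pattern (log-life misfit minimized by the simplex method) mirrors the procedure described above.

```python
import numpy as np
from scipy.optimize import minimize

# Deterministic point calibration via Nelder-Mead least squares, against a
# hypothetical power-law surrogate in place of the real CDM forward model.

def surrogate_Nf(theta, stress_ksi):
    """Hypothetical stand-in for the damage model's life prediction."""
    S, s = theta
    return 1.0e4 * (S * 35.0 / stress_ksi) ** (2.0 * s)

stresses = np.array([35.0, 35.0, 35.0, 45.0, 45.0])       # hypothetical tests
observed = np.array([2.1e5, 1.7e5, 2.4e5, 4.0e4, 5.5e4])  # hypothetical N_f

def sse(theta):
    # least-squares misfit in log10 life (lives span orders of magnitude)
    pred = surrogate_Nf(theta, stresses)
    return np.sum((np.log10(pred) - np.log10(observed)) ** 2)

res = minimize(sse, x0=[2.0, 2.0], method="Nelder-Mead")
print("best-fit (S, s):", res.x, " misfit:", res.fun)
```

The result is a single point estimate: fast to obtain, but, as noted above, it says nothing about how flat or multimodal the misfit surface is around that point.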

### Probabilistic Design of Fatigue Life.

The final stage of predictive modeling involves employing the calibrated model, with its quantified uncertainties, to make predictions as well as design decisions. This is addressed in this section for fatigue life prediction of a metallic component using the statistically calibrated fatigue damage model.

The fatigue strength of the material predicted by the computational model over a wide range of applied loads is shown in Fig. 10. This figure shows the S–N diagram, which plots maximum applied stress amplitude versus cycles to failure for a stress ratio of *R* = 0.1. The mean stress-life values as well as the 95% confidence intervals are shown, where the confidence interval is defined as the smallest interval at each stress level that contains the failure cycles with probability 0.95. The experimental fatigue test data utilized for Bayesian model calibration are also shown. The mean values of the model prediction show larger discrepancy with the data points in the high-stress regime. Although the mean prediction curve (corresponding to the deterministic values of the calibrated parameters) is close to the experimental observations, there is significant uncertainty in the predicted fatigue lives, i.e., wide confidence intervals at a particular stress level. This indicates that deterministic model calibration is inadequate to deal with the significant uncertainties in model prediction and results in designs with unknown reliability.
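The "smallest interval containing 0.95 probability" can be computed directly from Monte Carlo draws of predicted life at a given stress level. In the sketch below, the lognormal draws are synthetic stand-ins for posterior-predictive samples from the calibrated model; the interval routine itself is generic.

```python
import numpy as np

# Shortest 95% interval on predicted life at one stress level, computed from
# samples.  The lognormal draws are synthetic placeholders for draws from
# the calibrated model's predictive distribution.

def shortest_interval(samples, prob=0.95):
    """Shortest interval containing `prob` of the sample mass."""
    x = np.sort(np.asarray(samples))
    k = int(np.ceil(prob * len(x)))            # points the interval must cover
    widths = x[k - 1:] - x[: len(x) - k + 1]   # width of every k-point window
    i = int(np.argmin(widths))
    return x[i], x[i + k - 1]

rng = np.random.default_rng(1)
Nf_draws = rng.lognormal(mean=np.log(2.0e5), sigma=0.5, size=20_000)

lo, hi = shortest_interval(Nf_draws, 0.95)
print(f"95% band on N_f: [{lo:.3g}, {hi:.3g}] cycles")
```

For right-skewed life distributions such as these, the shortest interval sits lower than the usual equal-tailed quantile interval, which matters when the band is read as a design margin.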

As mentioned previously, surface treatment such as laser peening can increase the fatigue lifetime of a metallic component. The effect of such treatment is included in the fatigue model through a compressive residual plastic strain. The amount of residual strain can be controlled through laser peening and can thus be considered as the design variable. In order to investigate the effect of uncertainty on predicting the life of a component, the following design scenario is considered: for a given cyclic load, find the optimal residual strain (indicating the surface treatment) such that, with a specified probability, the fatigue life of the component exceeds a specified target number of cycles. For illustration, it is assumed that the desired (target) performance of the metallic hardware is 50,000 cycles. The probability density $\pi (N_{f}|\epsilon _{r}^{p})$ and the cumulative distribution $\Pi (N_{f}|\epsilon _{r}^{p})$ are shown in Fig. 11 for a maximum stress of 100 ksi and *R* = 0.1. A simple iterative method results in a minimum effective residual plastic strain of $\epsilon _{r0}^{p}=0.193$ to satisfy $\Pi (N_{f}|\epsilon _{r}^{p})=0.15$; in other words, in this case the fatigue life exceeds 50,000 cycles with 85% reliability. Note that the applied cyclic stress in this design scenario differs from the stress levels of the available fatigue tests; as mentioned previously, in most cases computational prediction is conducted outside the range of available observations.
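The iterative search in this design scenario can be sketched as a bisection on the residual strain. The predictive distribution below is a synthetic stand-in for the calibrated model (an assumed lognormal whose median life grows with the strain), and all numeric values are hypothetical; only the pattern of the search matches the scenario above.

```python
import numpy as np

# Inverse-design sketch: find the smallest residual plastic strain eps such
# that P(N_f <= 50,000 cycles | eps) <= 0.15, i.e. at least 85% reliability
# against the life target.  The predictive model below is hypothetical.

rng = np.random.default_rng(2)

def failure_prob(eps, target=50_000, n_draws=50_000):
    """Monte Carlo estimate of P(N_f <= target | eps) under an assumed
    lognormal predictive model whose median life grows with eps."""
    median = 2.0e4 * np.exp(12.0 * eps)   # assumed strain-to-life effect
    draws = rng.lognormal(np.log(median), 0.6, n_draws)
    return np.mean(draws <= target)

# Bisection for the smallest eps meeting the 15% failure-probability cap:
lo_eps, hi_eps = 0.0, 0.5
for _ in range(40):
    mid = 0.5 * (lo_eps + hi_eps)
    if failure_prob(mid) > 0.15:
        lo_eps = mid       # not enough peening: need more residual strain
    else:
        hi_eps = mid
print(f"minimum residual plastic strain ~= {hi_eps:.3f}")
```

Because the failure probability decreases monotonically with the residual strain in this model, the bisection converges to the cheapest treatment that still meets the reliability target, mirroring the design logic of the scenario above.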

## Conclusions

In the present study, a new probabilistic design methodology is demonstrated that allows the quantification of uncertainties in experimental data and computational models for fatigue life prediction of structural components, as well as the propagation of these uncertainties to the prediction of component behavior. The methodology integrates the mathematical model of fatigue damage material behavior and the experimental observations within a Bayesian framework for model calibration and uncertainty quantification. This enables assessing the level of confidence in computational model predictions, to improve the design and performance of hardware components beyond the range of test data. The physical problem under study is the behavior under cyclic loading of airfoil coupons of Ti6Al4V material, which is common in aircraft turbine engines. The experimental data, in terms of number of cycles to failure at different stress levels, are obtained from fatigue tests conducted on machined airfoil coupons. A fatigue model based on continuum damage theory is used to model the onset and evolution of damage as well as to forecast the lifetime of the material under cyclic loading.

A computationally feasible framework is developed and implemented in this study to quantify the uncertainty in experimental data due to incomplete and noisy measurements, the modeling error due to constitutive model inadequacy, and other modeling assumptions typically made in fatigue life prediction. The framework, depicted in Fig. 2, involves global sensitivity analysis to determine the important model parameters, Bayesian calibration of the model against fatigue test data with quantified uncertainty in both model and data, and prediction of the fatigue strength of the material as well as design decision making under uncertainty. An illustrative inverse design problem is presented to compute the optimal residual plastic strain, a variable controllable through surface treatment, based on the desired component performance and expected reliability.

The outcome of the integration of experimental data, the computational fatigue damage model, and statistical analyses indicates the presence of large uncertainty in the predictions delivered by the model. This demonstrates the necessity of probabilistic analysis and design for the cyclic fatigue life of metallic components. Such uncertainty is mainly due to modeling assumptions as well as the lack of experimental data over a wide range of stress levels. It is expected that using a micromechanics-based model that accounts for the evolution of microcracks and voids, employing a finite element model of the full coupon instead of analyzing a hot spot at the notch, and obtaining more experimental data at different stress levels will reduce the uncertainties and increase the level of confidence regarding hardware performance and design in practical applications. Moreover, the general probabilistic design framework proposed in this work can account for other sources of uncertainty. For example, fatigue test data on peened specimens provide an opportunity for statistical calibration of the residual plastic strain input variable of the model. Accounting for the inevitable uncertainty due to surface treatments, and propagating such variability to the life prediction of a component, greatly increases the benefit of these methods for extending component lifetimes as well as for reliable assessment of structural safety enhancement.

## Funding Data

Naval Air Warfare Center, Aircraft Division (Grant No. N68335-16-C-0516).