Model calibration and validation are two activities in system model development, and both make use of test data. A limited testing budget creates the challenge of test resource allocation, i.e., how to optimize the number of calibration and validation tests to be conducted. Test resource allocation is conducted before any actual test is performed, and therefore needs to use synthetic data. This paper develops a test resource allocation methodology to make the system response prediction “robust” to the test outcome, i.e., insensitive to the variability in test outcomes, so that consistent system response predictions can be achieved under different test outcomes. This paper analyzes the uncertainty sources in the generation of synthetic data under different test conditions, and concludes that the robustness objective can be achieved if the contribution of model parameter uncertainty in the synthetic data is maximized. Global sensitivity analysis (Sobol’ indices) is used to assess this contribution and to formulate an optimization problem that achieves the desired consistent system response prediction. A simulated annealing algorithm is applied to solve this optimization problem. The proposed method is suitable either when only model calibration tests are considered or when both calibration and validation tests are considered. Two numerical examples are provided to demonstrate the proposed approach.

Introduction

In engineering applications, it is often required to estimate the system response under untested conditions using available computational models and test data at different conditions. The computational model aims to describe the physics of the system and can be denoted as Y=F(X;θ), where Y is the system response, X is the set of model inputs, and θ is the set of model parameters. The uncertainty in an input X can be described by a probability distribution πX(x). In the actual test, in some cases, we can control an input at a nominal value but the control is not perfect; thus, πX(x) characterizes this imperfect control. In some other cases, an input such as outdoor temperature cannot be controlled but measured; thus, πX(x) characterizes the natural variability in X. The model parameters θ have fixed but unknown values in all tests on the same specimen. The uncertainty regarding the values of θ is epistemic uncertainty due to lack of information, which can be reduced using test data. (In some problems, the model parameters θ may not be physical quantities but simply artifacts of modeling, in which case the concept of true value may not be applicable; such cases are not considered in this paper. Also, in some problems, the model parameters could be input-dependent; this paper does not consider such cases.)

Two important questions in the system response prediction are: (1) how to quantify and reduce the uncertainty in θ and (2) how to validate the agreement of the computational model with the true physics or quantify their difference. These two questions are resolved by model calibration and model validation, respectively. Usually, model calibration is conducted first to quantify the values of θ or reduce the uncertainty about their values, and then model validation follows. Various approaches to model calibration and validation have been studied in the literature. Consider an example of model calibration using Bayesian inference. While some researchers directly use the computational model Y=F(X;θ) and calibrate θ, others [1] use a model discrepancy term δ(X) to correct the computational model and calibrate both θ and δ(X). Consider another example regarding the use of test data. Some researchers treat all the data as calibration data and use the calibrated model parameters in predicting the system response [2,3]; others integrate the results of model calibration and model validation (each done with different sets of data) in predicting the system response [4–6].

No matter what approaches are pursued, model calibration and validation always require test data. Due to the variability in test outcomes, two sets of test data of the same size may lead to two distinct system response predictions (after calibration and/or validation) even if the same computational model and the same framework of model calibration/validation are used. Here, “test outcome” is defined as the value of test data, i.e., the measurements of test inputs and outputs. The variability in the test outcome is due to the following reasons: (1) the input is controlled at a nominal value but the control is imperfect; (2) the input has natural variability, which means that the input cannot be controlled; and (3) there is measurement error in the input and output data.

If a single data point is used in model calibration/validation, the calibration/validation result will be affected significantly by the value of this data point. However, as more data points are used, the calibration/validation result will converge; thus, the consequent system response prediction will also converge. Thus, as the number of tests increases, the model prediction uncertainty becomes less and less sensitive to variability in the test outcomes. This raises the following questions when the test budget is limited: (1) is it possible to organize the test campaign to make the system response prediction robust to variability in test outcomes, and (2) how many tests of each type are necessary to achieve this robustness objective? Note that in this paper, the term “test type” refers to two attributes: (1) whether the test data are for calibration or validation and (2) the physical quantity measured in the test. For example, if three quantities are measured in tests and all data are used for calibration, we have 3 × 1 = 3 types of tests; but if part of the data is used for calibration and the remaining data are used for validation, then we have 3 × 2 = 6 types of tests. The focus of this paper is to develop an optimization approach to answer these questions, assuming the computational model and the framework of model calibration/validation are given. The design variables of this optimization are the numbers of each type of test, denoted as Nq if q types of tests are available; the objective function and constraints will be discussed later. Note that (1) this optimization needs to be solved before any actual test is conducted [4] and (2) this optimization needs to consider test outcome uncertainty, due to which the subsequent system response prediction is also uncertain.

Several approaches for test resource allocation have been studied in the literature [4,7–11], and the main difference among these approaches is the choice of the objective function. Note that model calibration aims to reduce the uncertainty in the model parameters, and thus reduce the uncertainty in the subsequent system response prediction. Thus, in the case that only model calibration is considered in the system response prediction, generally the objective of the test resource allocation optimization is to minimize the system response prediction uncertainty subject to a limited budget. Several quantities have been used to represent the system response prediction uncertainty, and the first one is variance. Sankararaman et al. [4] minimized E(V(Y)), where V(Y) is the variance of the system response prediction Y at given numbers of each type of test, and E(·) denotes the average of V(Y) over different synthetic data sets. Similarly, Vanlier et al. [8] defined the variance reduction of Y via model calibration as 1 − E(σnew²/σold²) and maximized it, where σnew² is the variance of the system response prediction using the posterior distribution and σold² is the variance of the system response prediction using the prior distribution. Entropy measures have also been used to represent the system response prediction uncertainty. In Ref. [9], the authors maximized the relative entropy (Kullback–Leibler divergence) from the system response prediction π(y) based on the prior distribution to the system response prediction π′(y) based on the posterior distribution; in Refs. [10] and [11], the authors maximized the mutual information, i.e., the expected change of entropy from π(y) to π′(y).

The previously mentioned approaches that select only calibration tests to minimize the uncertainty in the system response prediction are not applicable when model validation is also incorporated in the system response prediction. The reason is that model validation may indicate that the calibrated model is not exactly valid; accounting for this result increases the uncertainty in the system response prediction. Thus, the earlier optimization formulations would lead to the conclusion that model validation is not necessary. Mullins et al. [12] proposed a method considering both model calibration and model validation, in which model calibration is via Bayesian inference, and model validation is via a stochastic model reliability metric, i.e., describing model validity through a probability distribution. In this method, the objective regarding the model validation tests was to minimize the spread in the family of system response predictions that results from the uncertainty in model validity, denoted as E{V[E(Y)]}, where the inner E(Y) is the system response prediction mean at a given synthetic data set and a given value of model validity, V[·] is the variance over the distribution of model validity, and the outer E{·} is the average over the different data sets. The objective regarding the model calibration tests is still to minimize the variance of the system response prediction, denoted as E{E[V(Y)]}, where V(Y) is the system response prediction variance based on a given synthetic data set and a given value of model validity; the inner E[·] is the average over the distribution of model validity, and the outer E{·} is the average over different synthetic data sets.

In this paper, the proposed concept of “test resource allocation for system response prediction robustness” means that the system response prediction becomes insensitive to the variability in test outcomes; thus, at the optimal value of the design variables (i.e., the number of tests Nq), different test outcomes result in consistent system response predictions. This concept and the required objective function are explained in Sec. 2. The approach is applicable both when only model calibration tests are considered and when both calibration and validation tests are considered. Note that the proposed methodology only selects the number of each type of test; it does not design the actual tests, i.e., select the inputs for the tests. Experimental design is a step subsequent to test selection; this paper focuses only on test selection.

The constraint in the optimization of test resource allocation is generally the budget. Note that the constraint and objective are interchangeable, i.e., the optimization may have two alternative formats: (1) subject to the budget constraint, optimize the design variables Nq (the number of each type of test) to reach the most robust system response prediction; or (2) subject to the robustness requirement on the system response prediction, find Nq to minimize the budget. The proposed concept can be realized with either formulation.

In addition, it is important to note that the data considered in test resource allocation analysis has to be synthetic since it is done before any actual test. The actual physical test data from a test are obtained by: (1) selecting the values of inputs X; (2) applying X to the physical test configuration where the model parameters θ are at their true but unknown values; and (3) recording the input–output data, where both the input and output measurements may be subject to measurement errors. In actual tests where the values of X have been decided, the test outcome uncertainty arises only from experimental variability, including measurement errors. The generation of synthetic data is a simulation of the three steps mentioned earlier, with the physical test configuration replaced by a computational model and the model parameters being unknown. Thus, two additional uncertainty sources are introduced in the synthetic data: (1) uncertainty regarding the value of θ and (2) model discrepancy, i.e., the difference between the computational model and the actual physics. In a Bayesian framework, the first one can be represented by the prior distribution of θ based on available knowledge. But no information on the model discrepancy is available before any testing.

In addition, compared to the actual test, the physical meaning of the input distribution πX(x) may be changed in generating the synthetic data. As explained at the beginning of Sec. 1, for an actual test, the uncertainty characterized by πX(x) is due to the following sources: (1) imperfect control over the true value, (2) natural variability of the input, and (3) measurement errors. In generating the synthetic data, πX(x) accounts for the same uncertainty sources in the case that the test conditions are known (for example, the nominal values of the inputs are known). But in the case of unknown test conditions, πX(x) mainly accounts for the uncertainty about which experimental conditions will subsequently be selected. For example, if the tester only mentions that the possible nominal value of an input is between 5 and 10, then we may use a uniform distribution πX(x) = U(5, 10) to represent this uncertainty in the nominal value. In this case, the uncertainty in πX(x) is epistemic. The proposed method is versatile and able to handle both cases. It is possible for the decision-maker to apply the proposed method before and after knowing the test conditions, and different answers can be obtained due to the changed availability of knowledge.

In summary, the objectives of this paper are to: (1) find the optimal number of each type of test such that different data sets result in consistent system response predictions; (2) develop solutions for both formats of the optimization problem; and (3) adapt to different cases when only model calibration tests are considered or when both calibration and validation tests are considered. The rest of this paper is organized as follows: Section 2 proposes the objective in the optimization of robust test allocation. Section 3 analyzes the uncertainty sources in the synthetic data and the use of Sobol’ indices to assess their contributions toward the uncertainty in the system response prediction. Section 4 develops a flexible approach for test resource allocation optimization. Section 5 uses two numerical examples to illustrate the proposed approach.

Global Sensitivity Analysis of Uncertainty in Synthetic Test Data

Objective of Robust Test Resource Allocation.

The objective of the proposed test resource allocation optimization can be visually represented as in Fig. 1, which shows the families of the system response prediction probability density functions (PDFs) at different values of the design variables N. Within each subfigure, the variation between the PDFs is caused by the test outcome variability among different data sets. From Figs. 1(a)–1(c), this variation becomes smaller and the system response predictions reveal stronger consistency due to: (1) the decreased variability of the mean values E(Y) across the PDFs, meaning that the centroids of the family members are closer; and (2) the decreased variability of the variance V(Y) across the PDFs, meaning that the ranges of values covered by the PDFs are similar. In other words, at the optimal value of N in Fig. 1(c), the effects of test outcome uncertainty on E(Y) and V(Y) are small, so that consistent system response predictions can be obtained with different sets of test data. Note that this paper is only concerned with the mean value E(Y) and variance V(Y) of the system response prediction, not the exact shape of the PDF in Fig. 1. Here, the “variability” of E(Y) and V(Y) is captured by their variance across different data sets, i.e., V(E(Y)) and V(V(Y)).

Therefore, this paper defines the objective for robust test resource allocation as: minimize the contribution of test outcome uncertainty toward the variability (i.e., variance) in the system response prediction mean value E(Y) and the system response prediction variance V(Y).

Global sensitivity analysis (GSA) using Sobol’ indices is a prominent approach [13–15] to quantify the contributions of input uncertainty toward the variance in the output. A brief introduction to Sobol’ indices is given in Sec. 2.2. One challenge is to establish the deterministic function required by the Sobol’ indices computation, mapping the test outcome uncertainty to the system response prediction uncertainty. This challenge is analyzed and overcome in Sec. 3.

Sobol’ Indices.

Sobol’ indices computation assumes an input–output function [16,17] Y = F(X), where X = {X1, …, XL} is a vector containing the model inputs. Here, all the model inputs are uncorrelated random variables. This function is deterministic, meaning that a single realization of X gives a corresponding single realization of Y. (Note that other GSA methods may not require a deterministic function; for example, the GSA method based on the classical analysis of variance using factorial design of experiments [18] can handle both deterministic and stochastic input–output relations.) The main effects (first-order) Sobol’ index is defined as 
Sl = V(E(Y|Xl)) / V(Y)
(1)
Sl is a number between 0 and 1 since V(Y) ≥ V(E(Y|Xl)) ≥ 0, and a higher value of Sl indicates a greater contribution of Xl to the variance of the system response prediction Y. The main effects index measures the individual contribution of Xl by itself. The overall contribution of Xl by itself plus its interactions with the other inputs is measured by the total effects Sobol’ index SlT 
SlT = 1 − V(E(Y|X−l)) / V(Y)
(2)
where X−l denotes all the model inputs other than Xl.
Equations (1) and (2) can be extended to assess the contribution of a model input subset Xp, which contains more than one input [19,20]. The main effects Sobol’ index of Xp is defined by extending Eq. (1) as 
SXp = V(E(Y|Xp)) / V(Y)
(3)

SXp is a combined measure of the individual contributions of the components of Xp and of the interactions among them.

The total effects Sobol’ index of Xp is defined by extending Eq. (2) as 
SXpT = 1 − V(E(Y|X−p)) / V(Y)
(4)

where X−p is the complementary subset of Xp. SXpT is a combined measure of the individual contributions of the components of Xp, the interactions among them, and the interactions between Xp and X−p.

The direct computation of Sobol’ indices requires double-loop Monte Carlo simulation and is thus expensive. Taking Sl in Eq. (1) as an example, we need: (1) an inner loop to compute E(Y|Xl), i.e., the mean value of Y over n1 random samples of the remaining inputs at a fixed value of Xl; (2) an outer loop to compute V(E(Y|Xl)) by repeating the inner loop n2 times at different values of Xl; and (3) another n3 Monte Carlo samples to compute V(Y). Various algorithms have been developed in the literature to reduce the computational cost [21–25]. Any one of them can be used to compute the Sobol’ indices in this paper. Several illustrative examples on computing Sobol’ indices can be found in Ref. [19].
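To make the double-loop procedure concrete, the sketch below estimates the main effects index Sl of Eq. (1) by brute-force Monte Carlo for an illustrative three-input function; the function, the input distributions, and the sample sizes n1, n2, n3 are assumptions made only for this example, not quantities specified by the proposed method.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Illustrative deterministic input-output function Y = F(X) with three inputs
    return x[..., 0] + 2.0 * x[..., 1] + 0.5 * x[..., 0] * x[..., 2]

def main_effect_index(l, n1=200, n2=500, n3=20_000):
    """Double-loop Monte Carlo estimate of S_l = V(E(Y|X_l)) / V(Y).

    All three inputs are taken as independent standard normal variables purely
    for illustration; n1, n2, n3 are the sample sizes of the inner loop, the
    outer loop, and the separate V(Y) estimate, respectively.
    """
    cond_means = np.empty(n2)
    for k in range(n2):                        # outer loop over values of X_l
        x = rng.standard_normal((n1, 3))       # inner-loop samples of all inputs
        x[:, l] = rng.standard_normal()        # freeze X_l at one realization
        cond_means[k] = model(x).mean()        # E(Y | X_l) from the inner loop
    var_y = model(rng.standard_normal((n3, 3))).var()   # separate estimate of V(Y)
    return cond_means.var() / var_y

print([round(main_effect_index(l), 2) for l in range(3)])
```

The nested loops cost roughly n1·n2 + n3 model evaluations, which is why the cheaper estimators of Refs. [21–25] are preferred when the model is expensive.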

The Sobol’ index computation requires (1) a deterministic input–output function and (2) the representation of all the inputs by uncorrelated random variables with continuous probability distributions. These two requirements need to be satisfied before applying Sobol’ indices in the proposed approach for test resource allocation. Section 3 analyzes the uncertainty sources in test outcomes and develops an approach to achieve both requirements.

Uncertainty Sources in Test Outcomes

Recall that all the data considered in test resource allocation analysis have to be synthetic since the analysis is done before any actual test. The uncertainty in the synthetic data depends on specific test conditions, including: (1) the possible values of inputs X; (2) the number of test types; and (3) whether a single test specimen or multiple specimens are used for each type of test.

Regarding the first condition, this paper assumes that a distribution of X is provided by the testing personnel or assumed based on some information. For example, for a single model input X ∈ X, we may have X ~ U(LX, UX), where LX is the lower bound and UX is the upper bound. Other types of distribution, such as a Gaussian distribution, can also be used to capture the uncertainty in X if additional information is available.

This section analyzes the uncertainty sources in the synthetic data with respect to the second and third conditions; the deterministic function required by the Sobol’ indices computation varies accordingly. The rest of this section starts from the simplest case of one type of test and a single specimen, and subsequently extends it to multiple types of tests and multiple test specimens.

Single Type of Test and Single Test Specimen.

If only one type of test is available and all tests are conducted on a single specimen, the actual test data are a set of N data points obtained from the same specimen. Figure 2 shows the generation and usage of the synthetic data in this case. As shown in the left part of Fig. 2, to generate a data set of N synthetic data points, four steps should be followed: (1) select and fix the values of θ ∈ ℝ^dθ, where dθ is the dimension of the model parameters; (2) generate N samples of the model inputs xj ∈ ℝ^dX (j = 1 to N), where dX is the dimension of the model inputs; (3) propagate xj (j = 1 to N) and θ through the computational model F(·); and (4) record the model inputs and outputs with measurement errors added. The resultant data set contains pairwise data points {ωj, zj} (j = 1 to N) as

 
ωj = xj + ej
zj = F(xj, θ) + ϵj
(5)

where ej ∈ ℝ^dX is the model input measurement error and ϵj is the model output measurement error. If the model input measurement error is ignored, then ωj = xj.
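A minimal sketch of these four data-generation steps and Eq. (5) for a single specimen is given below; the computational model F, the input distribution, the parameter prior, and the measurement error standard deviations are placeholder assumptions chosen only to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

def F(x, theta):
    # Placeholder computational model Y = F(X; theta) with one input, two parameters
    return theta[0] * x + theta[1]

def generate_synthetic_data(N):
    # Step 1: select and fix theta for the whole data set (single specimen),
    #         drawn here from an assumed prior pi(theta)
    theta = np.array([rng.normal(5.0, 0.5), rng.normal(10.0, 1.0)])
    # Step 2: generate N samples of the model input from an assumed pi_X(x)
    x = rng.uniform(90.0, 110.0, size=N)
    # Step 3: propagate the inputs and theta through the computational model
    y = F(x, theta)
    # Step 4: record inputs and outputs with measurement errors added (Eq. (5))
    omega = x + rng.normal(0.0, 1.0, size=N)    # input measurement error e_j
    z = y + rng.normal(0.0, 50.0, size=N)       # output measurement error eps_j
    return omega, z

omega, z = generate_synthetic_data(N=8)
```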

A crucial point in the generation of synthetic data concerns the model parameters θ. For a single specimen, θ have true but unknown values, meaning that the uncertainty in θ is epistemic. Thus, the uncertainty caused by θ is the uncertainty in selecting the values of θ before generating a synthetic data set; once selected, the values of θ are fixed within the synthetic data set. This uncertainty in θ only exists in the synthetic data; actual tests fix the values of θ at their true values.

The four steps mentioned earlier indicate three uncertainty sources in generating a pairwise synthetic data point {ωj,zj}, including:

(1) Uncertainty regarding the values of the model parameters θ, which can be represented by their prior distribution π(θ) based on available knowledge before conducting any physical test. This uncertainty is epistemic since θ have unknown but fixed true values.

(2) Uncertainty regarding the possible values of the inputs xj to be used in the tests. As mentioned earlier, a distribution of X has been provided or assumed. This uncertainty is also epistemic if the values of X are unknown during the test selection analysis, but will be decided by the test personnel in the actual tests.

(3) Uncertainty regarding the input measurement errors ej and the output measurement errors ϵj. Usually, measurement error is assumed to have a zero-mean Gaussian distribution; thus, ej ~ N(0, ΣX) and ϵj ~ N(0, σ²). The uncertainty in ej and ϵj is aleatory if the values of ΣX and σ are known; additional epistemic uncertainty regarding ΣX and σ is introduced if their values are unknown.

In sum, Fig. 2 shows that for a given number of tests, the synthetic data set {ωj, zj} (j = 1 to N) is uniquely determined once θ, xj, ej, and ϵj (j = 1 to N) are determined. Then, for a given framework of model calibration/validation (since a single type of test is assumed here, either all data are used for calibration or all are used for validation), the subsequent system response prediction distribution πY(y) and its mean value E(Y) and variance V(Y) are also uniquely determined. Thus, the deterministic functions suitable for computing Sobol’ indices are 
E(Y) = E(G(θ, α1, …, αN))
V(Y) = V(G(θ, α1, …, αN))
(6)

where αj = {xj, ej, ϵj} ∈ ℝ^(2dX+1) for j = 1 to N represents the uncertainty sources in generating a single pairwise data point {ωj, zj}, N is the number of pairwise data points, and G(·) represents the entire process shown in Fig. 2, including both the synthetic data generation and the model calibration/validation analyses performed before predicting the system response.
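To make the determinism required by Eq. (6) explicit, the sketch below implements a toy version of G(·): every random quantity (θ and the αj) enters as an argument, so a fixed argument list always returns the same E(Y) and V(Y). The linear model, the conjugate Gaussian calibration, and all numerical values are assumptions standing in for whatever calibration framework the analyst actually uses.

```python
import numpy as np

def G(theta, alpha):
    """Toy deterministic mapping of Eq. (6).

    theta : scalar model-parameter value fixed for the synthetic data set
    alpha : array of shape (N, 3) holding (x_j, e_j, eps_j) for each test
    Returns (E(Y), V(Y)) of the post-calibration prediction Y = X * theta.
    """
    x, e, eps = alpha[:, 0], alpha[:, 1], alpha[:, 2]
    omega, z = x + e, theta * x + eps            # synthetic data, Eq. (5)

    # Assumed calibration: conjugate Gaussian update of a scalar theta for the
    # linear model z = theta * omega + eps, with prior N(5, 0.5^2) and known
    # measurement noise sigma = 50 (illustrative values only)
    m0, v0, s2 = 5.0, 0.5 ** 2, 50.0 ** 2
    post_var = 1.0 / (1.0 / v0 + np.sum(omega ** 2) / s2)
    post_mean = post_var * (m0 / v0 + np.sum(omega * z) / s2)

    # Prediction Y = X * theta with X ~ U(90, 110) independent of theta
    ex, vx = 100.0, (110.0 - 90.0) ** 2 / 12.0
    mean_y = ex * post_mean
    var_y = vx * post_var + vx * post_mean ** 2 + post_var * ex ** 2
    return mean_y, var_y

rng = np.random.default_rng(2)
theta = rng.normal(5.0, 0.5)                          # one draw from the prior of theta
alpha = np.column_stack([rng.uniform(90, 110, 5),     # x_j
                         rng.normal(0, 1, 5),         # e_j
                         rng.normal(0, 50, 5)])       # eps_j
print(G(theta, alpha))                                # same arguments -> same output
```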

In Eq. (6), the uncertainty in {α1, …, αN} represents the variability in the actual test outcomes, while the epistemic uncertainty in θ only exists in the synthetic data, not in the actual test data. To make the system response prediction insensitive to the variability in the test outcomes, we could minimize the sensitivity index of {α1, …, αN} in Eq. (6), so that E(Y) and V(Y) are insensitive to the variability in test outcomes and consistent system response prediction distributions are achieved under different actual test outcomes. However, this minimization drives the sensitivity index toward zero, and numerical accuracy is always a challenge for small sensitivity indices.

Instead, this paper chooses to maximize the sensitivity index of θ. If that is achieved, the epistemic uncertainty in θ will be the dominant contributor to the uncertainty in the system response prediction mean E(Y) and the system response prediction variance V(Y) (based on synthetic data). In the system response prediction using actual test data, where θ are fixed at their true values, this most dominant uncertainty contribution to E(Y) and V(Y) is removed. Therefore, the uncertainty in E(Y) and V(Y) caused by test outcome uncertainty will reduce significantly, and consistent system response prediction distributions can be achieved under different actual test outcomes. In sum, the basic idea of the proposed approach is to maximize the contribution of the epistemic uncertainty regarding the model parameters in the synthetic data.

Note that the proposed approach guarantees consistent system response predictions regardless of what the true values of θ are, since the Sobol’ index is a global sensitivity analysis method and considers the entire distribution of θ.

Single Type of Test and Multiple Test Specimens.

For a single type of test, multiple test specimens are required if the test is destructive, so that each specimen can be used only once. Two examples of destructive tests are the fatigue test and the tensile strength test. The true value of a model parameter θl ∈ θ (l = 1 to dθ) is fixed for a single specimen, but varies across different specimens. This variability of θl may be represented by a probability distribution π(θl|Pθl), where Pθl are the distribution parameters of θl. For example, Pθl = {μ, σ} if θl has a Gaussian distribution N(μ, σ²), where μ is the mean value and σ is the standard deviation. The entire set of distribution parameters for all components of θ is denoted as Pθ, where Pθl ∈ Pθ for l = 1 to dθ. In this case, Pθ have unknown true values; thus, the uncertainty in Pθ is epistemic, and it can be represented by a prior distribution π(Pθ) based on available knowledge. Thus, model calibration aims to quantify the uncertainty in Pθ instead of θ. (Note that θ have both aleatory and epistemic uncertainty, whereas the uncertainty in Pθ is purely epistemic.)

In the case of a single type of test and multiple test specimens, the steps in the generation and usage of the synthetic data set of N data points are similar to those in Fig. 2, but the box “model parameters θ” should be replaced by “Pθ → θj,” where θj is the value of θ generated for the jth specimen (i.e., the jth test). Compared to Fig. 2, the values of Pθ are now selected before generating a synthetic data set; once selected, the values of Pθ are fixed within the synthetic data set. The values of the model parameters θj (j = 1 to N) for each of the N specimens are generated from the conditional distribution π(θl|Pθl) for l = 1 to dθ.

It seems natural to replace θ in Eq. (6) with Pθ and build new functions for the Sobol’ indices computation. However, the new functions would not be deterministic functions as required by the Sobol’ indices. A specific realization of Pθ does not determine the values of θ but only the distribution π(θl|Pθl) for l = 1 to dθ; thus, θ are still stochastic at a given Pθ. Only deterministic values of θ and αj = {xj, ej, ϵj} (j = 1 to N) can determine the subsequent system response prediction distribution πY(y) and its mean value E(Y) and variance V(Y). In sum, an approach to establish a deterministic relationship from Pθ to θ is needed.

This required deterministic relationship can be provided by the auxiliary variable method developed in Refs. [15], [26], and [27]. This method introduces an auxiliary variable Uθl, which is the cumulative distribution function (CDF) value of π(θl|Pθl), and builds the needed deterministic relationship using the probability integral transform as 
θl = Fθl|Pθl⁻¹(Uθl)
(7)

where Fθl|Pθl⁻¹(·) is the inverse CDF of θl at a given Pθl. Note that Uθl has the standard uniform distribution U(0, 1). Equation (7) indicates three steps: (1) generate the values of Pθl from their prior distribution to produce the conditional distribution π(θl|Pθl); (2) generate the value of Uθl from U(0, 1); and (3) substitute Uθl into the inverse CDF Fθl|Pθl⁻¹(·) to obtain a unique value of θl.

The uncertainty in a model parameter θl consists of two components: (1) the epistemic uncertainty in the distribution parameters Pθl, represented by the prior distribution π(Pθl); and (2) the aleatory uncertainty in θl at a given Pθl, represented by the conditional distribution π(θl|Pθl). These two parts are coupled since π(θl|Pθl) depends on the value of Pθl. The introduced auxiliary variable Uθl captures the aleatory uncertainty, and also helps to decouple the aleatory and epistemic uncertainties [26], since the distribution Uθl ~ U(0, 1) does not depend on Pθl.
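A minimal sketch of the auxiliary variable construction in Eq. (7) is shown below for an assumed Gaussian θl with Pθl = {μ, σ}: once Pθl and Uθl are fixed, θl is fixed, so the mapping is deterministic as required, while the distribution of Uθl does not depend on Pθl. The prior ranges used for μ and σ are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def theta_from_auxiliary(mu, sigma, u):
    # Eq. (7): theta_l = F^{-1}_{theta_l | P_theta_l}(U_theta_l)
    # Here pi(theta_l | P_theta_l) is assumed Gaussian with parameters (mu, sigma)
    return stats.norm.ppf(u, loc=mu, scale=sigma)

# Step 1: draw the distribution parameters P_theta_l from an assumed prior
mu = rng.uniform(4.0, 6.0)
sigma = rng.uniform(0.2, 1.0)
# Step 2: draw the auxiliary variable U_theta_l ~ U(0, 1)
u = rng.uniform(0.0, 1.0)
# Step 3: the inverse CDF gives a unique theta_l for this (mu, sigma, u)
theta_l = theta_from_auxiliary(mu, sigma, u)

# The same (mu, sigma, u) always reproduces the same theta_l, which is what the
# Sobol' index computation over (P_theta, U_theta) requires
assert theta_l == theta_from_auxiliary(mu, sigma, u)
```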

With the introduction of the auxiliary variable, deterministic functions suitable for Sobol’ indices computation can be established as 
E(Y) = E(G(Pθ, Uθ, α1, …, αN))
V(Y) = V(G(Pθ, Uθ, α1, …, αN))
(8)
where αj = {xj, ej, ϵj} for j = 1 to N as in Eq. (6); Uθ contains all the auxiliary variables introduced for each θl, i.e., Uθl ∈ Uθ for l = 1 to dθ; and G(·) represents the entire process of synthetic data generation and the framework of model calibration/validation (using the synthetic data) to predict the system response.

As explained earlier, the basic idea of the proposed approach is to maximize the contribution of epistemic uncertainty of θ in the synthetic data, in the case of a single specimen. In the case of multiple specimens, we need the contribution of Pθ to be dominant in the context of Eq. (8). If that is achieved, in the system response prediction using actual test data where Pθ are fixed at their true values, the most dominant uncertainty contribution to E(Y) and V(Y) will be removed. Therefore the uncertainty in E(Y) and V(Y) caused by test outcome uncertainty will be reduced significantly, and different actual test outcomes will lead to consistent system response predictions.

Multiple Types of Tests and Single Test Specimen.

In the case that q different types of tests are to be considered and each type utilizes only one specimen (nondestructive test), Fig. 2 expands to Fig. 3, and Eq. (6) expands to

 
E(Y) = E(G(θ, A1, …, Aq))
V(Y) = V(G(θ, A1, …, Aq))
(9)

Equation (9) gives the required deterministic functions for the Sobol’ indices computation. In Eq. (9), Ai = {α1^i, …, αNi^i} for i = 1 to q represents the uncertainty regarding the inputs and measurement errors in generating the synthetic data for the ith type of test, where αj^i = {xj^i, ej^i, ϵj^i} for i = 1 to q and j = 1 to Ni; j is the test number and Ni is the total number of tests of the ith type. Note that here θ is the vector of the model parameters appearing in all types of tests, and the test type refers to calibration test versus validation test, as well as the output quantity measured, as explained in Sec. 1.

Similar to the earlier discussion, in the test resource allocation optimization regarding Eq. (9), we need the contribution of the epistemic uncertainty in θ toward the uncertainty in E(Y) and V(Y) to be dominant. For the case of multiple types of tests and single test specimen, an example with a framework considering only model calibration is considered in Sec. 5.1; another example of a framework incorporating both model calibration and model validation is considered in Sec. 5.2.

Multiple Types of Tests and Multiple Test Specimens.

The most complex case is that q different types of tests are to be considered and the i th type of test utilizes Ni specimens corresponding to Ni tests. Similar to Eq. (9), the epistemic uncertainty is regarding the unknown true values of distribution parameters Pθ; and an auxiliary variable is introduced for each model parameter in order to establish deterministic functions required by the Sobol’ indices computation, as explained in Sec. 3.2. The resultant functions are 
E(Y) = E(G(Pθ, Uθ, A1, …, Aq))
V(Y) = V(G(Pθ, Uθ, A1, …, Aq))
(10)

Similarly, in the test resource allocation optimization regarding Eq. (10), we need the contribution of the epistemic uncertainty in Pθ toward the uncertainty in E(Y) and V(Y) to be dominant.

Selection of Sobol’ Indices.

Thus far, deterministic functions for the Sobol’ indices computation under different test conditions have been established. Robust test resource allocation can be achieved by maximizing the contribution of the epistemic uncertainty regarding either θ (single specimen) or Pθ (multiple specimens). This epistemic uncertainty is represented by a set of random variables (θ in Eqs. (6) and (9); Pθ in Eqs. (8) and (10)). The total effects sensitivity index considers the interactions between a subset of random variables and its complement; thus, to be more comprehensive, the optimization in this paper uses Eq. (4) to compute the total effects index of the subset representing the epistemic uncertainty (either θ or Pθ). In the rest of the paper, “Sobol’ index” means this total effects index in Eq. (4). The computed Sobol’ indices are denoted as SmE(Y) for E(Y) and SmV(Y) for V(Y). In the case of a single specimen, m = θ, so SmE(Y) and SmV(Y) are the Sobol’ indices of θ; in the case of multiple specimens, m = Pθ, so they are the Sobol’ indices of Pθ.
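Besides the double-loop approach of Sec. 2.2, the total effects index of a group of inputs can be estimated with a pick-freeze scheme; the sketch below uses Jansen's estimator on an illustrative function, where the first two columns play the role of the epistemic subset (θ or Pθ) and the remaining columns play the role of the αj. The function, input distributions, and sample size are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(x):
    # Illustrative deterministic function of five inputs; columns 0-1 stand in
    # for the epistemic parameters (theta or P_theta), columns 2-4 for the alphas
    return x[:, 0] * x[:, 1] + x[:, 2] + 0.2 * x[:, 3] * x[:, 4]

def total_effect_index_of_group(group, dim=5, n=50_000):
    """Jansen pick-freeze estimate of the total effects Sobol' index of a group.

    S_T(group) = E[(f(A) - f(A_B))^2] / (2 V(Y)), where A_B equals A except
    that the columns in `group` are replaced by an independent sample B.
    """
    A = rng.standard_normal((n, dim))
    B = rng.standard_normal((n, dim))
    AB = A.copy()
    AB[:, group] = B[:, group]          # resample only the grouped inputs
    yA, yAB = model(A), model(AB)
    return np.mean((yA - yAB) ** 2) / (2.0 * np.var(yA))

# Total effects index of the "epistemic" subset {X0, X1}
print(round(total_effect_index_of_group([0, 1]), 2))
```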

Optimum Test Resource Allocation

Formulation.

As discussed in Secs. 1 and 2, the proposed robust test resource allocation method aims to make the system response prediction insensitive to the variability in test outcomes, so that consistent predictions of the system response can be achieved under different sets of test data. This consistency can be obtained if the contribution of the epistemic uncertainty in θ or Pθ toward the computed uncertainty (based on the simulation model and synthetic data) in E(Y) and V(Y) is dominant. (In actual tests, this uncertainty is not present; thus, the uncertainty in E(Y) and V(Y) is significantly reduced.) That gives two objectives in the optimization: (1) maximize SmE(Y), the Sobol’ index of θ or Pθ with respect to E(Y); and (2) maximize SmV(Y), the Sobol’ index of θ or Pθ with respect to V(Y). Several methods are available to solve multi-objective problems. One simple method is to combine SmE(Y) and SmV(Y) through a weighted sum, since they are both dimensionless and lie on the same scale [0, 1]. This gives the first optimization formulation of robust test resource allocation 
Max  p1 SmE(Y) + p2 SmV(Y)
s.t.  ∑i=1,…,q CiNi ≤ C0 and Ni ≥ 0
(11)

where Ci > 0 is the unit cost of the ith (i = 1 to q) type of test, Ni is the number of tests of the ith type, C0 is the budget constraint, and p1 and p2 are user-defined positive constant weight coefficients.

Equation (11) tries to obtain the optimal test allocation subject to the budget constraint. As explained in Sec. 1, another possible format of optimization is to minimize the budget subject to the sensitivity threshold. Thus, the alternative optimization formulation for robust test resource allocation is 
Min  ∑i=1,…,q CiNi
s.t.  SmE(Y) ≥ λE(Y), SmV(Y) ≥ λV(Y), and Ni ≥ 0
(12)

where λE(Y) and λV(Y) are the desired lower bounds of the Sobol’ index for E(Y) and V(Y), respectively.

Equations (11) and (12) are both integer optimization problems since the decision variables Ni (i = 1 to q) are integers. Sometimes, integer optimization is solved using a relaxation approach [28], where the integer constraint is first relaxed, and the integers nearest to the resultant optimal solution are used as the solution of the original (unrelaxed) problem. Unfortunately, this approach is not applicable here because the synthetic data to be used in model calibration/validation can be generated only if Ni (i = 1 to q) are integers. It is not possible to generate test data for a noninteger number of tests.

Solution Algorithm.

A simulated annealing algorithm [29] is used to solve Eqs. (11) and (12) because it can handle stochastic discrete optimization problems without requiring relaxation. For discrete optimization problems such as Eqs. (11) and (12), the algorithm minimizes an objective function f(s), where s = {s1, …, sL} is a vector of integers and Ω is its feasible region. If the objective is to maximize f(s), as in Eq. (11), then −f(s) is minimized.

As shown in Fig. 4, the simulated annealing algorithm starts from an initial value s0 ∈ Ω. If s is the current optimal solution in an iteration, a new candidate s′ is randomly selected within the neighborhood of s. This neighborhood, denoted as N(s), can be defined by different proposal density functions; this paper defines N(s) = [s1 ± d1, …, sL ± dL] ∩ Ω, where dl is a user-defined positive integer for l = 1 to L. In each iteration, if f(s′) < f(s), the new value s′ is accepted as the new optimal solution; otherwise, the probability of accepting s′ is

 
Pa = exp(−(f(s′) − f(s)) / T)
(13)
where T is the parameter that governs how tight the acceptance criterion is. Specifically, a random sample λ is generated from the standard uniform distribution U(0, 1), and s′ is accepted if λ < Pa. The reason for accepting s′ with probability Pa even when it does not improve the objective function is to explore additional regions and reduce the chance of stopping at a local minimum. As the algorithm proceeds, the acceptance threshold becomes tighter, so only reductions and very small increases of the objective function can be accepted. This tightening is governed by reducing the parameter T as 
T = T0 (1 − k/K)^α
(14)

where T0 is the user-defined starting value of T, k is the current iteration number, K is the total number of iterations allowed, and α is a user-defined exponent that determines the rate of decrease of T. The iterations proceed until the total allowed number of iterations K is expended.
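A compact sketch of this simulated annealing loop for integer design variables is given below, using the acceptance rule of Eq. (13) and the temperature schedule of Eq. (14); the objective function is a cheap placeholder standing in for the negative weighted Sobol’ index sum of Eq. (11), and the budget, unit costs, step sizes, T0, K, and α are assumed values.

```python
import numpy as np

rng = np.random.default_rng(5)

def objective(s):
    # Placeholder for -(p1*S_m^E(Y) + p2*S_m^V(Y)); in the actual method this
    # would evaluate the Sobol' indices of theta (or P_theta) at the design s
    n1, n2 = s
    return -(n1 / (n1 + 2.0) + n2 / (n2 + 5.0))

def feasible(s, budget=16, costs=(4, 1)):
    # Budget constraint of Eq. (11): sum_i C_i N_i <= C_0 and N_i >= 0
    return all(n >= 0 for n in s) and sum(c * n for c, n in zip(costs, s)) <= budget

def simulated_annealing(s0, K=500, T0=1.0, alpha=2.0, step=(1, 2)):
    s, f_s = list(s0), objective(s0)
    for k in range(K):
        # Random walk within the neighborhood [s_l - d_l, s_l + d_l], kept feasible
        cand = [s_l + rng.integers(-d, d + 1) for s_l, d in zip(s, step)]
        if not feasible(cand):
            continue
        f_c = objective(cand)
        T = T0 * (1.0 - k / K) ** alpha             # temperature schedule, Eq. (14)
        accept = f_c < f_s or rng.uniform() < np.exp(-(f_c - f_s) / max(T, 1e-12))
        if accept:                                  # acceptance rule, Eq. (13)
            s, f_s = cand, f_c
    return s, f_s

print(simulated_annealing(s0=(1, 1)))
```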

Summary.

This section proposed formulations for test resource allocation optimization, considering two formats: (1) maximizing the Sobol’ index of the epistemic uncertainty in θ or Pθ subject to budget constraint and (2) minimizing the cost subject to the Sobol’ index threshold. Both formats are applicable to the cases of single or multiple specimens and single or multiple types of tests. As a result, the system response predictions become insensitive to the variability in test outcomes. The decision variables (numbers of tests) are discrete variables, and a simulated annealing algorithm is used to solve this discrete optimization. In this optimization, the Sobol’ index of the epistemic uncertainty in θ or Pθ is computed by the method discussed in Sec. 2.

Numerical Examples

This section uses two examples to illustrate the proposed method. The first example is a mathematical problem and the second example is a structural dynamics problem. Regarding the types of tests, specimen, and calibration/validation, the first example considers: (1) multiple types of tests, (2) model calibration only, and (3) both the cases of single and multiple specimens. The second example considers: (1) multiple types of tests, (2) both model calibration and validation, and (3) single specimen only.

Mathematical Example.

In this example, the system output is the sum of two subsystem outputs, and each subsystem has separate model inputs and model parameters 
Y = W1 + W2,   W1 = X1 θ1,   W2 = X2 θ2
(15)

The inputs X1 and X2 are assumed to be independent random variables; the uncertainty regarding their values in the tests is represented by the uniform distributions X1 ~ U(90, 110) and X2 ~ U(40, 60), based on ranges obtained from the test personnel.

Two types of tests are available. A type I test measures W1 with measurement error ϵ1 ~ N(0, 50²), and a type II test measures W2 with measurement error ϵ2 ~ N(0, 40²). The resultant synthetic data are pairwise data {X1, W1} and {X2, W2}, respectively. Assume that the unit cost of a type I test is 4 and the unit cost of a type II test is 1.

Two cases are considered in this example: a single test specimen versus multiple test specimens. In case 1 (single specimen), the model parameters θ = {θ1, θ2} have true but unknown values to be calibrated. In case 2 (multiple specimens), θ1 and θ2 vary across specimens following the normal distributions N(μθ1, σθ1²) and N(μθ2, σθ2²), and the parameters to be calibrated are Pθ = {μθ1, σθ1, μθ2, σθ2}.

The process to realize the system response prediction Y, i.e., the framework of model calibration/validation with the synthetic data is shown in Fig. 5, where the posterior distributions of calibration parameters together with the known distributions of X1 and X2 are propagated through the computational model in Eq. (15) to obtain the distribution of Y. Note that model validation is not considered in this example; only calibration is considered. The proposed test resource allocation approach can also handle model validation, as shown in the next numerical example.
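A sketch of this calibration-and-prediction process for case 1 is given below: the data of each test type calibrate the corresponding parameter, and the posteriors are propagated through Eq. (15) together with the known input distributions to obtain E(Y) and V(Y). For brevity the Bayesian update is done on a coarse grid rather than with an MCMC sampler, and the "true" parameter values used to generate the synthetic data are taken from Fig. 7(a) purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

def grid_posterior(grid, prior_pdf, x, w, sigma):
    # Bayesian calibration on a 1-D grid: posterior proportional to prior * likelihood
    log_post = np.log(prior_pdf(grid))
    for xi, wi in zip(x, w):
        log_post += stats.norm.logpdf(wi, loc=grid * xi, scale=sigma)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()                      # normalized weights on the grid

def predict_Y(N1, N2, theta_true=(4.9, 9.5), n_mc=20_000):
    # Generate synthetic calibration data for the two test types
    x1 = rng.uniform(90, 110, N1)
    w1 = theta_true[0] * x1 + rng.normal(0, 50, N1)
    x2 = rng.uniform(40, 60, N2)
    w2 = theta_true[1] * x2 + rng.normal(0, 40, N2)

    # Calibrate theta1 and theta2 separately on coarse grids (assumed priors)
    g1 = np.linspace(3, 7, 801)
    p1 = grid_posterior(g1, stats.norm(5, 0.5).pdf, x1, w1, 50.0)
    g2 = np.linspace(6, 14, 801)
    p2 = grid_posterior(g2, stats.norm(10, 1.0).pdf, x2, w2, 40.0)

    # Propagate the posteriors and the known input distributions through Eq. (15)
    t1 = rng.choice(g1, size=n_mc, p=p1)
    t2 = rng.choice(g2, size=n_mc, p=p2)
    y = rng.uniform(90, 110, n_mc) * t1 + rng.uniform(40, 60, n_mc) * t2
    return y.mean(), y.var()

print(predict_Y(N1=2, N2=8))
```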

Case 1: Single Test Specimen.

In this case, the model parameters θ = {θ1, θ2} have unknown deterministic values, and the prior distributions θ1 ~ N(5, 0.5²) and θ2 ~ N(10, 1²) are assumed for them. This case is applied to the two optimization formulations in Eqs. (11) and (12).

Optimization formulation 1.
For the optimization in Eq. (11), we set the total budget constraint at 16; thus, Eq. (11) becomes (assuming equal weights p1=p2) 
Max  SθE(Y) + SθV(Y)
s.t.  4N1 + N2 ≤ 16 and Ni ≥ 0
(16)

where N1 is the number of type I tests and N2 is the number of type II tests. N1 and N2 are the decision variables, i.e., we need to decide the number of replications of each type of test.

The simulated annealing algorithm is used to solve Eq. (16), and Fig. 6 records the process of the optimization. Figure 6(a) shows that the optimization starts at an initial design point (N1, N2) = (1, 1) and terminates at the optimal solution (N1, N2) = (2, 8). Figure 6(b) shows that only some of the random walks are accepted and that the maximized Sobol’ index sum SθE(Y) + SθV(Y) is 1.89. The feasible region in Fig. 6(a) covers the combinations of N1 and N2 such that 4N1 + N2 ≤ 16. Note that (1) this feasible region is obtained by extra computation and (2) this feasible region is shown only to help in visualizing the result but is not needed in the optimization.

As discussed in Sec. 3.1, since the robustness objective SθE(Y) + SθV(Y) is maximized, the optimal solution (N1, N2) = (2, 8) for Eq. (16) should lead to consistent system response predictions regardless of the true values of θ. Three steps are pursued to verify this: (1) assume “true” values of θ; (2) generate multiple data sets of size (N1, N2) = (2, 8) based on the assumed values of θ from step 1; and (3) plot the family of system response prediction PDFs using the data sets from step 2 and observe whether they are consistent. Although the data are still synthetic, this is a simulation of the system response prediction using actual test data, since the model parameters θ are fixed at the same values across different data sets; in the synthetic data generation for test resource allocation shown in Fig. 2, by contrast, the model parameters are fixed within a single data set but vary across different data sets. The results of this verification are shown in Fig. 7. Figure 7(a) indicates that (N1, N2) = (2, 8) leads to consistent system response predictions if the true values of the model parameters are {θ1, θ2} = {4.9, 9.5}; similarly, Figs. 7(b) and 7(c) show that consistent system response predictions are also obtained if {θ1, θ2} = {5.4, 9.8} or {θ1, θ2} = {5.0, 10.5}.

As a comparison, Fig. 8 shows the same results as Fig. 7 but at a suboptimal solution of (N1, N2) = (1, 12). This suboptimal solution spends the same cost as the optimal solution, but the enlarged variation across the different PDFs in Fig. 8 indicates that it cannot guarantee predictions as consistent as those of the optimal solution. To quantify this conclusion, Table 1 compares the “variance of the variance of the prediction,” V(V(Y)), at the optimal and suboptimal solutions. This table clearly shows that the optimal solution always has smaller values of V(V(Y)) at the different values of θ, confirming that the optimal solution gives more consistent predictions.

Optimization formulation 2.
For the optimization in Eq. (12), we set the Sobol’ index lower bounds as λE(Y)=λV(Y)=0.95; thus, Eq. (12) becomes 
Min  4N1 + N2
s.t.  SθE(Y) ≥ 0.95, SθV(Y) ≥ 0.95, and Ni ≥ 0
(17)

The simulated annealing algorithm is used to solve Eq. (17), and Fig. 9 records the process of the optimization. Figure 9(a) shows that the optimization starts at an initial design point (N1, N2) = (8, 8) and terminates at the optimal solution (N1, N2) = (3, 7). Figure 9(b) shows that only some of the random walks are accepted and that the minimized cost is 19. The feasible region in Fig. 9(a) covers the combinations of N1 and N2 such that SθE(Y) ≥ 0.95 and SθV(Y) ≥ 0.95. Similar to Fig. 6, note that (1) this feasible region is obtained by extra computation and (2) this feasible region is shown only to help in visualizing the result but is not needed in the optimization.

As discussed in Sec. 3.1, since the robustness constraints SθE(Y) ≥ 0.95 and SθV(Y) ≥ 0.95 are satisfied, the optimal solution (N1, N2) = (3, 7) for Eq. (17) should lead to consistent system response predictions regardless of the true values of θ. The same three steps used for Fig. 7 are pursued to verify this. The results of this verification are shown in Fig. 10. Figure 10(a) indicates that (N1, N2) = (3, 7) leads to consistent system response predictions if the true values of the model parameters are {θ1, θ2} = {5.7, 10.5}; similarly, Figs. 10(b) and 10(c) show that consistent system response predictions are also obtained if {θ1, θ2} = {5.2, 9.1} or {θ1, θ2} = {4.6, 10.8}.

Case 2: Multiple Test Specimens.

In this case, the model parameters Pθ = {μθ1, σθ1, μθ2, σθ2} have unknown deterministic values, and the uniform prior distributions μθ1 ~ U(4, 6), σθ1 ~ U(0.2, 1), μθ2 ~ U(8, 10), and σθ2 ~ U(0.8, 1.5) are assumed for them. This case is also applied to the two optimization formulations in Eqs. (11) and (12). The unit cost of a type I test is 4 and the unit cost of a type II test is 1.

Optimization formulation 1.
For the optimization in Eq. (11), we set the total budget constraint at 33; thus, Eq. (11) becomes (assuming equal weights p1=p2) 
Max  SPθE(Y) + SPθV(Y)
s.t.  4N1 + N2 ≤ 33 and Ni ≥ 0
(18)

The simulated annealing algorithm is used to solve Eq. (18), and Fig. 11 records the process of optimization. Figure 11(a) shows that the optimization starts at an initial design point (N1,N2)=(5,5) and terminates at the optimal solution (N1,N2)=(5,13). Figure 11(b) shows that only some of the random walks are accepted and the maximized Sobol’ index sum SPθE(Y)+SPθV(Y) is 1.92.

As discussed in Sec. 3.1, since the robustness objective SPθE(Y) + SPθV(Y) is maximized, the optimal solution (N1, N2) = (5, 13) for Eq. (18) should lead to consistent system response predictions regardless of the true values of Pθ. The results of this verification are shown in Fig. 12.

As a comparison, Fig. 13 shows the same results as in Fig. 12 but at a suboptimal solution of (N1, N2) = (4, 17). This suboptimal solution spends the same cost as the optimal solution, but the enlarged variation across the different PDFs in Fig. 13 indicates that it cannot guarantee predictions as consistent as those of the optimal solution. To quantify this conclusion, Table 2 compares V(V(Y)) at the optimal and suboptimal solutions. This table clearly shows that the optimal solution always has smaller values of V(V(Y)) at the different values of Pθ, confirming that the optimal solution gives more consistent predictions.

Optimization formulation 2.
For the optimization in Eq. (12), we set the Sobol’ index lower bounds as λE(Y)=λV(Y)=0.95; thus, Eq. (12) becomes 
Min  4N1 + N2
s.t.  SPθE(Y) ≥ 0.95, SPθV(Y) ≥ 0.95, and Ni ≥ 0
(19)

The simulated annealing algorithm is used to solve Eq. (19), and Fig. 14 records the process of optimization. Figure 14(a) shows that the optimization starts at an initial design point (N1,N2)=(12,12) and terminates at the optimal solution (N1,N2)=(5,10). Figure 14(b) shows that only some of the random walks are accepted and the minimized cost is 30.

As discussed in Sec. 3.1, since the robustness constraints SPθE(Y) ≥ 0.95 and SPθV(Y) ≥ 0.95 are satisfied, the optimal solution (N1, N2) = (5, 10) for Eq. (19) should lead to consistent system response predictions regardless of the true values of Pθ. The results of this verification are shown in Fig. 15.

Multilevel Problem.

The second numerical example is a multilevel structural dynamics challenge problem provided by Sandia National Laboratories [30]. In this example, we have four types of tests and a single specimen, as explained in Sec. 3.3. As shown in Fig. 16, this multilevel problem consists of three levels. Tests are available at level 1 and level 2, and it is required to predict the system response in level 3.

Level 1: The three mass-spring-damper components are connected in series (Fig. 16(a)), and a sinusoidal force input P = 300 sin(500t) is applied to m1. The observable quantity is the maximum acceleration A3L1 at the top mass, and the measurement error is ϵ1 ~ N(0, 100²). The computational model for A3L1 can be found in structural dynamics textbooks [31]; thus, synthetic data of A3L1 can be generated.

Level 2: The mass-spring-damper system is mounted on a beam supported by a hinge at one end and a spring at the other end (Fig. 16(b)), and a sinusoidal force input P = 3000 sin(350t) is applied on the beam. The observable quantity is the maximum acceleration A3L2 at the top mass, and the measurement error is ϵ2 ~ N(0, 400²). The computational model for A3L2, based on finite element analysis, is provided by Sandia National Laboratories [30]; thus, synthetic data of A3L2 can be generated. Level 1 and level 2 are defined as the lower levels, and test data are assumed to be available only at the lower levels.

Level 3: This has the same configuration as level 2, but the input is a random process loading (indicating a difference in usage condition), as shown in Fig. 16(c). Level 3 is the prediction configuration of interest, and the response to be predicted is the maximum acceleration A3L3 at the top mass at level 3. No test data are available at level 3. The computational models for A3L3 are also provided by Sandia National Laboratories [30].

All three levels have the same model parameters, i.e., the three spring stiffnesses k = {k1, k2, k3}. This example assumes the case of a single test specimen; thus, k are the parameters to be calibrated. They are assumed to be deterministic but unknown, with independent prior distributions k1 ~ N(5000, 500²), k2 ~ N(10000, 1000²), and k3 ~ N(9000, 900²).

Four types of tests are available in this example:

(1) A type I test measures A3L1, and the resultant data set D1C is used in model calibration;

(2) A type II test measures A3L1, but the resultant data set D1V is used in model validation;

(3) A type III test measures A3L2, and the resultant data set D2C is used in model calibration;

(4) A type IV test measures A3L2, but the resultant data set D2V is used in model validation.

The unit costs of these four types of tests are denoted as Ci (i = 1 to 4), and the numbers of each type of test are denoted as Ni (i = 1 to 4), respectively.

The key step in predicting A3L3 is to estimate the values of the model parameters k = {k1, k2, k3}. A reasonable route is to quantify the model parameters k using the lower level calibration data of A3L1 and A3L2, and propagate the results through the computational model at the system level. However, either A3L1 or A3L2 can be used to calibrate the same model parameters; thus, three calibration options are possible: (1) calibration using the data on A3L1 alone; (2) calibration using the data on A3L2 alone; and (3) calibration using the data on both A3L1 and A3L2. The challenge in such a multilevel problem is how to select from or combine these alternative calibration results. This paper uses the roll-up method developed in Refs. [5] and [32] to address this challenge. The roll-up method performs Bayesian model averaging of the various calibration results, and the weights for the averaging are obtained from the model validation at each lower level. Thus, the framework of model calibration/validation for system response prediction considers both model calibration and validation. A brief introduction to this framework is given here:

(1) Model calibration by Bayesian inference to obtain the posterior distributions π(k|D1C), π(k|D2C), and π(k|D1C,D2C), respectively.

(2) Model validation at the lower levels using the model reliability metric [5,33]. The resultant model validity at level 1 and level 2 is denoted as P(G1) and P(G2), respectively.

(3) Obtain the integrated distribution π(k|D1C,V,D2C,V) by the roll-up formula [5,32,34] in the equation below (a sketch of this weighted averaging is given after this list):
π(k|D1C,V,D2C,V) = P(G1)P(G2) π(k|D1C,D2C) + P(G1′)P(G2) π(k|D2C) + P(G1)P(G2′) π(k|D1C) + P(G1′)P(G2′) π(k)
(20)
where P(G1′) = 1 − P(G1), P(G2′) = 1 − P(G2), and π(k) denotes the prior distribution of k. In Eq. (20), the integrated distribution π(k|D1C,V,D2C,V) is a weighted average of four terms: in the first term, the posterior distribution π(k|D1C,D2C) uses the calibration data of both level 1 and level 2, and its weight P(G1)P(G2) is the probability that both models are valid; in the second and third terms, the posterior distribution π(k|DiC) uses the calibration data at level i alone, and its weight is the probability that the model at level i is valid while the model at the other level is invalid; in the last term, the weight P(G1′)P(G2′) of the prior distribution π(k) is the probability that both models are invalid. Recently, a more comprehensive approach incorporating the relevance between the lower levels and level 3 has been developed in Ref. [6]; the method proposed in this paper is also applicable to this new approach.

(4) Propagate π(k|D1C,V,D2C,V) through the computational model of A3L3 to predict the distribution of A3L3.
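A sketch of the roll-up step referred to in item (3) above is given below, representing each posterior by Monte Carlo samples and drawing from the weighted mixture of Eq. (20); the sample arrays and the validity probabilities P(G1) and P(G2) are placeholders assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def rollup_samples(post_12, post_2, post_1, prior, p_g1, p_g2, n=10_000):
    """Draw samples of k from the roll-up distribution of Eq. (20).

    post_12, post_2, post_1, prior : sample arrays representing pi(k|D1C,D2C),
    pi(k|D2C), pi(k|D1C), and pi(k); p_g1, p_g2 : model validities P(G1), P(G2).
    """
    weights = np.array([p_g1 * p_g2,                 # both models valid
                        (1 - p_g1) * p_g2,           # only the level-2 model valid
                        p_g1 * (1 - p_g2),           # only the level-1 model valid
                        (1 - p_g1) * (1 - p_g2)])    # both models invalid
    pools = [post_12, post_2, post_1, prior]
    which = rng.choice(4, size=n, p=weights)         # pick a mixture component
    return np.array([rng.choice(pools[i]) for i in which])

# Placeholder posterior/prior samples of a single stiffness parameter k1
prior = rng.normal(5000, 500, 5000)
post_1 = rng.normal(5100, 300, 5000)
post_2 = rng.normal(4950, 350, 5000)
post_12 = rng.normal(5050, 250, 5000)
k_samples = rollup_samples(post_12, post_2, post_1, prior, p_g1=0.8, p_g2=0.7)
```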

Since the computational models and the measurement error distributions are known, synthetic data for the four types of tests can be generated; and since the framework of model calibration/validation is also known, the proposed test resource allocation approach can be used to optimize the number of each type of test.

This example is applied to the two optimizations in Eqs. (11) and (12). Assume the unit cost of each type of test as C1=C2=1, C3=C4=5.

Optimization Formulation 1.

For the optimization in Eq. (11), we set the total budget constraint at 60; thus, Eq. (11) becomes (assuming equal weights p1=p2) 
Max  SθE(Y) + SθV(Y)
s.t.  N1 + N2 + 5N3 + 5N4 ≤ 60 and Ni ≥ 0
(21)

The simulated annealing algorithm is used to solve Eq. (21). The initial value is N1=N2=N3=N4=3. Among 500 iterations, the random walks of 226 iterations are accepted. Figure 17 shows the change of index sum over the iterations and the maximized index sum at the optimal solution is 1.88. The final optimal solution is N1=11,N2=9,N3=6,N4=2.

As discussed in Sec. 3.1, since the robustness objective SθE(Y) + SθV(Y) is maximized, the optimal solution (N1, N2, N3, N4) = (11, 9, 6, 2) should result in consistent system response predictions regardless of the true values of the model parameters k. Similar to the mathematical example in Sec. 5.1, verification of this multilevel test allocation result is shown in Fig. 18, which indicates consistent system response predictions for three different assumed true values of the model parameters.

Optimization Formulation 2.

This example is also applied to the optimization in Eq. (12). Assuming the unit cost of each type of test is C1=C2=1, C3=C4=5, and the threshold λE(Y)=λV(Y)=0.95, Eq. (12) becomes 
Min  N1 + N2 + 5N3 + 5N4
s.t.  SθE(Y) ≥ 0.95, SθV(Y) ≥ 0.95, and Ni ≥ 0
(22)

The simulated annealing algorithm is used to solve Eq. (22). The initial value is N1=N2=N3=N4=15. Among 500 iterations, the random walks of 164 iterations are accepted. Figure 19 shows the change of cost over the iterations and the minimized cost at the optimal solution is 66. The final optimal solution is N1=11,N2=10,N3=6,N4=3.

As discussed in Sec. 3.1, since the robustness constraints SθE(Y) ≥ 0.95 and SθV(Y) ≥ 0.95 are satisfied, the optimal solution (N1, N2, N3, N4) = (11, 10, 6, 3) should lead to consistent system response predictions regardless of the true values of the model parameters k. Similar to the mathematical example in Sec. 5.1, verification of this multilevel test allocation result is shown in Fig. 20, which indicates consistent system response predictions for three different assumed true values of the model parameters.

Summary

Test resource allocation aims to optimize the number of each type of test before any actual test is conducted. This paper has proposed a robust test resource allocation methodology, where robustness means that the system response prediction is insensitive to the variability in the test outcomes, so that consistent system response predictions can be achieved under different test outcomes.

The main challenge in the proposed approach is quantifying the contribution of test outcome uncertainty to the uncertainty in the system response prediction. Since test resource allocation is needed before any actual test, the test outcome uncertainty is simulated by the uncertainty in the synthetic data. This paper analyzes the uncertainty sources in the synthetic data for different test conditions and concludes that consistent system response predictions will be achieved if the contribution of the epistemic uncertainty in the model parameters to the synthetic data is maximized. This paper uses Sobol' indices from global sensitivity analysis to assess this contribution, so that the desired consistent system response predictions can be guaranteed regardless of the true values of the parameters in the actual tests ($\theta$ for a single specimen and $P_\theta$ for multiple specimens).
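
As a reminder of how such a first-order Sobol' index can be estimated, the sketch below uses a standard pick-freeze Monte Carlo estimator in the spirit of Refs. [21] and [22] for the index of the model parameters $\theta$ with respect to a scalar prediction quantity. Here prediction_quantity, sample_theta, and sample_other are hypothetical stand-ins for the map from a sampled $\theta$ (together with all remaining uncertainty sources) to E(Y) or V(Y) and for the corresponding samplers.

```python
import numpy as np

def first_order_sobol(prediction_quantity, sample_theta, sample_other, n=10000, rng=None):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol' index of theta
    for the scalar output prediction_quantity(theta, other)."""
    rng = np.random.default_rng() if rng is None else rng
    theta = sample_theta(n, rng)        # sample of theta, shared by both evaluations
    other_a = sample_other(n, rng)      # first sample of the remaining uncertainty sources
    other_b = sample_other(n, rng)      # second, independent sample of the remaining sources
    y_a = np.array([prediction_quantity(t, o) for t, o in zip(theta, other_a)])
    y_c = np.array([prediction_quantity(t, o) for t, o in zip(theta, other_b)])
    # S_theta = Var_theta(E[Y | theta]) / Var(Y), estimated via Cov(y_a, y_c) / Var(y_a)
    return (np.mean(y_a * y_c) - np.mean(y_a) * np.mean(y_c)) / np.var(y_a)
```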

Two optimization formulations are considered in this paper: (1) subject to the budget constraint, optimize the number of each type of test to achieve the most robust design; or (2) subject to the robustness requirement, minimize the budget by optimizing the number of each type of test. In addition, the proposed approach can be applied in multiple situations: (1) only model calibration tests are performed or (2) both model calibration and model validation tests are performed. The method can also be applied to tests involving single or multiple specimens. The proposed method results in a discrete stochastic optimization problem, and a simulated annealing algorithm is used to solve it.

This paper assumes that the test inputs lie within a range of values and represents the uncertainty regarding the test inputs through uniform distributions. Note that this paper focuses only on choosing the number of experiments after the available physical tests have been identified. To answer the question of how to choose the physical tests, several factors should be considered, in particular the relevance and sensitivity of the experiments to the calibration quantity of interest. The assessment of relevance and sensitivity addressed in Ref. [1] may be useful in identifying useful physical test configurations. This paper only addresses the variability of the test data and focuses on optimizing the number of each type of test so that consistent predictions can be obtained under different test data outcomes.

This paper assumes that the quantity of interest to be predicted is a scalar, so its variance can be used directly as the uncertainty indicator and the variance-based Sobol' index applies. If the quantity of interest is a vector, another indicator instead of variance may be needed, along with a corresponding sensitivity index. Thus, further work is needed to extend the proposed method to vector and field outputs.

Another direction for future work concerns test design. The proposed method is intended for the budget planning stage, when details of the test design are usually not known or considered; thus, this paper focuses only on optimizing the number of each type of test. The extension of the proposed approach to include test design, i.e., deciding the specific test conditions, can be studied in future work so that the resulting system response prediction uncertainty can be further reduced. This can be addressed in two ways: (1) by simultaneously optimizing the number of tests and the test inputs or (2) by adaptively deciding the number of tests and their input conditions based on the observation data as the test campaign progresses.

Acknowledgment

The authors appreciate valuable discussions with Joshua Mullins from Sandia National Laboratories.

Funding Data

  • Sandia National Laboratories (Contract No. BG-7732).

References

1. Kennedy, M. C., and O'Hagan, A., 2001, "Bayesian Calibration of Computer Models," J. R. Stat. Soc., 63(3), pp. 425–464.
2. Conti, S., and O'Hagan, A., 2010, "Bayesian Emulation of Complex Multi-Output and Dynamic Computer Models," J. Stat. Plann. Inference, 140(3), pp. 640–651.
3. Ainsworth, M., and Oden, J. T., 1997, "A Posteriori Error Estimation in Finite Element Analysis," Comput. Methods Appl. Mech. Eng., 142(1–2), pp. 1–88.
4. Sankararaman, S., McLemore, K., Mahadevan, S., Bradford, S. C., and Peterson, L. D., 2013, "Test Resource Allocation in Hierarchical Systems Using Bayesian Networks," AIAA J., 51(3), pp. 537–550.
5. Li, C., and Mahadevan, S., 2014, "Uncertainty Quantification and Output Prediction in Multi-Level Problems," AIAA Paper No. 2014-0124.
6. Li, C., and Mahadevan, S., 2016, "Role of Calibration, Validation, and Relevance in Multi-Level Uncertainty Integration," Reliab. Eng. Syst. Saf., 148, pp. 32–43.
7. Urbina, A., 2009, "Uncertainty Quantification and Decision Making in Hierarchical Development of Computational Models," Ph.D. dissertation, Vanderbilt University, Nashville, TN.
8. Vanlier, J., Tiemann, C. A., Hilbers, P. A. J., and van Riel, N. A. W., 2012, "A Bayesian Approach to Targeted Experiment Design," Bioinformatics, 28(8), pp. 1136–1142.
9. Coles, D., and Prange, M., 2012, "Toward Efficient Computation of the Expected Relative Entropy for Nonlinear Experimental Design," Inverse Probl., 28(5), p. 55019.
10. Sebastiani, P., and Wynn, H. P., 2000, "Maximum Entropy Sampling and Optimal Bayesian Experimental Design," J. R. Stat. Soc., 62(1), pp. 145–157.
11. Terejanu, G., Upadhyay, R. R., and Miki, K., 2012, "Bayesian Experimental Design for the Active Nitridation of Graphite by Atomic Nitrogen," Exp. Therm. Fluid Sci., 36, pp. 178–193.
12. Mullins, J., Li, C., Mahadevan, S., and Urbina, A., 2014, "Optimal Selection of Calibration and Validation Test Samples Under Uncertainty," 32nd IMAC, A Conference and Exposition on Structural Dynamics, Orlando, FL, Feb. 3–6, pp. 391–401.
13. Saltelli, A., Ratto, M., Andres, T., Campolongo, F., Cariboni, J., Gatelli, D., Saisana, M., and Tarantola, S., 2008, Global Sensitivity Analysis: The Primer, Wiley, Chichester, UK.
14. Hu, Z., and Mahadevan, S., 2016, "Global Sensitivity Analysis-Enhanced Surrogate (GSAS) Modeling for Reliability Analysis," Struct. Multidiscip. Optim., 53(3), pp. 501–521.
15. Li, C., and Mahadevan, S., 2016, "Relative Contributions of Aleatory and Epistemic Uncertainty Sources in Time Series Prediction," Int. J. Fatigue, 82(Pt. 3), pp. 474–486.
16. Marrel, A., Iooss, B., Laurent, B., and Roustant, O., 2009, "Calculations of Sobol Indices for the Gaussian Process Metamodel," Reliab. Eng. Syst. Saf., 94(3), pp. 742–751.
17. Owen, A., 2013, "Better Estimation of Small Sobol' Sensitivity Indices," ACM Trans. Model. Comput. Simul., 23(2), p. 11.
18. Ginot, V., Gaba, S., Beaudouin, R., Aries, F., and Monod, H., 2006, "Combined Use of Local and ANOVA-Based Global Sensitivity Analyses for the Investigation of a Stochastic Dynamic Model: Application to the Case Study of an Individual-Based Model of a Fish Population," Ecol. Modell., 193(3–4), pp. 479–491.
19. Zhang, X., and Pandey, M. D., 2014, "An Effective Approximation for Variance-Based Global Sensitivity Analysis," Reliab. Eng. Syst. Saf., 121, pp. 164–174.
20. Chen, W., Jin, R., and Sudjianto, A., 2005, "Analytical Variance-Based Global Sensitivity Analysis in Simulation-Based Design Under Uncertainty," ASME J. Mech. Des., 127(5), pp. 875–886.
21. Sobol', I. M., 2001, "Global Sensitivity Indices for Nonlinear Mathematical Models and Their Monte Carlo Estimates," Math. Comput. Simul., 55(1–3), pp. 271–280.
22. Homma, T., and Saltelli, A., 1996, "Importance Measures in Global Sensitivity Analysis of Nonlinear Models," Reliab. Eng. Syst. Saf., 52(1), pp. 1–17.
23. Sobol', I. M., and Myshetskaya, E. E., 2008, "Monte Carlo Estimators for Small Sensitivity Indices," Monte Carlo Methods Appl., 13(5–6), pp. 455–465.
24. Saltelli, A., Tarantola, S., and Chan, K., 1999, "A Quantitative Model-Independent Method for Global Sensitivity Analysis of Model Output," Technometrics, 41(1), pp. 39–56.
25. Tarantola, S., and Koda, M., 2010, "Improving Random Balance Designs for the Estimation of First Order Sensitivity Indices," Procedia Soc. Behav. Sci., 2(6), pp. 7753–7754.
26. Sankararaman, S., and Mahadevan, S., 2013, "Separating the Contributions of Variability and Parameter Uncertainty in Probability Distributions," Reliab. Eng. Syst. Saf., 112, pp. 187–199.
27. Li, C., and Mahadevan, S., 2015, "Global Sensitivity Analysis for System Response Prediction Using Auxiliary Variable Method," AIAA Paper No. 2015-0661.
28. Schrijver, A., 1998, Theory of Linear and Integer Programming, Wiley, Chichester, UK.
29. Alrefaei, M. H., and Andradóttir, S., 1999, "A Simulated Annealing Algorithm With Constant Temperature for Discrete Stochastic Optimization," Manage. Sci., 45(5), pp. 748–764.
30. Red-Horse, J. R., and Paez, T. L., 2008, "Sandia National Laboratories Validation Workshop: Structural Dynamics Application," Comput. Methods Appl. Mech. Eng., 197(29–32), pp. 2578–2584.
31. Chopra, A. K., 2011, Dynamics of Structures: Theory and Applications to Earthquake Engineering, Prentice Hall, Upper Saddle River, NJ.
32. Sankararaman, S., and Mahadevan, S., 2015, "Integration of Model Verification, Validation, and Calibration for Uncertainty Quantification in Engineering Systems," Reliab. Eng. Syst. Saf., 138, pp. 194–209.
33. Sankararaman, S., and Mahadevan, S., 2013, "Assessing the Reliability of Computational Models Under Uncertainty," AIAA Paper No. 2013-1873.
34. Mullins, J., Li, C., Sankararaman, S., Mahadevan, S., and Urbina, A., 2013, "Uncertainty Quantification Using Multi-Level Calibration and Validation Data," AIAA Paper No. 2013-1872.