Integrated Computational Materials Engineering (ICME) calls for the integration of computational tools into the materials and parts development cycle, while the Materials Genome Initiative (MGI) calls for the acceleration of the materials development cycle through the combination of experiments, simulation, and data. As they stand, however, neither ICME nor MGI prescribes how to achieve the necessary tool integration or how to efficiently exploit the computational tools, in combination with experiments, to accelerate the development of new materials and materials systems. This paper addresses the first issue by putting forward a framework for the fusion of information that exploits correlations among sources/models and between the sources and “ground truth.” The second issue is addressed through a multi-information source optimization framework that identifies, given current knowledge, the next best information source to query and where in the input space to query it via a novel value-gradient policy. The querying decision takes into account the ability to learn correlations between information sources, the resource cost of querying an information source, and what a query is expected to provide in terms of improvement over the current state. The framework is demonstrated on the optimization of a dual-phase steel to maximize its strength-normalized strain hardening rate. The ground truth is represented by a microstructure-based finite element model, while three low-fidelity information sources—i.e., reduced-order models based on the isostrain, isostress, and isowork homogenization assumptions—are used to efficiently and optimally query the materials design space.

## Introduction

### Motivation: Toward Accelerated Materials Design.

Over the past two decades, there has been considerable interest in the development of frameworks to accelerate the materials development cycle. In the late 1990s, Greg Olson popularized the concept of materials-as-hierarchical-systems [1,2], amenable to improvement through the exploitation of explicit processing–structure–properties–performance relationships. Olson used this framework to develop (inverse) linkages connecting performance/property requirements to desired (multiscale) structural features and the latter to the corresponding processing steps. A decade later, the Integrated Computational Materials Engineering (ICME) [3,4] framework prescribed the combination of theory, experiments, and computational tools to streamline and accelerate the materials and manufacturing development cycle. Similarly, the Materials Genome Initiative [5] calls for the acceleration of the materials development cycle through the combination of experiments, simulations, and data. We would like to point out that ICME and the Materials Genome Initiative (MGI) are aspirational in that the former does not prescribe how to carry out the integration of multiple tools and the latter does not put forward a feasible strategy to accelerate the materials development cycle. On the other hand, ICME and MGI have motivated considerable development in the sophistication of the tool sets used to carry out the computer-assisted exploration of the materials design space [6–9].

### Challenges and Opportunities.

On the integration front, it has long been recognized that in order to establish quantitative processing–structure–properties–performance relationships, it is necessary to integrate multiple (computational) tools across multiple scales [9]. Realizing such integration is a necessary (albeit not sufficient) condition for achieving any measure of success when attempting to carry out computationally assisted materials development exercises. Explicit integration of multiple tools is technically challenging, particularly because of the considerable expense of computational models, the complexity of the input/output interfaces of such models, and the asynchronous nature of the development of such tools. We would like to note, however, that some efforts have recently emerged that attempt to explicitly integrate models within a single framework for materials design [10–12]. Approaches that instead use statistical techniques and machine learning tools to better sample the design space have proven to be effective [13].

Another strategy for the accelerated discovery of materials (most closely associated with the MGI) has been the use of high-throughput experimental [14–16] and computational [17] approaches that, while powerful, have important limitations, as they tend to be suboptimal in resource allocation: experimental decisions do not account for the cost and time of experimentation. Resource limitation cannot be overlooked, as it is often the case that once a bottleneck in high-throughput workflows has been eliminated (e.g., synthesis of ever more expansive materials libraries), another one suddenly becomes apparent (e.g., the need for high-resolution characterization of materials libraries).

Recently, notions of optimal experimental design, within the overall framework of Bayesian optimization, have been put forward as a strategy to overcome the limitations of traditional (costly) exploration of the design space. For example, Balachandran et al. [18] have put forward a framework that balances the need to exploit current knowledge of the design domain with the need to explore it by using a metric that evaluates the value of the next experiment (or simulation) to carry out. Bayesian optimization-based approaches rely on the construction of a response surface of the design space and are typically limited to the use of a single model to carry out the queries. This is an important limitation since, at the beginning of a materials discovery problem, there is often not sufficient information to elucidate the feature set (i.e., model) that is most related to the specific performance metric to optimize.

Talapatra et al. [19] recently proposed a framework that is capable of adaptively selecting competing models connecting materials features to performance metrics through Bayesian model averaging, followed by optimal experimental design. Ling et al. [20] propose a value of information framework that is capable of managing information from multiple sources with a particular emphasis on imprecise probabilities. Also, there has been recent work on nonhierarchical fusion for design that has led to promising avenues for information source integration [21–23], on which we generally build here.

### Description of This Work.

It is clear from the brief discussion above that, while considerable progress has been made recently in the development of novel frameworks for accelerating materials development efforts, several important challenges remain to be solved. Model-based ICME-style frameworks tend to focus on integrating tools at multiple levels under the assumption that there is a single model/tool relevant at a specific scale of the problem. This precludes the use of multiple models that may be more/less effective in different regions of the performance space. Data-centric approaches, on the other hand, tend to focus (with some exceptions) on the brute-force exploration of the materials design space, without accounting for the considerable cost associated with such exploration.

In this work, we present a framework that addresses the two outstanding issues listed previously in the context of the optimal microstructural design of ductile multiphase materials, such as advanced high strength steels. Specifically, we carry out the fusion of multiple information sources that connect microstructural descriptors to mechanical performance metrics. This fusion is done in a way that accounts for and exploits the correlations between each individual information source—a reduced-order model constructed under a different simplifying assumption regarding the partitioning of (total) strain, stress, or deformation work among the phases constituting the microstructure—and between each information source and the ground truth—represented in this case by a full-field microstructure-based finite element model. We note here that while this finite element model is computational, and thus could be considered a higher fidelity model within a multifidelity framework, our intention is to create a framework for predicting ground truth. Specifically, we are not interested in matching the highest fidelity model, but in predicting material properties at ground truth. There is usually no common resource trade-off in this scenario, in contrast to traditional computational multifidelity frameworks that trade computational expense against accuracy. Thus, the finite element model is used here as a proxy for a ground truth experiment and is treated as such in the demonstrations provided.

In our framework, we value the impact a new query to an information source has on the fused model. In particular, we perform the search over the input domain and the information source options concurrently to determine which next query will lead to the most improvement in our objective function. This concurrent approach, to our knowledge, has not been addressed in the literature. In addition, our exploitation of correlations between the discrepancies of the information sources in the fusion process differs significantly from previous work and enables the identification of ground truth optimal points that are not shared by any individual information source in the analysis.

The remainder of the paper is organized as follows: First, we motivate the work in the context of microstructure-sensitive modeling and design of dual-phase ductile materials, e.g., advanced high strength steels. These advanced structural alloys are among the most technologically sought after materials for lightweight structural applications, such as automotive manufacturing. Next, we describe a microstructure-based finite element model—considered in this work as the “ground truth”—for predicting the stress–strain response of ductile dual-phase materials, as well as the reduced-order models that predict the stress–strain response of multiphase microstructures under different assumptions regarding the partitioning of stress, strain, or work of deformation. We then present and demonstrate the proposed framework for correlation-exploiting information fusion through reification. This is followed by the description and demonstration of the proposed multi-information source optimization framework. We close the paper by discussing further directions for the current research program.

## Mechanical Behavior of Dual-Phase Microstructures

Advanced high-strength steels, a class of the most technologically sought after structural materials, derive their exceptional properties from complex, heterogeneous microstructures. Of the various advanced high strength steels, dual-phase steels have experienced the fastest growth in the automotive industry [24]. These dual-phase advanced high strength steels primarily consist of hard martensite islands dispersed in a soft ferrite matrix [25]. Both phases undergo nonlinear elastic-plastic deformation with strikingly different strength levels and strain hardenability [26,27]. The overall mechanical properties of dual-phase steels are thus determined partly by the mechanical properties of the constituent phases and partly by microstructural features, such as the volume fraction of the phases. The properties of the phases and the microstructural features can, in principle, be tuned and optimized to achieve a particular performance metric.

The microstructure–property correlation of ductile dual-phase materials can be explored by high-fidelity microstructure-based finite element models. However, these come at considerable computational cost, which precludes their use in searching the microstructure space for regions of optimal performance. The response of composite microstructures consisting of more than one phase can instead be approximated through low-fidelity models based on different assumptions underlying the homogenized response of the multiphase microstructure. As described below, here we carry out the fusion of multiple reduced-order models based on the isostrain [28], isostress [29], or isowork [30] assumptions for the partitioning of the macroscopic strain, stress, or work, respectively, among the microstructural constituents.

In this work, we will exploit statistical correlations between the different information sources to arrive at a fused model with significantly better fidelity with respect to the ground truth (microstructure-based finite element model) than any individual source (reduced-order model). The fused model will then be integrated with a Bayesian sequential design optimization framework to arrive at optimal microstructures that maximize the strength normalized strain-hardening rate by identifying and exploiting optimal sequential queries of different information sources. The quantity of interest, strength normalized strain-hardening rate, is an important manufacturing-related attribute as it dictates the ductility and formability of the material. The details of the microstructure-based finite element modeling (ground truth) of dual-phase microstructures and the three lower fidelity reduced-order models (information sources) are described below.

### Microstructure-Based Finite Element Model.

Microstructure-based finite element modeling is carried out to calculate the overall mechanical response of the ductile dual-phase microstructures. To this end, we generate three-dimensional (3D) representative volume elements (RVEs) of the dual-phase microstructures following the procedure detailed in Ref. [31]. The RVE is a composite dual-phase microstructure with two discretely modeled phases: a soft phase representative of the ferrite phase and a hard phase representative of the martensite phase, as present in dual-phase advanced high strength steels. A typical 3D RVE of the dual-phase microstructure is shown in the inset of Fig. 1. The RVE consists of 27,000 C3D8 brick elements of the abaqus/standard element library [32] and has dimensions of $100\,\mu m \times 100\,\mu m \times 100\,\mu m$. The volume fraction of the phases in the RVE is always an integral multiple of the volume of one element, which is $3.7\times10^{-5}\,\mu m^3$. The RVE is subjected to fully periodic boundary conditions on all six faces and monotonically increasing uniaxial tensile deformation. This allows for the calculation of the overall uniaxial tensile stress–strain response of the composite microstructure.

In the calculations, both phases are assumed to follow an isotropic elastic-plastic stress–strain response. The Young's modulus of both phases is taken to be E = 200 GPa and the Poisson's ratio is taken to be ν = 0.3. The plastic response of both phases is modeled using the Ludwik power-law constitutive relation,
$\tau = \tau_o + K\left(\varepsilon_{pl}\right)^n$
(1)

where $\tau$ is the flow stress, $\varepsilon_{pl}$ is the equivalent plastic strain, $\tau_o$ is the yield strength, K is the strengthening coefficient, and n is the strain hardening exponent. The values of $\tau_o$, K, and n for the constituent phases are given in Table 1. The parameters are chosen to represent a lower initial yield strength and a higher strain hardenability for the ferrite (soft) phase than for the martensite (hard) phase [27,31].
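As a concrete illustration, the Ludwik relation of Eq. (1) is straightforward to evaluate numerically. The sketch below uses hypothetical parameter values chosen only to mimic a soft, hardenable ferrite-like phase and a hard martensite-like phase; they are not the Table 1 values.

```python
import numpy as np

def ludwik_flow_stress(eps_pl, tau_o, K, n):
    """Ludwik power-law flow stress, Eq. (1): tau = tau_o + K * eps_pl**n."""
    return tau_o + K * np.asarray(eps_pl) ** n

# Illustrative parameters only (NOT the Table 1 values): a soft, highly
# hardenable ferrite-like phase and a hard martensite-like phase.
soft = dict(tau_o=300.0, K=2200.0, n=0.50)   # MPa, MPa, dimensionless
hard = dict(tau_o=1000.0, K=450.0, n=0.06)

eps = np.linspace(0.0, 0.05, 6)
tau_soft = ludwik_flow_stress(eps, **soft)   # soft phase hardens rapidly
tau_hard = ludwik_flow_stress(eps, **hard)   # hard phase starts much stronger
```

With parameters in this spirit, the soft phase starts weaker but hardens faster, which is the qualitative contrast the paper describes between ferrite and martensite.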

### Reduced-Order Models.

We use three low-fidelity reduced-order models as three sources of information. These three reduced-order models are:

(i) The Voigt/Taylor isostrain model, where the basic assumption is that the strain field is uniform among the constituent phases [33]. The effective stress is expressed as the average of the local stresses in the phases weighted by their respective volume fractions. That is, for this model we have
$\varepsilon_{pl}^T = \varepsilon_{pl}^h = \varepsilon_{pl}^s, \quad \tau_T = f_{hard}\,\tau_h + (1 - f_{hard})\,\tau_s$
(2)
(ii) The Reuss/Sachs isostress model, where the basic assumption is that the stresses among the phases are homogeneous [33]. The effective strain is calculated as the average of the strains in each phase weighted by their respective volume fractions. Thus, for this model we have
$\tau_T = \tau_h = \tau_s, \quad \varepsilon_{pl}^T = f_{hard}\,\varepsilon_{pl}^h + (1 - f_{hard})\,\varepsilon_{pl}^s$
(3)
(iii) The isowork model, which is based on the assumption that the work of deformation is equally distributed among the constituent phases of the dual-phase microstructure at any strain level. That is,
$\tau_h\,\varepsilon_{pl}^h = \tau_s\,\varepsilon_{pl}^s$
(4)

In Eqs. (2)–(4), $\varepsilon_{pl}^T$ is the overall plastic strain, $\varepsilon_{pl}^h$ is the plastic strain in the hard (martensite) phase, $\varepsilon_{pl}^s$ is the plastic strain in the soft (ferrite) phase, $\tau_T$ is the overall stress, $\tau_h$ is the stress in the hard (martensite) phase, $\tau_s$ is the stress in the soft (ferrite) phase, and $f_{hard}$ is the volume fraction of the hard phase in the microstructure. The stress–strain relations, $\tau = f(\varepsilon_{pl})$, of both phases are assumed to follow Eq. (1), with the values of the parameters given in Table 1.
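The three homogenization rules can be sketched in a few lines of code. The snippet below is a minimal illustration, not the implementation used in the paper: the Ludwik parameters are hypothetical placeholders, and the isowork condition is enforced in the product form shown in Eq. (4) via a simple bisection.

```python
import numpy as np

# Illustrative Ludwik parameters (tau_o, K, n); NOT the paper's Table 1 values.
SOFT = (300.0, 2200.0, 0.50)   # ferrite-like phase
HARD = (1000.0, 450.0, 0.06)   # martensite-like phase

def flow(eps, p):
    """Ludwik flow stress, Eq. (1)."""
    tau_o, K, n = p
    return tau_o + K * eps ** n

def isostrain(eps, f_hard):
    """Voigt/Taylor, Eq. (2): equal strain, volume-averaged stress."""
    return f_hard * flow(eps, HARD) + (1.0 - f_hard) * flow(eps, SOFT)

def isostress(tau, f_hard):
    """Reuss/Sachs, Eq. (3): equal stress, volume-averaged plastic strain.
    Parameterized by stress; each phase strain inverts the Ludwik law."""
    def inv(tau, p):
        tau_o, K, n = p
        return ((tau - tau_o) / K) ** (1.0 / n) if tau > tau_o else 0.0
    return f_hard * inv(tau, HARD) + (1.0 - f_hard) * inv(tau, SOFT)

def isowork(eps_s, f_hard):
    """Isowork, Eq. (4): solve flow(e_h)*e_h = flow(eps_s)*eps_s for the
    hard-phase strain e_h by bisection, then volume-average strain and stress."""
    target = flow(eps_s, SOFT) * eps_s
    lo, hi = 0.0, eps_s + 1.0          # work is monotone in strain
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if flow(mid, HARD) * mid < target:
            lo = mid
        else:
            hi = mid
    e_h = 0.5 * (lo + hi)
    eps_T = f_hard * e_h + (1.0 - f_hard) * eps_s
    tau_T = f_hard * flow(e_h, HARD) + (1.0 - f_hard) * flow(eps_s, SOFT)
    return eps_T, tau_T
```

Because the hard phase requires more stress to perform the same work, the isowork solution assigns it a smaller strain than the soft phase, placing the composite response between the isostrain and isostress bounds.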

### Demonstration of Modeling Capabilities.

The predicted stress–strain response of dual-phase microstructures with varying volume fraction of the hard phase, $f_{hard}$, using the high-fidelity microstructure-based finite element model is shown in Fig. 1. As shown in the figure, the flow strength of the dual-phase material increases with increasing volume fraction of the hard phase. However, the strain-hardening rate, i.e., the slope of the stress–strain curve, varies nonmonotonically with the volume fraction of the hard phase.

The variation of the strength-normalized strain-hardening rate, $(1/\tau)(d\tau/d\varepsilon_{pl})$, with the volume fraction of the hard phase, $f_{hard}$, estimated at a strain level $\varepsilon_{pl} = 1.5\%$ using the microstructure-based finite element calculations, is shown in Fig. 2. As shown in the figure, the value of $(1/\tau)(d\tau/d\varepsilon_{pl})$ at $\varepsilon_{pl} = 1.5\%$ first increases with increasing volume fraction of the hard phase and then starts to decrease. In general, a higher value of $(1/\tau)(d\tau/d\varepsilon_{pl})$ denotes higher formability of the material. Note that, in Fig. 2, the variation of $(1/\tau)(d\tau/d\varepsilon_{pl})$ with $f_{hard}$ exhibits local perturbations. These perturbations arise because there are several possible realizations of the RVE of a dual-phase microstructure with a fixed volume fraction of the hard phase, and different realizations result in slightly different values of $(1/\tau)(d\tau/d\varepsilon_{pl})$ for a fixed $f_{hard}$ value. For a few selected volume fractions of the hard phase, seven realizations of the dual-phase microstructure were generated and their mechanical responses calculated. The standard error on the values of $(1/\tau)(d\tau/d\varepsilon_{pl})$ at $\varepsilon_{pl} = 1.5\%$ due to these different realizations is shown in the figure as error bars.

The predictions of $(1/\tau)(d\tau/d\varepsilon_{pl})$ at a strain level $\varepsilon_{pl} = 1.5\%$, as a function of the volume fraction of the hard phase, $f_{hard}$, using the three low-fidelity reduced-order models are also shown in Fig. 2. Compared to the finite element model, the isostress model gives a reasonable prediction of $(1/\tau)(d\tau/d\varepsilon_{pl})$ at $\varepsilon_{pl} = 1.5\%$ for low volume fractions of the hard phase but significantly overpredicts this quantity for large volume fractions. In contrast, the isostrain and isowork models give reasonable predictions at high volume fractions of the hard phase but underpredict $(1/\tau)(d\tau/d\varepsilon_{pl})$ at $\varepsilon_{pl} = 1.5\%$ at lower volume fractions. It is also important to note here that the maximum values of $(1/\tau)(d\tau/d\varepsilon_{pl})$ at $\varepsilon_{pl} = 1.5\%$ according to each information source are significantly different from the ground truth maximum.

## Correlation Exploiting Multi-Information Source Optimization

In most materials design tasks, multiple information sources are at the disposal of the designer. For example, the forward connections between microstructures and properties/performance can in principle be developed through experiments as well as (computational) models at different levels of fidelity and resolution. Conventional approaches to ICME, on the other hand, often make the implicit and unrealistic assumption that there is only one source available to query the design space. In this work, our framework uses three relatively simple models (under the isostrain, isostress, and isowork approximations) as representative of multiple information sources, while considering a microstructure-sensitive RVE-based simulation as the ground truth.

While information fusion on its own represents a considerable improvement upon the vast majority of ICME-based approaches to materials design currently under development, we posit that an even better approach would necessarily have to account for resource constraints on the exploration of the materials design space. Specifically, every source used to query the materials design space carries a certain (time, monetary, opportunity) cost, and thus there are hard limits on the number of queries and the sources used to carry them out. Unfortunately, such constraints rarely take a concrete form that can be dealt with using formal constrained optimization approaches. This is due to the often dynamic nature of the materials design and procurement process. As a materials design cycle progresses, the current state of the process may dictate whether more resources will be allocated to the process. Thus, it is advantageous to tackle such problems in a myopic fashion.

For a single information source with sequential querying, there are two traditional techniques for choosing what to query next in this myopic context [34]. These are efficient global optimization [35] and its extensions, such as sequential Kriging optimization [36] and value-based global optimization [37], and the knowledge gradient (KG) [38–40]. Efficient global optimization uses a Gaussian process [41] representation of queried information but assumes no noise [42,43]. Sequential Kriging optimization also uses Gaussian processes, but includes a tunable weighting factor to lean toward decisions with higher uncertainty [34]. KG can handle noise and makes its querying selection on the basis of the expected value of the best design after querying; here, KG does not require that design to have actually been evaluated by an information source.

Recent developments in Refs. [21] and [22] extend these sequential optimization approaches to the case of multiple information sources. The approach we propose here builds on these approaches by including and exploiting learned correlations in multi-information source fusion and by defining and implementing a two-step lookahead querying strategy referred to as the value-gradient policy. We describe our formal problem statement, the multi-information source fusion approach, and the value-gradient utility used to guide the querying policy in Secs. 3.1–3.3.

### Problem Formulation.

A mathematical statement of the problem is formulated as finding the best design, x*, such that
$x^* = \underset{x \in \chi}{\operatorname{arg\,max}}\, f(x)$
(5)

where f is the ground truth objective function and x is a set of design variables in the vector space $\chi$. This ground truth objective function is typically very expensive to query. We note that in this formulation there is a tacit dynamic constraint on resources. While ground truth is impractical to query often in an optimization process, other forms of information are usually available and can be used to approximate the ground truth. These information sources differ in terms of fidelity with respect to the ground truth, as well as in the resource expenditures required per query. They are also fundamentally related through the fact that they seek to estimate the same quantity of interest; thus, there must exist statistical correlations between these information sources that can potentially be exploited if learned. In this context, the core issue in myopically addressing Eq. (5) is the decision of which information source to query and where in its input domain to execute that query. This decision must balance the cost of the query and what that query is expected to tell us about the solution to Eq. (5).
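Conceptually, this myopic decision is a search over (source, location) pairs. The loop below is a schematic sketch only: the source names, costs, and the toy utility are hypothetical placeholders, and the utility actually used in this work (the value-gradient policy) is defined later in this section.

```python
import numpy as np

# Hypothetical per-query resource costs (illustrative values only).
COSTS = {"isostrain": 1.0, "isostress": 1.0, "isowork": 1.0, "ground_truth": 50.0}

def select_next_query(candidates, sources, utility):
    """Myopic decision: scan source/location pairs concurrently and pick the
    query whose expected utility best justifies its resource cost."""
    best, best_score = None, -np.inf
    for s in sources:
        for x in candidates:
            score = utility(s, x) / COSTS[s]   # expected value per unit cost
            if score > best_score:
                best, best_score = (s, x), score
    return best

# Toy utility peaked near f_hard = 0.4 (purely illustrative).
toy_utility = lambda s, x: np.exp(-10.0 * (x - 0.4) ** 2)
source, x_next = select_next_query(np.linspace(0.0, 1.0, 21),
                                   list(COSTS), toy_utility)
```

Because the toy utility is the same for every source, the cost normalization steers the selection away from the expensive ground truth query, which is the qualitative behavior the framework is after.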

To assign a value to each potential query option over the information source space and the domains of the respective information sources, we create intermediate Gaussian process surrogates for each information source, learned from previous queries. We assume that we have S information sources, $\bar{f}_i(x)$, where $i \in \{1, 2,\ldots, S\}$, available to estimate the ground truth, f(x), at design point x. We further assume that we have $N_i$ previous query results available for information source i. These results are denoted by $\{X_{N_i}, y_{N_i}\}$, where $X_{N_i} = (x_{1,i},\ldots,x_{N_i,i})$ represents the $N_i$ input samples to information source i and $y_{N_i}$ represents the corresponding outputs from information source i. The posterior Gaussian process distribution of each $\bar{f}_i$, denoted as $f_{GP,i}(x)$, at any point x in the input space is then given as
$f_{GP,i}(x)\,|\,X_{N_i}, y_{N_i} \sim \mathcal{N}\left(\mu_i(x), \sigma_{GP,i}^2(x)\right)$
(6)
where
$\mu_i(x) = K_i(X_{N_i}, x)^T\left[K_i(X_{N_i}, X_{N_i}) + \sigma_{n,i}^2 I\right]^{-1} y_{N_i}$
(7)
and
$\sigma_{GP,i}^2(x) = k_i(x, x) - K_i(X_{N_i}, x)^T\left[K_i(X_{N_i}, X_{N_i}) + \sigma_{n,i}^2 I\right]^{-1} K_i(X_{N_i}, x)$
(8)
Here, $k_i$ is a real-valued kernel function associated with information source i over the input space, $K_i(X_{N_i}, X_{N_i})$ is the $N_i \times N_i$ matrix whose m, n entry is $k_i(x_{m,i}, x_{n,i})$, $K_i(X_{N_i}, x)$ is the $N_i \times 1$ vector whose mth entry is $k_i(x_{m,i}, x)$ for information source i, and the term $\sigma_{n,i}^2$ can be used to model observation error for information sources or to guard against numerical ill-conditioning. For the kernel function, we use the commonly used squared exponential kernel given as
$k_i(x, x') = \sigma_s^2 \exp\left(-\sum_{h=1}^{d} \frac{(x_h - x'_h)^2}{2 l_h^2}\right)$
(9)

where d is the dimension of the input space, $\sigma_s^2$ is the signal variance, and $l_h$, $h = 1, 2,\ldots,d$, is the characteristic length-scale that indicates the correlation between points within dimension h. The parameters of the Gaussian process ($\sigma_s^2$, $l_h$, and $\sigma_n^2$) associated with each information source can be estimated via maximum likelihood or Bayesian techniques [41].
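For reference, Eqs. (7)–(9) can be implemented in a few lines. The following is a minimal one-dimensional sketch with arbitrarily chosen hyperparameters and toy data, not the trained surrogates used in the paper.

```python
import numpy as np

def se_kernel(X1, X2, sig_s=1.0, ell=0.2):
    """Squared-exponential kernel of Eq. (9) for 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return sig_s ** 2 * np.exp(-d2 / (2.0 * ell ** 2))

def gp_posterior(X, y, Xstar, sig_s=1.0, ell=0.2, sig_n=1e-3):
    """Posterior mean and variance, Eqs. (7) and (8), at points Xstar."""
    K = se_kernel(X, X, sig_s, ell) + sig_n ** 2 * np.eye(len(X))
    Ks = se_kernel(X, Xstar, sig_s, ell)        # columns are K_i(X_N, x)
    alpha = np.linalg.solve(K, y)
    mu = Ks.T @ alpha                           # Eq. (7)
    v = np.linalg.solve(K, Ks)
    var = sig_s ** 2 - np.sum(Ks * v, axis=0)   # Eq. (8), with k(x, x) = sig_s^2
    return mu, var

# Toy data: a few noisy observations of an unknown 1-D response.
X = np.array([0.1, 0.35, 0.6, 0.9])
y = np.sin(3.0 * X)
mu, var = gp_posterior(X, y, np.array([0.1, 0.5]))
```

As expected, the posterior variance nearly vanishes at a previously queried point and grows between observations; this is the uncertainty $\sigma_{GP,i}^2(x)$ that the fusion step later combines with the discrepancy term.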

Beyond these Gaussian process surrogates, we further quantify the discrepancy of each information source with respect to the ground truth. These discrepancies can be estimated from, for example, expert opinion or available ground truth data, and can vary over the input space. We add the estimated uncertainty due to information source discrepancy, $\delta_{f,i}(x)$, to the uncertainty associated with the Gaussian process of information source i, denoted by $\delta_{GP,i}(x)$. Specifically, each of the S available information sources for estimating the ground truth objective can be written as
$f_i(x) = \mu_i(x) + \delta_i(x)$
(10)
where
$\delta_i(x) = \delta_{GP,i}(x) + \delta_{f,i}(x)$
(11)

Figure 3 shows a depiction of total uncertainty for an information source, which includes both the uncertainty associated with the Gaussian process and the uncertainty associated with the fidelity of the information source.

### Correlation Exploiting Fusion.

Available information sources for estimating a ground truth quantity of interest are necessarily correlated by virtue of their shared estimation task; if they were not correlated, they would presumably be irrelevant to the task at hand. Working under the hypothesis that each information source brings to bear some useful information regarding the quantity of interest, we seek to systematically fuse the available information from each source. Unlike traditional multifidelity methods [44–52], the multi-information source fusion method employed here does not assume a hierarchy of information sources with the goal of efficiently approximating the highest fidelity source. The goal here is to best approximate a ground truth quantity of interest by leveraging all available information. This is achieved by learning the correlations between the discrepancies of the available information sources, which makes it possible to mitigate information source bias and avoid the overconfidence that arises from the reuse of dependent information.

There are many techniques in use for fusing information from multiple sources. Among these are approaches such as Bayesian model averaging [53–58], the use of adjustment factors [59–62], covariance intersection methods [63,64], and fusion under known correlation [65–67]. There are also techniques designed to value the improvement potential of a given information source through model refinement [68] and model refinement and selection [69]. In this work, we consider each information source as fixed. That is, we do not consider improving the predictive capabilities of any individual information source and instead focus on leveraging the multiple available information sources to construct an improved fused predictive capability.

As noted previously, we hypothesize that every information source contains useful information regarding the ground truth quantity of interest. Thus, as more information sources are incorporated into a fusion process, we expect the variance of quantity of interest estimates to decrease. This is not necessarily the case for techniques such as Bayesian model averaging and adjustment factors approaches. For the case of unknown correlations between information sources, recourse must be made to conservative methods, such as covariance intersection, which fuses information by assuming worst-case correlation. Thus, there is much to gain from estimating the correlation between information sources and incorporating these learned correlations in the fusion process.

Since our information sources are represented by intermediate Gaussian processes, their fusion follows that of normally distributed information. Under the case of known correlations between the discrepancies of information sources, the fused mean and variance are shown to be [67]
$E[\hat{f}(x)] = \frac{e^T \tilde{\Sigma}(x)^{-1} \mu(x)}{e^T \tilde{\Sigma}(x)^{-1} e}$
(12)

$\mathrm{Var}(\hat{f}(x)) = \frac{1}{e^T \tilde{\Sigma}(x)^{-1} e}$
(13)
where $e = [1,\ldots,1]^T$, $\mu(x) = [\mu_1(x),\ldots,\mu_S(x)]^T$ given S models, and $\tilde{\Sigma}(x)^{-1}$ is the inverse of the covariance matrix between the information sources. We stress here that the mean as estimated by Eq. (12) is not necessarily a convex combination of the information source estimates. For example, in the case of two information sources, Eq. (12) is
$E[\hat{f}(x)] = \frac{\left(\sigma_2^2 - \rho\sigma_1\sigma_2\right)\mu_1 + \left(\sigma_1^2 - \rho\sigma_1\sigma_2\right)\mu_2}{\sigma_1^2 + \sigma_2^2 - 2\rho\sigma_1\sigma_2}$
(14)

where the dependence on x has been omitted for notational clarity. If $\sigma_1 < \sigma_2$, then $\mu_1$ will receive a positive weight. If, in addition, $\rho > \sigma_1/\sigma_2$, then $\mu_2$ will receive a negative weight. Following Ref. [67], a high correlation makes it likely that the estimates will be biased on the same side of the quantity being estimated. Since the less precise estimate is expected to be further from the true quantity than the more precise estimate, the less precise estimate receives a negative weight. This acts to shrink the fused estimate toward the true quantity of interest and enables the mean of the fused estimate to lie outside the bounds of the individual information source means.
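A quick numerical sketch of Eq. (14) makes this negative-weight behavior concrete. The numbers below are arbitrary illustrative values: with $\sigma_1 = 0.5$, $\sigma_2 = 1.5$, and $\rho = 0.6 > \sigma_1/\sigma_2$, the fused mean falls below both source means.

```python
def fuse_two(mu1, mu2, s1, s2, rho):
    """Fused mean of Eq. (14) and the corresponding fused variance for two
    correlated sources. With rho > s1/s2 the less precise source receives a
    negative weight, so the fused mean can fall outside [mu1, mu2]."""
    w1 = s2 ** 2 - rho * s1 * s2
    w2 = s1 ** 2 - rho * s1 * s2
    den = s1 ** 2 + s2 ** 2 - 2.0 * rho * s1 * s2
    mean = (w1 * mu1 + w2 * mu2) / den
    var = (1.0 - rho ** 2) * s1 ** 2 * s2 ** 2 / den   # two-source form of Eq. (13)
    return mean, var

m, v = fuse_two(mu1=1.0, mu2=2.0, s1=0.5, s2=1.5, rho=0.6)
```

Here the fused mean is 0.875, below both $\mu_1 = 1.0$ and $\mu_2 = 2.0$, and the fused variance 0.225 is smaller than $\sigma_1^2 = 0.25$: the correlated, less precise source still tightens the estimate.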

The key to the proper use of fusion of normally distributed information is the estimation of the correlation coefficients over the domain. For this, we use the reification process defined in Refs. [70] and [71]. In this process, the correlation coefficient between the deviations of information sources i and j is estimated by reifying each of the two sources in turn, that is, treating it as a ground truth model. Assuming that information source i is reified, the correlation coefficients between information sources i and j, for j = 1,…, i − 1, i + 1,…, S, are given as
$\rho_{ij}(x) = \frac{\sigma_i^2(x)}{\sigma_i(x)\sigma_j(x)} = \frac{\sigma_i(x)}{\sqrt{\left(\mu_i(x) - \mu_j(x)\right)^2 + \sigma_i^2(x)}}$
(15)
where $\mu_i(x)$ and $\mu_j(x)$ are the mean values of the Gaussian processes of information sources i and j, respectively, at design point x, and $\sigma_i^2(x)$ and $\sigma_j^2(x)$ are the total variances at point x. Afterward, information source j is reified to estimate $\rho_{ji}(x)$. Then, the variance-weighted average of the two estimated correlation coefficients is used as the estimate of the correlation between the errors as
$\bar{\rho}_{ij}(x) = \frac{\sigma_j^2(x)}{\sigma_i^2(x) + \sigma_j^2(x)}\,\rho_{ij}(x) + \frac{\sigma_i^2(x)}{\sigma_i^2(x) + \sigma_j^2(x)}\,\rho_{ji}(x)$
(16)

These average correlations are then used to estimate the fused mean and variance in Eqs. (12) and (13).
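A minimal sketch of this fusion procedure at a single design point, combining Eqs. (12), (13), (15), and (16), might look as follows. It assumes the means and total standard deviations of each source at the point have already been estimated from the Gaussian process surrogates and discrepancy quantification; all numerical values in the usage example are hypothetical:

```python
import numpy as np

def reified_correlation(mu, sig, i, j):
    """rho_ij of Eq. (15) with information source i reified."""
    return sig[i] / np.sqrt((mu[i] - mu[j])**2 + sig[i]**2)

def fuse(mu, sig):
    """Fused mean and variance at one design point, Eqs. (12)-(16)."""
    mu, sig = np.asarray(mu, float), np.asarray(sig, float)
    S = len(mu)
    Sigma = np.diag(sig**2)
    for i in range(S):
        for j in range(i + 1, S):
            r_ij = reified_correlation(mu, sig, i, j)
            r_ji = reified_correlation(mu, sig, j, i)
            # variance-weighted average of the two estimates, Eq. (16)
            rbar = (sig[j]**2 * r_ij + sig[i]**2 * r_ji) / (sig[i]**2 + sig[j]**2)
            Sigma[i, j] = Sigma[j, i] = rbar * sig[i] * sig[j]
    e = np.ones(S)
    Sinv = np.linalg.inv(Sigma)
    var_fused = 1.0 / (e @ Sinv @ e)          # Eq. (13)
    mean_fused = var_fused * (e @ Sinv @ mu)  # Eq. (12)
    return mean_fused, var_fused

# Hypothetical source estimates at one point: three means and deviations.
mean_f, var_f = fuse([1.0, 1.2, 0.9], [0.1, 0.3, 0.2])
```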

By computing the fused means and variances in the input design space χ, we construct a fused Gaussian process over this space. This fused information source contains all of our current knowledge about the ground truth objective function. Our goal is to optimize ground truth by leveraging new queries to the less resource-expensive information sources. Once our resources for information source querying have been exhausted, the predicted ground truth optimal design can be synthesized or produced in an experiment. This information can of course then be fed back to the multi-information source optimization framework and used to update information source discrepancy and correlation information. A flowchart of our proposed framework is presented in Fig. 4. We describe the value-gradient utility in the following paragraphs.

The task then is to determine, concurrently, what information source to query and where to query it so as to produce the most value in terms of addressing Eq. (5), with the tacit resource constraint in mind. For this decision, we propose a utility, referred to as the value-gradient utility, that accounts for both the immediate improvement available in one step and the expected improvement available in two steps. The idea is that we seek rapid improvement, knowing that every resource expenditure could be the last, while also positioning ourselves as well as possible for the next expenditure. In this sense, we equally weight next-step value and next-step (knowledge) gradient information, hence the term value-gradient.

The immediate improvement can be quantified by the maximum mean function value of the fused Gaussian process, $\mu^*_{\mathrm{fused}}$. Since the fused Gaussian process is our probabilistic representation of the ground truth objective function, the value of the predicted ground truth objective upon querying the next sample is uncertain. Thus, we compute the expected value of improvement using the posterior predictive distribution of the fused model. Letting $(i_{1:N}, x_{1:N}, y_{1:N})$ be the information sources, design points, and corresponding objective values used for the first N queries and $\hat{f}$ denote the posterior distribution of the fused model, the expected improvement (EI) at design point x is defined as
$EI(x) = \mathbb{E}\left[\max_{x'\in\chi}\mathbb{E}\left[\hat{f}(x') \mid i_{1:N}, x_{1:N}, x_{N+1}=x, y_{1:N}\right] - \max_{x'\in\chi}\mathbb{E}\left[\hat{f}(x') \mid i_{1:N}, x_{1:N}, y_{1:N}\right]\right] = \mathbb{E}\left[\max_{x'\in\chi}\mathbb{E}\left[\hat{f}(x') \mid i_{1:N}, x_{1:N}, x_{N+1}=x, y_{1:N}\right]\right] - \max_{x'\in\chi}\mathbb{E}\left[\hat{f}(x') \mid i_{1:N}, x_{1:N}, y_{1:N}\right]$
(17)

where the last term can be taken out of the expectation operator because it is a known value when conditioned on the first N queries.

The KG policy of Refs. [38,72], and [73] takes an information-economic approach to maximize this expectation. Letting $H^N(x) = \mathbb{E}[\hat{f}(x) \mid i_{1:N}, x_{1:N}, y_{1:N}]$ be the knowledge state, the value of being at state $H^N$ is defined as $V^N(H^N) = \max_{x\in\chi} H^N(x)$. The knowledge gradient, which is a measure of expected improvement, is defined as
$\nu^{KG}(x) = \mathbb{E}\left[V^{N+1}(H^{N+1}(x)) - V^N(H^N) \mid H^N\right]$
(18)
The KG policy for sequentially choosing the next query is then given as
$x^{KG} = \underset{x\in\chi}{\arg\max}\; \nu^{KG}(x)$
(19)

Calculation of the knowledge gradient is discussed in detail in two algorithms presented in Ref. [73]. The method has been shown in Refs. [38] and [73] to perform very well when faced with highly nonlinear and multimodal objective functions.
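For intuition, a Monte Carlo estimate of the knowledge gradient of Eqs. (18) and (19) over a discrete set of alternatives can be sketched as below. This is a simplification that treats the belief at each alternative as independent, so a sampled observation updates only that alternative's mean; the correlated-belief algorithms of Ref. [73] are more involved. The means, standard deviations, and sample count are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def knowledge_gradient_mc(mu, sigma, n_samples=2000):
    """Monte Carlo estimate of nu_KG, Eq. (18), for discrete alternatives.

    mu, sigma: posterior mean and predictive std at each alternative.
    Independent-belief simplification: sampling alternative x replaces
    mu[x] by the observed draw and leaves the other means unchanged.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    v_now = mu.max()                       # V^N(H^N)
    kg = np.zeros_like(mu)
    for x in range(len(mu)):
        draws = rng.normal(mu[x], sigma[x], n_samples)
        post = np.tile(mu, (n_samples, 1))
        post[:, x] = draws                 # hypothetical updated state H^{N+1}
        kg[x] = post.max(axis=1).mean() - v_now
    return kg

kg = knowledge_gradient_mc([0.0, 0.5, 0.45], [0.3, 0.05, 0.3])
x_next = int(np.argmax(kg))               # Eq. (19)
```

Note that the alternative with the highest posterior mean has a small KG here: its low predictive uncertainty means a new observation is unlikely to change the knowledge state, whereas the nearby high-variance alternative offers the most expected improvement.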

Given both immediate and expected improvement, our proposed value-gradient utility is given as
$U = \mu^*_{\mathrm{fused}} + \max_{x\in\chi} \nu^{KG}(x)$
(20)

where the first term is the maximum value of the mean function of the current fused model and the second term is the maximum expected improvement that can be obtained with another query as measured by the knowledge gradient over the fused model. We can then define a value-gradient policy as the policy that selects the next query such that the value-gradient utility is maximized. By considering the immediate gain in the next step and the expected gain in the step that follows, the value-gradient is a two-step look-ahead policy.

To determine the next design point and information source to query efficiently, we generate Latin hypercube samples in the input design space as alternatives, denoted $X_f$. For low-dimensional problems, a uniform grid of alternatives could also be considered. Among these alternatives, we seek the query that maximizes the value-gradient utility of Eq. (20). According to Eq. (6), an evaluation of information source i at design point x is distributed normally with mean $\mu_i(x)$ and variance $\sigma^2_{GP,i}(x)$. For a given alternative, x, we draw $N_q$ independent samples from the distribution at that point as
$f_i^q(x) \sim \mathcal{N}\left(\mu_i(x), \sigma^2_{GP,i}(x)\right), \quad i = 1,\ldots,S \ \text{ and } \ q = 1,\ldots,N_q$
(21)
In order to predict the impact of querying each alternative on the utility function, we temporarily augment the design point, x, and the sampled information source output value, $f_i^q(x)$, one at a time, to the available samples of information source i. By adding this sample, the Gaussian process of information source i and, as a result, the fused Gaussian process are temporarily updated. Then, the maximum mean function value and the maximum knowledge gradient of the temporarily updated fused Gaussian process are evaluated. These quantities can then be used to compute the value-gradient utility that would result if the sample, $(x, f_i^q(x))$, was realized from information source i. This is given as
$U_{x,i}^q = \mu^{*,\mathrm{temp}}_{\mathrm{fused}} + \max_{x'\in\chi} \nu^{KG}(x')$
(22)
This process is repeated for all $N_q$ samples by removing the previously added sample and augmenting with the next new sample. The expected value-gradient utility obtained from adding alternative x to information source i is then computed as
$\mathbb{E}[U_{x,i}] = \frac{1}{N_q}\sum_{q=1}^{N_q} U_{x,i}^q$
(23)
This expected utility is evaluated for all the alternatives and all the information sources. By denoting $C_{x,i}$ as the cost of querying information source i at design x, which is often computational expense for computational models, we find the query $(i^{N+1}, x^{N+1})$ that maximizes the expected value-gradient utility per unit cost, given by
$(i^{N+1}, x^{N+1}) = \underset{i\in\{1,\ldots,S\},\; x\in X_f}{\arg\max}\; \frac{\mathbb{E}[U_{x,i}]}{C_{x,i}}$
(24)

After querying the design point $x^{N+1}$ from the selected information source, $i^{N+1}$, the corresponding Gaussian process and, afterward, the fused Gaussian process are updated. This process repeats until a termination criterion, such as exhaustion of the querying budget, is met. Then, the optimum solution of Eq. (5) is found based on the mean function of the fused Gaussian process. This design is then to be created at ground truth. Information from this creation can then be fed back into the framework if more resources are allocated. We note here that the computational complexity of the knowledge gradient policy is $O(M^2 \log M)$, where M is the number of alternatives considered [73]. Thus, the computational complexity of the value-gradient querying policy is $O([(S+1)M]^2 \log [(S+1)M])$, where the S + 1 terms represent each of the S information sources and the fused information source. We also note that the value-gradient policy inherits the capabilities of the knowledge gradient policy in terms of its ability to handle nonlinear and multimodal objective functions.
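The querying loop of Eqs. (21)–(24) can be sketched as follows. This is a toy, not the paper's implementation: the Gaussian process surrogates of Eq. (6) are replaced by hypothetical means and standard deviations tabulated on a small grid of alternatives, the reification-based fusion is replaced by an independent precision-weighted combination, the temporary GP update is approximated by pinning the sampled point, and the knowledge gradient term uses an independent-belief Monte Carlo estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the S = 2 cheap-source GP posteriors of Eq. (6),
# tabulated on a grid of alternatives Xf.
Xf = np.linspace(0.0, 1.0, 11)
mu = np.vstack([np.sin(3.0 * Xf), np.sin(3.0 * Xf) + 0.1])  # mu_i(x)
sd = np.vstack([np.full(11, 0.25), np.full(11, 0.15)])      # sigma_GP,i(x)
cost = np.array([1.0, 3.0])                                 # C_{x,i}

def fused(mu, sd):
    # Stand-in fusion: precision-weighted independent combination; the
    # reification-based fusion of Eqs. (12)-(16) would be used in practice.
    w = 1.0 / sd**2
    var = 1.0 / w.sum(axis=0)
    return (w * mu).sum(axis=0) * var, np.sqrt(var)

def max_kg(mu_f, sd_f, n=100):
    # Monte Carlo estimate of max_x' nu_KG(x') over the fused grid
    # (independent-belief simplification of Ref. [73]).
    best = mu_f.max()
    kg_best = -np.inf
    for k in range(len(mu_f)):
        draws = rng.normal(mu_f[k], sd_f[k], n)
        others = np.delete(mu_f, k).max()
        kg_best = max(kg_best, np.maximum(draws, others).mean() - best)
    return kg_best

def value_gradient_query(Nq=10):
    # Eqs. (21)-(24): expected value-gradient utility per unit cost.
    best_score, best_query = -np.inf, None
    for i in range(mu.shape[0]):
        for k in range(len(Xf)):
            EU = 0.0
            for _ in range(Nq):
                f = rng.normal(mu[i, k], sd[i, k])     # Eq. (21)
                mu_t, sd_t = mu.copy(), sd.copy()
                mu_t[i, k], sd_t[i, k] = f, 1e-3       # crude temporary update
                m_f, s_f = fused(mu_t, sd_t)
                EU += m_f.max() + max_kg(m_f, s_f)     # Eq. (22)
            EU /= Nq                                   # Eq. (23)
            if EU / cost[i] > best_score:              # Eq. (24)
                best_score, best_query = EU / cost[i], (i, k)
    return best_query

i_next, k_next = value_gradient_query()
```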

## Demonstration: Information Fusion

In this section, we demonstrate the use of our multi-information source fusion approach to the dual-phase steel application. For this demonstration, we fuse information from the three physics-based reduced-order materials information sources with potentially nonuniformly sampled inputs. We compare the results to ground truth data collected from the finite element RVE model. We consider three different cases. The first case involves uniformly sampled data for each information source. The second case involves nonuniformly sampled information from the information sources, with a large region where each information source is only sparsely sampled. The third case involves nonuniform sampling of the information sources where each information source is sampled well over a small region of the input space and sparsely elsewhere. In each case, the multi-information source fusion approach taken here performs well and is far superior to using any of the information sources in isolation. We conclude this section with a novel analysis of the effective number of information sources used to make the fused estimate at each point in the domain. This analysis provides a clear indication of our ability to exploit correlation for fusion, since without correlation information, only a single information source can confidently be used at a given point in the domain.

### Case 1: Uniform Sampling.

In this case, the ground truth is assumed to have been sampled previously at nine uniformly spaced points in the input domain. Each information source, that is, the isostrain, isostress, and isowork models, has been evaluated at the points where ground truth information is known. The nine sampled points for each information source are used to construct Gaussian process surrogate models for each. These are shown as black lines through the nine black dots on the bottom three plots of Fig. 5. The dark-shaded region on each of these plots represents the uncertainty associated with each Gaussian process, respectively. The ground truth data were used to estimate the discrepancy of each information source from ground truth over the domain. This additional uncertainty is the lighter shaded regions in the bottom three plots of the figure. We note here again that we always assume the information sources are unbiased. This assumption allows us to avoid simply fitting each information source to the ground truth data, which would result in eliminating useful information in each information source. On each plot of Fig. 5, the ground truth is represented with the jagged green line and the result of our multi-information source fusion approach is represented by the smooth red line.

From the isostrain, isowork, and isostress subplots, it is clear that no single information source performs well across the domain. Indeed, over much of the domain each source performs poorly. However, as can be seen in the upper left plot, our fused information source is an excellent match to ground truth. This is further evidenced by the data provided in Table 2, where the mean squared errors (MSE):
$\mathrm{MSE}_i = \frac{1}{N}\sum_{j=1}^{N} \left(g(x_j) - f_i(x_j)\right)^2$
(25)
and mean Kullback–Leibler divergences ($\mathrm{MD}_{KL}$):
$\mathrm{MD}_{KL,i}(g\,\|\,f_i) = \frac{1}{N}\sum_{j=1}^{N} \int_{-\infty}^{\infty} p_g(x_j) \log \frac{p_g(x_j)}{p_{f_i}(x_j)}\, dg$
(26)
between the ground truth and each information source are presented. Here, p represents the probability density function of the information source given by the subscript. From the table, we see that the fused source is a significant improvement over any of the individual sources and can be used to reliably predict the ground truth over the whole domain.
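Both error measures can be computed directly. For Eq. (26), since both densities are Gaussian at each sampled point, the inner integral reduces to the standard closed form for the KL divergence between two normal distributions. A sketch, with all inputs as arrays of per-point means and standard deviations:

```python
import numpy as np

def mse(g, f):
    """Eq. (25): mean squared error between ground truth and a source."""
    g, f = np.asarray(g, float), np.asarray(f, float)
    return np.mean((g - f)**2)

def mean_kl(mu_g, sd_g, mu_f, sd_f):
    """Eq. (26) with Gaussian densities, using the closed form
    KL(N(m1,s1^2) || N(m2,s2^2)) = log(s2/s1) + (s1^2 + (m1-m2)^2)/(2 s2^2) - 1/2
    at each sampled point x_j, then averaging."""
    mu_g, sd_g = np.asarray(mu_g, float), np.asarray(sd_g, float)
    mu_f, sd_f = np.asarray(mu_f, float), np.asarray(sd_f, float)
    kl = np.log(sd_f / sd_g) + (sd_g**2 + (mu_g - mu_f)**2) / (2 * sd_f**2) - 0.5
    return kl.mean()
```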

For this particular example, a subtle but crucial aspect of the use of the reification approach to information source fusion is revealed. For the input region fhard ∈ (95, 100], each information source overpredicts the truth. However, as can be seen in the top plot of Fig. 5, the fused information source has overcome the bias of each information source to match the ground truth nearly exactly. If the correlation between the information source discrepancies were not learned, this would not be possible: the mean of a fused set of uncorrelated normal distributions is a convex combination of the individual means, lying between the smallest and largest mean in the set. That is, without correlation, the fused estimate could never overcome the shared bias that occurred here.

### Case 2: Large Sparsely Sampled Region.

In this case, each information source has been sampled seven times, with all but one of the seven points being in the region [0, 50]. These seven points are not necessarily the same for each information source. We assume we have ground truth information at each of the seven points for each information source. This situation could occur if, for example, different groups have available different sets of ground truth but are unaware of other data or are unwilling to share this information with other groups. The key purpose of this demonstration case is to show the performance of our methodology over a poorly interrogated region of the domain when correlation information has been learned elsewhere over the domain. The results of this demonstration case are shown in Fig. 6. The information on each plot is presented in the same manner as that of Fig. 5. As can be seen from the top plot, the fused information source again performs well, albeit with more predictive uncertainty than case 1, which is to be expected. The MSE and MDKL values between the ground truth and each information source are given in Table 3.

Here, we see again that the fused information source is far superior to any information source in isolation. We also see that the better sampled situation of the first demonstration case results in a more accurate fused information source. Of additional interest in this case is that the ability of our fusion approach to overcome bias of all information sources is more readily apparent. Particularly, over the region fhard ∈ (60, 85], all three information sources overpredict the ground truth. However, the fused estimate has been pushed down toward the ground truth, away from the information source estimates. We stress here that we consider each of the information sources as unbiased in their uncertainty quantification. That is, the direction toward ground truth was not assumed by fitting each information source discrepancy to the ground truth in a biased fashion. Indeed, there are no ground truth samples in this region. The bias mitigation is due to the exploitation of correlations that have been learned through reification.

### Case 3: Nearly Nonoverlapping Samples of Each Information Source.

In this case, each information source has been sampled a few times in a specific region of the domain. The isostrain model was sampled generally in the left half of the domain, the isowork model was sampled generally in the right half of the domain, and the isostress model was sampled generally in the middle of the domain. Ground truth was again used to quantify information source discrepancy but was not shared across information sources. The key purpose of this demonstration case is to show the performance of our methodology when the information sources are essentially disparate in their knowledge of the quantity of interest over the domain. The results of this demonstration case are shown in Fig. 7. The information on each plot is presented in the same manner as that of Fig. 5. As can again be seen by the top plot, our approach performs well. The MSE and MDKL values between the ground truth and each information source are given in Table 4.

Of particular interest in this case is the performance of the fused information source in comparison with each individual source where that source was most heavily sampled. We can see clearly from Fig. 7 that the fused information source is a better approximation to ground truth in the left half of the domain than the isostrain model, where the isostrain model was most queried. The same is true when comparing the middle of the domain estimates from the fused information source to the isostress results, and the right half of the domain results from the fused information source and the isowork model. In each case, the fused information source is able to leverage the limited information from the other information sources to significantly outperform the information source that was heavily queried from in a given region.

### Effective Independent Information Sources.

To complete the demonstration of our multi-information source fusion approach, we define and present a novel number of effective independent information sources index. The index measures the effective number of independent information sources partaking in the fused estimate at each point of the input domain. To define the index, we first consider the normalized change of variance that occurs when information sources are fused together at a given point. This change can be written as
$\frac{\Delta\sigma^2(x)}{\sigma_*^2(x)} = 1 - \frac{1}{\sigma_*^2(x)\, e^{\top}\tilde{\Sigma}(x)^{-1} e}$
(27)
where $\Delta\sigma^2(x)$ is the variance reduction at x from the current best information source's variance, $\sigma_*^2(x)$, at that point. Then, for any number, S, of independent information sources, each with variance $\sigma_*^2(x)$ at x, we can write
$\frac{\Delta\sigma^2(x)}{\sigma_*^2(x)} = 1 - \frac{1}{S}$
(28)
Thus, the number of effective independent information sources with variance $\sigma_*^2(x)$ at the point x is given as
$I_{\mathrm{eff}} = \sigma_*^2(x)\, e^{\top} \tilde{\Sigma}(x)^{-1} e$
(29)

This index takes the value S when there are S independent sources with the same variance. If any sources have a larger variance, they will not contribute as much to the variance reduction at that point, and Ieff will be less than S. Thus, effective independent information sources, as measured by this index, are relative to the best source at a given point. We note here also that for highly correlated information sources, Eq. (13) can result in variance decreases that are larger than would occur with independent information sources. This generally occurs when information sources are very similar but biased in the same direction. The ability to exploit this situation is a feature of the reification approach.
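A sketch of Eq. (29) with hypothetical covariance matrices illustrates both regimes: three independent, equal-variance sources recover $I_{\mathrm{eff}} = S = 3$, while two highly correlated sources with unequal variances push $I_{\mathrm{eff}}$ above S, the negative-weight effect noted above:

```python
import numpy as np

def effective_sources(Sigma):
    """Eq. (29): effective number of independent information sources at a
    point, given the source covariance matrix Sigma there."""
    Sigma = np.asarray(Sigma, float)
    e = np.ones(len(Sigma))
    var_best = Sigma.diagonal().min()        # sigma_*^2(x): best source
    return var_best * (e @ np.linalg.inv(Sigma) @ e)

# S independent sources with equal variance give I_eff = S:
print(effective_sources(np.eye(3) * 0.04))   # 3.0 (up to floating point)

# Two strongly correlated sources with unequal variances (hypothetical:
# rho = 0.95, sigma = 0.1 and 0.3) can exceed S = 2:
Sigma_corr = np.array([[0.01, 0.0285],
                       [0.0285, 0.09]])
print(effective_sources(Sigma_corr))         # about 4.9
```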

The effective independent information source index for demonstration case 1 is shown in Fig. 8. The figure includes the Ieff for the three source case, as well as each pair of two sources. The two source indices are still considered with respect to the best of the three information sources at a given point. This leads to the possible situation where pairs of information sources are contributing less than one effective information source.

For the fused approximation of case 1, shown in the top plot of Fig. 5, it is interesting to note that the Ieff is not large over the input space, as shown in Fig. 8. For this particular problem, there is often one source that is much more uncertain than the other two at each point in the domain. This renders that source's contribution to Ieff to be very small. For example, isostrain has large variance for low to medium values of fhard and isostress has large variance for large values.

From the pairwise curves, it is clear that initially the isowork–isostrain pair is driving the fused approximation. It is also clear that in this region of low values of fhard, the fusion process is exploiting the high correlation between these two sources and is performing better than three independent sources could. The isostress model takes over the approximation around fhard ≈ 10%. This holds until fhard ≈ 30%, where all three sources are contributing to the prediction. At fhard ≈ 40%, the isowork–isostrain pair again drives the prediction. This continues until the end of the domain is reached. Thus, while all three information sources do not contribute equally over the domain, they are all three required to make the fused approximation presented in Fig. 5.

Though the effective independent information source analysis presented here relied only on the Ieff, which was derived through the discrepancy quantification and reification process, the analysis is consistent with the fundamentals of mechanics for these information sources and this application. This provides evidence that such an analysis could be used to aid in the construction of a more sophisticated physics-based model from models using simplified assumptions. That is, this index provides information about when certain assumptions are valid and when they are not. The index also provides a means of valuing a new evaluation of an information source over the domain. For example, this analysis reveals that sampling the isostress model on the interval fhard ∈ [40, 100] will provide little value in terms of effective information sources when an isostrain and isowork model are also available. Such a valuation could prove useful in a resource constrained process for estimating a quantity of interest with many possible information sources available.

## Demonstration: Multi-Information Source Optimization

In this section, we demonstrate the application of our framework to the optimization of the ground truth strength normalized strain hardening rate for the dual-phase steel application. We stress here that the purpose of our framework is the optimization of ground truth. That is, our motivation is the creation of a myopic multi-information source optimization framework for addressing Eq. (5) in the context of materials design. Thus, we seek to identify the best candidate for a ground truth experiment with whatever resources we have available. Once those resources are exhausted, a ground truth experiment takes place based on the recommendation of our framework. The result of that experiment can then be fed back into the framework. If more resources are then allocated, perhaps on the basis of promising results, then the framework can be employed again.

The specific demonstration consists of the use of the three reduced-order models (isostrain, isostress, and isowork) to query the impact of quantifiable microstructural attributes on the mechanical response of a composite microstructure—in this case a dual-phase steel. The ground truth in this case is the finite element model of the dual-phase material. The objective is the maximization of the (ground truth) normalized strain hardening rate at εpl = 1.5%. The design variable is the percentage of the hard phase, fhard, in the dual-phase material. We assume that our resources limit us to five total queries to (any of) the information sources before we must make a recommendation for a ground truth experiment. Given promising ground truth results, five more queries can be allocated to the information sources. The framework is initialized with one query from each information source and one query from the ground truth. This information is used to construct the initial intermediate Gaussian process surrogates.

The value-gradient policy of our framework was used to select the next information source and the location of the query in the input space for each iteration of the process. For comparison purposes, the KG policy operating directly on the ground truth was also used to reveal the gains that can be had by considering all available information sources. For this, a Gaussian process representation was created and updated after each query to ground truth. The convergence results of our proposed approach using all information sources and the KG policy on the ground truth are shown in Fig. 9. On the figure, the dashed line represents the optimal value of the ground truth quantity of interest. It is clear from this figure that our approach outperformed the knowledge gradient applied directly to ground truth, and in doing so, saved considerable expense by reducing the number of needed ground truth experiments. The superior performance of our approach can be attributed to its ability to efficiently utilize the information available from the three low fidelity information sources to better direct the querying at ground truth. We note that the original sample from ground truth used for initialization was taken at fhard = 95%, which is far away from the true optimum. This can be seen below in Fig. 10 in the left column. Thus, the framework, by leveraging the three inexpensive available information sources, was able to quickly direct the ground truth experiment to a higher quality region of the design space.

Table 5 presents the results of each ground truth experiment conducted according to the recommendation of our framework. From the table we see that the third recommendation for a ground truth experiment produces a nearly optimal design. The final three experiments show that little more is gained in terms of ground truth objective and that the fused model has learned more about the ground truth in that region. At this point, it is likely that more resources would not be allocated to this design problem and the framework was able to successfully find the best design.

Updates to each information source Gaussian process surrogate model and the fused model representing our knowledge of ground truth are also shown in Fig. 10 for iterations 1, 15, and 30 of the information source querying process. Here, an iteration occurs when an information source is queried. This is distinct from any queries to ground truth. As can be seen from the left column, the first experiment from ground truth and the first query from each information source told us little about the location of the true objective. However, on iteration 15, the fused model, shown by the smooth red curve, has identified the best region of the design space, although it underpredicts the ground truth at this point. We note that at this point, only three expensive ground truth experiments have been conducted. By iteration 30, the fused model is very accurate in the region surrounding the optimal design for ground truth. At this point, six ground truth experiments have been conducted. From the figure, and also from Fig. 2, it is clear that none of the information sources share the ground truth optimum. The ability of the framework to find this optimum rested upon the use of correlation exploiting fusion, and would not have been possible using traditional methods.

To conclude this demonstration, we present the history of the queries to each information source and the ground truth. This information is provided in Fig. 11. Note that the iteration now counts queries to each information source as well as ground truth experiments. From the figure, it is clear that all three information sources are exploited to find the ground truth optimal design, implying that, however imperfect, all sources available to the designer must be used, in an optimal manner, in order to identify the optimal ground truth.

## Conclusions and Future Work

In this paper, we first presented and demonstrated a correlation-exploiting multi-information source fusion approach. The method included new extensions to the fusion of any number of correlated information sources, as well as the creation of a novel effective independent information source index. The fusion methodology was demonstrated on microstructure-sensitive performance prediction for ductile dual-phase materials. In all cases, the proposed fusion approach performed exceptionally well, far exceeding the predictive capabilities of any individual information source. This provides evidence that our approach to information source fusion is highly applicable to the challenge of integration in ICME tools.

We then presented and demonstrated a myopic multi-information source optimization framework. The framework focused on determining the next information source to query and where in the input domain to query it by trading off resource expense and gains expected in ground truth objective function quality. To value each next potential query, we presented a novel value-gradient policy, which seeks to maximize a two-step lookahead utility based on immediate value and the knowledge gradient for a potential next step. The framework was demonstrated on the optimization of ground truth strength normalized strain hardening rate for a dual-phase material. The results of the demonstration revealed the promise of this framework as a suitable methodology for answering the MGI call for accelerating the materials development cycle.

In the near term, the information fusion framework developed here will be validated against larger sets of ground truth data and be demonstrated in higher dimensions. The framework will also be extended to handle information sources with misaligned input–output interfaces, which is a key challenge facing the ICME community.

Moreover, the optimization framework will be extended to handle multiple objectives and studied for scalability to high dimensional input spaces. Additionally, we will explore the possibility of carrying out optimal sequential queries in which the sources of information are not input/output aligned. A specific scenario, for example, would be combining sources that establish relationships between processing parameters/conditions and microstructure with sources that connect microstructures to properties/performance. Much remains to be done, but this work presents a plausible research program toward the realization of the promise of ICME, which in the end rests on tool (or information source) integration.

## Acknowledgment

The authors would like to acknowledge the support of the National Science Foundation through grant No. NSF-CMMI-1663130, DEMS: Multi-Information Source Value of Information Based Design of Multiphase Structural Materials. Arroyave would also like to acknowledge the support of the National Science Foundation through grant No. NSF-CMMI-1534534, DMREF: Accelerating the Development of Phase-Transforming Heterogeneous Materials: Application to High Temperature Shape Memory Alloys. Allaire and Arroyave would also like to acknowledge the support of the National Science Foundation through grant No. NSF-DGE-1545403, NRT-DESE: Data-Enabled Discovery and Design of Energy Materials (D3EM).

## Funding Data

• National Science Foundation (Grant Nos. CMMI-1534534, CMMI-1663130, and DGE-1545403)

## References

References
1.
Olson
,
G. B.
,
1997
, “
Computational Design of Hierarchically Structured Materials
,”
Science
,
277
(
5330
), pp.
1237
1242
.
2.
Olson
,
G. B.
,
2000
, “
Designing a New Material World
,”
Science
,
288
(
5468
), pp.
993
998
.
3.
Allison
,
J.
,
2011
, “
Integrated Computational Materials Engineering: A Perspective on Progress and Future Steps
,”
JOM
,
63
(
4
), pp.
15
18
.
4.
National Research Council
,
2008
,
Integrated Computational Materials Engineering: A Transformational Discipline for Improved Competitiveness and National Security
,
, Washington, DC.
5.
Holdren
,
J. P.
, and
National Science and Technology Council
,
2011
, “
Materials Genome Initiative for Global Competitiveness
,” National Science and Technology Council/OSTP, Washington, DC.
6.
,
J. D.
,
2016
, “
Integrated Computational Materials Engineering: Tools, Simulations and New Applications
,”
JOM
,
68
(
5
), pp.
1376
1377
.
7.
Agrawal
,
A.
, and
Choudhary
,
A.
,
2016
, “
Perspective: Materials Informatics and Big Data: Realization of the Fourth Paradigm of Science in Materials Science
,”
APL Mater.
,
4
(
5
), p.
053208
.
8.
Kalidindi
,
S. R.
, and
De Graef
,
M.
,
2015
, “
Materials Data Science: Current Status and Future Outlook
,”
Annu. Rev. Mater. Res.
,
45
(
1
), pp.
171
193
.
9.
The Minerals Metals & Materials Society (TMS)
,
2015
,
Modeling Across Scales: A Roadmapping Study for Connecting Materials Models and Simulations Across Length and Time Scales
,
The Minerals Metals & Materials Society
,
Warrendale, PA
.
10.
Reddy
,
S.
,
Gautham
,
B.
,
Das
,
P.
,
Yeddula
,
R. R.
,
Vale
,
S.
, and
Malhotra
,
C.
,
2017
, “
An Ontological Framework for Integrated Computational Materials Engineering
,” Fourth World Congress on Integrated Computational Materials Engineering (ICME)
, pp.
69
77
.
11.
Savic
,
V.
,
Hector
,
L.
,
Basu
,
U.
,
Basudhar
,
A.
,
Gandikota
,
I.
,
Stander
,
N.
,
Park
,
T.
,
Pourboghrat
,
F.
,
Sil Choi
,
K. S.
,
Sun
,
X.
,
Hu
,
J.
,
Abu-Farha
,
F.
, and
Kumar
,
S.
,
2017
, “
Integrated Computational Materials Engineering (ICME) Multi-Scale Model Development for Advanced High Strength Steels
,”
SAE
Paper No. 2017-01-0226.
12. Diehl, M., Groeber, M., Haase, C., Molodov, D. A., Roters, F., and Raabe, D., 2017, "Identifying Structure–Property Relationships Through DREAM.3D Representative Volume Elements and DAMASK Crystal Plasticity Simulations: An Integrated Computational Materials Engineering Approach," JOM, 69(5), pp. 848–855.
13. Bessa, M., , R., Liu, Z., Hu, A., Apley, D. W., Brinson, C., Chen, W., and Liu, W. K., 2017, "A Framework for Data-Driven Analysis of Materials Under Uncertainty: Countering the Curse of Dimensionality," Comput. Methods Appl. Mech. Eng., 320, pp. 633–667.
14. Potyrailo, R., Rajan, K., Stoewe, K., Takeuchi, I., Chisholm, B., and Lam, H., 2011, "Combinatorial and High-Throughput Screening of Materials Libraries: Review of State of the Art," ACS Comb. Sci., 13(6), pp. 579–633.
15. Suram, S. K., Haber, J. A., Jin, J., and Gregoire, J. M., 2015, "Generating Information-Rich High-Throughput Experimental Materials Genomes Using Functional Clustering Via Multitree Genetic Programming and Information Theory," ACS Comb. Sci., 17(4), pp. 224–233.
16. Green, M. L., Choi, C., Hattrick-Simpers, J. R., Joshi, A. M., Takeuchi, I., Barron, S. C., Campo, E., Chiang, T., Empedocles, S., Gregoire, J. M., Kusne, A. G., Martin, J., Mehta, A., , K., Trautt, Z., Van Duren, J., and Zakutayev, A., 2017, "Fulfilling the Promise of the Materials Genome Initiative With High-Throughput Experimental Methodologies," Appl. Phys. Rev., 4(1), p. 011105.
17. Curtarolo, S., Hart, G. L., Nardelli, M. B., Mingo, N., Sanvito, S., and Levy, O., 2013, "The High-Throughput Highway to Computational Materials Design," Nat. Mater., 12(3), pp. 191–201.
18. Balachandran, P. V., Xue, D., Theiler, J., Hogden, J., and Lookman, T., 2016, "Adaptive Strategies for Materials Design Using Uncertainties," Sci. Rep., 6(1), p. 19660.
19. Talapatra, A., Boluki, S., Duong, T., Qian, X., Dougherty, E., and Arroyave, R., 2018, "Towards an Autonomous Efficient Materials Discovery Framework: An Example of Optimal Experiment Design Under Model Uncertainty," e-print arXiv:1803.05460.
20. Ling, J. M., Aughenbaugh, J. M., and Paredis, C. J., 2006, "Managing the Collection of Information Under Uncertainty Using Information Economics," ASME J. Mech. Des., 128(4), pp. 980–990.
21. Chen, S., Jiang, Z., Yang, S., and Chen, W., 2016, "Multimodel Fusion Based Sequential Optimization," AIAA J., 55(1), pp. 241–254.
22. Lam, R., Allaire, D. L., and Willcox, K. E., 2015, "Multifidelity Optimization Using Statistical Surrogate Modeling for Non-Hierarchical Information Sources," AIAA Paper No. AIAA 2015-0143.
23. Allaire, D., and Willcox, K., 2014, "A Mathematical and Computational Framework for Multifidelity Design and Analysis With Computer Models," Int. J. Uncertainty Quantif., 4(1), pp. 1–20.
24. Bhattacharya, D., 2011, "Metallurgical Perspectives on Advanced Sheet Steels for Automotive Applications," Springer, Berlin, Heidelberg, pp. 163–175.
25. Rashid, M., 1981, "Dual Phase Steels," Annu. Rev. Mater. Sci., 11(1), pp. 245–266.
26. Chen, P., Ghassemi-Armaki, H., Kumar, S., Bower, A., Bhat, S., and , S., 2014, "Microscale-Calibrated Modeling of the Deformation Response of Dual-Phase Steels," Acta Mater., 65, pp. 133–149.
27. Srivastava, A., Bower, A., Hector, L., Jr., Carsley, J., Zhang, L., and Abu-Farha, F., 2016, "A Multiscale Approach to Modeling Formability of Dual-Phase Steels," Modell. Simul. Mater. Sci. Eng., 24(2), p. 025011.
28. Voigt, W., 1889, "On the Relation Between the Elasticity Constants of Isotropic Bodies," Ann. Phys. Chem., 274, pp. 573–587.
29. Reuss, A., 1929, "Berechnung der Fließgrenze von Mischkristallen auf Grund der Plastizitätsbedingung für Einkristalle [Calculation of the Yield Limit of Mixed Crystals From the Plasticity Condition for Single Crystals]," ZAMM-J. Appl. Math. Mech./Z. Angew. Math. Mech., 9(1), pp. 49–58.
30. Bouaziz, O., and Buessler, P., 2002, "Mechanical Behaviour of Multiphase Materials: An Intermediate Mixture Law Without Fitting Parameter," Rev. Métall., 99(1), pp. 71–77.
31. Gerbig, D., Srivastava, A., Osovski, S., Hector, L. G., and Bower, A., 2018, "Analysis and Design of Dual-Phase Steel Microstructure for Enhanced Ductile Fracture Resistance," Int. J. Fract., 209(1–2), pp. 3–26.
32. Abaqus, 2010, "ABAQUS Analysis User's Manual: Version 6.10," Dassault Systèmes, Vélizy-Villacoublay, France.
33. Nemat-Nasser, S., and Hori, M., 2013, Micromechanics: Overall Properties of Heterogeneous Materials, Vol. 37, Elsevier, Amsterdam, The Netherlands.
34. Scott, W., Frazier, P., and Powell, W., 2011, "The Correlated Knowledge Gradient for Simulation Optimization of Continuous Parameters Using Gaussian Process Regression," SIAM J. Optim., 21(3), pp. 996–1026.
35. Jones, D. R., Schonlau, M., and Welch, W. J., 1998, "Efficient Global Optimization of Expensive Black-Box Functions," J. Global Optim., 13(4), pp. 455–492.
36. Huang, D., Allen, T. T., Notz, W. I., and Miller, R. A., 2006, "Sequential Kriging Optimization Using Multiple-Fidelity Evaluations," Struct. Multidiscip. Optim., 32(5), pp. 369–382.
37. Moore, R. A., Romero, D. A., and Paredis, C. J., 2014, "Value-Based Global Optimization," ASME J. Mech. Des., 136(4), p. 041003.
38. Frazier, P. I., Powell, W. B., and Dayanik, S., 2008, "A Knowledge-Gradient Policy for Sequential Information Collection," SIAM J. Control Optim., 47(5), pp. 2410–2439.
39. Gupta, S. S., and Miescke, K. J., 1994, "Bayesian Look Ahead One Stage Sampling Allocations for Selecting the Largest Normal Mean," Stat. Pap., 35(1), pp. 169–177.
40. Gupta, S. S., and Miescke, K. J., 1996, "Bayesian Look Ahead One-Stage Sampling Allocations for Selection of the Best Population," J. Stat. Plann. Inference, 54(2), pp. 229–244.
41. Williams, C. K., and Rasmussen, C. E., 2006, Gaussian Processes for Machine Learning, The MIT Press, Cambridge, MA.
42. Schonlau, M., Welch, W. J., and Jones, D., 1996, "Global Optimization With Nonparametric Function Fitting," ASA, Section on Physical and Engineering Sciences, pp. 183–186.
43. Schonlau, M., Welch, W. J., and Jones, D. R., 1998, "Global Versus Local Search in Constrained Optimization of Computer Models," New Developments and Applications in Experimental Design (Lecture Notes in Monograph Series, Vol. 34), pp. 11–25.
44. Alexandrov, N., Lewis, R., Gumbert, C., Green, L., and Newman, P., 1999, "Optimization With Variable-Fidelity Models Applied to Wing Design," National Aeronautics and Space Administration, Hampton, VA, Report No. CR-209826.
45. Alexandrov, N., Lewis, R., Gumbert, C., Green, L., and Newman, P., 2001, "Approximation and Model Management in Aerodynamic Optimization With Variable-Fidelity Models," AIAA J., 38(6), pp. 1093–1101.
46. Balabanov, V., Haftka, R., Grossman, B., Mason, W., and Watson, L., 1998, "Multifidelity Response Surface Model for HSCT Wing Bending Material Weight," AIAA Paper No. AIAA 1998-4804.
47. Balabanov, V., and Venter, G., 2004, "Multi-Fidelity Optimization With High-Fidelity Analysis and Low-Fidelity Gradients," AIAA Paper No. AIAA 2004-4459.
48. Qian, P. Z., and Wu, C. J., 2008, "Bayesian Hierarchical Modeling for Integrating Low-Accuracy and High-Accuracy Experiments," Technometrics, 50(2), pp. 192–204.
49. Choi, S., Alonso, J. J., and Kroo, I. M., 2009, "Two-Level Multifidelity Design Optimization Studies for Supersonic Jets," J. Aircr., 46(3), pp. 776–790.
50. Eldred, M., Giunta, A., and Collis, S., 2004, "Second-Order Corrections for Surrogate-Based Optimization With Model Hierarchies," AIAA Paper No. AIAA 2004-4457.
51. March, A., and Willcox, K., 2012, "Provably Convergent Multifidelity Optimization Algorithm Not Requiring High-Fidelity Derivatives," AIAA J., 50(5), pp. 1079–1089.
52. March, A., and Willcox, K., 2012, "Convergent Multifidelity Optimization Using Bayesian Model Calibration," Struct. Multidiscip. Optim., 46(1), pp. 93–109.
53. Leamer, E., 1978, Specification Searches: Ad Hoc Inference With Nonexperimental Data, Wiley, New York.
54. , D., and Raftery, A., 1994, "Model Selection and Accounting for Model Uncertainty in Graphical Models Using Occam's Window," Am. Stat. Assoc., 89(428), pp. 1535–1546.
55. Draper, D., 1995, "Assessment and Propagation of Model Uncertainty," J. R. Stat. Soc. Ser. B, 57(1), pp. 45–97, https://www.jstor.org/stable/2346087.
56. Hoeting, J., , D., Raftery, A., and Volinsky, C., 1999, "Bayesian Model Averaging: A Tutorial," Stat. Sci., 14(4), pp. 382–417, https://www.jstor.org/stable/2676803.
57. Clyde, M., 2003, "Model Averaging," Subjective and Objective Bayesian Statistics, 2nd ed., Wiley-Interscience, Hoboken, NJ, Chap. 13.
58. Clyde, M., and George, E., 2004, "Model Uncertainty," Stat. Sci., 19(1), pp. 81–94.
59. Mosleh, A., and Apostolakis, G., 1986, "The Assessment of Probability Distributions From Expert Opinions With an Application to Seismic Fragility Curves," Risk Anal., 6(4), pp. 447–461.
60. Zio, E., and Apostolakis, G., 1996, "Two Methods for the Structured Assessment of Model Uncertainty by Experts in Performance Assessments of Radioactive Waste Repositories," Reliab. Eng. Syst. Saf., 54(2–3), pp. 225–241.
61. Reinert, J., and Apostolakis, G., 2006, "Including Model Uncertainty in Risk-Informed Decision Making," Ann. Nucl. Energy, 33(4), pp. 354–369.
62. Riley, M., and Grandhi, R., 2011, "Quantification of Modeling Uncertainty in Aeroelastic Analyses," J. Aircr., 48(3), pp. 866–873.
63. Julier, S., and Uhlmann, J., 1997, "A Non-Divergent Estimation Algorithm in the Presence of Unknown Correlations," American Control Conference, Albuquerque, NM, June 6, pp. 2369–2373.
64. Julier, S., and Uhlmann, J., 2001, "General Decentralized Data Fusion With Covariance Intersection," Handbook of Data Fusion, D. Hall and J. Llinas, eds., CRC Press, Boca Raton, FL.
65. Geisser, S., 1965, "A Bayes Approach for Combining Correlated Estimates," J. Am. Stat. Assoc., 60(310), pp. 602–607.
66. Morris, P., 1977, "Combining Expert Judgments: A Bayesian Approach," Manage. Sci., 23(7), pp. 679–693.
67. Winkler, R., 1981, "Combining Probability Distributions From Dependent Information Sources," Manage. Sci., 27(4), pp. 479–488.
68. Panchal, J. H., Paredis, C. J., Allen, J. K., and Mistree, F., 2008, "A Value-of-Information Based Approach to Simulation Model Refinement," Eng. Optim., 40(3), pp. 223–251.
69. Messer, M., Panchal, J. H., Krishnamurthy, V., Klein, B., Yoder, P. D., Allen, J. K., and Mistree, F., 2010, "Model Selection Under Limited Information Using a Value-of-Information-Based Indicator," ASME J. Mech. Des., 132(12), p. 121008.
70. Allaire, D., and Willcox, K., 2012, "Fusing Information From Multifidelity Computer Models of Physical Systems," 15th International Conference on Information Fusion (FUSION), Singapore, July 9–12, pp. 2458–2465.
71. Thomison, W. D., and Allaire, D. L., 2017, "A Model Reification Approach to Fusing Information From Multifidelity Information Sources," AIAA Paper No. AIAA 2017-1949.
72. Powell, W. B., and Ryzhov, I. O., 2012, Optimal Learning, Vol. 841, Wiley, Hoboken, NJ.
73. Frazier, P., Powell, W., and Dayanik, S., 2009, "The Knowledge-Gradient Policy for Correlated Normal Beliefs," INFORMS J. Comput., 21(4), pp. 599–613.