## Abstract

This paper presents a novel approach to applying Bayesian Optimization (BO) to predict an unknown constraint boundary, which also represents the discontinuity of an unknown function, for a feasibility check on the design space; the method thereby serves as a classification tool to discern between feasible and infeasible regions. Bayesian optimization is a low-cost black-box global optimization tool within sequential design methods, in which knowledge from previously evaluated designs is learned and updated, guiding the selection of new designs for future evaluation. However, BO is best suited to problems where the objective function is assumed continuous, and it does not guarantee true convergence when the design space is discontinuous, because BO has insufficient knowledge of the nature of the discontinuity of the unknown true function. In this paper, we propose to predict the location of the discontinuity by running a BO algorithm on a continuous design space artificially projected from the original discontinuous design space. The proposed approach is demonstrated on a thin tube design at risk of creep-fatigue failure under constant loading of temperature and pressure. The stated risk depends on whether a design lies in the safe or unsafe region, with the discontinuity at the transition between those regions; the discontinuity is therefore also treated as an unknown creep-fatigue failure constraint. The proposed BO algorithm is trained to concentrate sampling toward the unknown transition region, so that it acts as a high-accuracy classifier between safe and unsafe designs at minimal training cost. The converged solution has been validated for different design parameters, with an average classification error rate of <1% and ∼150 function evaluations.
Finally, the performance of the proposed approach on the thin tube design, in terms of training cost and classification accuracy, is shown to be better than that of existing machine learning (ML) algorithms such as Support Vector Machine (SVM), Random Forest (RF), and Boosting.

## 1 Introduction

In the early design phase, it is very important for designers to be able to identify the feasible regions in a large design space while the design cost is low. This knowledge can guide designers to eliminate inferior designs and avoid investing in high-cost prototyping and testing of those designs at a later design phase. With efficient knowledge of the feasible space, designers can also avoid falsely selecting infeasible designs as optimal, which carries a high risk of failure consequences. In design practice, most design problems are too complex to be handled by simple optimization frameworks due to constraints on cost, time, formulation, etc. Also, approximating a complex design problem by much simpler problems can lead to neglect of the original complex constraints; the design may then violate those constraints and not provide a useful choice for practical decisions. Some practical design problems have been investigated where complex optimization frameworks have been modeled [1–3]. However, in many design problems, it is difficult to numerically formulate an objective function or constraint boundaries; we therefore consider those as *black-box* problems, which typically have high function evaluation cost [4,5]. Thus, a trade-off between learning and expense is present, and a low-fidelity surrogate model is often implemented to reduce cost. When we have no or limited knowledge of the expensive true unknown functions, we cannot guarantee that our learning is maximized toward optimizing those functions without proper guidance or expertise. Also, due to the mentioned high function evaluation cost, exhaustive search is not a valid option.
In such problems, Bayesian Optimization (BO), which eliminates the need for a standard formulation of the black-box functions, is widely applied in sequential learning; it provides guidance in sampling designs for expensive experiments, or function evaluations, in order to find the optimal region of the unknown function at minimal experimental cost. In the BO approach, we first build a posterior surrogate model, given the data from the current evaluations. We then use this model to strategically select the best design locations for future evaluations by maximizing an Acquisition function defined from the posterior model. BO can be used to optimize any black-box function in a design problem, either to emulate an unknown objective function, when the goal is to locate the optimal solutions, or to emulate unknown constraints, when the goal is the classification and preservation of only potentially good designs [6]. This paper focuses on a BO framework to emulate the unknown constraint boundary as a classification problem for a design feasibility check. However, the motivation behind this classification problem comes from the ultimate goal of design optimization with a complex design space, which is described in Sec. 1.1.
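The loop just described — fit a posterior surrogate, maximize an acquisition function, evaluate the selected design, repeat — can be sketched as follows. This is a minimal illustration, not the implementation used in this paper: the squared-exponential kernel, its hyperparameters, the Expected Improvement criterion, and the toy objective are all assumptions made for the sketch.

```python
import math
import numpy as np

def rbf(A, B, length=0.2, var=1.0):
    """Squared-exponential covariance between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length ** 2)

def gp_posterior(X, y, Xs, jitter=1e-6):
    """Posterior mean and std at query points Xs, given data (X, y)."""
    K = rbf(X, X) + jitter * np.eye(len(X))
    Ks = rbf(X, Xs)
    sol = np.linalg.solve(K, Ks)                      # K^{-1} Ks
    mu = sol.T @ y
    # prior variance is 1.0 (the kernel's default var)
    var = np.clip(1.0 - np.einsum('ij,ij->j', Ks, sol), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sd, best):
    """EI acquisition for a maximization problem."""
    z = (mu - best) / sd
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sd * pdf

def bayes_opt(f, candidates, n_init=3, n_iter=10, seed=0):
    """Sequential BO: evaluate a few starting designs, then repeatedly
    pick the candidate maximizing EI under the current posterior."""
    rng = np.random.default_rng(seed)
    X = candidates[rng.choice(len(candidates), n_init, replace=False)]
    y = np.array([f(x) for x in X])
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, candidates)
        x_next = candidates[np.argmax(expected_improvement(mu, sd, y.max()))]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X, y
```

For a toy objective such as f(x) = −(x − 0.6)², the loop concentrates its evaluations around x = 0.6 within a handful of iterations.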

### 1.1 Research Motivation.

Although BO is a powerful method, it works under the assumption that the true function is continuous [7] and generally fails to converge to the solution if the objective function has a discontinuity. This is because BO has insufficient knowledge of the nature of the discontinuity of the unknown true function. Figures 1 and 2 provide an example where a BO model fails to converge to the true discontinuous function even after excessive sampling. Figure 1 shows the true response function in terms of design variables *x*_{1}, *x*_{2}, with a jump discontinuity at *x*_{1}, *x*_{2} = 1. Figure 2 shows how poorly the BO model emulates the true function even after 500 samples for function evaluations, denoted by black dots, producing a very nonsmooth surface with many peaks near the discontinuity. Given this limitation of BO for discontinuous design spaces, we next present a research example problem to highlight our motivation behind the design feasibility check classification problem.
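A hypothetical stand-in for the kind of surface shown in Fig. 1 can be written as a smooth base response with a jump term; this is not the paper's actual test function, and both the base term and the jump magnitude are arbitrary choices for illustration:

```python
import numpy as np

def jump_response(x1, x2):
    """Hypothetical response with a jump discontinuity where both
    x1 > 1 and x2 > 1 -- a stand-in for the surface in Fig. 1.
    The smooth base term sin(x1)*cos(x2) is arbitrary."""
    base = np.sin(x1) * np.cos(x2)
    return base + 2.0 * ((np.asarray(x1) > 1.0) & (np.asarray(x2) > 1.0))
```

A stationary GP surrogate fit to samples of such a function must trade off the smooth region against the jump, which produces exactly the spurious peaks near the discontinuity seen in Fig. 2.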

*Compact Heat Exchanger Design:* In this research, we consider the design of a diffusion bonded Compact Heat Exchanger (CHX) as a motivating example. In the design of a diffusion bonded CHX, the ultimate goal is to find the optimal design geometries which minimize the risk of creep-fatigue failure under constant loading of temperature and pressure. The stated risk depends on the location of the design in regions such as Elastic, Shakedown, Plastic, or Ratchetting. A similar example has been provided for thin tubes, where the location of the designs can be numerically represented from a Bree diagram [8] (Fig. 1 in the Supplemental Materials on the ASME Digital Collection) in terms of pressure and thermal stresses. Under cyclic loading, the *Elastic* and *Shakedown* regions in the Bree diagram are considered *safe*: no strain accumulation occurs, or the growth of residual strain practically diminishes when sufficient loading cycles are applied. However, *Plastic* and *Ratchetting* designs in the Bree diagram are *unsafe*: the plastic strain accumulates until failure. When the complexity of the design increases, such as in the diffusion bonded CHX [9] (Fig. 2 in the Supplemental Materials on the ASME Digital Collection), we cannot provide a numerical representation of the function which defines the region of a design; it is thus considered a black-box problem. However, the transition between the Shakedown and Plastic (or Ratchetting) regions creates a jump discontinuity at the transition line, due to a different formulation of the strain analysis in each region. This is the limitation of the application of BO demonstrated earlier: the convergence of the model to the optimal solution cannot be trusted when the design space has a discontinuity, as it is likely to lead to nonoptimal solutions.
Also, since we have no knowledge of the true function, it is evident that we have no knowledge of the location of the discontinuous transition region either, which represents the black-box unknown constraint boundary between the safe and unsafe regions. Ignoring this constraint boundary can lead to selecting infeasible solutions, which can incur very high design failure costs, such as accidents, economic disasters, etc. This motivates us to believe that it is necessary first to understand the discontinuity of the unknown function and build the constraint boundary to classify safe and unsafe designs in terms of creep-fatigue failure, as a design pre-stage, before attempting to find the optimal design.

### 1.2 Research Contribution.

To address the issues of Sec. 1.1, this paper proposes a Bayesian optimization approach for solving the stated classification problem. The goal is to predict the location of the discontinuous transition region, as a constraint boundary between the safe and unsafe regions, by strategically sampling designs, minimizing the cost of expensive function evaluations while maximizing classification accuracy in predicting the true boundary. To achieve this goal, the research contributes the formulation of a new function which projects the original discontinuous design space into an artificial continuous design space, mitigating the limitation of BO. This new function is used to develop the acquisition function of the proposed BO model, which, when maximized, guides sampling toward the desired unknown constraint boundary (transition region) and thereby maximizes classification accuracy. The contribution of this research is a classification method for the creep-fatigue failure feasibility of any new design in the specified design space, without conducting further expensive function evaluations once the model is fully trained (converged). This paper focuses on the proof of concept; we therefore simplify the large-scale complex design of the diffusion bonded CHX to a simple thin tube, for which we can compare the results of the proposed model with the known true solution. The proposed approach can be considered the pre-stage for optimizing the design geometry of the diffusion bonded CHX to minimize the risk of failure along with manufacturing and experimental cost, subject to the creep-fatigue failure constraint (from the proposed pre-stage) and other manufacturing constraints, which is considered future research.

The roadmap of this paper is as follows. Section 2 provides an overview of Bayesian optimization and machine learning for solving classification problems. Section 3 presents the thin tube problem and the projection of the original discontinuous design space onto an artificial continuous design space, using knowledge gained from actual function evaluations or experiments. Section 4 provides a detailed description of the design methodology for fast and adaptive sampling toward predicting the constraint boundary (transition region) and minimizing the error rate of the classification problem. Section 5 shows the results of the proposed approach under different design parameters. Section 6 concludes the paper with final thoughts.

## 2 Literature Review

### 2.1 Classification Problem.

A classification problem, in general, is a subset of machine learning problems where the main idea is to partition a region of interest or design space into labels or clusters through proper training of a machine learning tool with existing data. Once the designer is satisfied with the trained model, any new design can be assigned to the category to which it has the maximum probability of belonging, without further expensive evaluations. Classification problems can be subdivided into Binary Classification and Multi-Label Classification problems. To solve these, different machine learning tools have been used, such as Support Vector Machine (SVM), Random Forest (RF), and Boosting [10–12]. Recently, advanced methods like neural networks have been used in both binary and multi-label classification problems [13,14]. Inan et al. [15] proposed a robust neural network based classification method for premature ventricular contractions. Li et al. [16] attempted a hyperspectral image reconstruction method using a convolutional neural network to enhance classification accuracy. Similarly, clustering approaches have been taken, especially for multi-label classification problems with a large number of labels [17]. Barros et al. [18] proposed a probabilistic clustering approach for hierarchical multi-label classification of protein functions. Solving a design classification problem with standard machine learning classifiers depends on the quality and amount of training data and always raises the question of how much data is enough to achieve maximum learning [19,20]; it can thus be very sensitive to the sampling cost and method. Therefore, in order to apply these machine learning algorithms, we would need to assume that a large amount of data already exists, which is not true in our classification problem. In a black-box problem where sampled designs undergo expensive evaluations, training data are limited due to their very high cost.
As mentioned earlier, the research objective in this paper is not only to identify an appropriate classifier tool, but also an efficient sampling strategy for training the classifier sequentially toward the desired goal of fast and adaptive learning (minimizing expensive evaluations). With BO as a design classifier, we attempt first to optimize the location of the discontinuity or constraint boundary using the existing technique of minimizing expensive sampling for fast and adaptive learning (new samples are suggested where there is more likelihood of achieving user-defined good solutions), and then to classify any new design (on either side of the discontinuity or constraint boundary) with the trained posterior surrogate model of the converged BO. Thus, this research also contributes in integrating the classification technique into the existing efficient sampling method of BO, without requiring pre-existing data.

### 2.2 Bayesian Optimization.

Bayesian optimization [7] is an emerging field of study in sequential design methods. It is considered a low-cost global optimization tool for design problems having expensive black-box objective functions. The general idea of BO is to emulate an expensive unknown design space and find the local and global optimal locations while reducing the cost of function evaluations from expensive high-fidelity models. This approach has been widely used in many machine learning problems [21–25]. Attempts have also been made for discrete responses, such as in consumer modeling problems where the responses are user preferences [7,26]. The idea is to approximate the discrete user-preference response function by continuous latent functions, using a Binomial-Probit model for two choices [27,28] and a polychotomous regression model for more than two choices, where the user can state no preference [29]. BO has also been implemented in multi-objective [30] and high-dimensional [31,32] engineering design problems.

Bayesian optimization adopts a Bayesian perspective and assumes that there is a prior on the function; typically, a Gaussian process prior. The prior is represented from the experimental or training data, which are assumed to be realizations of the true function. The overall Bayesian optimization approach has two major components: a predictor or *Gaussian Process Model (GPM)* and an *Acquisition Function (AF)*. As shown in Fig. 3, we first build a posterior GPM, given the data from the current experiments. The surrogate GPM then predicts the objective or response of samples generated by a design of experiments (DOE) based sampling method within the design space. We then use this model to strategically select the best design locations for future experimentation by maximizing the acquisition function, defined from the posterior simulations obtained from the GPM. However, we need to assume that the objective or response is Lipschitz continuous [7]. As an alternative to a GPM, random forest regression has been proposed as an expressive and flexible surrogate model in the context of sequential model-based algorithm configuration [33]. Although random forests are good interpolators, in the sense that they output good predictions in the neighborhood of training data, they are very poor extrapolators far away from the training data [34]. This can lead to redundant exploration (more experiments) in uninteresting regions suggested by the acquisition function in the early iterations of the optimization, owing to the additional prediction error in regions far from the training data. This motivates us to use the GPM in a Bayesian framework while extending the application to discontinuous design response surfaces, which represent complex practical problems in the domain of experimental design. We next describe the GPM and AF.

#### 2.2.1 Gaussian Process Model.

Figure 4 shows a simple 1D Gaussian Process Model with one design variable *x* and one response variable *z* = *f*(*x*). The dots are the experimental design variables, and the dotted and solid lines are the true function and the predictor mean function in the design space, given the observations. The shaded area along the solid line shows the measure of uncertainty of the surrogate GPM prediction. We can clearly see that the variance near the observations is small and increases as the design samples move farther from the observational data, a behavior shared with kriging models, where the errors are not independent. Much research has addressed incorporating and quantifying the uncertainty of the experimental or training data by using a nugget term in the predictor GPM; the nugget has been found to provide a better solution and a computationally stable framework [35,36]. Furthermore, GPM has also been implemented in high-dimensional design space exploration [37] and big data problems [38] in attempts to increase computational efficiency. A survey of different GP package implementations in coding languages such as matlab, R, and python is provided in Ref. [39].
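The variance behavior described above — near zero at the observations, growing with distance from them — and the stabilizing role of the nugget can be illustrated with a small sketch. The squared-exponential kernel and its hyperparameters are assumptions for the illustration, not tied to any particular GP package:

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, var=1.0, nugget=1e-4):
    """1D GP posterior mean/std. The nugget added to the training
    covariance models observation noise and stabilizes the solve."""
    k = lambda A, B: var * np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length ** 2)
    K = k(X, X) + nugget * np.eye(len(X))   # nugget on the diagonal
    Ks = k(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y                          # posterior mean
    sd = np.sqrt(np.clip(var - np.einsum('ij,ij->j', Ks, sol), 0.0, None))
    return mu, sd
```

Queried at a training location, the predictive standard deviation is on the order of the nugget; queried far from all data, it reverts to the prior standard deviation, matching the widening shaded band in Fig. 4.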

#### 2.2.2 Acquisition Function.

The second major component in Bayesian optimization is the Acquisition Function, whose goal is to guide the search for future experiments toward the desired goal, thereby bringing sequential design into the BO. The AF predicts an improvement metric for each sample. The improvement metric depends on *exploration* (unexplored design spaces) and *exploitation* (regions near high responses). Thus, the acquisition function assigns a high improvement value to samples whose mean prediction is high, whose variance is high, or both. By maximizing the acquisition function, we select the best samples to find the optimum solution and reduce the uncertainty of the unknown expensive design space. Various formulations have been used to define acquisition functions. One such method is the Probability of Improvement, PI [40], an improvement-based acquisition function. Jones [41] notes that the performance of PI(·) “is truly impressive;… however, the difficulty is that the PI(·) method is extremely sensitive to the choice of the target. If the desired improvement is too small, the search will be highly local and will only move on to search globally after searching nearly exhaustively around the current best point. On the other hand, if the small-valued tolerance parameter *ξ* in the PI(·) equation is set too high (see [41]), the search will be excessively global, and the algorithm will be slow to fine-tune any promising solutions.” Thus, the Expected Improvement acquisition function, EI [7], which trades off exploration and exploitation, is widely used over PI. Another acquisition function is the Confidence Bound criterion, CB, introduced by Cox and John [42], where points are selected based on the upper or lower confidence bound of the predicted design surface for maximization or minimization problems, respectively.
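The three criteria named above can be written compactly in terms of the posterior mean μ and standard deviation σ at each candidate. The sketch below assumes a maximization problem; ξ and β are the usual user-set tuning parameters, and the default values are illustrative:

```python
import math
import numpy as np

def _cdf(z):
    """Standard normal CDF, vectorized via math.erf."""
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))

def _pdf(z):
    """Standard normal PDF."""
    return np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)

def probability_of_improvement(mu, sd, best, xi=0.01):
    """PI: probability the posterior exceeds the incumbent by at least xi.
    Small xi -> highly local search; large xi -> excessively global (Jones)."""
    return _cdf((mu - best - xi) / sd)

def expected_improvement(mu, sd, best):
    """EI: expected improvement over the incumbent; balances the
    high-mean (exploitation) and high-variance (exploration) terms."""
    z = (mu - best) / sd
    return (mu - best) * _cdf(z) + sd * _pdf(z)

def confidence_bound(mu, sd, beta=2.0):
    """CB (upper-bound form for maximization, per Cox and John)."""
    return mu + beta * sd
```

With equal means, all three criteria prefer the candidate with larger posterior uncertainty, which is the exploration behavior described above.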

## 3 Problem Description

In this section, we describe the thin tube design problem, which serves as the proof of concept for the large complex design of the diffusion bonded CHX. As the tube is assumed to undergo constant loading of temperature and pressure, there will be a risk of creep-fatigue failure which varies with the design geometry. *Fatigue damage* is created when one cycles a test specimen at a fixed stress amplitude for enough cycles that it develops microstructural damage and eventually fails. *Creep damage* is created when one holds a test specimen at a fixed load long enough that it eventually develops microstructural damage and fails. *Creep-fatigue damage* results from doing both simultaneously (i.e., a stress-controlled cycle with a hold), and the specimen will generally fail sooner than under the cycling and the hold applied individually. As mentioned previously, our goal is to predict the transition region between the safe and unsafe regions as defined in Sec. 1.2. As design variables that influence creep-fatigue behavior, we choose the radius (*rad*) and length (*l*) of the tube. Next we describe the experimentation and the formulation of the objective function, which depends on the experimental results and prior knowledge in the domain of solid mechanics.

### 3.1 Model Experiments.

In this section, we provide the computation of the location of any design in terms of Elastic, Plastic, Shakedown, and Ratchetting, and the respective strain accumulation. We represent these outputs as responses from expensive experiments. In our thin tube problem, although these computations are inexpensive and can be done analytically, we still treat them as expensive function evaluations, which will be the case for our future problem of the actual diffusion bonded CHX geometry, where expensive Finite Element Analysis (FEA) is required. The computations are based on the formulation of a *Bree diagram* [8]. In this paper, we consider the Bree diagram for a nonwork-hardening material whose yield stress remains unchanged by changes in mean temperature, as provided in the Supplemental Materials on the ASME Digital Collection. For simplicity, we ignore the further division of Shakedown (*S*1, *S*2) and Ratchetting (*R*1, *R*2) shown in the figure and assume a single Shakedown (*S*) and Ratchetting (*R*) region, because, for the purpose of our problem, any design in Shakedown is considered safe, while any design in Ratchetting is considered unsafe.

Below are the steps for computation of the various stresses and strains for the thin tube required for our methodology:

*Step 1*: Calculate the pressure and temperature stresses:

(1) $\sigma_p = P \cdot rad / d$

(2) $\sigma_t = E\,\alpha\,\Delta T / (2(1-\rho))$

where

(3) $\Delta T = \Delta T_{slop} \cdot l + T_{in}$

(4) $\Delta T_{slop} = (-T_{in} + T_{out}) \cdot \dfrac{rad - rad_{min}}{rad_{max} - rad_{min}}$

Here *σ*_{p} and *σ*_{t} are the pressure and temperature stresses; *P* is the internal pressure applied to the tube, taken as 25 MPa; *rad* is the radius; *d* is the wall thickness; *l* is the length; *E* = 200 GPa is the Young's modulus; *α* = 16*e* − 6 is the thermal coefficient of linear expansion; *ρ* = 0.27 is the Poisson's ratio; Δ*T* is the temperature drop across the wall, with *T*_{in} = 400 °C and *T*_{out} = 20 °C the inlet and outlet temperatures; *rad*_{min} and *rad*_{max} are the minimum and maximum radius.

*Step 2*: Determine the region of the design:

Case 1 ($\sigma_p \le 0.5\,\sigma_y$, where *σ*_{y} = 205 MPa is the yield stress):

- If $\sigma_t < 2\sigma_y$, the design is in the *Elastic* or *Shakedown* (Safe) region;
- else, if $\sigma_t > 2\sigma_y$, the design is in the *Plastic* or *Ratchetting* (Unsafe) region; within this region, if $\sigma_p \sigma_t \le \sigma_y^2$, the design is in *Plastic*;
- else, if $\sigma_t = 2\sigma_y$, the design is at the transition line.

Case 2 ($\sigma_p > 0.5\,\sigma_y$):

- If $\sigma_p + 0.25\,\sigma_t < \sigma_y$, the design is in the *Elastic* or *Shakedown* (Safe) region;
- else, if $\sigma_p + 0.25\,\sigma_t > \sigma_y$, the design is in the *Ratchetting* (Unsafe) region;
- else, if $\sigma_p + 0.25\,\sigma_t = \sigma_y$, the design is at the transition line.

*Step 3*: Calculate the strain accumulation:

- If the design is in *Elastic*/*Shakedown*, the strain *ɛ*_{s} can be calculated as

  (5) $\epsilon_s = \dfrac{2}{E}\left(\sigma - \dfrac{x}{d}\,\sigma_t\right)$

  where

  (6) $\sigma = \sigma_p + 2\,\dfrac{x}{d}\,\sigma_t$

  and *x* is the section of the tube wall, which varies from 0 at the outer wall to *d* at the inner wall. Since our problem is subjected to internal pressure, the maximum stress is at the inner wall of the tube; we therefore consider the worst condition and focus on the stress at the inner wall, at *x* = *d*.
- If the design is in *Plastic*, the strain *ɛ*_{p} can be calculated as

  (7) $\epsilon_p = \dfrac{(\sigma_t - 2\sigma_y)\,n}{E}$
- If the design is in *Ratchetting*, the strain *ɛ*_{r} can be calculated as

  (8) $\epsilon_r = \dfrac{2n\,\sigma_t}{E}\left(1 - \dfrac{2(\sigma_y - \sigma_p)}{\sigma_t}\right)$

  where *n* is the number of cycles; in this problem, we considered *n* = 50.
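Steps 1–3 can be collected into a single routine. The following is a minimal sketch, with units assumed to be MPa, using the constant values stated in Step 1; the function name, the string region labels, and the reading of the grouped factors in Eqs. (5)–(6) as 2/E and x/d are our own assumptions, not production analysis code:

```python
# Material / loading constants from Step 1 (stresses in MPa, temps in degC)
P, E, ALPHA, RHO = 25.0, 200e3, 16e-6, 0.27
T_IN, T_OUT, SIG_Y, N_CYC = 400.0, 20.0, 205.0, 50

def thin_tube_response(rad, l, d, rad_min, rad_max):
    """Steps 1-3: stresses, Bree-diagram region, and strain accumulation.
    Returns (sigma_p, sigma_t, region, strain); strain is None on the
    transition line, where Step 3 is skipped."""
    # Step 1: pressure and temperature stresses, Eqs. (1)-(4)
    sig_p = P * rad / d
    dT_slope = (-T_IN + T_OUT) * (rad - rad_min) / (rad_max - rad_min)
    dT = dT_slope * l + T_IN
    sig_t = E * ALPHA * dT / (2.0 * (1.0 - RHO))
    # Step 2: region classification (Cases 1 and 2)
    if sig_p <= 0.5 * SIG_Y:                       # Case 1
        if sig_t < 2.0 * SIG_Y:
            region = 'safe'                        # Elastic / Shakedown
        elif sig_t > 2.0 * SIG_Y:
            region = 'plastic' if sig_p * sig_t <= SIG_Y ** 2 else 'ratchetting'
        else:
            region = 'transition'
    else:                                          # Case 2
        lhs = sig_p + 0.25 * sig_t
        region = ('safe' if lhs < SIG_Y else
                  'ratchetting' if lhs > SIG_Y else 'transition')
    # Step 3: strain accumulation
    if region == 'safe':
        x = d                                      # inner wall, worst case
        sigma = sig_p + 2.0 * (x / d) * sig_t              # Eq. (6)
        strain = (2.0 / E) * (sigma - (x / d) * sig_t)     # Eq. (5)
    elif region == 'plastic':
        strain = (sig_t - 2.0 * SIG_Y) * N_CYC / E         # Eq. (7)
    elif region == 'ratchetting':
        strain = (2.0 * N_CYC * sig_t / E) * \
                 (1.0 - 2.0 * (SIG_Y - sig_p) / sig_t)     # Eq. (8)
    else:
        strain = None                              # transition line: skip Step 3
    return sig_p, sig_t, region, strain
```

For example, a thick-walled, long tube near the minimum radius sees a large ΔT and lands in the unsafe region, while a thin, large-radius tube with small ΔT stays safe.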

It is to be noted that when a design is at the transition line per Step 2, we skip the Step 3 strain calculation for that design, because Eqs. (5), (7), and (8) are all equally justified there, and this creates the jump discontinuity (see Fig. 3 in the Supplemental Materials on the ASME Digital Collection for a 1D example). In Sec. 3.2, we present the formulation of the distance metric, which mitigates this discontinuity issue and is suitable for the BO framework.

### 3.2 Formulation of Distance Metric.

In this section, we provide the formulation of the distance metric. Although we can obtain the strain accumulation for a particular design from the model experiments, we do not have a good idea of the strain accumulation for a design close to the transition region, where we do not know which equation (Eq. (5), (7), or (8)) applies. Therefore, with the value of strain alone, it is difficult to formulate an objective function in which we either maximize or minimize the strain accumulation in the BO model in order to maximize accuracy and iteratively get closer to the unknown transition region. Also, the jump discontinuity lies at the transition line between the safe and unsafe regions in the design space of strain accumulation. Therefore, we propose to formulate a new function with the help of the experimental results, which we define as a distance function, *Y*, by transforming the original discontinuous design space into an artificially created continuous design space. The computation of the distance value for any design is based on the heuristic that, given two designs in the Shakedown (Safe) region, the design with more strain accumulation is closer to the unknown transition region and is therefore assigned a higher value. The reverse holds for designs in the Plastic or Ratchetting (Unsafe) region: of any two designs in those regions, the design with lower strain accumulation is closer to the unknown transition region and is therefore assigned a lower value. This prior knowledge helps us build our distance function, where we first separate the sampled designs (prior data) by region, which can be evaluated from experiments (Step 2 in Sec. 3.1). It is worth noting that for the complex problem of the diffusion bonded CHX, the determination of the region for a design must be conducted from FEA.
After we separate all the sampled designs into the regions of Elastic/Shakedown, Plastic, and Ratchetting, we assume a linear increase of strain accumulation with increasing risk of creep-fatigue failure and build our formula for computing the distance value of the *i*th design at iteration *k* of the BO model, *Y*_{k,i}, as follows:

- For design *i* in *Elastic/Shakedown*:

  $Y_{s,k,i} = YS_{min} + \dfrac{\epsilon_{s,i} - \min(\epsilon_{s,k})}{\max(\epsilon_{s,k}) - \min(\epsilon_{s,k})}\,(YS_{max} - YS_{min})$
- For design *i* in *Plastic*:

  $Y_{p,k,i} = YP_{min} + \dfrac{\epsilon_{p,i} - \min(\epsilon_{p,k})}{\max(\epsilon_{p,k}) - \min(\epsilon_{p,k})}\,(YP_{max} - YP_{min})$
- For design *i* in *Ratchetting*:

  $Y_{r,k,i} = YR_{min} + \dfrac{\epsilon_{r,i} - \min(\epsilon_{r,k})}{\max(\epsilon_{r,k}) - \min(\epsilon_{r,k})}\,(YR_{max} - YR_{min})$

where *ɛ*_{s,i}, *ɛ*_{p,i}, *ɛ*_{r,i} are the strain accumulations of design *i*, given that the design falls into Elastic/Shakedown, Plastic, or Ratchetting, respectively; min(*ɛ*_{s,k}), max(*ɛ*_{s,k}) are the minimum and maximum strain accumulation among all sampled designs (training data) in Shakedown at iteration *k*, and likewise min(*ɛ*_{p,k}), max(*ɛ*_{p,k}) for Plastic and min(*ɛ*_{r,k}), max(*ɛ*_{r,k}) for Ratchetting; *YS*_{min}, *YS*_{max} are the minimum and maximum distance function bounds for designs in Elastic/Shakedown, set as 0 and 0.45, respectively; *YP*_{min}, *YP*_{max} are the bounds for designs in Plastic, set as 0.55 and 1, respectively; *YR*_{min}, *YR*_{max} are the bounds for designs in Ratchetting, set as 0.51 and 1, respectively. Changing the values of *YS*_{max}, *YP*_{min}, *YR*_{min} changes the efficiency of the model in terms of accuracy and cost of function evaluations; a sensitivity analysis has therefore been done within a recommended range of values, described later. However, the values given have been found to produce consistent performance in terms of accuracy.

The idea of the objective is that the design samples, at iteration *k*, in the Elastic/Shakedown region which are nearest to the predicted transition region will have *Y*_{s,k,i} = 0.45, and the design samples in the Elastic/Shakedown region which are farthest from it will have *Y*_{s,k,i} = 0. All other samples, or training data, in the Elastic/Shakedown region will have values within the range [0–0.45] based on their closeness to the predicted transition region. Similarly, at iteration *k*, the sample in the Plastic or Ratchetting region nearest to the predicted transition region will have *Y*_{p,k,i} or *Y*_{r,k,i} at its lower bound (0.55 or 0.51), and the sample in the Plastic or Ratchetting region farthest from it will have *Y*_{p,k,i} or *Y*_{r,k,i} = 1; all other samples in the Plastic or Ratchetting region will have values within that range based on their closeness to the predicted transition region. The width of the transition region is thus set in this case as [0.45–0.55], assuming the true transition or constraint boundary line is at *Y*_{s,i} = *Y*_{p,i} = *Y*_{r,i} = 0.5 for any iteration of the BO model. In our formulation, this setting of a distance value of 0.5 for any design at the transition line builds the continuity in the design space. It is to be noted that, knowing the distance value at the transition line, we attempt to optimize the location of the unknown transition region, given the width of the region. Locating the exact transition line or constraint boundary would require exhaustive experiments or function evaluations and may cause overfitting in the prediction used to classify safe and unsafe designs; we therefore assume that sequentially improving the BO prediction of the location of the transition region also increases accuracy in locating the transition line (constraint boundary line).

To summarize, the reasons to construct the distance function are (1) an output given as a discrete region label is not useful in the BO framework: we need to transform region knowledge into a continuous metric, and (2) it allows us to define our objective in the BO framework in terms of finding the transition region between Elastic/Shakedown versus Plastic and Ratchetting. Note that this is a sequential design approach: with more training data (increased prior knowledge), the distance function values *Y* for all the training data, except at the transition line, change and are re-computed at each iteration.
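As a concrete illustration, the min-max scaling of Eqs. (9)–(11) can be sketched as below. This is a hypothetical Python sketch: the function name, the data layout, and the handling of a degenerate region (all strains equal) are our assumptions, not the paper's implementation.

```python
import numpy as np

# Distance-function bounds quoted in the text: Elastic/Shakedown maps into
# [0, 0.45], Plastic into [0.55, 1], and Ratchetting into [0.51, 1].
BOUNDS = {
    "shakedown": (0.00, 0.45),
    "plastic": (0.55, 1.00),
    "ratchetting": (0.51, 1.00),
}

def distance_values(strain, region):
    """Min-max scale the strain accumulations of all sampled designs in one
    region into that region's distance band at the current iteration."""
    lo, hi = BOUNDS[region]
    strain = np.asarray(strain, dtype=float)
    span = strain.max() - strain.min()
    # Degenerate case (identical strains): map to the lower bound (assumption).
    scaled = (strain - strain.min()) / span if span > 0 else np.zeros_like(strain)
    return lo + scaled * (hi - lo)
```

Because the minima and maxima are taken over the current training set, these values must be re-computed at every iteration as new samples arrive, exactly as noted above.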

## 4 Design Methodology

Figure 5 shows the detailed structure of the proposed Bayesian Optimization framework. Below is the algorithm, with an explanation of each step of the proposed Bayesian Optimization classification method to predict the transition region between the safe and unsafe regions for the thin tube problem described in Sec. 3.1; the general algorithm, however, is applicable to larger scale problems such as the diffusion bonded CHX.

*Step-by-Step description for proposed Bayesian Optimization to find the transition region between safe and unsafe designs of the thin tube:*

*Step 0 (Initialization)*: Define the design space or the region of interest for the given problem. From the defined design space, generate the grid matrix $\bar{\bar{X}}$ using a DOE approach. Conduct function evaluations or experiments on a very limited number of randomly generated samples in the design space. In our thin tube problem, we choose ten randomly selected designs as starting samples, which are not included in $\bar{\bar{X}}$.

*Step 1: Build a training data matrix with the sampled designs:* The data consist of **X** as design input variables and **Y** as output functions. In our problem, we define **X** as the matrix of design geometries, radius (*rad*) and length (*l*) of the thin tube, and **Y** as the vector of distance values of the respective sampled designs as described in Sec. 3. Create the training data matrix at iteration *k*, *D*_{k} = {*X*_{k}, *Y*(*X*_{k})}. Note that *D*_{k} contains the distance values for all sampled designs from any region. Also, as in the Bree diagram, it is possible to choose the pressure and temperature stresses as the design variables. However, with design geometry as the design variables, it is more useful for a designer to directly visualize and understand the efficiency of the designs.

*Step 2: GPM:* Next, with the knowledge gained from previous experiments (prior knowledge), *D*_{k} = {*X*_{k}, *Y*(*X*_{k})}, we *develop a single posterior Gaussian Process model*.

*Step 3:* Use the posterior GP model $\Delta_k$ to conduct posterior predictive simulations of the nonsampled designs in the grid matrix $\bar{\bar{X}}$ and predict the respective mean and mean squared error (MSE) of the distance values, forming the two vectors $\mu(\bar{\bar{Y}}(\bar{\bar{X}}))|\Delta_k$ and $\sigma^2(\bar{\bar{Y}}(\bar{\bar{X}}))|\Delta_k$, respectively.

*Step 4: Define objectives to sample designs toward the unknown transition region:* Now that we have the vector of predictive posterior means of all the nonsampled designs, we need to define our objective, for which the acquisition function will be formulated. In this classification problem between safe and unsafe designs, our goal is to maximize sampling of designs toward the unknown transition boundary and thus sequentially train the BO model with higher accuracy near the transition region. Maximizing the distance function for the set of designs in Elastic/Shakedown (safe) steers the optimal region toward the transition region, since the Shakedown designs closest to the transition region have higher distance values. Similarly, minimizing the distance function for the set of designs in Plastic and Ratchetting (unsafe) steers the optimal region toward the transition region, since the designs in those regions closest to the transition region have lower distance values. To convert this into a single-objective maximization problem, we perform an elementwise transformation of the vector $\mu(\bar{\bar{Y}}(\bar{\bar{X}}))|\Delta_k$. Let us define the transformed mean vector as $\mu_T(\bar{\bar{Y}}(\bar{\bar{X}}))$, obtained by the following elementwise operation, where $\bar{\bar{y}}$ is the scalar posterior mean value of the nonsampled design $\bar{\bar{x}}$:

(12)
$$\mu_T(\bar{\bar{y}}(\bar{\bar{x}})) = \begin{cases} \mu(\bar{\bar{y}}(\bar{\bar{x}})) & \text{if } \bar{\bar{x}}\ \text{region} \in \text{Elastic/Shakedown} \\ 1 - \mu(\bar{\bar{y}}(\bar{\bar{x}})) & \text{if } \bar{\bar{x}}\ \text{region} \in \text{Plastic/Ratchetting} \end{cases}$$

*Step 5: Define the Acquisition Function and maximize the Acquisition function* *u*(.): After the transformation in Step 4, we calculate the acquisition function value elementwise as $u(\bar{\bar{y}}(\bar{\bar{x}})|\Delta_k)$ for each nonsampled design, considering the respective mean and MSE values in the vectors $\mu_T(\bar{\bar{Y}}(\bar{\bar{X}}))$ and $\sigma^2(\bar{\bar{Y}}(\bar{\bar{X}}))$. Thus, we develop the vector of acquisition function values $u(\bar{\bar{Y}}(\bar{\bar{X}})|\Delta_k)$. A selection criterion is applied to choose the new design location for future sampling, $\{\bar{\bar{x}}_{max}\};\ \bar{\bar{x}}_{max} \in \bar{\bar{X}}$, which maximizes the predicted improvement in the learning of the unknown design space (maximizing the acquisition function). Thus, we select the design with the maximum acquisition function value as

(13)
$$\bar{\bar{y}}_{max}(\bar{\bar{x}}_{max}) = \max\left(u(\bar{\bar{Y}}(\bar{\bar{X}})|\Delta_k)\right)$$

Augment the data, $\bar{\bar{D}}_k = \{D_k; (\bar{\bar{x}}_{max}, \bar{\bar{y}}_{max})\}$. The methodology used to compute the acquisition function is described in Sec. 4.3.

*Step 6:* Check convergence criterion 1. If it is not met, run *j* = 1:*n* loops of Steps 2–6; each loop takes one optimal design location $\{\bar{\bar{x}}_{max,j}\}$, so as to select the best *n* design locations $\bar{\bar{X}}_{max} = \{\bar{\bar{x}}_{max,1}, \ldots, \bar{\bar{x}}_{max,n}\}$ to proceed to the next round of experiments. This step provides multiple experimental data points in a single round of experiments, since it would be unrealistic and time consuming to provide one experiment at a time. The assumption behind this step is that we believe the GP prediction at $\{\bar{\bar{x}}_{max,j}\}$ is accurate and proceed to the next best location $\{\bar{\bar{x}}_{max,j+1}\}$ by minimizing the error at the currently selected location $\{\bar{\bar{x}}_{max,j}\}$. We believe this is a fair assumption since, with more knowledge, the GP prediction will be close to the actual experimental data. In early rounds of experiments we might see deviations from the actual experimental results (not following the assumption), but with the knowledge from those experiments the GP eventually improves and provides predictions closer to the actual experimental results as the model converges (following the assumption).

*Step 7: Expensive Function evaluations:* Conduct experiments for the new design locations $X_{k+1} = \bar{\bar{X}}_{max}$. This step is outside the model environment, as actual experiments are conducted with the original high-fidelity model to generate the required outputs (strain accumulation and the location of the designs), which are ultimately used to compute the distance metric (Eqs. (9)–(11)). Therefore, the new experimental data are {*X*_{k+1}, *Y*(*X*_{k+1})}.

*Step 8: Data Augmentation:* Update the prior knowledge for the next iteration of the model. Update the training data matrix with the current experimental data, *D*_{k+1} = {*D*_{k}; (*X*_{k+1}, *Y*(*X*_{k+1}))}. Repeat Steps 2–8 until convergence.

*Step 9:* If convergence criterion 2 is met, update the GP with the final training data $\Lambda$, augmented with the final sampled data, and stop the model. Convergence criteria 1 and 2 are explained later in this section.

*Step 10: Feasibility check between safe and unsafe regions:* This step occurs after the optimization is completed and the proposed BO model is fully trained, satisfying the convergence criteria. Now, instead of running expensive evaluations, to classify any new design as safe or unsafe, we check feasibility using the trained low-cost BO model:

- If the posterior predictive mean of the new design, $\mu(Y_{new}(X_{new})|\Lambda) \le 0.5$, the new design is in the safe region.
- Otherwise, the new design is in the unsafe region.

The value 0.5 is the threshold as we set this distance value at the transition boundary line.
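The steps above can be condensed into a short driver loop. The sketch below is illustrative only: `fit_gp`, `run_experiment`, and `acquisition` are hypothetical callables standing in for the GPM, the high-fidelity evaluations, and Sec. 4.3's acquisition function. Since the region of a nonsampled design is unknown, the Eq. (12) transformation is approximated here by folding the predicted mean about the 0.5 threshold (our assumption).

```python
import numpy as np

def bo_transition_search(grid, run_experiment, fit_gp, acquisition,
                         X0, n_batch=2, max_evals=50, alpha=1e-3):
    """Sketch of Steps 1-9. fit_gp(X, Y) returns a model whose
    .predict(grid) gives (mean, mse) vectors over the nonsampled grid;
    run_experiment(X) returns distance values from the high-fidelity model."""
    X = np.array(X0, dtype=float)
    Y = run_experiment(X)                                   # Step 0
    while len(X) < max_evals:
        Xb, Yb, batch = X, Y, []
        for _ in range(n_batch):                            # Step 6: pick n designs
            model = fit_gp(Xb, Yb)                          # Steps 1-2
            mean, mse = model.predict(grid)                 # Step 3
            # Step 4 / Eq. (12): fold both sides of the boundary toward 0.5
            mean_t = np.where(mean <= 0.5, mean, 1.0 - mean)
            u = acquisition(mean_t, mse)                    # Step 5
            if u.max() < alpha:                             # Convergence 1
                return fit_gp(X, Y)
            i = int(np.argmax(u))
            batch.append(grid[i])
            # trust the GP at the chosen design when picking the next one
            Xb = np.vstack([Xb, grid[i]])
            Yb = np.append(Yb, mean[i])
        Xnew = np.array(batch)                              # Step 7: experiments
        X = np.vstack([X, Xnew])
        Y = np.append(Y, run_experiment(Xnew))              # Step 8: augment
    return fit_gp(X, Y)                                     # Step 9 / budget hit
```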

### 4.1 Gaussian Process Model Formulation of the Thin Tube.

$$y(x) = x^{T}\beta + z(x)$$

*x*^{T}*β* is the Polynomial Regression model. In our model, we have used first- and second-order polynomial regression models. The polynomial regression model captures the global trend of the data. In general, a first-order polynomial regression is used, which is also known as universal kriging [43]; however, it has also been claimed that it is fine to use a constant mean model [44].

*z*(*x*) is a realization of a correlated Gaussian Process with mean *E*[*z*(*x*)] = 0 and covariance function *cov*(*x*^{i}, *x*^{j}) defined as follows:

$$cov(x^{i}, x^{j}) = \sigma^{2} R(x^{i}, x^{j})$$

*σ*^{2} is the overall scale parameter and *θ*_{m} is the correlation length parameter in dimension *m* of the *d* dimensions of *x*. These are termed the hyper-parameters of the GP model. *R*(*x*^{i}, *x*^{j}) is the spatial correlation function. In our model, we have used a Gaussian spatial correlation function, which is given by Eq. (17):

(17)
$$R(x^{i}, x^{j}) = \exp\left(-\sum_{m=1}^{d} \theta_m \left(x_m^{i} - x_m^{j}\right)^{2}\right)$$

The objective is to estimate (by maximum likelihood) the hyper-parameters *σ*, *θ*_{m} that create the surrogate model that best explains the training data *D*_{k} at iteration *k*.

*D*_{k} = {*X*_{k}, *Y*(*X*_{k})} is the prior information from previous experiments with the high-fidelity models, representing realizations of the prior belief about the unknown true functions, and $\bar{\bar{x}}_{k+1} \in \bar{\bar{X}}$ is any new design. The predictive output distribution of *x*_{k+1}, given the posterior GP model, is given by Eq. (18). *COV*_{k} is the kernel matrix of the already sampled designs *X*_{k} and *cov*_{k+1} is the covariance vector of the new design $\bar{\bar{x}}_{k+1}$, defined as follows:

$$COV_k = \left[cov(x^{i}, x^{j})\right]_{i,j=1}^{k}, \qquad cov_{k+1} = \left[cov(\bar{\bar{x}}_{k+1}, x^{i})\right]_{i=1}^{k}$$
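As a numerical illustration of the GP predictor above, the following self-contained sketch implements a constant-mean (ordinary) kriging predictor with the Gaussian correlation of Eq. (17). It simplifies the model described in this section: the polynomial trend is reduced to a constant, σ² is taken as given rather than estimated by maximum likelihood, and the MSE omits the trend-estimation term, so it should be read as a sketch, not as the DACE implementation used in the paper.

```python
import numpy as np

def gaussian_corr(A, B, theta):
    """Gaussian spatial correlation R(x_i, x_j) = exp(-sum_m theta_m dx_m^2)."""
    d2 = (A[:, None, :] - B[None, :, :]) ** 2
    return np.exp(-(d2 * np.asarray(theta)).sum(axis=2))

def krig_predict(Xk, Yk, Xnew, theta, sigma2=1.0, nugget=1e-10):
    """Posterior mean and (simplified) MSE at Xnew given D_k = {Xk, Y(Xk)}."""
    Xk, Xnew, Yk = np.atleast_2d(Xk), np.atleast_2d(Xnew), np.asarray(Yk)
    R = gaussian_corr(Xk, Xk, theta) + nugget * np.eye(len(Xk))   # COV_k
    r = gaussian_corr(Xnew, Xk, theta)                            # cov_{k+1}
    Ri = np.linalg.inv(R)
    one = np.ones(len(Xk))
    beta = (one @ Ri @ Yk) / (one @ Ri @ one)   # GLS estimate of constant mean
    mean = beta + r @ Ri @ (Yk - beta)
    mse = sigma2 * (1.0 - np.einsum("ij,jk,ik->i", r, Ri, r))
    return mean, np.maximum(mse, 0.0)
```

At sampled designs the predictor interpolates the training data with near-zero MSE, which is exactly the behavior the acquisition function exploits when steering new samples toward uncertain regions.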

### 4.2 Generating Grid Points From Unknown Design Spaces of Thin Tube.

In this section, we discuss the generation of grid points within the specified design spaces; the grid points selected by the acquisition function are then treated as samples for experiments. The goal of generating a grid using a rectangular grid or a Latin hypercube design is to exploit their space-filling properties to cover the entire design space of the unknown response surface. More details on the formulation and sampling strategies of these two methods are provided in [45]. However, the proposed model is not restricted to these two methods, and the user can select a preferred sampling strategy.
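For instance, a Latin hypercube grid over the thin-tube design space of Sec. 5 (radius in [4, 6.55] mm, length in [0.1, 1] m) could be generated as follows; the grid size and seed here are arbitrary choices for illustration.

```python
from scipy.stats import qmc

# Space-filling samples in the unit square, then scaled to the design bounds.
sampler = qmc.LatinHypercube(d=2, seed=0)
unit = sampler.random(n=500)
grid = qmc.scale(unit, l_bounds=[4.0, 0.1], u_bounds=[6.55, 1.0])
```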

### 4.3 Acquisition Function Formulation of the Thin Tube.

In this section, we provide a detailed formulation of the acquisition function for Step 4 of the proposed model. Three types of acquisition functions have been studied in the model: *Probability of Improvement*, *Expected Improvement*, and *Full Exploration* search. The first acquisition function is built on the idea of pure exploitation (selecting design points where the predicted mean is high); the second combines exploitation with exploration (selecting design points where the predicted variance is high). The final acquisition function is based on exploration only. It is very useful when the design space is very flat and the global optimal solution is confined to a very small region. With the first two acquisition functions, it has been observed that the model can fall into false convergence, since the design space appears flat with the limited samples of the early iterations: the acquisition function predicts very low probability of improvement or expected improvement because the responses have similar values for all the experimented design inputs. Thus, when the design surface is unknown and could be very flat in most regions, it is important to use a full exploration acquisition function in the early iterations of the model to ensure that no potential optimal region is missed. Once we find a sample within the confined interesting region, we can switch back to an exploitation-exploration search to avoid unnecessary selection of experimental samples in the nonoptimal regions. In our model, we have set the switching criterion as follows:

*do Full Exploration search* if $\max(Y_k) - \min(Y_k) \le \delta,\ k = 1, 2, \ldots, K$; *else do Expected Improvement or Probability of Improvement*

*δ* is a very small value, which is set as 0.1. We can also set *δ* as a percentage (say 1%) of the mean of *Y*_{k} at iteration *k*. Below are the equations for the acquisition functions: Probability of Improvement (Eq. (22)), Expected Improvement (Eqs. (23) and (24)), and Full Exploration (Eq. (25)).

*y*(*x*^{+}) is the maximum actual response among all the experimented data up to the current stage, attained at *x* = *x*^{+}; $\mu(\bar{\bar{y}})$ and $\sigma^2(\bar{\bar{y}})$ are the predicted mean and MSE from the GPM for a nonsampled design $\bar{\bar{x}} \in \bar{\bar{X}}$; Φ(.) is the cdf; *ϕ*(.) is the pdf; *ξ* ≥ 0 is a small value, recommended to be 0.01 [22], as this works well in most cases, whereas a cooling function for *ξ* did not. Jones [41] notes that the performance of PI(·) is highly sensitive to the value of *ξ*, with nonideal values leading to poor performance.
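The standard closed forms of these acquisition functions, applied to the transformed means of Step 4, can be sketched as follows. The use of the predictive MSE itself as the Full Exploration score is our reading of Eq. (25), and the zero-variance guard is an implementation assumption.

```python
import numpy as np
from scipy.stats import norm

def probability_of_improvement(mu, sigma2, y_best, xi=0.01):
    """PI (Eq. (22)): probability the prediction improves on y(x+) by xi."""
    s = np.sqrt(np.maximum(sigma2, 1e-24))   # guard against zero variance
    return norm.cdf((mu - y_best - xi) / s)

def expected_improvement(mu, sigma2, y_best, xi=0.01):
    """EI (Eqs. (23)-(24)): exploitation term plus exploration term."""
    s = np.sqrt(np.maximum(sigma2, 1e-24))
    z = (mu - y_best - xi) / s
    return (mu - y_best - xi) * norm.cdf(z) + s * norm.pdf(z)

def full_exploration(sigma2):
    """Full Exploration: sample where the predictive uncertainty is largest."""
    return np.asarray(sigma2)
```

With equal predictive variances, EI ranks a design whose transformed mean is near the incumbent-beating value above one farther away, which is what drives sampling toward the 0.5 transition band.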

### 4.4 Convergence Criteria.

In this section, we discuss the convergence criteria established for the model. From the steps of the proposed BO model, there are two convergence checks, in Step 6 and in Step 9 of Sec. 4. The convergence criterion in Step 6 is Convergence 1 and the convergence criterion in Step 9 is Convergence 2. If either convergence check succeeds, the model stops and returns the final solution:

Convergence 1:

The maximum improvement value of the acquisition function in selecting the first design sample (first iteration in Step 6, i.e., *j* = 1) after conducting actual experiments is less than *α* = 0.001. Mathematically, it can be stated as

$$\max\left(u(\bar{\bar{Y}}(\bar{\bar{X}})|\Delta_k)\right) < \alpha, \quad j = 1$$

Convergence 2:

- The absolute difference in the total mean MSE of the predicted responses in *m* successive iterations is less than *α*_{1}:

(27)
$$\left|\mu\left(\sigma^2(\bar{\bar{Y}}_k)\right) - \mu\left(\sigma^2(\bar{\bar{Y}}_{k+m})\right)\right| \le \alpha_1$$

where $\bar{\bar{Y}}_k$ is the column vector of all the predicted values of matrix $\bar{\bar{X}}$ at iteration *k*.
- Stopping the model after exhausting the budget in terms of the maximum number of experiments or function evaluations, i.e., $\sum_k n_k \ge S$, where *S* is the maximum number of function evaluations possible and *n*_{k} is the number of samples selected for experiments at the *k*th iteration.
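The two criteria can be sketched as simple predicates. Here α = 0.001 is the value quoted above; *α*₁, *m*, and *S* are problem-dependent, and the defaults below are placeholders of our choosing.

```python
import numpy as np

def convergence_1(u, alpha=1e-3):
    """Convergence 1: the best acquisition value on the first pick (j = 1)
    of a batch falls below alpha."""
    return float(np.max(u)) < alpha

def convergence_2(mean_mse_history, n_evals, alpha_1=1e-4, m=2, S=200):
    """Convergence 2: the mean predictive MSE stagnates over m successive
    iterations (Eq. (27)), or the evaluation budget S is exhausted."""
    stagnant = (len(mean_mse_history) > m and
                abs(mean_mse_history[-1] - mean_mse_history[-1 - m]) <= alpha_1)
    return stagnant or n_evals >= S
```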

## 5 Results

In this section, we show the results of the proposed Bayesian optimization framework on the design of the tube, in terms of its performance in finding the transition region between the safe and unsafe regions. We used the DACE package [46] in matlab to fit the GP model in the Bayesian optimization. With radius (*rad*) and length (*l*) of the tube as decision variables, two test scenarios have been considered with different tube thicknesses: 1.7 mm and 1.2 mm. The feasible bounds for radius and length are [4–6.55] mm and [0.1–1] m, respectively. Figures 6–9 show the results after the model satisfies convergence criterion 1 for the two different tube thicknesses.

### 5.1 Predicted Transition Region of Converged Model.

The pink and black dots in Figs. 6 and 7 represent the random starting samples and the BO-guided adaptive sample design locations that have been trained from actual function evaluations as described in Sec. 3.1. The final posterior predicted transition region, which also represents the discontinuity or the constraint boundary region, has been developed based on those prior training data only; therefore, designers can make decisions about the feasibility of any new design in the specified design space based on this small sample of data, instead of undergoing further experiments. The green and red highlighted regions represent the final transition region, defined as predicted distance function values **Y** ranging between 0.4–0.5 and 0.5–0.6, respectively, given the prior training data. The green highlighted region represents the area in the Shakedown region (safe) that is very close to the transition region near the constraint boundary line. The red highlighted region represents the area in the Plastic or Ratchetting region (unsafe) that is very close to the transition region near the constraint boundary line.

From the visualization, we can see that a design that falls *above* the green region is most likely safe and a good design. A design that falls *within* the green or red region is very close to the transition region and is therefore recommended for further analysis. Any design that falls *below* the red region is most likely unsafe and susceptible to creep-fatigue failure. The converged results of both scenarios from the proposed BO framework have been compared with the true solution (Bree diagram of the thin tube) in terms of pressure and temperature stresses in Figs. 8 and 9. The grey shadowed part is the region of interest of our test cases, where we show the predicted transition region (denoted by red and green) centered about the true transition line. We know from the Bree diagram that below the solid black and dashed red lines is the true Elastic/Shakedown (safe) region, and above them are the Plastic and Ratchetting regions (unsafe), respectively. The region of interest in Fig. 8 does not cover the Ratchetting region; thus, we see the red region above the solid black line (toward the Plastic region) and the green region below the solid black line (toward the Elastic region). The region of interest in Fig. 9 is more complicated, since it covers the Elastic/Shakedown (below the black and red dashed lines), Plastic (above the black line), and Ratchetting (above the red dashed line) regions. Understandably, due to the more complex design response surface, the model needed more training data (black dots) for the 1.2 mm thickness than for the 1.7 mm thickness to reach convergence.

### 5.2 Classification Error.

Next, we consider 100,000 randomly selected new designs (test data) from Latin Hypercube sampling in the same design space for validation, classifying them between the safe and unsafe regions (Step 10 in Sec. 4). Tables 1 and 2 provide the confusion matrices for both test scenarios, with a classification error rate of **0.42%** when *YS*_{max} = 0.45 and *YP*_{min} = *YR*_{min} = 0.55. We found some incorrect classifications, as the BO model optimizes for a transition region rather than the true line. However, our assumption appears reasonable, as optimizing the model to locate the transition region provides efficient learning and high accuracy (error rate < 1%) in locating the true constraint boundary line.
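The validation step itself is simply the Step 10 threshold rule applied to held-out designs; an illustrative error-rate computation (function and argument names are ours) is:

```python
import numpy as np

def classification_error(mu_test, is_safe_true):
    """Fraction of test designs misclassified by the 0.5-threshold rule:
    a design is predicted safe iff its posterior predictive mean <= 0.5."""
    pred_safe = np.asarray(mu_test) <= 0.5
    return float(np.mean(pred_safe != np.asarray(is_safe_true, dtype=bool)))
```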

Table 3 summarizes the sensitivity analysis of the values of *YS*_{max}, *YP*_{min}, and *YR*_{min} in Eqs. (9)–(11), in terms of the number of training data sampled and the classification accuracy on the equivalent 100k test data, for both values of the tube thickness parameter after model convergence (Convergence 1). In this case study, the sensitivity analysis shows the best consistent classification accuracy for both thickness scenarios when $YS_{max} = 0.45$ and $YP_{min} = YR_{min} = 0.55$, with a mean error rate of 0.42%. Considering the amount of training data sampled, however, a range of *YS*_{max} between 0.45 and 0.48, and of *YP*_{min}, *YR*_{min} between 0.52 and 0.55, offers a good trade-off between the cost of training data and classification accuracy. Beyond that range, we see either a significant error rate (mean approx. 5.3%) or a significant cost of training data (mean approx. 150–200 evaluations) to reach the minimal error rate. Thus, when we attempt to locate the exact transition line (last two rows of Table 3) rather than a region, the model has the highest error rate (∼5.3%) and requires much more sampling to reduce it, making the model inefficient.

The sensitivity analysis has been presented in this paper as a study of the effect of the gap width on model performance. Although we have seen some changes in classification accuracy, the error rate has generally remained below 1%. This shows that the performance is not extremely sensitive to the gap width; therefore, in general applications, we can use a standard value (a gap width that is neither too wide nor too narrow), such as the values provided in the sensitivity analysis. Further research on a strategy to optimize the gap width as a trade-off between model performance and cost for a general problem will be considered in the future.

### 5.3 Comparison With Existing Classification Methods.

Finally, Table 4 shows a comparison of our proposed method with other methods, such as SVM, Random Forest (RF), and Ada Boosting (ADA) [10–12], for classification between safe and unsafe designs among 100k randomly selected new thin tube designs, considering both thickness values. These existing methods were implemented using built-in functions in R packages [47–49], with a radial kernel for SVM and 2000 trees (iterations) for ADA; the responses are provided as standard binary values (0-unsafe and 1-safe). At first, we use Latin Hypercube sampling to generate a full matrix, **X**′ (refer to Sec. 4, Step 0), over the design space as the training data (2500 samples) for the SVM, RF, and ADA models. From the results in Table 4, we can see that SVM gives the best classification performance (error rate = 0.325%) and ADA gives the worst (error rate = 3.39%). Though SVM has a lower error rate than our proposed BO method, it took far more samples to train (2500 samples versus 67 and 120 samples), causing a significant increase in experimental or function evaluation cost. Thus, we did another comparison where we used *only* the training data used, until convergence, by the proposed BO models to train the SVM, RF, and ADA models. Using this minimal BO training data, our proposed method provides the best performance (error rate = 0.42%), while SVM gives a much higher error rate of 1.5%, and ADA is the worst (error rate = 9.15%). The detailed confusion matrices for classification using SVM, RF, and ADA are provided in Tables 1–4 in the Supplemental Materials on the ASME Digital Collection.

In this problem, as our main objective is to classify designs between the safe and unsafe regions, the BO model guides us to sample more near the unknown transition region, so that the surrogate GP model predicts the output with high accuracy for designs close to the transition boundary rather than for designs far away from it. Designs closer to the boundary are more critical for misclassification, so higher prediction accuracy is required from the GP model there, and hence higher sampling over that region is recommended by the BO model. This is not true for designs farther away from the transition boundary: even with lower prediction accuracy, such designs are unlikely to cross the threshold. Therefore, more sampling in such noninteresting regions would be redundant considering the trade-off between experimental cost and model classification accuracy. Thus, with the strategic and adaptive sampling of the BO model, we see a minimal error rate with minimal training design samples (Table 4). For a subsequent goal of predicting the model output with higher accuracy from the surrogate model in only the safe design space (such as for finding the *optimal* tube design), we can refine the GP model over this region (a future research topic).

## 6 Conclusion and Future Research Scope

In this paper, we have proposed the application of Bayesian optimization to locate the constraint boundary at the transition region between the safe and unsafe regions for a thin tube, in terms of the risk of creep-fatigue failure under constant application of pressure and temperature stresses, and thereby to use it as a classification tool for evaluating new designs as good or bad. As we have discussed, the constraint boundary in this problem also represents the discontinuity of the function (discontinuous transition region); the proposed strategy provides a way to tackle the discontinuous design space by projecting it onto an artificial continuous design space for better convergence of the BO model. It is worth mentioning that, once we obtain the required data (region and strain accumulation) for a design, the formulation of the distance function does not depend on the scale or complexity of the design geometry; the complexity lies in how those required data are obtained (e.g., FEA). At each iteration, the model, with the prior knowledge of training data sampled in previous iterations, updates the posterior predictive model. This informs the acquisition function, which chooses the designs to sample in the next iteration to maximize learning of the optimal region of the unknown function. However, unlike the standard BO model for maximization or minimization problems, our objective is to locate the unknown constraint boundary. Therefore, we reformulate our objective function as a distance function, which lets us recast our objective as a maximization problem in which the maximum objective function value, or the new optimal region, lies toward the true constraint boundary.

Our proposed BO approach does not depend on pre-existing training data: incorporating Bayesian knowledge into the optimization framework allows us to strategically select design samples to maximize learning iteratively and to minimize the overall cost of sampling for expensive function evaluations (training data) while achieving the desired level of accuracy. With the resulting small error rate, there is a high likelihood that the model emulates the true constraint boundary, and this will help us continue to the next stage of the problem (future research), finding the optimal design, with higher confidence that only feasible designs in terms of creep-fatigue failure are preserved. The next stage of research will focus on implementing the full framework in a complex, high-dimensional CHX design (Fig. 11). In that problem, we will use the results of the classification as an optimization pre-stage, where the design optimization problem considers application and manufacturing constraints.

## Acknowledgment

This research was funded in part by DOE NEUP DE-NE0008533. The opinions, findings, conclusions, and recommendations expressed are those of the authors and do not necessarily reflect the views of the sponsor.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper.