The paper presents a framework for set-based design under uncertainty and demonstrates its viability through designing a super-cavitating hydrofoil of an ultrahigh speed vessel. The framework achieves designs that safely meet the requirements as quantified precisely by superquantile measures of risk (s-risk) and reduces the complexity of design under uncertainty. S-risk ensures comprehensive and decision-theoretically sound assessment of risk and permits a decoupling of parametric uncertainty and surrogate (model) uncertainty. The framework is compatible with any surrogate building technique, but we illustrate it by developing for the first time risk-adaptive surrogates that are especially tailored to s-risk. The numerical results demonstrate the framework in a complex design case requiring multifidelity simulation.

## Introduction

Set-based design was pioneered by Toyota and later adopted by the U.S. Navy [1]. It responds to the practical need of finding satisfactory designs or, equivalently, finding designs that can be eliminated from further consideration. Thus, a wider set of designs tends to be seriously considered than is the case for an optimization-driven approach [2–4]. In this paper, we develop a novel risk-based framework for set-based design, coined risk-adaptive set-based (RASB) design, and demonstrate its viability by shaping a hydrofoil for an ultrahigh speed vessel.

We rely on superquantile measures of risk (s-risk) [5], pioneered under the name conditional value-at-risk in Ref. [6] and used in finance, to address uncertainty and assess whether quantities of interest (QoIs) meet requirements. There are three main advantages of s-risk: First, it permits a natural decoupling of parameter uncertainty (due to loads, material properties, etc.) from surrogate uncertainty (due to modeling approximations). This realization, not recognized earlier, results in significant simplifications. Second, s-risk reduces in a rigorous sense the complexity of set-based design. Third, it provides a decision-theoretically sound assessment of risk [5,7].

Risk-adaptive set-based design is flexible and permits *any* type of surrogate modeling and multifidelity scheme such as those described in Refs. [8–10]. Still, we develop for the first time a scheme that constructs surrogates of s-risk directly without attempting to build models of intermediate QoIs.

Much of the existing work on set-based design concentrates on the challenges associated with multidisciplinary (collaborative) design, where it is especially important to develop sets of candidates and avoid having teams “over-optimize” at an early stage. In this domain, uncertainty is usually captured by intervals [2–4], with a focus on the uncertainty associated with not knowing the goals of the other teams. RASB design allows for parameter and surrogate uncertainty specified by probability distributions as well as those given by intervals of uncertainty. However, we presently concentrate on a single discipline.

Both interval and distributional models of parameter uncertainty are considered in Ref. [11]. Design alternatives are compared based on their expected utility as in multi-attribute utility theory and its extensions to probability models with imprecision. Designs are successively eliminated as they are found inferior according to criteria for dominance. The framework is demonstrated on a vehicle transmission design problem involving three design alternatives. RASB design avoids the need for selecting a utility function, which is difficult in practice [12]. Instead, we leverage s-risk that adapts to the desired safety level using a single, easily interpreted parameter. RASB design also fully accounts for the use of surrogates built through multifidelity simulations and the associated uncertainties. We demonstrate our approach by considering an uncountable number of design alternatives.

Traditional design optimization under uncertainty considers interval uncertainty [13] and, more commonly, probabilistic uncertainty models [14–17]. In the latter case, design optimization problems are formulated using failure probabilities (or equivalently reliability indices), which *increase* the complexity of the optimization due to their poor mathematical properties. Often, this results in the need for computationally costly heuristic algorithms such as those based on particle swarms [17]. We reference the special issues [18,19] and the earlier studies [20–22] for more background.

For the first time, we consider s-risk in a real-world design case that requires multifidelity simulation of complex physics. We examine a new family of dual-operating mode super-cavitating hydrofoils [23] devised to ensure high efficiency both in super-cavitating and in fully wet conditions. These unconventional hydrofoils are designed to serve as lifting surfaces in a new ultrahigh speed small waterplane-area twin-hull vessel reaching more than 100 knots [24].

## Risk-Adaptive Set-Based Design Framework

The framework for RASB design as developed in this section relies on a novel formulation of set-based design under uncertainty based on s-risk and the subsequent decoupling of the treatment of parametric uncertainty and surrogate uncertainty. Set-based design centers on finding one, several, or all designs that meet a set of requirements. These *candidate designs* would typically be the output of a stage of the design process that has as objective to reduce the number of designs (concepts) under consideration. The candidate designs might then become input to a next stage where adjusted QoIs and requirements would be studied and further reduction of the design space would take place. We develop a mathematical formulation of the precise meaning of “candidate design” in the presence of uncertainty. Our formulation applies to any stage of a design process despite the fact that the stages might involve different QoIs and requirements.

In addition to the ability to extensively explore a design space, set-based design has another advantage over optimization-based approaches previously not fully recognized. The situation is best illustrated by a simple example. Ignoring uncertainty and other complications temporarily, suppose that the QoI is $f(x)=\epsilon(x-1)$, given as a function of a design variable *x* that is permitted to vary in the design space $[0,2]$. The parameter $\epsilon$ is near zero. Clearly, an optimization-based approach will be highly sensitive to the value of $\epsilon$; the design minimizing *f*(*x*) jumps from *x* = 0 to *x* = 2 with arbitrarily small changes of $\epsilon$ around zero. In contrast, a set-based approach is much less sensitive to changes in $\epsilon$. In fact, the set of designs that meet the requirement $f(x)\le|\epsilon|$ is always $[0,2]$, regardless of the value of $\epsilon$. The same conclusion holds for essentially all functions [25] and provides a rigorous justification for set-based design.

### Accounting for Uncertain Parameters.

Suppose that a QoI given by $g(x,v)$ is parameterized by a vector **x** containing *design variables* and a vector **v** consisting of *uncertain parameters.* The uncertain parameters represent inputs that are beyond the control of the designer and that can affect the performance of the design, such as environmental conditions and manufacturing imprecision. Under design **x** and parameter **v**, $g(x,v)$ is the (actual) numerical value of the QoI, which is usually analytically unavailable and/or computationally expensive to evaluate. Without loss of generality, we assume that low values of the QoIs are preferred to higher values and that the requirement associated with the QoI is that $g(x,v)$ should not exceed a number *r*. If a practical situation demands high values, we simply replace $g(x,v)$ by $-g(x,v)$ and *r* by $-r$. The requirement *r* might be specified by regulatory requirements, given as a target at the current design stage, or simply varied. The latter case shows that a goal of minimizing $g(x,v)$ can be achieved by carrying out a one-dimensional search over *r*. Thus, RASB design also addresses situations demanding optimization.

The goal is to identify designs **x** with $g(x,v)\le r$, but this problem is ill-posed because the values of the uncertain parameters **v** are not known. We assume that the uncertainty in **v** can be quantified by a known probability distribution. Typically, such distributions are estimated based on data, but we omit a discussion of this subject. We permit as a special case the possibility that the distribution of a parameter is simply given by upper and lower bounds on its support and therefore enable modeling using interval uncertainty; additional comments about this possibility are included below. When viewing the uncertain parameters as a random vector with a given distribution, we denote it by **V**. A realization of **V** is still denoted by **v**. Now $g(x,V)$ is a random variable for each **x**, which allows us to bring in s-risk.

S-risk reduces a random variable to a representative number that can be used in comparison with requirements. Specifically, for *risk parameter* $\alpha \in [0,1]$, the s-risk of $g(x,V)$, denoted by $R_\alpha(g(x,V))$, is understood as the *average of $g(x,V)$ in the worst $(1-\alpha)100\%$ outcomes.*

This expression can be taken as the definition if $g(x,V)$ has a continuous distribution. If the risk parameter $\alpha = 0$, then $R_\alpha(g(x,V))$ is simply the expected value $E[g(x,V)]$. If $\alpha = 1$, then $R_\alpha(g(x,V))$ is the largest possible value of $g(x,V)$. In the case of parameter uncertainty represented by intervals, the latter possibility simply propagates that uncertainty and no probability distribution is needed. A value of $\alpha$ between these two extremes provides a middle ground between focusing on the average performance and the absolutely worst outcome. Figure 1 illustrates a situation when $g(x,V)$ has a triangular probability density function. In this case, the worst $(1-\alpha)100\%$ outcomes are those with values greater than $1-\sqrt{2(1-\alpha)}$. The average across these is $R_\alpha(g(x,V)) = 1-(4/3)\sqrt{(1-\alpha)/2}$.
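As a minimal illustration (not part of the paper's implementation), the tail average defining s-risk can be computed directly from equally likely samples of $g(x,V)$; the function below is a hypothetical sketch.

```python
import numpy as np

def superquantile(samples, alpha):
    """Empirical s-risk: average of the worst (1 - alpha)*100% outcomes.
    alpha = 0 returns the sample mean; alpha = 1 returns the maximum.
    This tail average matches the exact empirical s-risk whenever
    (1 - alpha) * len(samples) is an integer."""
    y = np.sort(np.asarray(samples, dtype=float))
    if alpha >= 1.0:
        return y[-1]
    q = np.quantile(y, alpha)   # empirical alpha-quantile
    return y[y >= q].mean()     # average over the worst outcomes
```

For instance, for 100 equally likely outcomes 1, 2, …, 100 and $\alpha = 0.9$, the worst 10% are 91, …, 100, so the s-risk is 95.5.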

If $g(x,V)$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, then $R_\alpha(g(x,V)) = \mu + \sigma\varphi(\Phi^{-1}(\alpha))/(1-\alpha)$, where $\varphi$ is the standard normal probability density function and $\Phi$ is the standard normal cumulative distribution function. Generally, $R_\alpha(g(x,V)) = \min_c\{c + E[\max\{0, g(x,V)-c\}]/(1-\alpha)\}$, with the minimum taken across all scalars *c* [7]. If $g(x,V)$ happens to be a deterministic constant *c*, i.e., **V** is deterministic and/or the system negates all variability, then $R_\alpha(g(x,V))=c$. If it is not constant, then $R_\alpha(g(x,V))$ is always greater than the expected value of $g(x,V)$ unless $\alpha = 0$, when it is equal to the expected value. S-risk is actually continuous in spaces of random variables and also with respect to $\alpha$. It is therefore inherently stable with regard to misspecification of the risk parameter.
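To make the two expressions above concrete, the following sketch (our illustration, not code from the paper) evaluates the Gaussian closed form and the general minimization formula; for a finite set of equally likely outcomes the minimum over $c$ is attained at one of the sample values.

```python
import numpy as np
from statistics import NormalDist

def srisk_gaussian(mu, sigma, alpha):
    # Closed form for a normal QoI: mu + sigma * phi(Phi^{-1}(alpha)) / (1 - alpha)
    z = NormalDist().inv_cdf(alpha)
    return mu + sigma * NormalDist().pdf(z) / (1.0 - alpha)

def srisk_via_min(samples, alpha):
    # min_c { c + E[max(0, g - c)] / (1 - alpha) }; for a finite sample the
    # minimum is attained at a sample value, so searching over them is exact
    g = np.asarray(samples, dtype=float)
    excess = np.maximum(0.0, g[:, None] - g[None, :])   # g_i - c_j, clipped at 0
    return (g + excess.mean(axis=0) / (1.0 - alpha)).min()
```

With equally likely outcomes 1, 2, 3, 4 and $\alpha = 0.75$, the minimization formula returns 4, the average of the worst 25%.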

S-risk is fundamentally appealing as it has a simple interpretation, accounts for the tails of probability distributions, and is grounded in axiomatic foundations of decision theory [7]. In contrast to the probability that $g(x,V)$ exceeds *r*, which is largely insensitive to the nature of the tail of the probability distribution of $g(x,V)$, s-risk accounts for the magnitude of possible exceedance. Moreover, minimizing such failure probabilities and finding designs with sufficiently low failure probability are fundamentally difficult as such problems are nonconvex even if $g(x,v)$ is convex in **x** for all **v**. Thus, the consideration of failure probability *increases* the complexity of the design process as compared to a deterministic approach that ignores uncertainty in parameters. S-risk *retains* the complexity of a deterministic approach: if $g(x,v)$ is convex in **x**, then the s-risk $R_\alpha(g(x,V))$ will also be convex in **x** regardless of the distribution of **V**. If that distribution is discrete, then linearity of $g(x,v)$ in **x** translates into a linear programming expression for s-risk. Thus, at this fundamental level, finding a design **x** with $R_\alpha(g(x,V)) \le r$ is *no harder* than finding an **x** with $g(x,v) \le r$ for a given **v**, i.e., ignoring uncertainty. RASB design therefore *reduces* the complexity of design under uncertainty compared to approaches relying on failure probabilities and similar concepts that do not preserve convexity, including those that attempt to approximate the failure probability such as FORM, MVFOSM, and moment matching methods [26].
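The linear programming expression mentioned above follows the standard reformulation $\min_{c,u}\ c + \sum_i u_i/(N(1-\alpha))$ subject to $u_i \ge g_i - c$, $u_i \ge 0$. A sketch using `scipy.optimize.linprog` (our illustration; any LP solver works):

```python
import numpy as np
from scipy.optimize import linprog

def srisk_lp(samples, alpha):
    """S-risk of N equally likely outcomes as a linear program.
    Variables z = [c, u_1, ..., u_N]; minimize c + sum(u_i)/(N(1-alpha))
    subject to u_i >= g_i - c and u_i >= 0."""
    g = np.asarray(samples, dtype=float)
    N = g.size
    cost = np.concatenate(([1.0], np.full(N, 1.0 / (N * (1.0 - alpha)))))
    A_ub = np.hstack((-np.ones((N, 1)), -np.eye(N)))  # g_i - c - u_i <= 0
    bounds = [(None, None)] + [(0.0, None)] * N       # c free, u_i >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=-g, bounds=bounds, method="highs")
    return res.fun
```

If $g(x,v)$ is linear in **x**, the same LP with $g_i$ replaced by that linear expression optimizes over designs as well, which is the sense in which convexity is preserved.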

We are now in a position to formalize the first component of RASB design. The requirement that $g(x,V)$ should not exceed *r* is rigorously defined as having $R_\alpha(g(x,V)) \le r$. We say that $g(x,V)$ is *safely* $\le r$ when this condition holds. This safeguarding implies that even on average over a set of worst outcomes, the value of the QoI will not exceed *r*. The specific definition of “worst outcomes” depends on $\alpha$. We can also interpret the condition probabilistically [5]: $R_\alpha(g(x,V)) \le r$ is equivalent to having the *buffered failure probability* $\le 1-\alpha$, and it implies that the (usual) failure probability is $\le 1-\alpha$.

It is apparent that the choice of $\alpha$ needs to reflect the demands of an application. In the case of interval uncertainty for the parameters **v**, the choice $\alpha = 1$ might be suitable, but it is otherwise often too conservative. The relation to the failure probability provides general guidance: if there is a need for a reliability level corresponding to a failure probability of $10^{-3}$, then $\alpha = 1-10^{-3}$ achieves this goal. Since s-risk is continuous in $\alpha$, small changes in $\alpha$ imply small changes in the s-risk, which partially alleviates the need to select $\alpha$ “correctly.” When relying on failure probabilities, the situation is dramatically worse as small changes in the reliability level might be associated with large jumps in the range of requirements that is satisfactory.

Using the notation $s(x) = R_\alpha(g(x,V))$, we highlight a fundamentally convenient property: the requirement on an uncertain system is translated into the *deterministic* requirement $s(x) \le r$. The presence of uncertain parameters has been addressed, and the challenge is reduced to that of dealing with the complex, but deterministic, function $s(x)$.

### Accounting for Uncertainty in Surrogates.

The s-risk $s(x)$ is rarely easy to compute for physical systems; the underlying values $g(x,v)$ and the corresponding probability distribution of the QoI might be unavailable or costly to evaluate. The predicament of a complex, deterministic function is well known, and we turn to *surrogates* (response surfaces) as usual. Let $S(x)$ be a surrogate of $s(x)$, which we view as a random field over the space of design variables. We assume that the randomness in $S(x)$ is due to uncertainty associated with the accuracy of the surrogate. Examples of surrogates include those based on kriging [8,10,27,28], where the surrogate error is immediately available in a probabilistic form, at least if the underlying Bayesian assumptions are accepted. However, $S(x)$ can be constructed by *any* surrogate building approach, provided that the resulting surrogate is associated with probabilistic estimates of accuracy, for example obtained by cross-validation methods and their enhancements [27,29–32].

The original requirement $s(x) \le r$ is now replaced by one involving the surrogate. In a situation analogous to that above, we find the requirement $S(x) \le r$ ill-posed as $S(x)$ is a random variable for every **x** due to the unknown error in the surrogate relative to the actual value $s(x)$. Thus, again, we turn to s-risk. For risk parameter $\beta \in [0,1]$, we adopt the requirement $R_\beta(S(x)) \le r$, which is well posed. It ensures that even the average value of $S(x)$ over the worst $(1-\beta)100\%$ outcomes of $S(x)$ does not exceed *r*. We stress that here the average is over possible values of an uncertain surrogate. These values are unrelated to the uncertain parameters **V**. Since the surrogate is imperfect, $S(x)$ is random and we account for this fact when making the comparison with *r*. As a concrete example, suppose that the surrogate $S(x)$ is Gaussian with mean $\mu(x)$ and standard deviation $\sigma(x)$, for example obtained by kriging. Then, $R_\beta(S(x)) \le r$ translates into having $\mu(x) + \sigma(x)\varphi(\Phi^{-1}(\beta))/(1-\beta) \le r$. Although general recommendations are difficult, this expression provides guidance regarding the choice of $\beta$. If $\beta = 0$, then we are simply asking the (posterior) mean of the Gaussian surrogate to satisfy the requirement. Clearly, the actual value $s(x)$ might very well be higher than $\mu(x)$, leading to a nonconservative prediction of the performance of the system. As $\beta$ increases, the requirement becomes more stringent as we account for uncertainty in the surrogate in a manner that resembles that of quantile-based improvement strategies in Bayesian optimization; see for example Ref. [33]. Again, since s-risk is continuous in its risk parameter, small changes in $\beta$ imply small changes in the s-risk, which makes the exact choice of $\beta$ less critical.
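For the Gaussian case just described, the check against the requirement reduces to a one-line formula; the helper below is a hypothetical sketch (function and parameter names are ours).

```python
from statistics import NormalDist

def gaussian_surrogate_ok(mu_x, sigma_x, beta, r):
    """Check R_beta(S(x)) <= r for a Gaussian surrogate S(x) with
    (posterior) mean mu_x and standard deviation sigma_x."""
    if beta == 0.0:
        s_risk = mu_x   # beta = 0: only the mean must meet the requirement
    else:
        z = NormalDist().inv_cdf(beta)
        s_risk = mu_x + sigma_x * NormalDist().pdf(z) / (1.0 - beta)
    return s_risk <= r
```

For example, a surrogate with mean 0.09 and standard deviation 0.005 meets a requirement of $r = 0.1$ at $\beta = 0$ but fails at $\beta = 0.95$, since the s-risk then slightly exceeds 0.1.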

We are then in a position to define the *set of candidate designs* of the RASB design framework as the *designs **x** with $R_\beta(S(x)) \le r$.*

Designs that meet this requirement are safeguarded in a rigorous manner against poor performance, accounting for both parametric uncertainty in **V** (using risk parameter *α*) and surrogate imprecision (with risk parameter *β*).

**Algorithm for RASB design**

(1) Select risk parameter $\alpha \in [0,1]$ to capture the desired level of safety against uncertain parameters **v**.

(2) Construct surrogate $S(x)$ of s-risk $s(x)$ including uncertainty (error) estimates.

(3) Select risk parameter $\beta \in [0,1]$ to capture the desired level of safety against uncertainty in the surrogate $S(x)$ relative to the actual s-risk $s(x)$.

(4) Compute the set of candidate designs.

Of course, we may be faced with multiple QoIs. Then, steps 1–3 are repeated for each QoI and a candidate design is one that satisfies the requirements for all QoIs.

We note that in step 2 any surrogate building technique can be used. Step 4 is usually trivial as after $S(x)$ is built, the s-risk of $S(x)$ is easily computed. For example, when $S(x)$ is Gaussian, it only entails computing mean and standard deviation.

The representation of a set of candidate designs might vary. If only a single candidate design is needed, any standard optimization algorithm can be applied to the problem of minimizing $R_\beta(S(x)) - r$ over the design space and be terminated when objective values become nonpositive. If the goal is to find a few “different” designs within the set, one can again solve a sequence of optimization problems after the notion of “different” is formulated; see for example Ref. [34]. If we seek many candidate designs, the simplest approach might be Monte Carlo sampling of points in the design space paired with requirement checks, which also addresses the previous goals in a brute-force manner. These calculations only involve the surrogate and are inexpensive.
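The brute-force Monte Carlo variant of step 4 can be sketched as follows (an illustrative helper of ours; `surrogate_srisk` stands for any routine returning $R_\beta(S(x))$):

```python
import numpy as np

def candidate_designs(surrogate_srisk, bounds, r, n_samples=500, seed=0):
    """Sample designs uniformly in the box given by `bounds` (one (min, max)
    pair per design variable) and keep those meeting the requirement r."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    X = rng.uniform(lo, hi, size=(n_samples, lo.size))
    keep = [surrogate_srisk(x) <= r for x in X]
    return X[np.array(keep)]
```

Since only the inexpensive surrogate is evaluated, even large sample sizes are cheap.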

## Risk-Tuned Surrogates

Risk-adaptive set-based design permits any surrogate $S(x)$. However, in an attempt to exploit the special structure of $s(x)$, here for the first time, we develop a surrogate that is adapted to s-risk.

We consider surrogates of the form $c_0 + c^\top b(h,x)$, where $b(h,x)$ is a vector of basis functions of a low-fidelity output $h$ and the design $x$, **c** is a vector of coefficients, and *c*_{0} is a scalar coefficient, both to be determined. Although *c*_{0} could be incorporated into $b(h,x)$, we make it explicit due to its special role below. The essence of the surrogate building approach is the method by which the coefficients are fitted. The method is tailored to the risk parameter $\alpha \in (0,1)$ for parameter uncertainty. Specifically, $(c_0,c)$ is determined by *α*-quantile regression using the *same α*-value. That is, given data $\{(g_i, h_i, x_i)\}_{i=1}^N$, find coefficients $(c_0,c)$ that minimize the pinball loss

$$\frac{1}{N}\sum_{i=1}^N \max\{\alpha\,\epsilon_i,\ (\alpha-1)\,\epsilon_i\}, \quad \epsilon_i = g_i - c_0 - c^\top b(h_i,x_i)$$

That is, we do *not* carry out standard least-squares regression, but a regression that is adaptive to the chosen risk parameter *α*. We discard the optimal *c*_{0}, but name the other optimized coefficients $\hat c$. Finally, we set $\hat c_0$ equal to the s-risk of the residuals $\{g_i - \hat c^\top b(h_i,x_i)\}_{i=1}^N$. Extending Corollary 4.2 in Ref. [35], we obtain that $R_\alpha(\{g_i\}_{i=1}^N) \le R_\alpha(\{\hat c_0 + \hat c^\top b(h_i,x_i)\}_{i=1}^N)$. Consequently, over the given data set the model $\hat c_0 + \hat c^\top b(h,x)$ leads to an upper approximation of the s-risk of the QoI. This motivates the use of the model as an approximation of the s-risk, i.e.,

$$s(x) \approx R_\alpha\bigl(\hat c_0 + \hat c^\top b(h(x,V),x)\bigr) \qquad (1)$$

The approximation remains heuristic for two reasons. First, the fit is based on a data set of size *N* and not the true distribution of the random vector **V**. Second, Eq. (1) makes comparisons at individual designs **x**, which might generally be different than considering “averages” across a variety of designs as relied on while fitting the coefficients. The idea of such risk-adaptive regression can be traced back to Refs. [35] and [36], but has not previously been developed in this form. It is central that *α* in the regression matches that of interest so we achieve adaptation. In the case of *α* = 0, we simply determine the coefficients by means of standard least-squares regression, and for *α* = 1 we refer to Ref. [35]. Equation (1) gives a deterministic surrogate, but we would like to account for surrogate imprecision. There is a vast literature on the subject, and existing approaches based on cross-validation can be adopted; see for example Refs. [27] and [29–32]. Section 5 illustrates one possibility.

The above approach to developing a surrogate for $s(x)$ is novel, and preliminary results in Sec. 5 demonstrate reasonable accuracy. Clearly, further testing is necessary to establish the merit of the approach. Here, we briefly contrast the approach with other possibilities. Conceptually, any technique can be applied to develop a surrogate $\hat g(x,v)$ for $g(x,v)$ in the combined **x**-**v** space. The s-risk $s(x)$ can be estimated on a grid in the design space using the formula $\min_c\{c + E[\max\{0, \hat g(x,V)-c\}]/(1-\alpha)\}$, which involves optimization over *c* and numerical integration. This provides (approximate) function values of $s(x)$ that subsequently can be used to fit a surface in the design space; see Ref. [10]. Several improvements are available, but it is apparent that such a layered approach is complex. In contrast, the proposed scheme develops the surrogate directly from simulation output without the need for additional optimization, numerical integration, and intermediate models of $g(x,v)$ in a potentially high-dimensional space.
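A minimal sketch of the risk-tuned fitting scheme, using an LP formulation of quantile regression (our illustration under stated assumptions; the paper's implementation details may differ):

```python
import numpy as np
from scipy.optimize import linprog

def fit_risk_adaptive(B, g, alpha):
    """alpha-quantile (pinball-loss) regression of QoI data g on the basis
    matrix B (row i holds b(h_i, x_i)), followed by replacing the intercept
    with the empirical s-risk of the residuals g_i - c^T b(h_i, x_i)."""
    g = np.asarray(g, dtype=float)
    B = np.asarray(B, dtype=float)
    N, p = B.shape
    # LP variables z = [c0, c (p entries), u+ (N), u- (N)] with
    # g_i - c0 - B_i c = u+_i - u-_i; minimize sum alpha*u+ + (1-alpha)*u-
    cost = np.concatenate((np.zeros(1 + p), np.full(N, alpha), np.full(N, 1.0 - alpha)))
    A_eq = np.hstack((np.ones((N, 1)), B, np.eye(N), -np.eye(N)))
    bounds = [(None, None)] * (1 + p) + [(0.0, None)] * (2 * N)
    res = linprog(cost, A_eq=A_eq, b_eq=g, bounds=bounds, method="highs")
    c_hat = res.x[1:1 + p]
    resid = np.sort(g - B @ c_hat)       # residuals without the intercept
    k = int(np.ceil(alpha * N))          # tail average = empirical s-risk
    c0_hat = resid[k:].mean() if k < N else resid[-1]
    return c0_hat, c_hat
```

On noise-free linear data the fit is exact: for $g_i = 1 + 3x_i$ the scheme recovers slope 3, and the s-risk of the constant residuals returns the intercept 1.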

## Hydrofoil Design

As a design case, we consider a surface-piercing super-cavitating hydrofoil of an ultrahigh speed vessel. RASB design offers possibilities of reaching higher speeds for such vessels, avoiding the hysteresis phenomena typically affecting conventional super-cavitating hydrofoils. Traditional super-cavitating hydrofoils are designed to operate at high speeds only, where flow conditions ensure the presence of a stable vapor cavity enveloping the entire suction surface and closing many chords aft of a blunt trailing edge. The shape of the unconventional hydrofoil presented in Refs. [23] and [37] features a pointed leading edge ensuring cavity detachment at the operating angle of attack and cavitation index in design conditions, and a sharp edge on the pressure side triggering base cavitation when working at zero angle of attack. The blunt trailing edge, typical of conventional super-cavitating profiles, is instead tapered into a tail designed to be enclosed in the supercavity at high speed while producing a good pressure recovery and higher lift at lower speeds when cavitation disappears. This guarantees the performance of the hydrofoil also in subcavitating conditions. The tail is functionally separated from the forward body of the profile by a sharp corner (face cavitator) on the face in order to trigger base cavitation at intermediate speeds or at high speeds and lower angles of attack. We adopt a shape parametrization with composite B-spline curves.

We compare with a benchmark hydrofoil obtained by differential evolution optimization and a boundary element method, but no consideration of uncertainty [38]. Figure 2 presents a time-averaged snapshot of the flow around the benchmark: Reynolds number $4.58\times 10^{7}$, angle of attack 6 deg, and cavitation index 0.05. The thin cavity shows the drawback of the design: it is highly sensitive to changes in operating conditions. In this case, the existence of a super-cavity relies on the presence of a vapor regime over the suction surface. It is evident that manufacturing errors or variable conditions could significantly affect the flow regime and performance for a design with a thin cavity.

### Description and Parameters.

The vector **x** consists of 15 variables representing coordinates of the control polygon of the composite B-spline curves describing the shape; see Fig. 3. The pressure side is described by five control points, one of which allows changing the vertical position of the face cavitator, i.e., the sharp edge at the beginning of the hydrofoil tail. Variations of the suction side are performed by five control points; three of them move together vertically following the 15th coordinate. The leading and trailing edge positions as well as the control points of the tail pressure side are kept fixed. The control points on the face regulate lift and drag performance, while those on the back suction side control the cavity thickness and maintain a sufficient inertia modulus.

We consider uncertainty related to manufacturing errors using a random vector $V = (V_1,\ldots,V_{15})$ describing relative displacement values for the control point coordinates. The actual shape manufactured from a (nominal) design $x = (x_1,\ldots,x_{15})$ is assumed to be $\bar x_i = x_i(1+V_i)$, $i = 1,\ldots,15$. We refer to the vector $\bar x = (\bar x_1,\ldots,\bar x_{15})$ as the (random) shape of the hydrofoil under design **x**. The designer selects **x** from a range given by the bounds $x_i^{\min} \le x_i \le x_i^{\max}$, $i = 1,\ldots,15$ (see Table 1), which also gives the benchmark design in Fig. 2. The values of the design variables are relative to the hydrofoil chord.

The distribution of **V** is discrete with 898 equally likely realizations essentially uniformly distributed in the box $[-0.06, 0.06]^{15}$, with some adjustment to eliminate unrealistic manufacturing errors. Thus, “noise” up to $\pm 6\%$ is added to $x_i$. Although no standard guideline exists for hydrofoils, these numbers are motivated by manufacturing tolerances for ship propeller blades (ISO standard 484/1). The framework is indifferent to the distribution of **V**. Here, we select a discrete distribution to avoid errors due to sampling and other approximations, and to more easily highlight the features of RASB design.
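For illustration only (the paper uses a fixed, adjusted set of 898 realizations rather than fresh samples), generating perturbed shapes could look as follows:

```python
import numpy as np

def perturbed_shapes(x, n_realizations=898, tol=0.06, seed=0):
    """Draw relative perturbations V uniformly in [-tol, tol]^15 and return
    the manufactured shapes with components x_i * (1 + V_i)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    V = rng.uniform(-tol, tol, size=(n_realizations, x.size))
    return x * (1.0 + V)
```

Each row of the returned array is one realization of the random shape $\bar x$ for the nominal design $x$.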

Using chord length *c* = 0.66 m, we define five QoIs: (i) Profile inertia modulus *w*, which is required to exceed the minimum value $w_{\min} = 8.1\times 10^{-6}$ m^3. (ii) Profile thickness $t_{P2}$ at 2% of the chord, which needs to exceed the minimum value $t_P^{\min}/c = 0.2\%$. (iii) Drag-over-lift ratio $C_D/C_L$, which needs to meet a requirement of 0.1, with $C_D = D/(0.5\rho c U^2)$ and $C_L = L/(0.5\rho c U^2)$ being normalized drag *D* and lift *L* under operating speed *U* = 61.667 m/s and water density $\rho = 997.3$ kg/m^3. (iv) Lift $C_L$, which has a requirement of 0.2625 at the design angle of attack $\alpha_{des} = 6$ deg and at the design cavitation number $\sigma_{des} = 0.05$. (v) Negative lift $-C_L$, which must meet the requirement $-0.23$. For the five QoIs, the risk parameters are $\alpha_1 = 0.7$, $\alpha_2 = 0.7$, $\alpha_3 = 0.9$, $\alpha_4 = 0$, and $\alpha_5 = 0$. (Subscripts indicate which QoI.) Consequently, we seek designs that on average in the worst 30% of the realizations have $-w \le -w_{\min}$ and $-t_{P2} \le -t_P^{\min}$. Moreover, on average in the worst 10% of the realizations we have $C_D/C_L \le 0.1$, and the average of $C_L$ is between 0.23 and 0.2625.

The first two QoIs do not need surrogates as there are explicit formulas for inertia modulus and profile thickness. For drag-over-lift and lift, we develop surrogates following Sec. 3 using risk parameter $\beta_3 = 0.95$ for drag-over-lift and risk parameters $\beta_4 = \beta_5 = 0$ for lift. The lower values of $\beta_4$ and $\beta_5$ reflect the situation that less safety is usually accepted for a QoI that is required to operate in a narrow band.

### Simulations and Surrogates.

We use an accurate solver for an unsteady viscous method for the solution of the Reynolds-averaged Navier–Stokes equations (URANSE) as well as a potential flow solver; both return, for a given shape $\bar x = (\bar x_1,\ldots,\bar x_{15})$, lift $C_L$ and drag $C_D$. The URANSE and potential flow outputs are referred to as high- and low-fidelity output, respectively. In the present context, each high-fidelity solution takes about 6 h on four cores, while each low-fidelity run requires 5 s on a single core. Figure 2 displays high-fidelity output.

The potential flow solver is based on a boundary element method in which a Laplace equation is solved for a perturbation potential and a Bernoulli equation is solved instead of continuity and momentum equations for a steady nonviscous flow. Friction forces on the wetted surface of the hydrofoil are included using an empirical local friction coefficient. A cavitation model solving for the cavity thickness on the hydrofoil suction surface is included by imposing an additional set of kinematic boundary conditions that are satisfied through singularities distributed on the hydrofoil suction surface and wake panels. The model predicts midchord cavity closure, but not bubble dynamics for cavities developing on the pressure surface; see Ref. [38].

The solution of URANSE is achieved for a multiphase flow consisting of a fluid mixture of liquid and vapor using a finite volume technique with collocated arrangement of variables where partial differential equations are solved for the fluid mixture through a scalar quantity *γ* indicating the relative content of vapor with respect to liquid within each cell [39]. A transport equation for *γ*, representing the vapor content, is solved together with RANS equations when the source term indicating the specific net mass transferred in the cavitation process is known. Condensation and vaporization regulating the above mentioned mass transfer are found through the cavitation model [40]. The super-cavitating flow around the hydrofoils is solved using a hybrid structured–unstructured grid: a structured grid close to the hydrofoil surface, two nested unstructured grids in the region of cavity, and a coarse grid elsewhere [41].

For the shapes resulting from the benchmark with the 898 realizations of **V**, we run the URANSE solver and obtain *C _{D}* and

*C*for all 898 resulting shapes. The same input to the potential flow method results in estimates of

_{L}*C*and

_{D}*C*, but also approximately 50% computational failures due to the presence of face midcord cavitation that is not presently resolved by the method. Consequently, instead of using the actual low-fidelity simulations for each of the 898 shapes, we compute for each shape a weighted average over all the successful runs, with weights determined by the distance from the current shape to the shape of those runs. Throughout, we deal exclusively with this modified low-fidelity data set. Figure 4 illustrates the high- and low-fidelity drag-over-lift values across the 898 shapes; Figure 5 gives the corresponding errors in the low-fidelity estimates. These runs generate the data from which the surrogates are fitted by the procedure in Sec. 3 using $c0+c\u22a4b(h(x,V),x)=c0+c1x1+\cdots c15x15+c16h(x,V)+c17[h(x,V)]2$, where $h(x,V)$ is the modified low-fidelity values which are now explicitly available for any $(x,V)$ using weighted averages as described above. We estimate surrogate uncertainty by simply dividing the data set of 898 drag-over-lift values into nine roughly equally large groups, on which separate fitting is carried out. The division ensures a reasonably large sample size (about 100) in each case. Many other cross-validation approaches could have served the same purpose. The choice of $\beta 3=0.95$ to assess uncertainty in the resulting drag-over-lift surrogate $SD/L(x)$ implies that the highest of the nine fits are used in comparison with the requirement. The choices of $\beta 4=\beta 5=0$ for the lift surrogate $SL(x)$ imply that the average of the nine fits is employed in assessment.

## Results and Candidate Designs

Five inequalities define the candidate designs, where the first two requirements avoid surrogates and thus $R_{\beta_1}$ and $R_{\beta_2}$ become immaterial. A design **x** that satisfies these inequalities is guaranteed to meet the specified requirements in a mathematically precise sense, accounting for both parametric uncertainty in the manufacturing process of the hydrofoil and uncertainty in the surrogates.

It is easy to check whether a design satisfies the five conditions and also to generate a collection of candidate designs. For the sake of demonstration, we randomly generate 500 designs by uniform sampling within the bounds of Table 1 and for each check the five conditions. This results in four candidate designs. (More could have been obtained by further sampling.) In the following, we concentrate on two of them, labeled D2, with (in meters) $x=(0.0834$, 0.2629, 0.4258, 0.5611, 0.0040, 0.0099, 0.0147, 0.0094, 0.0027, 0.1887, 0.3929, 0.5602, 0.0300, 0.0602, $0.0742)$, and D4, with $x=(0.1069$, 0.2724, 0.3749, 0.5962, 0.0009, 0.0072, 0.0283, 0.0084, 0.0045, 0.1877, 0.4047, 0.5058, 0.0297, 0.0578, $0.0742)$; see Figs. 6 and 7. D2 and D4 are quite different, with the two other candidate designs resembling these.
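
The generate-and-filter step can be sketched as follows. This is a minimal illustration: the function name and the representation of the five conditions as boolean predicates are assumptions, and in practice each predicate would evaluate the s-risk of a QoI (via the surrogates where needed) against its requirement:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_candidates(lower, upper, conditions, n=500):
    """Uniformly sample n designs within box bounds and keep those that
    satisfy every condition (each condition maps a design vector to bool)."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    designs = rng.uniform(lower, upper, size=(n, lower.size))
    return [x for x in designs if all(cond(x) for cond in conditions)]
```

With conditions as strict as those in the paper, only a handful of the 500 samples survive, which matches the four candidate designs reported; further sampling would enlarge the candidate set.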

Table 2 summarizes the performance of D2 and D4 as well as the quality of the surrogates. Columns 2 and 3 give the drag-over-lift for D2 and D4, respectively. Rows 3–11 give the estimates provided by the nine surrogates obtained by partitioning the data; the overall assessment $R_{\beta_3}(S_{D/L}(x))$, which equals the highest of the nine numbers above, is given in the second-to-last row. For the sake of validation, we compute the actual s-risk of drag-over-lift using 898 high-fidelity simulations; see the last row. We observe that although some individual surrogates (for example $S^{(3)}(x)$) underestimate drag-over-lift, $R_{\beta_3}(S_{D/L}(x))$ is conservative relative to the actual drag-over-lift. This highlights the importance of considering the uncertainty in surrogates. Columns 4 and 5 give similar results for lift; again the surrogates are reasonably accurate. For reference, the benchmark design has actual s-risk for drag-over-lift of 0.0880 and for lift of 0.2675, the latter being excessively high.
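
The "actual s-risk" validation values can be computed from Monte Carlo samples with a simple empirical superquantile estimator: the average of the worst $(1-\alpha)$ fraction of sampled QoI values. A minimal sketch follows; the sorting-based tail average is one common estimator among several and is not tied to this paper's specific implementation:

```python
import numpy as np

def superquantile(samples, alpha):
    """Empirical superquantile (CVaR) at level alpha: the average of the
    worst (1 - alpha) fraction of sampled quantity-of-interest values."""
    s = np.sort(np.asarray(samples, dtype=float))
    n = s.size
    k = min(int(np.floor(alpha * n)), n - 1)  # index of the alpha-quantile
    return s[k:].mean()                       # mean of the upper tail
```

Note that at $\alpha = 0$ this reduces to the plain mean, consistent with the choices $\beta_4 = \beta_5 = 0$ selecting the average of the nine fits for the lift surrogate.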

The s-risk of the inertia modulus for the benchmark, D2, and D4 is $7.400\times 10^{-6}$, $8.738\times 10^{-6}$, and $8.48489\times 10^{-6}$, respectively. We see that, in the face of manufacturing errors, the benchmark design fails to meet the requirement of $w_{\min} = 8.1\times 10^{-6}$. The s-risk values for the corresponding profile thicknesses are 0.00118, 0.00135, and 0.00185. Again, the benchmark design fails to meet the requirement of $t_{P,\min} = 0.00132$.

Figures 6 and 7 show that the suction side control points of D2 and D4 have moved to achieve a thicker cavity by lowering the back profile surface. The pressure side control points are shifted closer to the leading edge and have been changed in different ways to ensure the overall hydrodynamic performance: D2 has a single-curvature shape with a maximum at about 0.42 m horizontally from the left, while D4 shows higher curvatures with a slope change at 0.28 m horizontally from the left. D4 has a pressure side shape similar to that of the benchmark design, but the maximum curvature of the concave part is shifted toward the trailing edge. Thus, D2 has a single-curvature face and D4 a double-curvature face. For D2, the single-curvature pressure surface leads to lower lift force generation, which for this particular shape is compensated through the vertical displacement of the face cavitator. Moving the sharp edge on the pressure side has the direct effect of changing the cavity detachment position and virtually increasing the operating angle of attack. D4 is characterized by a double curvature leading to lower drag-over-lift. As shown in Fig. 7, the vertical position of the face cavitator does not move compared with the benchmark shape. Hence, the increase in lift generation is mainly due to the double curvature leading to a double pressure peak, as evidenced in Fig. 8, which shows pressure as well as vapor content for all three designs. D4 experiences a second pressure peak upstream of the face cavitator, which locally behaves as a stagnation point. The presence of a bump on the face between the two pressure peaks generates, as seen in Fig. 8, a lower pressure region within the maximum curvature points. Both D2 and D4 present a reduced suction surface curvature with respect to the benchmark design. This feature increases cavity thickness close to the leading edge (see Fig. 8) and ensures more robust cavitating regimes than for the benchmark shape.

## Conclusions

Risk-adaptive set-based design generates robust candidate designs that tend to perform well for a range of values of uncertain parameters and therefore are superior to those obtained by methods ignoring uncertainty. We see that the obtained designs have only slightly lower efficiency than a benchmark design, due to a larger cavity thickness, but their performance is more robust to changes in geometry because of manufacturing errors.

Compared to optimization-based approaches, candidate designs obtained by the framework are less sensitive to errors and misspecifications in the formulations due to our set-based approach. Compared to reliability-based approaches (relying on failure probabilities), candidate designs have several favorable characteristics because of the superior properties of s-risk over those of failure probabilities. Their performance, as well as the set itself, is stable under changes in the risk parameters *α* and *β* and other quantities, and they have a reduced possibility of significant under-performance relative to the requirements. The set of candidates is always convex provided that $S(x)$ is convex, which offers computational and other advantages.

Risk-adaptive set-based design permits the use of any surrogate. However, the risk-adaptive surrogates differ from other surrogate-based design processes by directing the surrogate building toward approximating s-risk of QoIs and not the QoIs directly. The framework differs by assessing the uncertainty in any surrogate using s-risk instead of by the variance and related quantities. The difference between the two assessments is especially noticeable when the surrogate uncertainty is asymmetric, in which case a large variance might be due to a benign heavy *lower* tail of the corresponding distribution. Regardless of the surrogate building techniques, the framework relies on reasonably accurate estimates of the uncertainty in the chosen surrogates. As with essentially all design approaches, if such estimates are incorrect, the resulting designs may not perform as expected.

As in all approaches that achieve robustness in the presence of uncertainty, there is a potential downside in terms of bulkier, heavier, and costlier designs. The exact trade-off between cost and robustness is application dependent. RASB design facilitates such exploration because the cost can be defined as one of the QoIs. The risk parameters *α* and *β* are central tools in the exploration as their values drive the safety level and thus the level of robustness.

## Funding Data

Defense Advanced Research Projects Agency (Grant Nos. HR0011517798, HR0011620572, and N66001-15-2-4055).