## Abstract

Feasibility robust optimization techniques solve optimization problems with uncertain parameters that appear only in their constraint functions. Solving such problems requires finding an optimal solution that is feasible for all realizations of the uncertain parameters. This paper presents a new feasibility robust optimization approach for problems with uncertain parameters defined on continuous domains. The proposed approach is based on an integration of two techniques: (i) a sampling-based scenario generation scheme and (ii) a local robust optimization approach. An analysis of this integrated approach is performed to provide worst-case bounds on its computational cost. The proposed approach is applied to several non-convex engineering test problems and compared against two existing robust optimization approaches. The results show that the proposed approach can efficiently find a robust optimal solution across the test problems, even when existing methods for non-convex robust optimization are unable to find a robust optimal solution. A scalable test problem is solved by the approach, demonstrating that its computational cost scales with problem size as predicted by the worst-case bound analysis.

## 1 Introduction

The goal of feasibility robust optimization is to find the best solution to a problem that is feasible under all possible values that any uncertain parameters present in that problem can take. Problem 1, shown in Eq. (1), provides a general formulation for a feasibility robust optimization problem with no uncertainty in its objective function f(x), based on the formulation given in Ref. [1]. The functions f(x), dl(x), gi(x,u), and qj(u) are all assumed to be continuously differentiable with respect to both x and u, which are assumed to be continuous:

Problem 1: Feasibility robust optimization
$$\begin{aligned}
\min_{x}\quad & f(x)\\
\text{subject to (or s.t.)}\quad & d_l(x) \le 0, \quad \forall l \in \{1,\dots,L\}\\
& g_i(x,u) \le 0, \quad \forall i \in \{1,\dots,I\},\ \forall u \in U\\
& U = \{u \mid q_j(u) \le 0,\ \forall j = 1,\dots,J\}
\end{aligned}$$
(1)

Typical methods for solving feasibility robust optimization problems represent uncertainty using sets of scenarios (sets of possible values for the uncertain parameters) [1–3], randomly sampled scenarios [4], or worst-case analysis [5]. However, many of these methods become impractical for engineering design problems because of highly non-convex constraints, poor scalability with respect to the number of uncertain parameters, or the excessive computational effort required to obtain a robust optimal solution.

Most existing methods [1] for solving non-convex robust optimization problems require finding a set of scenarios $\bar{U}=\{u_1,\dots,u_K\},\ \bar{U}\subseteq U$ that can be used in place of $U$ in Problem 1. The resulting formulation is referred to as a scenario-based feasibility robust optimization problem, given as Problem 2 in Eq. (2).

Problem 2: Scenario-based feasibility robust optimization
$$\begin{aligned}
\min_{x}\quad & f(x)\\
\text{s.t.}\quad & d_l(x) \le 0, \quad \forall l \in \{1,\dots,L\}\\
& g_i(x,u_k) \le 0, \quad \forall i \in \{1,\dots,I\},\ \forall u_k \in \bar{U}
\end{aligned}$$
(2)
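As a concrete illustration of Problem 2, the following Python/SciPy sketch solves a scenario-based problem for the circle example of Sec. 5.1 with an assumed four-scenario set (the paper's experiments use MATLAB's fmincon; the solver choice and instance here are illustrative only):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical instance: the circle problem of Sec. 5.1,
# min -x^2 - y^2  s.t.  (x - u1)^2 + (y - u2)^2 - 5 <= 0 for each scenario u_k.
f = lambda z: -z[0] ** 2 - z[1] ** 2
g = lambda z, u: (z[0] - u[0]) ** 2 + (z[1] - u[1]) ** 2 - 5.0

# A finite scenario set U_bar (here: the four corners of [-1, 1]^2).
U_bar = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

# SciPy's 'ineq' constraints require c(z) >= 0, so g <= 0 becomes -g >= 0.
cons = [{"type": "ineq", "fun": (lambda z, u=u: -g(z, u))} for u in U_bar]

res = minimize(f, x0=[0.5, 0.0], method="SLSQP",
               bounds=[(-5, 5), (-5, 5)], constraints=cons)
# res.x is a robust optimal solution, e.g. near (1, 0) with cost -1
```

Each scenario in $\bar{U}$ contributes one constraint per uncertain constraint function, so the problem stays finite-dimensional and can be handled by a standard nonlinear programming solver.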

An optimal solution to Problem 2 is a robust optimal solution (an optimal solution to Problem 1) if, for each constraint gi(x,u) containing uncertainty, gi(x, uk) ≤ 0 for all scenarios $u_k\in\bar{U}$ implies that gi(x, u) ≤ 0 for any possible scenario $u \in U$.

In this paper, a new feasibility robust optimization approach is developed for solving robust optimization problems (in the form of Problem 1) by finding a finite set of scenarios $U¯⊆U$ so that the solution to Problem 2 will be the same as the solution to Problem 1 (the robust optimal solution). By using a combination of random sampling, optimization-based scenario generation, and worst-case analysis, the new approach is able to efficiently find $U¯$ and solve Problem 2 (and thus obtain a robust optimal solution), even in problems where implementations of existing robust optimization techniques fail to efficiently find a robust optimal solution.

The rest of this paper is organized as follows. Section 2 summarizes current methods for solving non-convex robust optimization problems. Section 3 discusses the formulation for scenario robust optimization used by the proposed new approach. Section 4 details the proposed new approach. Section 5 demonstrates the new approach on five different examples and compares its performance against existing robust optimization approaches. Section 6 summarizes the conclusions of this paper. Appendix C discusses the theoretical computational performance of the proposed new approach relative to existing robust optimization approaches.

## 2 Related Work

The most basic approach for constructing the set $\bar{U}$ is to assume that $\bar{U}$ consists of a single “worst-case” scenario uw and that gi(x, uw) ≤ 0 implies that gi(x, u) ≤ 0 for any possible scenario $u \in U$; this is commonly referred to as “worst-case analysis” [6]. Bertsimas et al. [7] use a gradient ascent approach for a worst-case analysis while simultaneously solving an optimization problem. Bertsimas and Nohadani [8] develop a simulated annealing approach which extends the approach of Bertsimas et al. [7] to perform a global search for a robust optimal solution. Li et al. [9,10] develop a measure of robustness around a nominal scenario and use a genetic algorithm to determine the worst-case scenario. Zhou et al. [5,11] develop a sequential quadratic programming robust optimization algorithm, where the worst-case scenario for each constraint and the objective function is found via maximization at each sequential quadratic programming iteration. Cheng and Li [12] extend the approach of Zhou et al. [5] to solve problems for global optimality by using a differential evolution method as an outer optimizer. Similar forms of worst-case analysis are used in the context of reliability-based design optimization (RBDO, also called probabilistic or stochastic optimization) by Du and Chen [13], where the most probable point (MPP) is found via an inner optimization problem and used to ensure that constraints are satisfied at a predetermined reliability level. Liang et al. [14] develop a single loop algorithm for RBDO which avoids using the MPP by converting the RBDO problem into a deterministic problem via the first-order Karush–Kuhn–Tucker optimality conditions of the inner optimization problem. In practice, methods that rely on a worst-case or reliability analysis require assumptions that may not hold for non-convex robust optimization, where a single constraint can have multiple “local” worst-case scenarios (see Example 1 in Sec. 5) and where no probability distribution exists for the uncertain parameters.

An alternative to approaches that search for worst cases is to instead construct the set $\bar{U}$ using randomly sampled scenarios, an idea first applied by Calafiore and Campi [4] to the problem of robust control design. Chamanbaz et al. [15] and Calafiore [16,17] develop sequential optimization approaches that alternate between checking the feasibility of a candidate solution by sampling further scenarios and finding a new candidate solution when scenarios are found under which the candidate solution is infeasible. Rudnick-Cohen et al. [18] propose an approach that generates additional scenarios from randomly sampled scenarios through a best- and worst-case analysis, along with a scenario reduction method that can limit the number of scenarios used in a scenario robust optimization problem. Margellos et al. [19] discuss the tractability of and the expected number of samples needed by this class of methods. Ramponi [20] introduces the property of “essential robustness” to describe the conditions under which such methods asymptotically converge to a robust optimal solution. Because sampling-based approaches converge only asymptotically, it is difficult for them to maintain a pre-specified constraint tolerance for feasibility under uncertainty. Note that randomly sampling scenarios is an inherently global search (every scenario is equally likely to be sampled); thus, the robust optimal solution found by these methods can be considered feasible under uncertainty in a global sense, without any assumptions about worst-case scenarios. However, sampling-based approaches (excluding Ref. [18]) make no attempt to minimize the size of $\bar{U}$, which can lead to a much larger optimization problem than would be considered in worst-case analysis-based approaches.
While sampling-based approaches asymptotically converge to the robust optimal solution when given infinite samples, in practice they must be run with a finite number of samples or iterations. This means that in practice, sampling-based approaches can incorrectly report a non-robust optimal solution with very low worst-case constraint violations as being the robust optimal solution.

This paper presents a new feasibility robust optimization approach that can efficiently solve non-convex robust optimization problems via a local search in the design variable space, while maintaining a global search over the uncertain parameter space. The method has two key components. The first is a new scenario generation method that can both generate scenarios via sampling and refine them with a local optimization method in order to quickly reach a feasibility robust optimal solution. The second is a new scenario-based local robust optimization method, which refines the final solution to ensure that it satisfies the desired constraint tolerance. Computational experiments demonstrate that using the proposed techniques together requires less overall computational effort in some cases than existing robust optimization approaches, and that the proposed method can solve robust optimization problems that purely local robust optimization techniques cannot.

The new feasibility robust optimization approach presented here is based on the framework for sampling-based robust optimization presented in Ref. [18], with five key differences: (i) the new approach uses a single improved scenario generation method in place of the two methods proposed in Ref. [18], (ii) it adds a new local robust optimization step for ensuring the feasibility of the final solution found, (iii) it uses a more efficient formulation than Problem 2 that can contain fewer constraints, (iv) it avoids solving scenario robust optimization problems twice per iteration as done in Ref. [18], and (v) it does not use scenario reduction.

## 3 Problem Formulation

Many sampling-based approaches, such as Refs. [4,15–17], use a reduced form of Problem 2 in which each scenario imposes only the constraints that a design has been found to violate under it, rather than all constraints containing uncertainty. This reduced scenario robust optimization (RSRO) formulation is given in Problem 3 (Eq. (3)), where $\bar{U}$ is a finite set of scenarios under which the constraints need to be imposed and R(uk) is the set of the indices of the constraints gi(x, uk) ≤ 0 that should be imposed under scenario $u_k \in \bar{U}$.

Problem 3: RSRO
$$\begin{aligned}
\min_{x}\quad & f(x)\\
\text{s.t.}\quad & d_l(x) \le 0, \quad \forall l \in \{1,\dots,L\}\\
& g_i(x,u_k) \le 0, \quad \forall u_k \in \bar{U},\ \forall i \in R(u_k)
\end{aligned}$$
(3)
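A minimal sketch of how Problem 3's constraint set is assembled (the instance, scenario labels, and R mapping below are hypothetical): each scenario u_k carries only the constraint indices in R(u_k), so the total constraint count can be far below the I × K constraints Problem 2 would impose for the same scenario set.

```python
# Hypothetical example: I = 3 uncertain constraints, K = 4 scenarios.
I = 3
U_bar = ["u1", "u2", "u3", "u4"]

# R maps each scenario to the indices of the constraints it must impose
# (in Problem 2, every scenario would impose all I constraints).
R = {"u1": [0], "u2": [0, 2], "u3": [1], "u4": [2]}

# Constraint list for Problem 3: one (i, u_k) pair per imposed constraint.
rsro_constraints = [(i, uk) for uk in U_bar for i in R[uk]]

print(len(rsro_constraints))   # 5 constraints in Problem 3
print(I * len(U_bar))          # versus 12 in Problem 2
```

The gap between the two counts grows with the number of constraints and scenarios, which is the source of the scalability advantage discussed below.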

Problem 3 can consist of fewer constraints than Problem 2 would for the same set of scenarios $\bar{U}$. Thus, the approaches using Problem 3 (Refs. [4,15–17] and this paper) should scale better to problems with larger numbers of constraints than those using Problem 2 (such as Ref. [18]), since Problem 3 does not need to impose every constraint gi(x,u) under every scenario $u_k \in \bar{U}$. By using Problem 3 within the robust optimization framework presented in Ref. [18] and incorporating several other improvements, an efficient and scalable robust optimization approach can be developed.

Note that Problem 1 contains an infinite number of constraints, unlike Problems 2 and 3, making it impossible to solve directly as a standard non-convex optimization problem. All robust optimization approaches that use either Problem 2 or Problem 3 in place of Problem 1 assume that there exists a finite set of scenarios $\bar{U}$ such that the feasible region of Problem 2 or 3 under the scenarios in $\bar{U}$ is the same as the feasible region of Problem 1. When this assumption does not hold, an “infinite” number of scenarios may be necessary to solve a robust optimization problem. This paper considers problems where this assumption holds.

## 4 Proposed Approach: Scenario Generation With Local Robust Optimization

The proposed approach, called scenario generation with local robust optimization (SGLRO), solves Problem 1 and consists of two components: a scenario generation method (Sec. 4.1) and a local robust optimization method (Sec. 4.2). SGLRO starts with a sampling-based robust optimization phase (see Fig. 1), performing scenario generation in a similar manner to the approach in Ref. [18] (subsequently referred to as SGR2O). Each time a scenario is generated, it is added to $\bar{U}$ and R, which are then used to re-solve Problem 3. This process continues for a finite number of iterations, after which SGLRO uses a local robust optimization method to obtain its final solution.

Fig. 1

Normally, a sampling-based robust optimization approach can return a non-robust optimal solution with very low worst-case constraint violations after being run for a finite number of iterations. However, a solution with very low worst-case constraint violations should be near the boundaries of the feasible region of Problem 1. Thus, locally searching for worst-case scenarios for that solution should yield additional scenarios that could be added to the set $U¯$ in Problems 2 and 3 so that their feasible regions become the same as Problem 1's. These additional scenarios will enable Problem 2 or 3 to find the robust optimal solution. From a practical standpoint, this local worst-case search is largely the same as running a robust optimization approach that searches for worst-case scenarios [5].

A simple strategy for mitigating the asymptotic convergence of sampling-based methods is thus to use their final solution as the initial conditions for a local “worst-case” based robust optimization approach. SGLRO implements this strategy once it finishes randomly sampling scenarios, by transitioning to a local “worst-case” based robust optimization approach that makes use of both the design and the scenarios generated during random sampling (the “Solve Local Worst Case Robust Optimization using $U¯$” step in Fig. 1). Thus, SGLRO is able to maintain the same global search over the uncertain parameters as a random sampling-based approach to robust optimization, without the limitations of asymptotic convergence.

An implementation of the SGLRO algorithm is shown in Table 1. In Table 1, the set $\bar{U}$ is a set of scenarios, which is initially empty (cf. line 1). “Solve RSRO” corresponds to solving Problem 3, which returns xnew. The function “Sample Possible Scenario” samples a random scenario from $U$ and returns it. As shown in Table 1, SGLRO first solves Problem 3 with no scenarios to generate a candidate design xB (cf. lines 1–2). Then, it randomly samples scenarios until a scenario is found under which the candidate design is infeasible (cf. line 7). It then generates additional scenarios using scenario generation and adds all of these scenarios (including the original randomly sampled one) to $\bar{U}$, the current set of scenarios (cf. lines 14–17). SGLRO repeats these steps for a fixed number of iterations (cf. lines 4–5) and then switches to a local robust optimization method (cf. line 19). The number of iterations should be chosen large enough that SGLRO samples enough scenarios to find the robust optimal solution. When xB is near (or at) the robust optimal solution after these iterations, the local robust optimization method (cf. line 19) refines xB to ensure it is the robust optimal solution. The local robust optimization method is initialized with the set of scenarios $\bar{U}$; it attempts to find new worst-case scenarios that are not in $\bar{U}$ and updates xB to ensure feasibility under these new worst-case scenarios. When the local robust optimization is done, SGLRO returns its current solution as the robust optimal solution.

Table 1

Algorithm 1, SGLRO
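To make Algorithm 1's overall structure concrete, the following is a minimal Python/SciPy sketch run on the circle problem of Sec. 5.1 (the paper's implementation uses MATLAB's fmincon; the solver choices, iteration counts, and the corner-started worst-case searches in the local phase are simplifying assumptions valid for this interval-uncertainty instance, not the paper's method verbatim):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Objective and single uncertain constraint of the circle problem (Sec. 5.1).
f = lambda z: -z[0] ** 2 - z[1] ** 2
g = lambda z, u: (z[0] - u[0]) ** 2 + (z[1] - u[1]) ** 2 - 5.0
eps = 1e-6
corners = [np.array(c) for c in ((-1, -1), (-1, 1), (1, -1), (1, 1))]

def solve_rsro(U_bar):
    """Solve Problem 3 for the current scenario set (one constraint per scenario)."""
    cons = [{"type": "ineq", "fun": (lambda z, u=u: -g(z, u))} for u in U_bar]
    return minimize(f, [0.5, 0.0], method="SLSQP",
                    bounds=[(-5, 5), (-5, 5)], constraints=cons).x

def worst_case(zB, starts):
    """Problem 4 for the single constraint: maximize g over u, multi-started."""
    cands = [minimize(lambda u: -g(zB, u), u0, method="SLSQP",
                      bounds=[(-1, 1), (-1, 1)]).x for u0 in starts]
    return max(cands, key=lambda u: g(zB, u))

U_bar = []
zB = solve_rsro(U_bar)
for _ in range(20):                        # N_I sampling iterations
    u = rng.uniform(-1, 1, size=2)         # "Sample Possible Scenario"
    if g(zB, u) >= eps:                    # candidate design found infeasible
        U_bar += [u, worst_case(zB, [u])]  # scenario generation from u
        zB = solve_rsro(U_bar)

# Local robust optimization phase: add worst-case scenarios until none violate.
for _ in range(10):
    uw = worst_case(zB, corners)
    if g(zB, uw) < eps:
        break
    U_bar.append(uw)
    zB = solve_rsro(U_bar)
# zB is now a robust optimal solution with cost -1, e.g. near (1, 0)
```

For this interval-uncertainty example, the worst case of the convex constraint lies at a corner of $U$, which is why corner-started local searches suffice in the local phase; in general, the local phase relies on Problem 4's local searches from previously generated scenarios.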

### 4.1 Sampling-Based Scenario Generation.

Maximizing the constraint violations of a candidate design can be used to find a worst-case scenario, which is more likely to be one of the scenarios in $\bar{U}$ than a randomly sampled scenario. Let V be the set of constraints violated by design xB under a randomly sampled scenario s ($i \in V$ if and only if gi(xB, s) ≥ ɛ, cf. lines 9–12 in Table 1). Problem 4, shown in Eq. (4), gives the formulation from Ref. [18] for finding a new scenario u that maximizes the sum of the violated constraints in V, where ɛ is a small positive constraint violation tolerance.

Problem 4: Worst-case search
$$\begin{aligned}
\arg\max_{u}\quad & \sum_{i\in V} g_i(x_B,u)\\
\text{s.t.}\quad & g_i(x_B,u) \ge \varepsilon, \quad \forall i \in V\\
& q_j(u) \le 0, \quad \forall j \in \{1,\dots,J\}
\end{aligned}$$
(4)
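A Python/SciPy sketch of Problem 4 for a single violated constraint (the candidate design x_B = (3, 0), the sampled starting scenario, and the circle constraint from Sec. 5.1 are illustrative assumptions; the paper solves this with fmincon):

```python
import numpy as np
from scipy.optimize import minimize

# Single violated constraint (V = {1}) from the circle problem of Sec. 5.1,
# evaluated at a hypothetical candidate design x_B = (3, 0):
xB = np.array([3.0, 0.0])
g = lambda u: (xB[0] - u[0]) ** 2 + (xB[1] - u[1]) ** 2 - 5.0
eps = 1e-6

# Problem 4: maximize the (sum of) violated constraints over u in U,
# subject to the constraints staying violated (g >= eps).
res = minimize(lambda u: -g(u),               # maximize g
               x0=[0.3, 0.4],                 # the sampled violated scenario
               method="SLSQP",
               bounds=[(-1, 1), (-1, 1)],
               constraints=[{"type": "ineq", "fun": lambda u: g(u) - eps}])
u_worst = res.x   # a locally worst-case scenario, here near (-1, 1)
```

Because Problem 4 is a local maximization started from the sampled scenario, it returns a locally worst-case scenario; the global sampling loop is what protects against missing other local worst cases.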

After solving Problem 4, additional constraints that Problem 4 did not attempt to maximize may now be violated for xB. Additional scenarios can be generated by solving Problem 4 again for these newly violated constraints, until solving Problem 4 violates no constraint for xB that has not already been used in the current iteration of scenario generation. Algorithm 2 (Table 2) describes this scenario generation process, where “Solve Worst Case Search” refers to solving Problem 4 from initial point u. Algorithm 2 works by repeatedly solving Problem 4 for a set of violated constraints (Vnew) from a given scenario (ugen) and then adding Problem 4's solution as a new scenario (cf. lines 7–10, Table 2). The first scenario and set of constraints considered are uq and V (cf. lines 1–3, Table 2), which are the scenario randomly sampled in Algorithm 1 (cf. line 14, Table 1) and its set of violated constraints. The next scenario to be considered is the newly generated scenario found by solving Problem 4 (cf. line 7, Table 2), with Vnew determined from the set of constraints that have not yet been violated by any generated scenario (cf. lines 11–15, Table 2). Algorithm 2 stops when no violated constraints remain (cf. line 5, Table 2).

Table 2

Algorithm 2, scenario generation $(xB,V,uq,U¯,R)$
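A pure-Python sketch of Algorithm 2's control flow on a hypothetical one-dimensional toy (two constraints, scalar u ∈ [−1, 1]); for simplicity, “Solve Worst Case Search” is replaced here by a dense grid search rather than a local solver, and the constraint functions are invented for illustration:

```python
import numpy as np

# Toy instance (hypothetical): a fixed design x_B with two uncertain
# constraints reduced to functions of a scalar u in U = [-1, 1].
g = [lambda u: u - 0.5,          # g_1(x_B, u)
     lambda u: u ** 2 - 0.81]    # g_2(x_B, u)
eps = 1e-6
grid = np.linspace(-1.0, 1.0, 2001)

def solve_worst_case(V, u0):
    """Stand-in for Problem 4: maximize the sum of constraints in V over U,
    keeping them violated (dense grid search instead of a local solver)."""
    total = sum(g[i](grid) for i in V)
    feasible = np.all([g[i](grid) >= eps for i in V], axis=0)
    return float(grid[np.where(feasible, total, -np.inf).argmax()])

# Algorithm 2: start from the randomly sampled scenario u_q and its
# violated-constraint set V, then chain worst-case searches.
u_q = 0.6
V = [i for i in range(len(g)) if g[i](u_q) >= eps]      # -> [0]
used, U_new, u_gen, V_new = set(V), [u_q], u_q, list(V)
while V_new:
    u_gen = solve_worst_case(V_new, u_gen)              # new scenario
    U_new.append(u_gen)
    V_new = [i for i in range(len(g))
             if i not in used and g[i](u_gen) >= eps]   # newly violated
    used.update(V_new)
print(U_new)   # [0.6, 1.0, -1.0]: maximizing g_1 exposes g_2
```

The chaining behavior is the point: maximizing the violation of g_1 pushes u to 1.0, which newly violates g_2, triggering one more worst-case search before the loop terminates.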

### 4.2 Scenario-Based Local Robust Optimization.

SGLRO uses a simple scenario-based local robust optimization method that iteratively performs a local search to find the worst-case scenario for each constraint present. The implementation of the local robust optimization method is given in Appendix A, Table 7. In each iteration, the local robust optimization method solves Problem 4 to find the worst-case scenario for each constraint (cf. line 5, Appendix A, Table 7). Any worst-case scenarios that violate constraints are added to $\bar{U}$ (cf. lines 6–9, Appendix A). If new scenarios have been added to $\bar{U}$, the scenario robust optimization problem is solved to obtain a new candidate robust optimal solution (cf. lines 10–11, Appendix A, Table 7). This process repeats until no new scenarios are added to $\bar{U}$ (cf. lines 2 and 10, Appendix A, Table 7), after which the local robust optimization method stops and returns its current solution as the robust optimal solution.

## 5 Examples

SGLRO's performance was compared against a deterministic double loop robust optimization method (see Appendix B, Table 8) and SGR2O [18] across five different examples of non-convex robust optimization problems. The fifth example problem was a scalable test problem, which was run for increasing numbers of design variables, uncertain parameters, and constraints. All examples considered only interval uncertainty.

Because sampling-based robust optimization methods (e.g., SGR2O and SGLRO) are inherently similar to the sampling done in Monte Carlo simulation, Monte Carlo simulation could not be used to verify the robust feasibility of the solutions found. However, in three of the five examples (1, 3, and 4), the worst-case scenarios for the constraints at the robust optimal solution are known to occur with each uncertain parameter at either its maximum or its minimum value. The set of all such scenarios was used to determine the worst-case constraint violations of the approaches compared in these examples. In the remaining two examples, the worst-case constraint violations were determined through alternate analyses (graphically in Example 2 and analytically in Example 5).

All examples used the lower bounds for the design variables as the initial conditions for the approaches compared, except where noted otherwise. In all examples, the objective function was treated as being unaffected by uncertainty. In all five examples, SGR2O used Ns = 12 scenarios, NR = 10, NF = 1 (number of scenarios sampled per iteration), and ɛ = 10⁻⁶, which is the same as the constraint feasibility tolerance used by the optimization solver. The nominal scenario unom used by SGLRO was the midpoint of the range for each uncertain parameter. All methods randomly sampled scenarios from a uniform distribution between the lower and upper bounds for each uncertain parameter.

The number of iterations used for each problem was set based on the specific features of the problem. As SGR2O and SGLRO are non-deterministic, they were run 100 times for Examples 1, 2, 3, and 4. Because Example 5 has a single worst-case scenario, SGLRO's performance was deterministic; thus, it was run once for each problem size. However, SGR2O was non-deterministic for Example 5, so it was run 10 times for each problem size considered. SGR2O was not run 100 times in Example 5 due to the high computational cost associated with very large problem sizes.

All optimization problems used by SGR2O and SGLRO were solved using MATLAB's fmincon solver with the sequential quadratic programming option [21]. However, the deterministic double loop approach used the interior point option instead, as it could not find the robust optimal solution in Example 3 when using sequential quadratic programming. When SGR2O solved the scenario reduction refinement problem detailed in Ref. [18], fmincon's “OptimalityTolerance” and “StepTolerance” settings were set to 10⁻³; additionally, the “M” parameter from Ref. [18] was set to 10⁶. When any method compared solved Problem 3, fmincon's “MaxIterations” setting ($N_\alpha$) was set to 1000 and its “MaxFunctionEvaluations” setting was set to 10⁶. All other formulations were solved using fmincon's default parameters. Gradient information was not supplied to fmincon for any of the examples.

### 5.1 Example 1: Basic Circle Problem.

Example 1 is an extremely simple non-convex robust optimization problem with a concave objective function and a single convex constraint. The problem is to find the feasible point that is the greatest distance from the origin; a point is feasible if it is inside a circle with a known radius but unknown location. Equation (5) provides the formulation for Example 1.
$$\begin{aligned}
\min_{x,y}\quad & -x^2 - y^2\\
\text{s.t.}\quad & (x-u_1)^2 + (y-u_2)^2 - 5 \le 0, \quad \forall u_1, u_2 \in [-1,1]\\
& -5 \le x \le 5, \quad -5 \le y \le 5
\end{aligned}$$
(5)

While Eq. (5) is an extremely simple optimization problem, its feasible region requires the constraint $(x-u_1)^2 + (y-u_2)^2 - 5 \le 0$ to be imposed for four different combinations of values for u1 and u2 (see Fig. 2). There are four locally optimal solutions to Eq. (5), (±1, 0) and (0, ±1), which all share the same globally optimal cost of −1.
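This four-scenario structure can be checked directly (a small verification sketch; the candidate point (1, 0) and corner enumeration follow the corner analysis described in Sec. 5):

```python
import itertools

# Verify that the candidate (x, y) = (1, 0) satisfies the circle constraint
# of Eq. (5) at all four corner scenarios of [-1, 1]^2.
x, y = 1.0, 0.0
gvals = {(u1, u2): (x - u1) ** 2 + (y - u2) ** 2 - 5.0
         for u1, u2 in itertools.product((-1.0, 1.0), repeat=2)}
print(gvals)
# The scenarios (-1, +/-1) are active (g = 0); (1, +/-1) are slack (g = -4)
```

Because two corner scenarios are active at (1, 0), any scenario set that omits both of them leaves the point on a strictly larger feasible region, which is why scenario generation must recover at least the active corners.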

Fig. 2

All methods compared in Example 1 used x = 0.5 and y = 0 as their initial conditions. SGR2O and SGLRO were run with NI = 100 iterations. Figure 3 shows a graphical example of how SGLRO solves Example 1. Note that because SGLRO uses local optimization, it does not need to find all four scenarios which define the feasible region depicted in Fig. 2.

Fig. 3

Table 3 lists the results of the three methods used to solve Example 1. SGLRO reliably converged to the robust optimal solution of Example 1. SGR2O found the robust optimal solution in 99 of its 100 runs. The one run where SGR2O did not converge was caused by sampling a scenario (u = [−0.94, 0.86]) that was extremely close to one of the four scenarios defining the feasible region in Fig. 2. This scenario reduced the probability of sampling a scenario that showed SGR2O's current solution to be infeasible, causing it to run out of iterations before finding such a scenario. The deterministic double loop approach did not converge in Example 1, becoming trapped in an infinite loop cycling between the scenarios shown in Fig. 2.

Table 3

Results for Example 1

| Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
| --- | --- | --- | --- | --- | --- |
| SGR2O (mean) | 137.3 | 655.4 | 0.0056 | −1.0015 | 5.97 |
| SGR2O (standard deviation) | 6.33 | 839.65 | 0.0557 | 0.0151 | 0.30 |
| SGLRO (mean) | 56.9 | 314.6 | 0 | −1 | 4.68 |
| SGLRO (standard deviation) | 22.2 | 115.8 | 0 | 1.6521 × 10⁻¹⁵ | 1.21 |
| Deterministic double loop | N/A | N/A | N/A | | |

### 5.2 Example 2: Local Maxima Example.

Example 2 (Eq. (6)) is a very simple problem that contains local maxima with respect to its single uncertain parameter. Unlike Example 1, it does not contain multiple worst-case scenarios, so it can be used to compare the performance of SGR2O's and SGLRO's global searches relative to a local method such as the deterministic double loop approach.
$$\begin{aligned}
\min_{x}\quad & -x\\
\text{s.t.}\quad & \tfrac{3}{4}\,u\,x\cos\!\left(7\pi u(1-x)^2\right)^2 - \tfrac{x}{10} \le 0, \quad \forall u \in [-1,1]\\
& 0 \le x \le \tfrac{8}{10}
\end{aligned}$$
(6)

There is only one robust feasible solution to Example 2, which is x = 0; all other values of x are infeasible for at least one value of u. SGR2O and SGLRO were run with NI = 100 iterations for Example 2. All approaches in Example 2 used the initial point xIC = 0.1. Figure 4 shows a plot of the constraint in Example 2 as a function of x and u, along with the solutions found by the approaches run on Example 2.

Fig. 4

Table 4 lists the results of the three approaches compared in Example 2. SGLRO and SGR2O reliably found the robust optimal solution; however, the deterministic double loop approach found an infeasible solution (see Fig. 4). This occurred because the deterministic double loop approach uses a local search to find worst-case scenarios, which caused it to find a scenario that locally maximizes the value of the constraint (u = 0.215) instead of the global maximum (u = 1). SGLRO was significantly faster than the other two approaches compared in Example 2. The deterministic double loop approach would be fastest if it used sequential quadratic programming as its solver, but it would still find the same infeasible solution shown in Fig. 4. SGR2O used scenario reduction in 15 of its 100 runs. These 15 runs required many more constraint function calls than the other 85 runs, which caused the large standard deviation in SGR2O's number of constraint function calls.

Table 4

Results for Example 2

| Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
| --- | --- | --- | --- | --- | --- |
| SGR2O (mean) | 178 | 1865 | 1.2824 × 10⁻¹² | −2.3544 × 10⁻¹² | 6.34 |
| SGR2O (standard deviation) | 214 | 3144 | 8.98711 × 10⁻¹² | 1.8123 × 10⁻¹¹ | 2.78 |
| SGLRO (mean) | 14.4 | 143.5 | 2.4689 × 10⁻¹⁴ | −4.5330 × 10⁻¹⁴ | 2.18 |
| SGLRO (standard deviation) | 4.79 | 17.24 | 2.2012 × 10⁻¹³ | 4.0414 × 10⁻¹³ | 0.58 |
| Deterministic double loop | 118 | 188 | 0.4659 | −0.7191 | |

### 5.3 Example 3: Robust Welded Beam.

Example 3 is a robust optimization variant of the well-known welded beam problem considered by Ragsdell and Phillips [22], taken from Refs. [18,23]. The eight uncertain parameters considered were deviations in the values of the problem's four design variables (the dimensions of the weld and of the beam) and in the length, load, and failure stresses of the beam. The objective is to minimize the cost of the beam without considering uncertainty, accounting for the material cost of the beam and the cost of the weld. Example 3 has six constraints: two require that the beam does not fail under shear and bending stress; the other four limit the deflection of the beam, ensure that the beam does not buckle, require that the weld's thickness is not larger than the beam's width, and bound the weld's thickness. SGR2O and SGLRO were run with NI = 100 iterations.

Table 5 lists the results for all three approaches in Example 3. SGLRO and the deterministic double loop approach reliably converged to the robust optimal solution, but SGR2O found the robust optimal solution in only 99 of its 100 runs. Both SGR2O and SGLRO reached the robust optimal solution after performing scenario generation twice. Note that the number of scenarios sampled by SGR2O was approximately a tenth of the number used for this problem in Ref. [18]; with more iterations, all 100 runs of SGR2O would have converged, as they did in Ref. [18]. The deterministic double loop approach was the fastest approach in Example 3.

Table 5

Results for Example 3

| Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
| --- | --- | --- | --- | --- | --- |
| SGR2O (mean) | 536.9 | 14,509 | 0.001 | 2.7859 | 4.17 |
| SGR2O (standard deviation) | 34.75 | 1245.5 | 0.001 | 3.9414 × 10⁻⁴ | 0.40 |
| SGLRO (mean) | 278.2 | 5349.4 | 8.0312 × 10⁻⁹ | 2.7859 | 4.2 |
| SGLRO (standard deviation) | 19.35 | 452.02 | 3.1742 × 10⁻⁸ | 1.97 × 10⁻⁸ | 0.402 |
| Deterministic double loop | 320 | 4679 | 6.6404 × 10⁻⁶ | 2.7859 | 6 |

### 5.4 Example 4: Enhanced Robust Speed Reducer.

Example 4 is a more challenging version of the robust speed reducer design optimization problem first considered in Ref. [24], detailed in Eq. (7). Unlike the formulation considered in other works [5,9,24], which only considered uncertainty for two design variables, this problem includes uncertain deviations for all seven design variables. The new constraint g13 constrains the allowable variation of the distance between the two shafts in the speed reducer. This new constraint relaxes constraints g5 and g6, allowing a wider range of designs than the original problem in Ref. [24]. The upper and lower bounds for the design variables have been changed to allow a larger feasible region. The objective is to minimize the sum of the normal stresses present in the two gears (m2 and m3). The volume of the speed reducer (m1) is bounded by constraint g10. Objective robustness (considered in Ref. [5]) is not considered here. The initial conditions used are the same as in Ref. [5] ([x1, x2, x3, x4, x5, x6, x7] = [3.58, 0.71, 18, 8, 8, 3.5, 5.3]). The uncertain deviations of the design variables were [u1, u2, u3, u4, u5, u6, u7] = [Δx1, Δx2, Δx3, Δx4, Δx5, Δx6, Δx7]. Unlike the original problem, in Example 4 some constraints have multiple worst-case scenarios, which makes solving the robust optimization problem more challenging. SGR2O and SGLRO were run for NI = 100 iterations. Table 6 provides the results for the approaches tested.
$$
\begin{aligned}
&\min_{x}\ \frac{m_2(x)+m_3(x)}{1000}\\
&\text{s.t.}\ \ g(x+u)\le 0,\quad \forall u\in U\\
&\text{where}\\
&\quad m_1(x)=0.7854x_1x_2^2\left(\tfrac{10x_3^2}{3}+14.933x_3-43.0934\right)-1.508x_1(x_6^2+x_7^2)+7.477(x_6^3+x_7^3)+0.7854(x_4x_6^2+x_5x_7^2)\\
&\quad m_2(x)=\frac{\sqrt{(754x_4/(x_2x_3))^2+1.69\times10^{7}}}{0.1x_6^3},\qquad m_3(x)=\frac{\sqrt{(754x_5/(x_2x_3))^2+1.575\times10^{7}}}{0.1x_7^3}\\
&\quad g_1(x)=(x_1x_2^2x_3^2)^{-1}-397.5^{-1},\qquad g_2(x)=(x_1x_2^2x_3)^{-1}-27^{-1}\\
&\quad g_3(x)=x_4^3(x_2x_3x_6^4)^{-1}-1.93^{-1},\qquad g_4(x)=x_5^3(x_2x_3x_7^4)^{-1}-1.93^{-1}\\
&\quad g_5(x)=-x_4+x_6-0.3,\qquad g_6(x)=-x_5+x_7-0.3\\
&\quad g_7(x)=5-x_1x_2^{-1},\qquad g_8(x)=x_1x_2^{-1}-12\\
&\quad g_9(x)=x_2x_3-40,\qquad g_{10}(x)=m_1(x)-1400\\
&\quad g_{11}(x)=m_2(x)-1800,\qquad g_{12}(x)=m_3(x)-1100\\
&\quad g_{13}(x)=(x_4-x_6)^2+(x_5-x_7)^2-0.32\\
&\quad 1\le x_1\le 15,\quad 0.1\le x_2\le 1.5,\quad 8\le x_3\le 28,\quad 0.3\le x_4,x_5\le 12.3,\quad 1\le x_6\le 8,\quad 1\le x_7\le 8\\
&\quad -0.01\le u_1,u_2\le 0.01,\quad -1\le u_3\le 1,\quad -0.1\le u_4,u_5,u_6\le 0.1,\quad -0.05\le u_7\le 0.05
\end{aligned}
$$
(7)
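To make the algebra in Eq. (7) easier to check, the volume and stress terms can be transcribed directly into Python. This is an illustrative sketch of the formulas above (the function names `m1`, `m2`, `m3`, and `objective` mirror the paper's notation; it is not the authors' implementation):

```python
import math

def m1(x):
    """Volume of the speed reducer, m1(x) in Eq. (7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (10 * x3**2 / 3 + 14.933 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.477 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def m2(x):
    """Normal stress in the first gear, m2(x) in Eq. (7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return math.sqrt((754 * x4 / (x2 * x3))**2 + 1.69e7) / (0.1 * x6**3)

def m3(x):
    """Normal stress in the second gear, m3(x) in Eq. (7)."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return math.sqrt((754 * x5 / (x2 * x3))**2 + 1.575e7) / (0.1 * x7**3)

def objective(x):
    """Objective of Example 4: sum of the two gear stresses, scaled by 1000."""
    return (m2(x) + m3(x)) / 1000.0

# Evaluate at the initial design from Ref. [5].
x0 = [3.58, 0.71, 18, 8, 8, 3.5, 5.3]
print(objective(x0))
```

Uncertainty enters only through the constraints: each gi is evaluated at the perturbed design x + u for every scenario u under consideration.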
Table 6

Results for Example 4

| Approach | Sum of all objective function calls | Sum of all constraint function calls | Largest worst-case constraint violation | Final objective function value | Final number of scenarios |
| --- | --- | --- | --- | --- | --- |
| SGR2O (mean) | 914.3 | 57,350 | 0.1668 | 1.885 | 9.3 |
| SGR2O (standard deviation) | 95.72 | 12,500 | 4.4807 × 10⁻⁴ | 5.0658 × 10⁻⁴ | 1.51 |
| SGLRO (mean) | 731.1 | 11,310 | 5.3529 × 10⁻⁹ | 1.886 | 9.11 |
| SGLRO (standard deviation) | 85.99 | 1564 | 2.822 × 10⁻⁸ | 3.9391 × 10⁻⁶ | 1.39 |
| Deterministic double loop | N/A | N/A | N/A | | |

Only SGLRO found the robust optimal solution every time. SGR2O reliably found an infeasible solution that is extremely close to the robust optimal solution but is not robust because small worst-case constraint violations are present in constraints g11, g12, and g13. Note that SGR2O did not perform scenario reduction in Example 4. Like Example 1, Example 4 required multiple worst-case scenarios for one of its constraints (g13), which caused the deterministic double loop approach to enter an infinite loop.

### 5.5 Example 5: Robust DTLZ9.

Example 5 is a single objective, robust version of the scalable multi-objective DTLZ9 test problem [25], which is given in Eq. (8). The objective function is to minimize the sum of the objective functions from the original DTLZ9 problem. Uncertainty is added to the problem by adding an uncertain deviation of ±0.09 for every design variable and by changing the bounds for each design variable to lie within [0.1, 0.9]. The initial conditions used were the midpoint between the bounds.
$$
\begin{aligned}
&\min_{x}\ \sum_{j\in\{1,\ldots,M\}} f_j(x)\\
&\text{s.t.}\ \ g_k(x,u)=f_M^2(x+u)+f_k^2(x+u)-1\ge 0,\quad \forall k\in\{1,\ldots,M-1\},\ \forall u\in U\\
&\text{where}\ \ f_j(x)=\sum_{i=\lfloor (j-1)(n/M)\rfloor}^{\lfloor j(n/M)\rfloor} x_i^{0.1}\\
&\quad 0.1\le x_i\le 0.9,\qquad -0.09\le u_i\le 0.09
\end{aligned}
$$
(8)
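The structure of Eq. (8) can be sketched in Python. The index handling below (1-based summation bounds with the lower bound clipped at 1) is an assumption about the DTLZ9 indexing convention, and the snippet is illustrative rather than the authors' implementation:

```python
import math

def f(j, x, n, M):
    """f_j from Eq. (8): sum of x_i^0.1 over the j-th block of design variables."""
    lo = max(math.floor((j - 1) * (n / M)), 1)  # clip the 1-based lower index at 1
    hi = math.floor(j * (n / M))
    return sum(x[i - 1] ** 0.1 for i in range(lo, hi + 1))

def g(k, x, u, n, M):
    """Robust constraint g_k(x, u) = f_M^2(x+u) + f_k^2(x+u) - 1 >= 0."""
    xu = [xi + ui for xi, ui in zip(x, u)]
    return f(M, xu, n, M) ** 2 + f(k, xu, n, M) ** 2 - 1

# Each x_i^0.1 is increasing in x_i, so every g_k is smallest when all
# deviations sit at their lower bound: the single worst case u_i = -0.09.
n, M = 10, 5
x = [0.5] * n
worst = [-0.09] * n
print(g(1, x, worst, n, M))
```

This monotonicity is why Example 5 has a single worst-case scenario, with every deviation at −0.09, for all of its constraints.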

SGR2O [18], SGLRO, and the deterministic double loop approach were run on instances of Example 5 ranging from n = 10 design variables to n = 250 design variables. The parameter M in the DTLZ9 test problem was chosen to be half the number of design variables (M = n/2). Example 5 has a single worst-case scenario, in which every deviation equals −0.09, so the robust optimal solution assigns a value of 0.1 to all but the last two design variables, which instead equal 0.5527. All three approaches found the robust optimal solution to Example 5 for all problem sizes considered. Based on the computational complexity analysis described in Appendix C, the number of constraints and uncertain parameters in the DTLZ9 problem [25] increases linearly with the number of design variables, so SGR2O [18], SGLRO, and the deterministic double loop approach should all require $O(n^2)$ constraint function calls with respect to the number of design variables (n). Because SGR2O's behavior was not deterministic in Example 5, it was run 10 times for each problem size, and the medians of the numbers of function calls were used for comparison. This non-deterministic behavior was caused by the scenario generation method used by SGR2O, which generates some scenarios by minimizing constraint violations.

As shown in Fig. 5, the number of objective function calls made by SGR2O, SGLRO, and the deterministic double loop approach increased linearly with problem size. This is expected: the solvers required roughly the same number of iterations for most problem sizes, but each finite-difference gradient of the objective function required a number of function calls that grew linearly as design variables were added. SGLRO always found the worst case on the first iteration, so it needed fewer objective function calls than the deterministic double loop approach.

Fig. 5 (number of objective function calls versus problem size)

As shown in Fig. 6, the number of constraint function calls that SGR2O, SGLRO, and the deterministic double loop approach made increased quadratically as the size of the problem increased (the R² values for fitting a quadratic curve are 0.96 for SGR2O, 1 for SGLRO, and 0.99 for the deterministic double loop approach). This result numerically demonstrates that all three approaches have comparable scalability. It also confirms the predicted $O(n^2)$ computational cost and supports the correctness of the computational complexity results presented in Appendix C. SGLRO used fewer constraint function calls than the deterministic double loop approach only when the deterministic double loop approach used matlab's interior point solver; when sequential quadratic programming was used as the solver instead, the deterministic double loop approach required fewer constraint function calls in Example 5 than the other approaches.
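The quadratic-fit diagnostic used above can be reproduced with a small least-squares helper. The call counts below are hypothetical placeholders, not the measured data behind Fig. 6:

```python
def quadratic_r2(ns, ys):
    """R^2 of a least-squares quadratic fit y ~ a*n^2 + b*n + c (stdlib only)."""
    rows = [[n * n, n, 1.0] for n in ns]
    # Normal equations: (X^T X) beta = X^T y for design matrix X = [n^2, n, 1].
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    a = [xtx[i] + [xty[i]] for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x4 augmented matrix.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            m = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= m * a[col][c]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        beta[i] = (a[i][3] - sum(a[i][j] * beta[j] for j in range(i + 1, 3))) / a[i][i]
    pred = [beta[0] * n * n + beta[1] * n + beta[2] for n in ns]
    mean_y = sum(ys) / len(ys)
    ss_res = sum((y - p) ** 2 for y, p in zip(ys, pred))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - ss_res / ss_tot

ns = [10, 50, 100, 150, 200, 250]
calls = [3 * n * n + 40 * n for n in ns]  # hypothetical counts growing quadratically
print(quadratic_r2(ns, calls))
```

For data that grows exactly quadratically the fit is exact and R² is 1; noisy counts, like the measured SGR2O data, drop it below 1.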

Fig. 6 (number of constraint function calls versus problem size)

### 5.6 Discussion of Results.

For the five examples considered, SGLRO was the only approach that reliably found a robust optimal solution. The deterministic double loop approach found robust optimal solutions for Examples 3 and 5, but it could not do so for Examples 1, 2, and 4. In Example 2, the local maxima present in the constraints prevented the deterministic double loop approach from finding the true worst-case scenarios for the constraints. In Examples 1 and 4, however, the deterministic double loop approach failed because both problems have some constraints with multiple worst-case scenarios. This violates the assumption that each constraint has a single worst-case scenario, which the deterministic double loop approach and other worst-case-based approaches to robust optimization [5] require. SGLRO does not require this assumption, which allows it to find all of the scenarios that are needed in order to find the robust optimal solutions to Examples 1 and 4. Additionally, SGLRO uses random sampling when initially searching for worst-case scenarios, which allows it to avoid issues with local maxima such as those present in Example 2.

Although SGR2O almost always found the robust optimal solution in Examples 1 and 3, it reliably failed to find the robust optimal solution in Example 4. The occasional failures in Examples 1 and 3 occurred because SGR2O's number of iterations was set too low. In Example 4, however, increasing the number of iterations would not improve SGR2O's performance. SGR2O failed in Example 4 because it converged to an infeasible solution for which the probability of randomly sampling a scenario that exposes the infeasibility is extremely low. Robust optimization methods that rely solely on random sampling, such as SGR2O, are unable to distinguish this type of solution from a robust optimal solution. SGLRO avoided this problem because its local robust optimization step can easily find a scenario where such a solution is infeasible, allowing it to find the robust optimal solution to Example 4. This step also ensured that SGLRO reliably found the robust optimal solutions to Examples 1 and 3 using fewer iterations than SGR2O needed.

Curiously, using scenario reduction actually increased SGR2O's computational cost in Example 2, even though it is a very small problem. This occurred because the scenario reduction method proposed in Ref. [18] attempts to remove as many scenarios as possible, causing SGR2O to remove some scenarios that it needs to find the robust optimal solution; additional function calls are then needed to find these scenarios a second time. As a result, SGLRO was faster than SGR2O at finding the robust optimal solution to Example 2.

## 6 Conclusions

This paper presented SGLRO, a new approach for solving non-convex robust optimization problems. SGLRO extends past work [18] on sampling-based robust optimization with a new scheme for scenario generation and a new local robust optimization method for refining the final solution. The local robust optimization method makes SGLRO more reliable at finding the robust optimal solution than both sampling-based and worst-case approaches. It was also demonstrated experimentally that SGLRO found the robust optimal solution to several different example problems, even when existing robust optimization methods could not do so reliably.

The results presented demonstrate that SGLRO can efficiently solve complex non-convex robust optimization problems with large amounts of uncertainty. However, the results also indicate several areas of potential improvement for SGLRO. SGLRO's performance could potentially be improved by fully integrating the local robust optimization method into the process of sampling scenarios, rather than running it after all scenarios are sampled. A non-uniform scenario sampling approach could make use of existing infeasible scenarios to find new infeasible scenarios more quickly when near the robust optimal solution, speeding up the rate of convergence. Alternate strategies for scenario generation could more quickly obtain useful scenarios, providing a similar benefit. Developing an approach for scenario reduction which avoids the issues observed with the method presented in Ref. [18] could also potentially provide an improvement in performance.

All of the approaches discussed require that there exists a finite set $\bar{U}$ that can be used to find the robust optimal solution. It is possible to have a robust optimization problem where Problem 2 requires an infinite number of scenarios (such as a line or other continuous curve of scenarios) in order to reach the robust optimal solution. It may be possible to extend the framework of SGLRO to use robust feasibility cuts, as in Ref. [26], or surrogate modeling-based techniques, as in Ref. [27], to handle such problems.

While this paper has only discussed feasibility robust optimization (uncertainty only appearing in the constraints), uncertainty in an objective function (f(x,u) instead of f(x)) can be dealt with by moving the objective function into the constraints (see Sec. 2.1 of Ref. [1]). This concept has also been extended to multi-objective robust optimization (MORO) [28]. As presented, the proposed approach cannot be used for solving MORO problems, as MORO requires accounting for a set of designs (which trade-off between objectives) rather than just one design. Future work will explore methods for using scenario-based approaches to solve MORO problems.

## Acknowledgment

The work presented here was supported in part by the Naval Air Warfare Center (Funder ID: 10.13039/100010217) under cooperative agreement N00421132M006. Such support does not constitute an endorsement by the funding agency of the opinions expressed in the paper.

Implementations of the SGLRO algorithm, SGR2O, the double loop approach compared against, and all numerical examples used in this paper are available online.1

## Nomenclature

### Notation

• u = vector of all uncertain parameters present in the optimization problem
• x = vector of all design variables present in the optimization problem
• D = number of design variables
• P = number of uncertain parameters
• V = set of violated constraints
• $\bar{U}$ = set of scenarios used to solve the scenario robust optimization problem
• $U$ = set of all possible combinations of uncertain parameters (domain of the uncertain parameters)
• xB = current best solution for the design variables
• $N_\alpha$ = maximum number of optimization solver iterations
• NI = number of iterations to run the SGLRO algorithm
• NQ = number of iterations used by a local robust optimization method
• dl(x) = lth constraint without uncertainty
• f(x) = objective function
• gi(x,u) = ith constraint subject to uncertainty
• qj(u) = jth constraint defining the domain of the uncertain parameters
• I, J, L = number of constraints on the design containing uncertainty, on the domain of the uncertain parameters, and on the design not containing uncertainty, respectively
• R(u) = set of the indices of the constraints that u should impose in a reduced scenario robust optimization problem
• ɛ = user-specified constraint tolerance

### Definitions

• Scenario = a scenario assigns a value to all uncertain parameters present in a problem
• SGR2O = scenario generation and reduction robust optimization, the robust optimization approach of Rudnick-Cohen et al. [18]

### Implementation of the Local Robust Optimization Method

Table 7

Algorithm 3, local robust optimization $(x_B, \bar{U}, R)$

### Deterministic Double Loop Robust Optimization Method

Table 8

Algorithm 4, deterministic double loop robust optimization

### Computational Complexity of SGLRO and Other Robust Optimization Approaches

Table 9 details the computational complexity of each of the steps within SGLRO in terms of the total number of constraint function calls (the total number of times that any of the constraints dl(x), gi(x,u), and qj(u) are evaluated). It is assumed that all optimization solvers use a central difference method to estimate derivatives and that the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm [29] is used to estimate the Hessian of f(x) for both the local robust optimization method and any optimization solvers that make use of Hessians (e.g., matlab's fmincon [21]), as this is more efficient than numerically computing the Hessian at every iteration via finite differences. Each entry in Table 9 is computed assuming that the step in question occurs on every iteration of SGLRO, which is why all steps except "Local Robust Optimization" are multiplied by NI. "Reduced Scenario Robust Optimization" requires at most $N_\alpha \times D \times (I \times N_I + L)$ function calls, as Problem 3 has at most I × NI + L constraints (if a scenario is generated on every single iteration), each of whose gradients requires D function calls to evaluate, for a maximum of $N_\alpha$ solver iterations. A similar expression holds for "Worst Case Search," with P in place of D and J in place of L; however, the maximum number of constraints used during "Worst Case Search" never increases. The cost of "Local Robust Optimization" is the sum of the costs of "Worst Case Search" and "Reduced Scenario Robust Optimization," except that NQ (the number of iterations "Local Robust Optimization" needs to converge) replaces NI.
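The bound just derived for "Reduced Scenario Robust Optimization" is plain arithmetic, so it can be sketched as a pair of helper functions (the sizes in the example call are hypothetical, chosen only for illustration):

```python
def reduced_scenario_ro_calls(n_alpha, d, n_con, n_i, l):
    """Constraint calls for one reduced scenario robust optimization solve:
    at most I*N_I + L constraints (n_con plays the role of I), each gradient
    costing D calls, over at most N_alpha solver iterations."""
    return n_alpha * d * (n_con * n_i + l)

def reduced_scenario_ro_total(n_alpha, d, n_con, n_i, l):
    """Summed over the N_I iterations of SGLRO; expanding the product gives
    the O(N_I^2 * N_alpha * D * I + N_I * N_alpha * D * L) behavior."""
    return n_i * reduced_scenario_ro_calls(n_alpha, d, n_con, n_i, l)

# Hypothetical sizes: 100 solver iterations, 7 design variables,
# 13 uncertain constraints, 50 SGLRO iterations, no certain constraints.
print(reduced_scenario_ro_total(n_alpha=100, d=7, n_con=13, n_i=50, l=0))
```

Because the number of accumulated scenarios can itself grow with NI, the per-solve cost carries an extra factor of NI, which is the source of the quadratic term.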

Table 9

Breakdown of computational costs in SGLRO

| Step | Upper bound on number of function calls |
| --- | --- |
| Reduced scenario robust optimization | $O(N_I^2 \times N_\alpha \times D \times I + N_I \times N_\alpha \times D \times L)$ |
| Worst-case search | $O(N_I \times N_\alpha \times P \times I \times J \times N_S)$ |
| Feasibility checking (lines 8–13 of Algorithm 1) | $O(N_I \times I)$ |
| Local robust optimization | $O(N_Q \times N_\alpha \times P \times I \times J + N_Q^2 \times N_\alpha \times D \times I + N_Q \times N_\alpha \times D \times L)$ |

The term $N_\Omega = N_I + N_Q$ can be used to represent the total number of iterations used by SGLRO, which simplifies its worst-case computational cost to the expression given in Table 10, the sum of the terms in Table 9. Table 10 also provides a comparison of SGLRO's computational cost against a basic deterministic double loop approach (see Appendix B, Table 8 for its implementation) and SGR2O [18]. $N_S$ is the maximum limit on the number of scenarios used by SGR2O.

Table 10

Computational costs of methods compared

| Approach | Theoretical worst-case computational cost |
| --- | --- |
| SGR2O | $O(N_\Omega \times N_\alpha \times (N_S \times I + L) \times D + N_\Omega \times N_\alpha \times (N_S \times I + J) \times P)$ |
| Deterministic double loop | $O(N_\Omega \times N_\alpha \times P \times I \times J + N_\Omega \times N_\alpha \times D \times I + N_\Omega \times N_\alpha \times D \times L)$ |
| SGLRO | $O(N_\Omega^2 \times N_\alpha \times P \times I \times J + N_\Omega^2 \times N_\alpha \times D \times I + N_\Omega \times N_\alpha \times D \times L)$ |

From a theoretical standpoint, both SGR2O [18] and a deterministic double loop approach should be faster than SGLRO, as SGLRO's bound contains $N_\Omega^2$ terms. SGR2O appears faster because it uses scenario reduction to limit the maximum number of scenarios in use, which changes the cost of solving Problem 2 or Problem 3 to $O(N_I \times N_S \times N_\alpha \times D \times I + N_I \times N_\alpha \times D \times L)$. The deterministic double loop approach considers only one scenario per constraint, which provides a similar benefit. However, there exist robust optimization problems whose robust optimal solution cannot be found by considering only one scenario per constraint. Additionally, the use of scenario reduction may require additional scenarios to be generated, which can result in SGR2O requiring more constraint function calls than SGLRO.

## References

1. Bertsimas, D., Brown, D. B., and Caramanis, C., 2011, "Theory and Applications of Robust Optimization," SIAM Rev., 53(3), pp. 464–501. 10.1137/080734510
2. Beyer, H.-G., and Sendhoff, B., 2007, "Robust Optimization—A Comprehensive Survey," Comput. Methods Appl. Mech. Eng., 196(33–34), pp. 3190–3218. 10.1016/j.cma.2007.03.003
3. Ben-Tal, A., and Nemirovski, A., 2002, "Robust Optimization—Methodology and Applications," Math. Program., 92(3), pp. 453–480. 10.1007/s101070100286
4. Calafiore, G. C., and Campi, M. C., 2006, "The Scenario Approach to Robust Control Design," IEEE Trans. Autom. Control, 51(5), pp. 742–753. 10.1109/TAC.2006.875041
5. Zhou, J., Cheng, S., and Li, M., 2012, "Sequential Quadratic Programming for Robust Optimization With Interval Uncertainty," ASME J. Mech. Des., 134(10), p. 100913. 10.1115/1.4007392
6. Du, X., and Chen, W., 2000, "Towards a Better Understanding of Modeling Feasibility Robustness in Engineering Design," ASME J. Mech. Des., 122(4), pp. 385–394. 10.1115/1.1290247
7. Bertsimas, D., Nohadani, O., and Teo, K. M., 2010, "Nonconvex Robust Optimization for Problems With Constraints," Informs J. Comput., 22(1), pp. 44–58. 10.1287/ijoc.1090.0319
8. Bertsimas, D., and Nohadani, O., 2010, "Robust Optimization With Simulated Annealing," J. Global Optim., 48(2), pp. 323–334. 10.1007/s10898-009-9496-x
9. Li, M., Azarm, S., and Aute, V., 2005, "A Multi-Objective Genetic Algorithm for Robust Design Optimization," Proceedings of the 7th Annual Conference on Genetic and Evolutionary Computation, Washington, DC, June 25–26, ACM, pp. 771–778.
10. Li, M., Azarm, S., and Boyars, A., 2006, "A New Deterministic Approach Using Sensitivity Region Measures for Multi-Objective Robust and Feasibility Robust Design Optimization," ASME J. Mech. Des., 128(4), pp. 874–883. 10.1115/1.2202884
11. Zhou, J., and Li, M., 2014, "Advanced Robust Optimization With Interval Uncertainty Using a Single-Looped Structure and Sequential Quadratic Programming," ASME J. Mech. Des., 136(2), p. 021008. 10.1115/1.4025963
12. Cheng, S., and Li, M., 2015, "Robust Optimization Using Hybrid Differential Evolution and Sequential Quadratic Programming," Eng. Optim., 47(1), pp. 87–106. 10.1080/0305215X.2013.875164
13. Du, X., and Chen, W., 2004, "Sequential Optimization and Reliability Assessment Method for Efficient Probabilistic Design," ASME J. Mech. Des., 126(2), pp. 225–233. 10.1115/1.1649968
14. Liang, J., Mourelatos, Z. P., and Tu, J., 2004, "A Single-Loop Method for Reliability-Based Design Optimization," ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Salt Lake City, UT, Sept. 28–Oct. 2, American Society of Mechanical Engineers, pp. 419–430.
15. Chamanbaz, M., Dabbene, F., Tempo, R., Venkataramanan, V., and Wang, Q.-G., 2016, "Sequential Randomized Algorithms for Convex Optimization in the Presence of Uncertainty," IEEE Trans. Autom. Control, 61(9), pp. 2565–2571. 10.1109/TAC.2015.2494875
16. Calafiore, G. C., 2017, "Repetitive Scenario Design," IEEE Trans. Autom. Control, 62(3), pp. 1125–1137. 10.1109/TAC.2016.2575859
17. Calafiore, G. C., 2010, "Random Convex Programs," SIAM J. Optim., 20(6), pp. 3427–3464. 10.1137/090773490
18. Rudnick-Cohen, E., Herrmann, J. W., and Azarm, S., 2018, "Feasibility Robust Optimization Via Scenario Generation and Reduction," ASME 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Aug. 26–29, American Society of Mechanical Engineers, p. V02BT03A059.
19. Margellos, K., Goulart, P., and Lygeros, J., 2014, "On the Road Between Robust Optimization and the Scenario Approach for Chance Constrained Optimization Problems," IEEE Trans. Autom. Control, 59(8), pp. 2258–2263. 10.1109/TAC.2014.2303232
20. Ramponi, F. A., 2018, "Consistency of the Scenario Approach," SIAM J. Optim., 28(1), pp. 135–162. 10.1137/16M109819X
21. Matlab, 2018a, Matlab Optimization Toolbox, The MathWorks, Natick, MA.
22. Ragsdell, K., and Phillips, D., 1976, "Optimal Design of a Class of Welded Structures Using Geometric Programming," J. Eng. Ind., 98(3), pp. 1021–1025. 10.1115/1.3438995
23. Mortazavi, A., Azarm, S., and Gabriel, S., 2013, Eng. Optim., 45(11), pp. 1287–1307. 10.1080/0305215X.2012.734818
24. Gunawan, S., 2004, "Parameter Sensitivity Measures for Single Objective, Multi-Objective, and Feasibility Robust Design Optimization," PhD thesis, University of Maryland, Baltimore, MD.
25. Deb, K., Thiele, L., Laumanns, M., and Zitzler, E., 2005, "Scalable Test Problems for Evolutionary Multiobjective Optimization," Evolutionary Multiobjective Optimization. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, eds., Springer, London, pp. 105–145.
26. Siddiqui, S., Azarm, S., and Gabriel, S., 2011, "A Modified Benders Decomposition Method for Efficient Robust Optimization Under Interval Uncertainty," Struct. Multidiscipl. Optim., 44(2), pp. 259–275. 10.1007/s00158-011-0631-1
27. Zhou, Q., Jiang, P., Huang, X., Zhang, F., and Zhou, T., 2018, "A Multi-Objective Robust Optimization Approach Based on Gaussian Process Model," Struct. Multidiscipl. Optim., 57(1), pp. 213–233. 10.1007/s00158-017-1746-9
28. Gunawan, S., and Azarm, S., 2005, "Multi-Objective Robust Optimization Using a Sensitivity Region Concept," Struct. Multidiscipl. Optim., 29(1), pp. 50–60. 10.1007/s00158-004-0450-8
29. Arora, J. S., 2004, Introduction to Optimum Design, Elsevier, New York.