In probabilistic approaches to engineering design, including robust design, mean and variance are commonly used as the optimization objectives. This approach, however, has significant limitations. For one, some mean–variance Pareto efficient designs may be stochastically dominated and should not be considered. Stochastic dominance is a mathematically rigorous concept commonly used in risk and decision analysis, based on cumulative distribution functions (CDFs), which establishes that one uncertain prospect is superior to another while requiring minimal assumptions about the utility function of the outcome. This property makes it applicable to a wide range of engineering problems that ordinarily do not utilize techniques from normative decision analysis. In this work, we present a method to perform optimizations consistent with stochastic dominance: the Mean–Gini method. In macroeconomics, the Gini Index is the de facto metric for economic inequality, but statisticians have also proven that a variant of it can be used to establish two conditions that are necessary and sufficient for both first- and second-order stochastic dominance. These conditions can be used to reduce the Pareto frontier, eliminating stochastically dominated options. Remarkably, one of the conditions combines both mean and Gini, allowing both expected outcome and uncertainty to be expressed in a single objective which, when maximized, produces a result that is not stochastically dominated, provided the Pareto front meets a convexity condition. We also find that, in a multi-objective optimization, the Mean–Gini formulation converges slightly faster than the mean–variance formulation.
Here, the two objectives, μ and σ, are combined by a function f(μ, σ), usually with a weighted-sum formula. Optimizing with respect to two objectives drastically increases computational cost relative to a single objective, making optimization infeasible for some problems and undesirable for others. The weighted-sum approach creates a combined objective that accounts for both mean and uncertainty in outcome, greatly reducing computational cost. However, it requires setting weights a priori, without a mathematically rigorous method to determine suitable values for them.
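As a minimal sketch of the weighted-sum approach described above (function and weight names are our own, not the paper's), the following shows how the a priori weight can silently reverse the ranking of two candidate designs:

```python
import numpy as np

def weighted_sum_objective(outcomes, w=0.5):
    """Hypothetical weighted-sum objective: w*mean - (1-w)*std.

    `outcomes` is a 1-D array of sampled outcomes for one design; the
    weight w must be chosen a priori, which is the drawback discussed
    in the text."""
    mu = outcomes.mean()
    sigma = outcomes.std(ddof=1)
    return w * mu - (1.0 - w) * sigma

rng = np.random.default_rng(0)
design_a = rng.normal(10.0, 2.0, 10_000)  # higher mean, higher spread
design_b = rng.normal(9.0, 0.5, 10_000)   # lower mean, tighter spread

# The ranking of the two designs flips with the arbitrary weight:
print(weighted_sum_objective(design_a, w=0.9) > weighted_sum_objective(design_b, w=0.9))
print(weighted_sum_objective(design_a, w=0.1) > weighted_sum_objective(design_b, w=0.1))
```

The flip illustrates why choosing the weights without rigorous guidance is problematic: the "optimal" design is an artifact of the weight.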
In cases where a multi-objective optimization is feasible, μ and σ can be treated as competing objectives to obtain a set of Pareto efficient alternatives that can be plotted as a Pareto front. However, a decision must be made from within this set of Pareto efficient alternatives, and such a decision must be informed by higher-level information [17,18]. In reality, additional information that can accurately inform such a decision is often not available, and designers often resort to variants of the weighted-sum method applied a posteriori, without any rigorous guidance on what values should be selected for the weights. In such cases, formulating the weight, or preference, vectors is a highly subjective task that is difficult to perform accurately and reliably [17,18].
The effectiveness of μ and σ also breaks down for non-normal distributions. The two moments fully parametrize only the normal distribution, but in reality, uncertain outcomes very often follow non-normal distributions. In such cases, using σ as the summary statistic for uncertainty can be grossly misleading. Figure 1 shows two distributions with identical mean and variance. While they are significantly different, optimizations that use μ and σ as objectives will treat the two outcomes shown identically.
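The situation in Figure 1 is easy to reproduce numerically. The following sketch (our own illustrative distributions, not those of the figure) constructs a symmetric and a strongly skewed outcome with matching mean and variance, whose downside risks nevertheless differ sharply:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Symmetric outcome: normal with mean 10 and standard deviation 2.
sym = rng.normal(10.0, 2.0, n)

# Skewed outcome: shifted exponential matched to the same mean and
# variance (an exponential with scale 2 has std 2; shifting by 8
# makes the mean 10 as well).
skew = 8.0 + rng.exponential(2.0, n)

# Identical first two moments ...
print(sym.mean(), skew.mean())  # both ~10
print(sym.var(), skew.var())    # both ~4

# ... but very different tails: probability of an outcome below 7
print((sym < 7).mean())   # ~0.067 for the normal
print((skew < 7).mean())  # exactly 0 for the shifted exponential
```

A mean–variance objective scores these two outcomes identically even though one can fall below 7 about 7% of the time and the other never can.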
Even when outcomes are normally distributed, μ and σ still do not provide any information with regard to stochastic dominance. As discussed later in this paper, it is possible for a design to be stochastically dominated even though it is Pareto efficient. Thus, when performing optimizations under uncertainty, not all designs within the Pareto efficient set are truly valid alternatives. In this work, we present a simple technique to obtain a subset of the Pareto efficient set that stochastically dominates the rest of the Pareto efficient set, which we refer to as the stochastic dominance consistent set, from which designers should make decisions. The technique centers on replacing variance and standard deviation with a variant of the Gini Index, the metric used in macroeconomics to measure income inequality. The Gini Index and its variants have many rich properties that have led to their application in many fields, including portfolio optimization, signal reconstruction, and plant population characterization [20–26], among others. Replacing variance with Gini ultimately allows for an efficient optimization formulation that produces designs that stochastically dominate alternatives.
Stochastic Dominance and Gini's Mean Difference
An Illustration of First-Order Stochastic Dominance (FSD).
In this case, B always has a higher probability of obtaining a better outcome than A. If a higher value is always desirable, B is superior and A should not even be considered. This runs counter to conventional wisdom, in which designs within the Pareto efficient set are considered valid alternatives. However, as seen above, even designs within the Pareto efficient set may be stochastically dominated. With the multi-objective approach, stochastically dominated designs are evaluated, leading to wasted computational effort. It also allows for the pernicious possibility of a designer choosing an inferior design. This indicates a need for a means to determine stochastic dominance between design alternatives. To address this issue, Horsetail matching was recently developed, which seeks to minimize the distance between the outcome CDF and a target. As stochastic dominance is defined by the CDF, this method prevents the selection of a stochastically dominated design by basing the evaluation criterion directly on the CDF.
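The FSD relationship illustrated above can be checked empirically from samples: B first-order dominates A (with larger outcomes preferred) when B's CDF lies at or below A's everywhere. The following is a minimal sketch of such a check, with hypothetical designs; function names are our own:

```python
import numpy as np

def fsd_dominates(b, a, grid_size=512):
    """Return True if sampled design `b` first-order stochastically
    dominates `a` (higher outcomes preferred): the empirical CDF of
    `b` lies at or below that of `a` on a common grid."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    q = np.linspace(lo, hi, grid_size)
    cdf_a = np.searchsorted(np.sort(a), q, side="right") / a.size
    cdf_b = np.searchsorted(np.sort(b), q, side="right") / b.size
    return bool(np.all(cdf_b <= cdf_a))

rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 50_000)
b = a + 0.5   # same shape, shifted right: b dominates a
print(fsd_dominates(b, a))  # True
print(fsd_dominates(a, b))  # False
```

Such a pointwise CDF comparison is the direct, if computationally blunt, way to screen alternatives; the Mean–Gini conditions developed later avoid the pairwise CDF sweep.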
Relationship to Utility Theory.
In economics and decision analysis, utility theory is often employed to study optimal decision-making under risk or uncertainty [29–31]. These concepts have been applied to various engineering problems, including manufacturing process selection, aquifer design, and scientific payloads, among others [32–34]. However, in problems studied by both economists and engineers, formulating utility functions with parameters that reliably reflect real preferences can be a challenge. With stochastic dominance, however, alternatives can be eliminated with minimal assumptions about the utility functions. In fact, the only assumption necessary to establish FSD is that the utility of a quantity x is strictly increasing with respect to x; that is, U′(x) > 0. This very minimal assumption means FSD can also be applied to physical engineering problems that do not traditionally utilize utility theory, such as maximizing lift or minimizing fuel consumption. To establish second-order stochastic dominance (SSD), the marginal utility must also be strictly decreasing with respect to x; that is, U″(x) < 0. An individual whose utility function has the properties necessary for SSD is known as risk averse. FSD and SSD have different definitions and implications, but in the technique demonstrated in this paper, they are established jointly.
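The claim that FSD needs only monotonicity of utility can be illustrated numerically (this is an illustration, not a proof): when one sampled outcome dominates another, every increasing utility we try agrees on the preference, risk averse or not. The utilities below are our own examples:

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(10.0, 1.0, 100_000)
b = a + 0.3   # b first-order dominates a (same samples shifted up)

# Any strictly increasing utility should prefer b; a few examples:
utilities = [
    lambda x: x,             # risk neutral
    lambda x: np.log(x),     # risk averse (outcomes here stay > 0)
    lambda x: -np.exp(-x),   # risk averse, exponential
    lambda x: x**3,          # risk seeking, still increasing
]
print(all(u(b).mean() > u(a).mean() for u in utilities))  # True
```

Because the samples of b exceed those of a pointwise, U increasing forces E[U(b)] > E[U(a)] regardless of the curvature of U, which is exactly why FSD requires no further utility assumptions.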
Formal Definition of Stochastic Dominance.
In this paper, we define stochastic dominance with slight changes in notation from the standard definitions of stochastic dominance to better suit the engineering design context [27,35]. We define FSD as shown below.
Second-order stochastic dominance additionally assumes diminishing or neutral marginal returns to utility from the quantity of interest; that is, U′(x) > 0 and U″(x) ≤ 0. In the engineering context, SSD may be useful for utility-based optimization approaches or in a value-based formulation with a risk-averse decision maker. In the method presented in this paper, however, FSD and SSD are established jointly, and thus the distinction is not of importance. We nevertheless wish to call attention to the existence of SSD, as stochastic dominance in general remains unfamiliar to the field of engineering optimization.
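SSD has a standard CDF characterization that can also be checked empirically: with larger outcomes preferred, B second-order dominates A when the running integral of B's CDF never exceeds that of A's. The sketch below implements that check for two hypothetical designs (one with slightly higher mean and tighter spread, which should dominate for any risk-averse decision maker); names and tolerances are our own:

```python
import numpy as np

def ssd_dominates(b, a, grid_size=1024):
    """Empirical second-order check: `b` dominates `a` if the running
    integral of b's CDF never exceeds that of a's (higher outcomes
    preferred; the criterion for risk-averse decision makers)."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    q = np.linspace(lo, hi, grid_size)
    dq = q[1] - q[0]
    cdf = lambda s, x: np.searchsorted(np.sort(s), x, side="right") / s.size
    int_a = np.cumsum(cdf(a, q)) * dq
    int_b = np.cumsum(cdf(b, q)) * dq
    return bool(np.all(int_b <= int_a + 1e-12))

rng = np.random.default_rng(4)
a = rng.normal(10.0, 3.0, 50_000)   # wider spread
b = rng.normal(10.2, 1.0, 50_000)   # slightly higher mean, tighter
print(ssd_dominates(b, a))  # True
print(ssd_dominates(a, b))  # False
```

Note that the two CDFs here cross once, so FSD fails while SSD holds, which is the distinction between the two orders.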
Establishing Stochastic Dominance With Gini's Mean Difference (GMD).
In this paper, we propose an alternative metric for uncertainty in probabilistic outcomes that establishes necessary and sufficient conditions for stochastic dominance: Gini's Mean Difference. Its definition is provided below.
Note that γ divided by μ is the Gini index, which is used ubiquitously in macroeconomics as a measure of income inequality [36,37]. That is, the Gini Index is a normalized variant of γ. Yitzhaki proved that necessary and sufficient conditions for stochastic dominance can be established with just μ and γ. This proof was originally formulated for economic problems involving profit, so μ is considered a desirable quantity and γ is undesirable. For all optimizations performed, we minimize γ and maximize μ; that is, we minimize the negative of μ. The following propositions were proven by Yitzhaki.
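As a computational sketch, γ can be estimated from a sample in O(n log n) using the sorted-sample identity for Gini's mean difference E|X1 − X2|. We assume here (consistent with γ/μ being the Gini index) that the paper's γ is half the mean difference; the function names are our own:

```python
import numpy as np

def gini_mean_difference(x):
    """Sample Gini's mean difference, E|X1 - X2| over distinct pairs,
    computed from the sorted sample instead of all O(n^2) pairs."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    k = np.arange(1, n + 1)
    return 2.0 * np.sum((2 * k - n - 1) * x) / (n * (n - 1))

def gamma_metric(x):
    """The uncertainty metric gamma (assumed here to be half the Gini
    mean difference, so that gamma / mean recovers the Gini index)."""
    return 0.5 * gini_mean_difference(x)

rng = np.random.default_rng(5)
x = rng.normal(100.0, 10.0, 100_000)

# For a normal distribution, GMD = 2*sigma/sqrt(pi) ~= 1.1284*sigma.
print(gini_mean_difference(x))     # ~11.28
print(gamma_metric(x) / x.mean())  # Gini index, ~0.0564
```

The closed-form value for the normal case provides a convenient sanity check on the estimator.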
Proposition 1b. If F_A and F_B intersect at most once, then μ_A ≥ μ_B and μ_A − γ_A ≥ μ_B − γ_B are sufficient for A to stochastically dominate B.
In many cases, the CDFs of interest do not intersect more than once, and sufficiency can be established. Previous research determined that the normal, lognormal, exponential, and uniform distributions all have this property, and there are likely many others. In this study, we do not further investigate or more precisely define the mathematical conditions under which sufficiency holds. Future research should attempt to develop methods to determine whether sufficiency can be established for a given problem.
These two conditions provide simple but powerful tools for improving techniques for optimization under uncertainty. In the rest of the paper, we demonstrate how the conditions can be applied to robust design optimization techniques to reduce the decision space from the classic Pareto efficient set to a subset with no stochastically dominated designs. The conditions also allow the metric for uncertainty to be combined with the mean into a single objective which, when optimized, produces a design that is not stochastically dominated. This one objective, μ − γ, achieves what the weighted-sum approach set out to do in reducing the computational cost of optimization, but it rests on rigorous mathematical foundations, requires no weights to be set arbitrarily a priori, and produces a result consistent with stochastic dominance rules.
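The two conditions can be applied directly to screen sampled alternatives. The sketch below (hypothetical designs; γ again assumed to be half the Gini mean difference) checks Yitzhaki's necessary conditions μ_A ≥ μ_B and μ_A − γ_A ≥ μ_B − γ_B between candidate designs:

```python
import numpy as np

def gamma_metric(x):
    """Half of the sample Gini mean difference (assumed form of gamma;
    gamma / mean recovers the Gini index)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    k = np.arange(1, n + 1)
    return np.sum((2 * k - n - 1) * x) / (n * (n - 1))

def may_dominate(mu_a, g_a, mu_b, g_b):
    """Yitzhaki's necessary conditions for A to stochastically
    dominate B: mu_A >= mu_B and mu_A - gamma_A >= mu_B - gamma_B."""
    return mu_a >= mu_b and mu_a - g_a >= mu_b - g_b

rng = np.random.default_rng(6)
designs = {
    "A": rng.normal(10.0, 1.0, 50_000),
    "B": rng.normal(10.5, 1.0, 50_000),  # higher mean, same spread
    "C": rng.normal(10.0, 3.0, 50_000),  # same mean, more spread
}
stats = {k: (v.mean(), gamma_metric(v)) for k, v in designs.items()}
for name, (mu, g) in stats.items():
    print(f"{name}: mean={mu:.2f}  gamma={g:.2f}  mean-gamma={mu - g:.2f}")
# B passes both conditions against A and against C, so A and C can be
# screened out (sufficiency additionally requires the CDFs to cross
# at most once, which holds for normal outcomes).
```

Only two scalars per design are needed, which is what makes the screening cheap compared with pairwise CDF comparisons.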
A Multidisciplinary Design Optimization Test Problem
As a demonstration problem, a system-level MDO simulation for a commercial satellite system was used [39,40]. This simulation captures nine design variables and 22 behavior variables divided across seven coupled subsystems, shown in the design structure matrix in Fig. 3 below. The design variables—the independent variables that are subject to optimization—and behavior variables—the dependent variables obtained through systems analysis—included in this simulation are listed in Tables 1–3 in the Appendix for completeness. All multi-objective optimizations are performed with MATLAB's multi-objective genetic algorithm.
An Uncertain Demand Model.
According to Intelsat's 2003–2010 financial data, revenue from leasing transponders followed a trend described by fitted drift and volatility parameters, and these parameter values were used for the demand simulation. The stochastic variable is drawn from a standard normal distribution, producing 100,000 scenarios that serve as the demand model. The initial demand is chosen as 50, and a time-step Δt of one month is used.
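A scenario generator of this kind can be sketched as follows. The paper's fitted parameter values are not reproduced here, so the drift, volatility, and horizon below are hypothetical placeholders; only the scenario count (100,000), the initial demand of 50, and the one-month step come from the text, and the geometric-Brownian-motion form is our assumption:

```python
import numpy as np

# Hypothetical (not the paper's) annual drift and volatility:
MU, SIGMA = 0.05, 0.20
DT = 1.0 / 12.0                     # one-month time step
N_STEPS, N_SCENARIOS = 15 * 12, 100_000  # 15-year horizon (assumed)
D0 = 50.0                           # initial demand

rng = np.random.default_rng(7)
# The stochastic draw: one standard-normal sample per step/scenario.
eps = rng.standard_normal((N_SCENARIOS, N_STEPS), dtype=np.float32)
# GBM-style log-growth update, vectorized over all scenarios:
log_growth = (MU - 0.5 * SIGMA**2) * DT + SIGMA * np.sqrt(DT) * eps
demand = D0 * np.exp(np.cumsum(log_growth, axis=1))
print(demand.shape)           # (100000, 180)
print(demand[:, -1].mean())   # ~ D0 * exp(MU * 15) ~= 105.8
```

Each row is one demand path; downstream, the revenue model would be evaluated on every path to obtain the outcome distribution for a design.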
Applications of the Mean–Gini Method
Reducing the Pareto Front.
In cases where it is desirable to have a Pareto front from which to make decisions, the conditions can be applied to eliminate stochastically dominated designs, arriving at a subset of the Pareto efficient set that we will refer to as the stochastic dominance efficient set. Remarkably, for a certain class of Pareto efficient sets, this subset can be easily visualized on the Pareto front. The Pareto front reduction can be performed either by optimizing with respect to mean and variance and then calculating the γ values for the elements of the Pareto efficient set, or by directly optimizing with respect to mean and γ in the first place. We strongly recommend performing the optimization with respect to mean and γ, as we have found that it converges faster than the optimization performed with the variance; this finding is discussed in further detail later in the paper. Either way, the Pareto front should be plotted with μ and γ, as the conditions allow for a visual interpretation of the regions of the Pareto front that are stochastically dominated.
Consider some multi-objective optimization that produces a Pareto efficient set. If the slope of the Pareto front is strictly decreasing, the stochastic dominance efficient set can be intuitively visualized on the front. For the purposes of this paper, we define such a set as one that can be approximated by some function g, with μ = g(γ), such that g′(γ) > 0 across the entire domain of the function, g′ is strictly decreasing, and g′(γ*) = 1 for some γ* in the domain. In such cases, the following proposition holds true, the proof of which is provided in the Appendix.
Proposition 2a. Suppose there exists some design d* ∈ P such that g′(γ_d*) = 1. Then, d* stochastically dominates every design d ∈ P such that γ_d < γ_d*.
Proposition 2b. Let P be any Pareto efficient set with members having CDFs that overlap at most once. Suppose there exists some function g such that μ = g(γ), that g′(γ) > 0, and that a point γ* with g′(γ*) = 1 exists. The set S is defined such that d ∈ S if and only if d ∈ P and γ_d ≥ γ*. Then, no two designs within S may stochastically dominate each other.
Thus, we have that no two designs within S stochastically dominate each other, making them all valid alternatives in the absence of additional information. However, all other designs d, such that d ∈ P but d ∉ S, are stochastically dominated by the design at the point where g′(γ) = 1 and should thus not be considered. Note that if no design exists at the exact point where g′(γ) = 1, one should merely choose the design closest to that point. An example demonstrating this technique follows.
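The reduction can be sketched numerically on a hypothetical front with the assumed shape (a concave, increasing μ = g(γ) whose slope crosses one inside the domain); the specific function below is our own illustration, not the satellite problem's front:

```python
import numpy as np

# Hypothetical Pareto front: mu = g(gamma) = 4*sqrt(gamma), so the
# slope g'(gamma) = 2/sqrt(gamma) is strictly decreasing and crosses
# 1 at gamma = 4.
gamma = np.linspace(0.5, 8.0, 400)
mu = 4.0 * np.sqrt(gamma)

# Numerical slope of the front (the paper differentiates a fitted
# model by finite differences; np.gradient is the same idea on the
# sampled points):
slope = np.gradient(mu, gamma)

# Critical point where the slope crosses one; designs with smaller
# gamma are stochastically dominated by the design at this point.
i_star = int(np.argmin(np.abs(slope - 1.0)))
gamma_star = gamma[i_star]
sd_efficient = gamma >= gamma_star  # the reduced, non-dominated subset

# The same design also maximizes the single objective mu - gamma:
i_best = int(np.argmax(mu - gamma))
print(gamma_star, gamma[i_best])    # both ~4.0 (analytic: g'(4) = 1)
```

That the slope-equals-one point and the maximizer of μ − γ coincide is exactly the content of the corollary presented later.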
A multi-objective optimization of the satellite problem was performed for 100 generations with respect to μ and γ with a population size of 1000. The 426 Pareto points from the final generation of this optimization are plotted in Fig. 4, with γ on the x-axis and μ on the right-hand side y-axis. A piecewise function was found to produce the best fit for this Pareto front, with a one-term power function fit to the first 380 data points and a quadratic function fit to the remaining points. With the resulting model, the derivative is calculated numerically with the finite difference method, using a step size approximately five orders of magnitude smaller than the values of γ. The resulting numerical derivative is plotted. The γ value corresponding to the point where the derivative equals one will be known as γ*, and a vertical line denotes this value on the plot. The portion of the Pareto front to the right of this vertical line is the stochastic dominance efficient set.
By visual inspection, we noted that the CDFs almost always overlap only once with each other, and we thus hypothesized that the sufficient condition can be established. To rigorously test this hypothesis, a stochastic dominance test developed by researchers in mathematical ecology was applied to the set of designs in the last generation of the optimization [43,44]. Starting from the point on the Pareto front corresponding to minimum μ and minimum γ, the test moved up the Pareto front, comparing the CDF of each design with that of the next design up the front. Stochastic dominance is determined numerically by a bootstrap algorithm with 1000 samples, and the result is a p-value corresponding to the likelihood that the next design does not dominate the current one. The results are shown in Fig. 5, with the Pareto front and p-values plotted on separate axes. Note that the p-values remain essentially at zero until γ*, where designs start having higher p-values, corresponding to a higher likelihood that stochastic dominance does not occur. We acknowledge that the results are not particularly strong, which we suspect is due to both the weakness of stochastic dominance in our particular test case and the relatively poor resolution of the computationally expensive bootstrap algorithm. Nevertheless, a clear trend is visible that stochastic dominance breaks down after γ*.
The CDF curves of the designs on the Pareto front are also calculated and plotted to visually demonstrate the stochastic dominance characteristics of the curve. In Fig. 6(a), designs are chosen from the stochastically dominated portion of the curve, shown on the right-hand side of the vertical line. By visual inspection, one can see that their CDFs are nearly identical for the bottom decile, but above that, the designs further up the Pareto front clearly dominate those below. In contrast, Fig. 6(b) shows designs for which stochastic dominance cannot be established.
Efficiency in Optimization Convergence.
In cases where establishing a Pareto front is desirable, we propose optimizing with respect to μ and γ instead of μ and σ as is traditionally done. We find, in our experiments, that optimizing with respect to γ is more efficient than optimizing with respect to σ. We do not provide a rigorous theoretical explanation for this improved efficiency, but we are not alone in observing it. Researchers have applied the Gini Index as a sparsity measure in the objective of a signal reconstruction problem and found it to be more efficient than traditional norm-based sparsity measures [21,25]. However, they were also unable to determine a theoretical explanation for the improved performance. Gini is also commonly used as an objective in quantitative finance for portfolio optimization [20,24,45].
The size of the dominating criterion space is then estimated by uniform Monte Carlo sampling each generation. A lower S-Indicator value corresponds to a smaller dominating criterion space and thus a better solution. The results from all 15 pairs of optimization runs, along with their aggregated mean, are shown in Fig. 7. We clearly see that, by the standard of the S-Indicator, the optimization performed with Gini outperforms that performed with σ. After just five generations, the S-Indicator value for the Gini run is already better than the best result obtained with the σ run across all 100 generations. Figure 8 shows the evolving Pareto fronts across the optimization generations, where the faster convergence of the Gini objective is visible.
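A Monte Carlo S-Indicator of this kind can be sketched as follows. The paper's exact formulation is not given here, so this is one common construction (our own names and test fronts): the measure of the region of a reference box whose points would dominate the current front, with both objectives minimized, so that smaller is better:

```python
import numpy as np

def s_indicator(front, lower, upper, n_samples=100_000, rng=None):
    """Monte Carlo estimate of the volume of the region of the box
    [lower, upper] whose points dominate at least one member of
    `front` (both objectives minimized). A front closer to the ideal
    corner leaves less room to improve on it, giving a lower value."""
    if rng is None:
        rng = np.random.default_rng(8)  # same seed -> fair comparison
    pts = rng.uniform(lower, upper, size=(n_samples, 2))
    dominates = (pts[:, None, :] <= front[None, :, :]).all(axis=2).any(axis=1)
    return dominates.mean() * float(np.prod(upper - lower))

x = np.linspace(0.1, 1.0, 50)
front_early = np.column_stack([x, 0.5 / x])  # farther from the ideal
front_late = np.column_stack([x, 0.2 / x])   # converged closer in
lower, upper = np.array([0.0, 0.0]), np.array([1.0, 6.0])
print(s_indicator(front_early, lower, upper) >
      s_indicator(front_late, lower, upper))  # True: later front is better
```

Tracking this scalar per generation gives the convergence curves of the kind shown in Fig. 7.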
The improved convergence of the Gini Index and its variants over variance remains unexplained, both in this paper and by other researchers who have reported its superior performance [21,25]. Previous work does provide a general direction to investigate. In a discussion of the superiority of Gini's mean difference as a measure of variability for non-normal distributions, Yitzhaki mentions that it is sensitive to whether marginal distributions are exchangeable and that this property may influence the results when comparing two marginal distributions. The paper, however, does not provide more detailed results. We suspect this may be a very difficult mathematical analysis problem and would like to pose it as an open problem to the community.
A Single Objective for Expected Value and Uncertainty.
Following from Proposition 2, we obtain the following corollary, the proof of which is provided in the Appendix.
Corollary 1a. Given the conditions assumed in Proposition 2, define φ(d) = μ_d − γ_d and d** = argmax over d ∈ P of φ(d). We have that d** = d*.
Corollary 1b. The design d* is not stochastically dominated by any design in P.
Recall that the result proven assumes the Pareto front takes on a shape whose slope is strictly decreasing and equals one at some point, making it not applicable to all problems in its current form. Thus, it may often still be of interest to perform the multi-objective optimization to construct the Pareto front. It may also be of interest to perform a multi-objective optimization if the designer wishes to explore the designs that neither dominate nor are dominated by the maximizer of μ − γ.
From the results presented in this paper, one can see that stochastic dominance, which has not been the subject of much research in the engineering design community, should be considered in performing optimizations under uncertainty. The mean–variance approach to optimization under uncertainty does not provide a way to establish stochastic dominance, potentially leading to inferior design decisions. We propose a variant of the Gini Index, which we call γ, as an alternative to variance that allows stochastic dominance to be determined when the CDFs of the designs do not intersect more than once. Future research should investigate how stochastic dominance can be established when CDFs do intersect more than once. We also present a method to reduce the Pareto efficient set to a subset that is not stochastically dominated, under the assumption that the Pareto frontier can be approximated with a convex function. Further research should expand the reduction technique to greater generality. Finally, we demonstrate a single optimization objective that accounts for both expected outcome and variability and produces a design that will not be stochastically dominated. There exists an extensive literature on stochastic dominance-based decision analysis methods that may aid in developing future optimization methods for engineering design problems. The minimal assumptions with regard to the utility functions of the outcomes needed to arrive at rigorous conclusions make stochastic dominance-based methods a promising avenue of research for improved techniques for engineering optimization and design.
The opinions, findings, and conclusions stated herein are those of the authors and do not necessarily reflect those of the sponsor.
National Science Foundation (Grant No. CMMI-1300921).
market demand in number of transponders
function fit to the Pareto front
cumulative distribution function corresponding to design evaluated at q
probability of quantity x
mean–Gini Pareto efficient set
designs produced with n generations of optimization with respect to standard deviation
designs produced with n generations of optimization with respect to Gini
stochastic dominance efficient set
simulation time step
mean annual demand growth
mean NPP values of designs in
mean NPP values of design in
standard deviation of NPP of designs in
standard deviation of NPP of designs in
first- and second-order stochastically dominates
Proof of Proposition 2
That is, the condition for stochastic dominance is satisfied if the inequality is true. Let P be some Pareto efficient set that can be approximated with a function g, with μ = g(γ), g′(γ) > 0, and g′ strictly decreasing. Consider some hypothetical design at the point along this function such that g′(γ) = 1; let us refer to this point as d*. Note that for all designs d such that γ_d < γ_d*, we also have that μ_d < μ_d* by Pareto efficiency. Because g′ > 1 on this portion of the front, μ − γ is increasing in γ there, so we have strictly μ_d* − γ_d* > μ_d − γ_d. Thus, both the μ and μ − γ conditions for d* to dominate d are met. Now consider all designs d such that γ_d > γ_d*. Note that for these designs, the condition μ_d ≥ μ_d* holds. However, beyond the point along the Pareto front where d* lies, g′ < 1 because g′ is strictly decreasing. Thus, μ_d − μ_d* < γ_d − γ_d*, and μ_d − γ_d < μ_d* − γ_d*; thus, the μ − γ condition does not hold, and it is not possible for such a d to dominate d*. In fact, because the conditions are necessary for all distributions, this result applies even to CDFs that intersect more than once. Thus, we denote the set {d ∈ P : γ_d ≥ γ_d*} as the stochastic dominance consistent set, as no two designs within this set strictly dominate each other, making all of them valid alternatives in the absence of additional information.

Proof of Corollary 1. Assume all conditions established in Proposition 2. Consider the point d*. We will show that d* corresponds to the maximum value of μ − γ for all designs in P. From Proposition 2, for all d such that γ_d < γ_d*, we have μ_d − γ_d < μ_d* − γ_d*. Now consider d such that γ_d > γ_d*. By the definition of the Pareto efficient set, we have μ_d > μ_d*. We also have g′(γ) < 1 for the portion of the Pareto front where d lies. Thus, μ_d − μ_d* < γ_d − γ_d*, and therefore μ_d − γ_d < μ_d* − γ_d*. Thus, we have shown that μ − γ is maximized at d*. Define φ(d) = μ_d − γ_d, which returns the value of the combined objective for the design d. Since φ is maximized at d*, the design obtained by maximizing the single objective is d*.
We will now show that no designs within P may stochastically dominate d*. Consider the designs d such that γ_d < γ_d*. For these designs, μ_d < μ_d*, so the necessary conditions for d to dominate d* are not met; thus, d* is not stochastically dominated by them. Now consider d such that γ_d > γ_d*. We have from Proposition 2 that μ_d − γ_d < μ_d* − γ_d*; thus, the necessary conditions for d to dominate d* are not met, and d* is not stochastically dominated by these designs either. Because d* is not stochastically dominated by the designs on either side of it, we conclude that d* is not stochastically dominated by any designs in P.