Abstract

Engineers and computational scientists often study the behavior of their simulations by repeated solutions with variations in their parameters, which can be, for instance, boundary values or initial conditions. Through such simulation ensembles, uncertainty in a solution is studied as a function of the various input parameters. Solutions of numerical simulations are often temporal functions, spatial maps, or spatio-temporal outputs. The usual way to deal with such complex outputs is to limit the analysis to several probes in the temporal/spatial domain. This leads to smaller and more tractable ensembles of functional outputs (curves) together with their associated input parameters: augmented ensembles of curves. This article describes a system for the interactive exploration and analysis of such augmented ensembles. Descriptive statistics on the functional outputs are computed by principal component analysis (PCA) projection, kernel density estimation, and the computation of highest density regions. This makes possible the calculation of functional quantiles and outliers. Brushing and linking the elements of the system allows an in-depth analysis of the ensemble. The system enables functional descriptive statistics, cluster detection, and, finally, the realization of a visual sensitivity analysis via cobweb plots. We present two synthetic examples and then validate our approach on an industrial use-case concerning a marine current study using a hydraulic solver.

Introduction

In this article, simulation refers to the application of computational models to the study and prediction of physical events or of the behavior of engineered systems. In this context, the modern usage of simulation tools has improved and grown to a point that has far exceeded many expectations. That remarkable change has come about mainly because of developments in the computational sciences and the rapid advances in computing equipment. Computer models help engineers to forecast the behavior of the system under investigation in conditions that cannot be reproduced in physical experiments (e.g., accidental scenarios), or when physical experiments are theoretically possible but only at a very high cost. To improve these tools and gain a better command of them, it is crucial to be able to analyze them from the viewpoints of uncertainty and sensitivity analysis [1–3]. In particular, sensitivity analysis aims at identifying the most influential parameters for a given output of the computer model and at evaluating the effect of the uncertainty in each uncertain input variable on the model output [4,5].

A probabilistic uncertainty study consists of evaluating the computer model on a large statistical sample of model inputs (which follow a joint probability distribution), then analyzing all the results (the model outputs) with specific statistical tools. The result of such a family of runs is called an ensemble, and each individual run is called a member. Ensembles are multivariate, which means that a simulation is run several times with varying parameters. Their members are multidimensional (both in space and time) and multivalued (several quantities such as temperature, pressure, or velocity are considered). The usual way to deal with these kinds of outputs (a temporal function, a spatial map, or a spatio-temporal output) is to limit the analysis to several probes in the temporal/spatial domain [14]. To deal with this problem, taking ideas from the visualization community seems particularly interesting. Indeed, one of its current challenges is how to deal with the multivariate nature of ensembles [6–8]. Furthermore, uncertainty visualization has long been advocated as one of the top challenges in visualization [9–11].

The goal of this work is to propose methodologies and tools for researchers and engineers performing uncertainty studies by analyzing ensembles. A typical example is a hydraulics engineer studying results generated by a multirun finite element simulation. In this case, the ensemble could consist of a three-dimensional mesh, fixed for all members, and a varying field (temperature, water height, pressure, etc.) that depends on the experimental design used to sample the parameters controlling the simulations. Thus, when the engineer applies a probe to a node of the mesh, she/he obtains not one evolution of a quantity (temperature, water height, pressure, etc.) over time but another, smaller ensemble of functional outputs or curves. We then deal not only with an ensemble of functional outputs but also with their associated simulation parameters. We call this kind of data an augmented ensemble of curves.

A nonaugmented ensemble of curves already presents a first problem of visual clutter, which is well known in the visualization community. When a large number of curves are superposed on one another, the overall perception of the graphs is lost, and the user cannot analyze the ensemble. As an example, Fig. 1 depicts 1500 curves coming from different runs of the same numerical simulation (from a hydraulics application). When looking at the overall behavior of an ensemble of curves, such as the one in Fig. 1, the first set of basic questions that arise is the following:

Fig. 1
Raw visualization of curves coming from a multirun hydraulics simulation: 1500 curves of water height evolving over time
  • What is the median curve?

  • Can we define some confidence interval curves containing most of the curves (as done usually for scalar random variables with the boxplot tool)?

  • Can we detect some abnormal curves, in the sense of a strong difference from the majority of the curves (as outliers for scalar variables)?

  • Are there some clusters, which correspond to different behaviors of the physical model that generated these outputs?

These questions can be answered by methods found in the recent technical literature by way of PCA methods, from a statistical viewpoint [12–14] or from a visualization viewpoint [15–17]. However, for augmented ensembles, new challenges arise because a member of such an ensemble consists of a set of input parameters (which drove a numerical simulation) and its associated functional output. First, the interactive exploration needs a methodology able to visually provide the analyst with the statistical structure of the curves and the identification of clusters. Second, if the clusters of functional outputs correspond to groups of coherent behaviors of the simulations, is it possible to visually study the relationship between these behaviors and the input parameters? This question calls for a visual sensitivity analysis, which we realize by linking the cobweb plot, a classical tool in sensitivity analysis [4], with the ensemble-of-curves visualization described before. This topic has not been addressed in papers [12–17] and is the main contribution of this work.

The Background and Related Work section reviews the main previous works on the subjects covered by this paper. The Estimating Functional Quantiles and Outliers section explains the method used for estimating functional quantiles, while the Linking With the Input Parameters section describes how to perform the visual sensitivity analysis. In the Results section, applications of the methodology are given on toy examples and an industrial example. The Software Implementation and Conclusion sections provide a discussion on software implementation and a conclusion.

Background and Related Work

First of all, our work relates to uncertainty and sensitivity analysis. In particular, global sensitivity analysis is a set of techniques that aim to identify the influential and noninfluential inputs of a computer model [4]. Quantitative global sensitivity analysis relies on a probabilistic representation of the input parameters to consider their overall variation range. Variance-based sensitivity measures, also called Sobol' indices [18], are currently the most popular method for global sensitivity analysis [5]. The principle of Sobol' indices is to decompose the variance of the output, Y, of the simulation into fractions that can be attributed to each of the random model inputs Xi (with i = 1, …, p, where p is the number of inputs). When Y is a scalar output, these fractions are directly interpreted as measures of sensitivity. However, sensitivity analysis for large-scale numerical systems that simulate complex spatial and temporal evolutions remains very challenging because of the treatment of uncertainty [2], the treatment of the functional nature of the output [19,20], and the large volumes of data that can be produced [21]. Our main contribution is the realization of a visual sensitivity analysis, linking the cobweb plot (a classical graphical tool in sensitivity analysis [4]) and the ensemble-of-curves visualization.
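As a reminder (this formulation is standard, following Ref. [18], and is added here for self-containedness), the first-order Sobol' index of an input Xi is defined as

$$S_i = \frac{\operatorname{Var}\left( \mathbb{E}\left[ Y \mid X_i \right] \right)}{\operatorname{Var}(Y)}, \qquad 0 \le S_i \le 1$$

i.e., the fraction of the variance of Y explained by Xi alone; values close to one flag influential inputs, while values close to zero flag noninfluential ones.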

One of the difficulties when visualizing several one-dimensional curves is to avoid visual clutter. An interesting solution is given by Ref. [22], which proposes a “curve density estimation” directly in the curves' space. Other visualizations represent overlaid function graphs as envelopes [23] or semitransparent graphs [24], and offer brushing techniques to highlight selected subsets of the functions [23–25]. Another approach offers a re-orderable matrix of time series charts [26]. The way these methods deal with visual clutter is very different from our approach, which offers quantified statistical information by calculating quantile bands and outliers.

In our work, we use the extension of the classical boxplot to functions: the functional boxplot proposed in Refs. [12] and [27]. A boxplot for scalar variables summarizes the main information of a data sample: the median, the first and third quartiles, and an interquantile-based interval that defines the limit of nonoutlier data. The first step to build such a boxplot is to rank the data thanks to a statistical order or data depth; such an order first has to be defined for functional data [28], which has led to numerous research works in the literature. The concept of functional data depth has been generalized to contours by Ref. [15], which displays boxplots for two-dimensional simulation data in weather forecasting and computational fluid dynamics. The so-called band depth, defined by Ref. [28], is particularly relevant for the goals of Ref. [15]. Band depth is defined on an ensemble of functions: the band depth of each function is the probability that the function lies within the band defined by a random selection of other functions from the distribution. The band depth is computed for each member of the ensemble and can be used, as described in Ref. [27], to visualize summary statistics for an ensemble of functions. Our methodology differs from Ref. [15] because we do not use data depth; the functional summary statistics are calculated through an alternative method based on a PCA projection. Furthermore, our focus is on augmented ensembles and not on ensembles of contours.

In our method, the functional curves are handled by reducing their dimension via a PCA projection, as in Ref. [12] (which is limited to the first two components). Reference [13] introduced this PCA-based approach for visualizing (but noninteractively) functional outputs of computer experiments. Later, Ref. [14] extended the technique to selecting and modeling more than two PCA components by advanced statistical techniques. Our choice of PCA is motivated first by our aim to interact jointly with the PCA plane (defined in the section “Projecting on the Principal Component Analysis Bivariate Plane”), in which each function is represented by a two-dimensional (2D) point. In this respect, the method relates to dimensionality reduction techniques with a human-in-the-loop approach; see Ref. [29] for a structured literature review and references on this field. Furthermore, the PCA plane allows, at the same time, the calculation of functional quantiles and the study of the multimodal nature of ensembles. We remark that the data-depth-based techniques of Ref. [27] do not deal with multimodality.

The PCA technique reduces the data dimension via a linear transformation. In some cases, such a transformation does not work due to the underlying structure of the data (see an engineering example in Ref. [30]). Nonlinear dimension reduction techniques (nonlinear PCA, kernel PCA, Riemannian manifold learning, locally linear embedding, etc., see Ref. [31]) can then be used with a certain increase in complexity and computational cost. The pragmatic approach consists in first applying a PCA and then turning to nonlinear methods if the data variability is not well captured by a small number of PCA components. Using nonlinear dimension reduction techniques is beyond the scope of this paper and will be studied in future works.

Reference [17] presents a method for computing streamline variability plots. It consists of transforming, via PCA, the set of streamlines into a lower-dimensional space in which clustering can be performed. Clustering is performed in PCA-space by fitting geometric medians and confidence ellipses. Finally, the medians and ellipses are transformed back to the domain space and yield the variability plot of the streamline ensemble. Reference [16], by the same authors, applies similar techniques to ensembles of isosurfaces. Our methodology also uses PCA in order to work in a lower-dimensional space and then reproject to the original space. However, the operations performed in the PCA-space are very different: Refs. [16] and [17] perform clustering, while we calculate highest density regions (HDR). This operation necessitates an estimate of the empirical density function in the PCA space, which uses kernels, thus avoiding the fit of a parametric model (such as ellipses). The HDR has a unique and strict mathematical definition and is at the core of the method used to calculate the quantitative and nonparametric variability of the data. HDRs can also be used for clustering; in our current implementation, they assist the users in this task.

Brushing and linking is extensively used in our system, but we claim no new contribution in this area: the interactions we use were already described in classical works such as Refs. [32] and [33]. Other works have applied these classical techniques to ensembles of functions, for instance, Ref. [34] for the investigation of families of data surfaces, Ref. [35] to analyze 2D function ensembles in the development process of powertrain systems, and Ref. [36] for the interactive visual exploration of large three-dimensional scalar ensembles. These references demonstrate the necessity of a flexible visual analysis system that integrates many different linked views for making sense of this complex data. In this context, using brushing and linking together with statistical aggregations, as in Refs. [34] and [36], is appealing. We differ from these works because we do not perform any statistical aggregation (i.e., the computation of statistical moments) and prefer a quantile analysis, which does not introduce any hypothesis about the underlying data distribution.

We finally remark that our work belongs to the field of visual parameter space analysis. This approach was used by Ref. [37] in an interactive system called HyperMoVal, designed to support model validation; the models related to the development of car engines, for tasks requiring real-time prediction of results. Other examples of this visual analysis include Ref. [38], which combined a sensitivity analysis with a linked multidimensional visualization, providing a way to analyze the behavior of an artificial neural network. Reference [39] addresses the problem of parameter-finding in image segmentation by visually guiding the user toward areas of a sparsely sampled parameter space that need refinement, where additional sample points are placed. In a second stage, the user navigates through the parameter space in order to determine areas where the response value (goodness of segmentation) is high.

Reference [40] presents a conceptual framework in which six typical analysis tasks can be performed: optimization, partitioning, fitting, outliers, uncertainty, and sensitivity. Numerous examples exist of parameter space analysis for optimization, such as Ref. [37] or Ref. [39]. Our work differs from most references [34–38] in the analysis tasks we focus on (detecting outliers, partitioning, and visualizing sensitivities). In fact, our main contribution is the realization of a visual sensitivity study in the context of time-evolving numerical simulations.

Estimating Functional Quantiles and Outliers

The estimation of the quantiles and outliers of an ensemble of functions is performed on a plane defined by the first two principal components of its PCA. The method is divided into three main steps:

  1. Project the functions into the PCA bivariate plane.

  2. Estimate the probability density function on this plane, which allows for the computation of HDRs, whose boundaries are isoprobability contours.

  3. Project the HDR boundaries back into the space of curves. The functional quantiles and outliers are then computed.

Projecting on the Principal Component Analysis Bivariate Plane.

Principal component analysis is a dimensionality reduction technique whose purpose is to represent the source data in a new space of lower dimension. It is mathematically defined as an orthogonal linear transformation that maps the data to a new coordinate system such that the greatest variance comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate (called the second principal component), and so on. Projecting the curves into the PCA bivariate plane means that only the first and second principal components are kept; thus, each curve is represented by a point in a two-dimensional space (the bivariate plane).

There is an underlying condition for PCA reduction: the transformation should keep enough information about the source data, while allowing to simplify the analysis. In this article, we limit our scope to a bivariate plane for simplicity but an extension to larger dimensions is possible. Moreover, our examples have a high explained variance using the first two PCA components (which means that the two-dimensional reduced PCA basis correctly reproduces the overall variability of the ensemble of curves). This explained variance $\nu_e$ is calculated from the n singular values $(\sigma_1, \sigma_2, \ldots, \sigma_n)$ of the covariance matrix in the following way:

$$\nu_e = \frac{\sigma_1 + \sigma_2}{\sigma_1 + \sigma_2 + \cdots + \sigma_n}$$

where $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_n$ are the ordered singular values representing the importance of the variance of each principal component. In our implementations, this quantity is systematically visualized in the diagrams containing the bivariate plane.
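To make this step concrete, here is a minimal Python sketch (our own illustration with NumPy; the actual system is implemented inside ParaView, as discussed in the Software Implementation section) that projects an ensemble of discretized curves onto the bivariate plane and computes the explained variance:

```python
import numpy as np

def pca_bivariate_plane(curves):
    """Project an ensemble of curves (one curve per row) onto the
    plane spanned by the first two principal components.

    curves : array of shape (n_members, n_time_steps)
    Returns the 2D coordinates (n_members, 2), the two principal
    axes, the ensemble mean, and the explained variance nu_e.
    """
    mean = curves.mean(axis=0)
    centered = curves - mean
    # SVD of the centered data matrix; the squared singular values are
    # proportional to the eigenvalues (sigma_i) of the covariance matrix,
    # so their ratio gives the explained variance directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    eigvals = s**2
    nu_e = eigvals[:2].sum() / eigvals.sum()  # explained variance nu_e
    coords = centered @ vt[:2].T              # points of the bivariate plane
    return coords, vt[:2], mean, nu_e
```

Each row of `coords` is the 2D point representing one curve, and `nu_e` is the quantity systematically displayed alongside the bivariate plane.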

As said before, working with three or more PCA components is also possible. We have prepared a prototype that uses an interactive scatter plot matrix view, based on Ref. [41], for the interaction with the n-dimensional space of PCA dimensions. However, the real challenge in this case is the computational complexity associated with the estimation of densities in higher dimensions. The work of Ref. [14] allows this problem to be solved.

Highest Density Regions Method.

Once the functional variable has been transformed into a space with fewer components (in our case the bivariate plane), the next step is to estimate the quantiles, which also allows outliers to be detected. Conceptually, two basic operations should be performed to build the quantiles: (i) create a density map on the plane and (ii) calculate isoprobability curves of this map. To implement these operations, we follow the highest density regions method of Ref. [12]. Full mathematical details are given in Ref. [12], but the principle of this method is to assimilate the observations in the bivariate plane of principal components to realizations of a random vector with density f. By calculating an estimate of the density map f, the quantiles can then be computed.

We start from the sample $(X_i)_{i=1}^{n}$, which stands for n observations of the vector X of dimension p = 2. The following smoothing process is used:

$$\hat{f}(X) = \frac{1}{n} \sum_{i=1}^{n} K_H(X - X_i)$$

where $K_H$ is the Gaussian smoothing kernel, which writes

$$K_H(X) = |H|^{-1/2} \, K(H^{-1/2} X)$$

with $K(X) = \frac{1}{2\pi} \exp\left(-\frac{1}{2} \langle X, X \rangle\right)$ being the “standard” Gaussian kernel and H the matrix containing the smoothing parameters (the extension in p dimensions of a smoothing parameter h in dimension 1). Depending on this matrix (diagonal or not), some preferential smoothing directions can be chosen. In our implementation, we first generate a grid covering the bivariate plane, which is initialized to 100 × 100 but can be customized by means of the user interface. We subsequently apply an isotropic kernel, whose width is automatically initialized by use of the rule of Silverman [42].
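This smoothing step can be sketched with SciPy's Gaussian kernel density estimator, which natively supports Silverman's rule (a close but not identical variant: `gaussian_kde` scales the data covariance rather than applying a strictly isotropic kernel; the grid size and variable names are our assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_map(coords, grid_size=100):
    """Estimate the density f_hat on a grid covering the bivariate plane.

    coords : array of shape (n_members, 2), PCA coordinates of the curves.
    Returns the kde object, the grid axes, and the density values.
    """
    # gaussian_kde expects one column per observation, hence the transpose;
    # bw_method="silverman" applies Silverman's rule for the kernel width.
    kde = gaussian_kde(coords.T, bw_method="silverman")
    xs = np.linspace(coords[:, 0].min(), coords[:, 0].max(), grid_size)
    ys = np.linspace(coords[:, 1].min(), coords[:, 1].max(), grid_size)
    gx, gy = np.meshgrid(xs, ys)
    f_hat = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(grid_size, grid_size)
    return kde, xs, ys, f_hat
```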

Once the estimate $\hat{f}$ of the density map is obtained, the HDR method gives a description of important statistical information. An HDR is defined as

$$R_\alpha = \left\{ z : \hat{f}(z) \ge f_\alpha \right\}$$

where α is the order of the quantile we choose and the quantile value $f_\alpha$ is such that $\int_{R_\alpha} \hat{f}(z)\,dz = 1 - \alpha$, which defines the region with probability coverage $1 - \alpha$. It means that $f_\alpha$ is such that all points within the region $R_\alpha$ have a higher density estimate than any of the points outside the region, hence the name highest density region. For a density map, the HDRs can be considered as regions bounded by contours, with an expanding coverage as α decreases. In our implementation, we compute two HDRs:

  • An inner HDR with 50% probability coverage, which corresponds to the central interquartile zone.

  • An outer HDR whose probability coverage can interactively be modified via a slider (default value 95%).

We consider all points excluded from the outer HDR as potential outliers. This allows the analyst to interactively change the threshold used to compute the outliers.
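In practice, $f_\alpha$ can be estimated as an empirical quantile of the density values at the sample points, as in the approach of Ref. [12]; a sketch under this assumption, reusing the `kde` and `coords` objects from the previous snippets:

```python
import numpy as np

def hdr_thresholds_and_outliers(kde, coords, cov_inner=0.50, cov_outer=0.95):
    """Compute the density thresholds of the inner and outer HDRs and
    flag the points left outside the outer HDR as potential outliers.

    The threshold of an HDR with probability coverage c is estimated as
    the (1 - c)-quantile of the density values at the observations.
    """
    dens = kde(coords.T)                          # f_hat at each sample point
    f_inner = np.quantile(dens, 1.0 - cov_inner)  # 50% coverage (interquartile)
    f_outer = np.quantile(dens, 1.0 - cov_outer)  # 95% coverage (slider default)
    outliers = np.where(dens < f_outer)[0]        # members outside the outer HDR
    return f_inner, f_outer, outliers
```

Raising the outer coverage via the slider lowers `f_outer` and therefore shrinks the set of candidate outliers, matching the interaction described above.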

Back Into the Curves Space.

Once both HDRs are calculated, we would like to see them in the original space of the curves. We recall that a point in the PCA plane corresponds to a curve. However, HDRs represent areas of the PCA plane whose boundaries are contours. It is then necessary to run an algorithm that converts these contours into their associated functional quantiles. We propose a first exact algorithm that traverses all points of the discretized boundary and keeps the maximum and minimum values in the curves space. This process generates functional quantiles that are not necessarily existing curves of the ensemble.

Figure 2 shows the analysis of the dataset shown in Fig. 1: the 50% interquantile area is represented in light color and dark is used for the 95% interquantile zone. The median curve (in black) is calculated by finding the point in the bivariate plane that presents the highest value of the density map.
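A sketch of this back-projection step (our illustration; `axes` and `mean` come from the `pca_bivariate_plane` helper above): each discretized boundary point of an HDR is mapped back to a curve by the inverse PCA transform, the pointwise extrema of these curves form the quantile band, and the median is the back-projection of the highest-density point (approximated here by the sample point of maximal density):

```python
import numpy as np

def hdr_band_and_median(boundary_pts, kde, coords, axes, mean):
    """Convert an HDR boundary into a functional quantile band.

    boundary_pts : array (m, 2), the discretized isoprobability contour in
                   PCA space (obtained, e.g., with a contouring routine
                   applied to the density grid).
    Returns the lower/upper envelope curves and the median curve.
    """
    # Inverse PCA transform: a 2D point z maps to mean + z[0]*axis1 + z[1]*axis2.
    curves_on_boundary = mean + boundary_pts @ axes
    lower = curves_on_boundary.min(axis=0)   # pointwise minimum over the boundary
    upper = curves_on_boundary.max(axis=0)   # pointwise maximum over the boundary
    # Median curve: back-projection of the highest-density point of the plane.
    dens = kde(coords.T)
    z_med = coords[np.argmax(dens)]
    median = mean + z_med @ axes
    return lower, upper, median
```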

Fig. 2
The functional quantiles and the median (black line) of the multirun hydraulics simulation curves shown in Fig. 1

Information Contained in the Bivariate Plane.

The bivariate PCA plane presents some important characteristics worth discussing:

  • It provides a data reduction and visually understandable representation, where each curve is represented by a point.

  • The probability density map associated with the bivariate plane allows for the calculation of the median curve, functional quantiles, and outliers.

  • The density map conveys important information about the modality of the curves dataset. Indeed, statistical multimodality is normally associated with a mixture of unimodal distributions. Each of the underlying modes defines different behaviors of the curves, and thus, the original data can be divided into clusters. The HDR exposes the mono- or multimodal nature of the dataset. If an HDR is formed by disjoint areas, the distribution is multimodal.

In the presented method, both the diagram containing the curves (or functional boxplot) and the bivariate PCA plane are jointly visualized by the use of a brushing and linking strategy. In our system, brushing corresponds to a selection operation. Thus, we offer tools to select individual points or subsets of points in the bivariate plane, which highlights the corresponding curves. The opposite scheme, “select curve, highlight point,” is also available. We also set meaningful limits to the exploration by drawing two isoprobability curves. The first contour is fixed at probability 50%, while the outer one is controlled by a slider in the user interface. We finally define a blue-to-red colormap to help the visual interaction with the plane (blue meaning low probability and red high probability).

Linking With the Input Parameters

The proposed methodology also allows for the study of the augmented ensemble: each member of such an ensemble is a couple consisting of the input parameters and a functional output. Thus, a member is represented as a couple (pN, f), where pN is a list of N parameters and f is a function. We constrain our current study to one-dimensional functions f (curves). We also remark that the number N of input parameters is not necessarily small: current numerical simulations can easily present N = 50. The whole ensemble is represented as (pN, f)M, where M is the number of couples or, equivalently, the number of members in the ensemble. We remark that the input parameters and the functional outputs share a common index over the ensemble. Thus, it is technically possible to share the same selection strategy between them. Indeed, our linking strategy allows the use of fully coupled diagrams in order to interact, simultaneously, with the input parameters and the functional outputs of the multirun simulations.

One important consequence of this joint visualization is that it allows for what we define as a visual sensitivity analysis. As a matter of fact, multirun simulations are often used to determine the impact of input parameters on the results of the simulations, which is called a sensitivity analysis [4]. Our system propagates the selections performed on the bivariate plane or on the functional boxplot to the diagrams associated with the input parameters. This allows the exploration of complex input–output relationships.
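Since both sides of the augmented ensemble share the member index, the propagation mechanism reduces, in its simplest form, to a shared boolean mask; a minimal sketch of this linking logic (our illustration, with the rendering of the linked diagrams omitted):

```python
import numpy as np

def propagate_selection(selected_ids, params, curves):
    """Share one selection between input and output views.

    selected_ids : indices of the members brushed in any linked view
                   (PCA plane, functional boxplot, parallel coordinates).
    params : array (M, N) of input parameters; curves : array (M, T).
    Returns the highlighted parameter rows and the highlighted curves.
    """
    mask = np.zeros(len(curves), dtype=bool)
    mask[selected_ids] = True
    # The same mask indexes both sides of the augmented ensemble, so any
    # view can highlight its own representation of the selected members.
    return params[mask], curves[mask]
```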

Results

In this section, we discuss experimental results to demonstrate the utility of the proposed PCA-based functional boxplot, which allows

  1. to study the variance of the curves generated from a multirun numerical simulation;

  2. to detect functional outliers;

  3. to identify clusters of curves that correspond to different behaviors of members of the ensemble;

  4. to perform a visual sensitivity study.

The discussion starts with two synthetic examples; then, an industrial use-case concerning a marine current study using a hydraulic solver is presented.

Oscillating Tangents.

Our first synthetic example consists of an ensemble of time-oscillating analytical functions coming from the following equation:  
$$y(t) = \arctan(X_1)\cos(t) + \arctan(X_2)\sin(t)$$

where X1 and X2 are the input parameters and t represents the time, which is regularly sampled in the interval [0, 2π]. We generate an ensemble of 400 curves by Monte Carlo sampling of both X1 and X2 from a uniform distribution on the interval [−7, 7].

In Fig. 3, we show some results concerning this ensemble of temporal oscillating functions. In the top panel (a), all 400 generated curves are shown. Figure 3(b) shows the result of a user interaction with the PCA bivariate plane of the 400 curves, where a blue-to-red colormap is applied. The explained variance is equal to 100%. This surprising result is explained by the fact that the curves of the oscillating tangents function are regular sinusoids of fixed frequency, tuned only by their amplitude and phase, and thus lie exactly in the plane spanned by cos(t) and sin(t). We can see that four clusters appear, indicating a multimodal structure of the oscillating curves. The analyst has selected one of these clusters; the propagated selection is then highlighted on the curves and corresponds to variations of the same oscillating mode. This example demonstrates the interest of visualizing and interacting with the PCA bivariate plane in the context of a partitioning task [40]. Understanding a multimodal ensemble of curves is indeed a complex analysis task.
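This example is easy to reproduce; the following sketch (our own, reusing `pca_bivariate_plane` from above, with an arbitrary seed and 100 time steps as assumptions) generates the ensemble and projects it. The four clusters appear because arctan concentrates the uniformly sampled inputs near ±arctan(7):

```python
import numpy as np

rng = np.random.default_rng(seed=0)        # arbitrary seed
t = np.linspace(0.0, 2.0 * np.pi, 100)     # 100 time steps is our assumption
X1, X2 = rng.uniform(-7.0, 7.0, size=(2, 400))
# One curve per member: y(t) = atan(X1) cos(t) + atan(X2) sin(t)
curves = (np.arctan(X1)[:, None] * np.cos(t)
          + np.arctan(X2)[:, None] * np.sin(t))
coords, axes, mean, nu_e = pca_bivariate_plane(curves)
print(f"explained variance: {nu_e:.3f}")   # close to 1.0 for this example
```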

Fig. 3
The top panel (a) shows 400 curves generated by the temporal oscillating tangents experiment. The bottom panel (b) shows the results of a user interaction where the analyst has selected (pink points) one of the clusters of the PCA plane.
Fig. 4
The top panel (a) shows 400 curves generated by the modified 1D Campbell function experiment. The bottom panel (b) shows the corresponding median curve and interquantile areas at 50% and 95% probability.

Campbell One-Dimensional Functions.

Our second synthetic example is inspired by Refs. [19] and [43]. It consists of an ensemble of analytical functions that evolve in time. This dataset is generated by the use of the following equation:  
$$y(\tau) = 10 + X_1 \exp\!\left( -\frac{(\tau - 10 X_2)^2}{k_1 X_1^2 + X_3^2} \right) + X_2 X_4 \exp\!\left( k_2 X_1 \tau \right)$$

where X1, X2, X3, and X4 are the input parameters and τ is a variable regularly sampled with step one in the interval [−90, 90]. The quantities k1 and k2 are constants, fixed to 60 and 0.002, respectively. Reference [43] introduced a slightly different version of this function in order to test simple sensitivity analysis tools when model outputs are one-dimensional (1D) curves (understanding the role of each of the four inputs on the translation from left to right of the curve, on the shape of the curve peak, and on the curve tail behavior). From this, Marrel et al. [19] calibrated a function (called Campbell 2D) in order to illustrate sensitivity analysis tools when model outputs are 2D spatial functions (with strong spatial heterogeneities, sharp boundaries, and very different spatial distributions of the output values according to the X values).

We generate an ensemble of 400 curves by Monte Carlo sampling based on a uniform distribution in the interval [−1, 5]; the same sampling is used for all Xi. In the upper panel (a) of Fig. 4, we show all 400 curves generated by the Monte Carlo sampling. At time 80, an event occurs: part of the curves diverge from their original tendency, while the others keep their original behavior. This can be easily understood by looking at the functional interquantile areas and at the median curve, presented in the bottom panel (b) of Fig. 4. Indeed, the median curve and the 50% interquantile area are not modified by the event, while the upper limit of the 95% interquantile area rises. By looking at this representation, an analyst avoids visual clutter and easily understands that the event at time 80 affected only the evolution in time of the top 25% of the curves. The explained variance by the two PCA components is equal to 97%.

Having established that a specific group of temporal evolving functions has its behavior modified by the event, the analyst is interested in knowing whether some of the input parameters are responsible for this behavior. It is then possible to perform a visual sensitivity study by selecting, in the functional boxplot diagrams, all the curves ending in the upper part of the 95% interquantile area. Then, the system propagates the selection to the diagrams dealing with the input parameters. The result of this operation is shown in Fig. 5. In this figure, two interactive linked diagrams are presented: the diagram on the top contains the analysis of the outputs, while the bottom parallel coordinates diagram represents the inputs. The interpretation of the diagrams is straightforward. Indeed, it is possible to visually assess the importance of each parameter by looking at the axes of the parallel coordinates diagram. In this case, X1 and X2 present a high degree of concentration of the selection; thus, they strongly influence the outputs. Using this simple criterion of “visual dispersion,” the parameters can be ordered by importance (X1, X2, X4, and X3), which is one of the main objectives of a sensitivity analysis.
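The logic of this selection-based study can be emulated numerically; the sketch below (our illustration, using the Campbell function as reconstructed above, an arbitrary seed, and a simple standard-deviation ratio as a stand-in for the visual dispersion criterion) selects the curves ending with the highest values and measures how concentrated each input is within the selection:

```python
import numpy as np

rng = np.random.default_rng(seed=1)            # arbitrary seed
tau = np.arange(-90.0, 91.0)                   # step-one sampling of [-90, 90]
k1, k2 = 60.0, 0.002
X = rng.uniform(-1.0, 5.0, size=(400, 4))      # X1..X4 for the 400 members
X1, X2, X3, X4 = X.T
# Note: the denominator may get small when X1 and X3 are both near zero,
# yielding a very narrow peak (harmless in this sketch).
curves = (10.0
          + X1[:, None] * np.exp(-(tau - 10.0 * X2[:, None])**2
                                 / (k1 * X1[:, None]**2 + X3[:, None]**2))
          + (X2 * X4)[:, None] * np.exp(k2 * X1[:, None] * tau))
# "Select the curves ending in the upper part of the quantile band":
selected = curves[:, -1] > np.quantile(curves[:, -1], 0.75)
# A low dispersion of a parameter inside the selection means high influence.
for i in range(4):
    spread = X[selected, i].std() / X[:, i].std()
    print(f"X{i+1}: relative dispersion of selection = {spread:.2f}")
```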

Fig. 5
Realization of a visual sensitivity study over a synthetically generated ensemble using the modified 1D Campbell function. Two interactive linked diagrams are presented: the diagram on the top contains the analysis of the outputs (the curves), while the bottom parallel coordinates diagram represents the four input parameters of the Campbell function. The analyst has selected a group of curves on the top diagram; this selection is thus propagated. We superpose “left bracket” symbols on the parallel coordinates diagram in order to visually reinforce the dispersion of the propagated selection, which is a measure of sensitivity.

In our system, the analyst “asks a question” by using the selection. In this case, the question was “which parameters generated the curves ending with the highest values?” Of course, numerous other questions are possible, based on the selection on the Functional Boxplot, the bivariate plane, or other diagrams linked to the ensemble data. This example demonstrates that our system can perform a visual sensitivity analysis. This kind of fast and informative exploration could be performed before a formal sensitivity analysis, such as the computation of Sobol' indices (see Ref. [5] for this methodological point of view).

A Hydraulics Study-Case.

Our study-case concerns a maritime model of the Alderney Race (Raz Blanchard in French), a strait that runs between Alderney (UK) and Cap de la Hague (France), a cape at the northwestern tip of the Cotentin peninsula in Normandy. This strait presents one of the fastest marine currents in Europe; the current is intermittent, varying with the tide, and can run up to about twelve knots during equinoctial tides.

A study was performed in order to calibrate a hydrodynamic model, which is typically an involved and difficult process due to the complexity of the flows and their interaction with the shoreline, the seabed, and the islands. Thus, it was essential to understand the relationship between the model calibration parameters and the simulated state variables that are compared to the observations. A sensitivity analysis using Sobol' indices was a necessary step prior to calibration. In this context, several multirun studies were performed. In this section, we focus on a particular 1500-run study where five parameters were varied:

  • Two coefficients of friction (CF1 and CF2) modeling the interaction with the seabed.

  • One sea level parameter (“Sea Level”) representing the vertical distance from the surface to the seabed.

  • Two parameters for tidal modeling: the tidal range (vertical variation range) and the tidal velocity.

The maritime model includes Alderney and the tip of the Cotentin peninsula and covers an area of roughly 55 km × 35 km. The finite element mesh is composed of 17,983 nodes and 35,361 triangular elements. The mesh size varies from 100 m, at the shoreline and within the areas of interest, to 1.8 km offshore (western and northern sectors of the model). The computations were performed by the open-source fluid dynamics solver telemac [44], which generated fields such as velocity, pressure, and water height. We extracted 1500 curves from this multirun study by the use of a probe at one of the nodes of the mesh; this leads to the curves shown in Fig. 1.

Figure 6 shows the result of two interactions. A functional boxplot containing the analysis of the 1500 curves is linked to the input parameters, which are represented in a parallel coordinates diagram. In this figure, the analyst explores the relationship between the functional outputs and the parameter “Sea Level.” In the top panel (a) of Fig. 6, the analyst selects the highest values of “Sea Level,” while in the bottom panel (b) the lowest values are selected. By looking at the propagated selections on the functional boxplots (in orange), it is easily understood that “Sea Level” behaves like a vertical offset on the oscillating curves generated by the tide.

Fig. 6
Interactive exploration of the relationship between the functional outputs and the parameter “Sea Level” in a marine hydraulics multirun study, which shows that this parameter applies a vertical shift to the tide

The analyst thus understands that “Sea Level” strongly influences the simulation results. This is coherent with the formal sensitivity analysis that was also performed: Sobol' indices were computed, and they show that the parameter “Sea Level” strongly influences the outputs (explaining around 97% of their variance), while the other parameters present little influence. In addition, the information shown in Fig. 6 is richer than the scalar Sobol' indices. Sobol' indices reveal the strong influence of the parameter “Sea Level,” while Fig. 6 underlines the way this influence is exerted (by applying a vertical shift to the tide).

Hydraulics engineers were also interested in using our system to study or verify which parameters do not influence the functional outputs. This step is fundamental for model reduction, where a parameter is taken out of a model when it is considered noninfluential. Figure 7 shows the result of selecting the highest values of CF1 (one of the coefficients of friction of the seabed). We observe that its propagated selection on the functional boxplot is visually dispersed, which indicates that CF1 has no influence on the behavior of the outputs. This again is coherent with the Sobol'-indices-based sensitivity analysis. Moreover, the physicists performing the study confirmed that CF1 and CF2 should be noninfluential in this case because the seabed is too deep for its friction to have an effect on the sea surface. The explained variance by the two PCA components is equal to 99%.

Fig. 7
Interactive exploration of the relationship between the functional outputs and “CF1” (coefficient of friction 1) shows that this parameter is not relevant for this study, due to the high dispersion of the propagated selection

Finally, Fig. 8 shows a more subtle result. The analyst interacted with the propagated selected curves of the bottom panel (b) of Fig. 6. Our system allows selections to be refined: subensembles of the curves with low “Sea Level” values were selected, and a second-order, or indirect, effect was observed. Figure 8 illustrates this second-order effect by selecting (a) low “Sea Level” and high “Tidal Range” values and (b) low “Sea Level” and low “Tidal Range” values. Comparing the curves (in pink) selected in Fig. 8, we observe two modes of oscillation of the tide: for a fixed “Sea Level,” the “Tidal Range” controls the amplitude of oscillation of the tide. The existence of this behavior is coherent with the physics of the problem, but it could not be observed in the performed Sobol'-based sensitivity analysis.

Fig. 8
Interactive exploration of a subtle phenomenon involving two parameters: “sea level” and “tidal range.” The effect of “tidal range” is not the same depending on the value of “sea level”

Software Implementation

The system described in this article was developed through a collaboration between visualization scientists and statisticians. The aim is the development of mathematical tools to study and analyze multirun simulations, before integrating the most efficient algorithms in the openturns software [45]. It was decided to design and implement the new interactive visual analytics methods by integrating the new developments into ParaView [46]. openturns and ParaView are both integrated in the salome open-source numerical simulation platform [47].

The original idea was to introduce a Functional Boxplot view in ParaView in order to avoid visual clutter and interactively study the outliers of an ensemble of curves. The bivariate PCA plane and the HDRs were seen as a way of augmenting the information of the Functional Boxplot view, which is an advantage over functional depth methods like those of Refs. [27] and [28]: data depth does not allow the display of data multimodality but only the calculation of quantiles of functions. In our system, if the structure is multimodal, the analyst can visually identify the clusters, which are disjoint regions of the inner HDR. Furthermore, the bivariate PCA plane could be segmented by any automatic clustering algorithm. This is straightforward in our integration in ParaView because a clustering algorithm can be added to its standard visualization pipeline. We successfully tested ParaView's native implementation of k-means.

On the other hand, interacting across views introduced problems in the architecture of ParaView and, as a consequence of this work, the so-called linked-views mechanism was developed. In this context, other statistical views were also implemented; we note, in particular, the implementation of an interactive Scatter Plot Matrix view, which is a version of the work of Ref. [41].

Finally, our system was fully integrated in ParaView and is available from version 5.0.1 onward. Since this software is open-source, the examples given in this article can easily be reproduced. Indeed, we include all data presented in this article as supplemental material.

Conclusion

We have designed and implemented a system allowing the in-depth study of augmented ensembles produced by multirun numerical simulations dealing with uncertainty. These augmented ensembles are composed of functions and their associated parameters. The main contribution of our system is that a visual sensitivity study becomes possible by jointly analyzing functional outputs and their corresponding input parameters.

Figure 9 synthesizes the overall methodology. Its principal element is based on HDR computed on the PCA bivariate plane. This allows the realization of the following tasks:

Fig. 9
Scheme of the overall PCA-based methodology
  • Avoid visual clutter by visualizing interquantile areas and the median curve; interactively detect functional outliers.

  • Identify clusters of functions by means of the HDR and PCA-plane.

Combining all these elements with the linking of functional outputs to their corresponding input parameters allows the realization of a visual sensitivity study.

Two synthetic examples and one industrial use-case have demonstrated the potential of the approach, which has been integrated in a software environment based on the ParaView and openturns platforms. Current work aims to extend this method to a larger number of components retained in the PCA step and to the visual sensitivity analysis of parameter-augmented ensembles of spatial fields. Indeed, in many applications, the outputs of computer codes are vectors supported by surfaces (see some examples in Refs. [2], [3], [19], and [20]). Future works will also consider nonlinear dimensionality reduction techniques [31] in order to replace the PCA step.

Acknowledgment

The authors are grateful to the associate editor and the two anonymous reviewers whose comments helped to improve the paper.

References
1. De Rocquigny, E., Devictor, N., and Tarantola, S., eds., 2008, Uncertainty in Industrial Practice: A Guide to Quantitative Uncertainty Management, Wiley & Sons, Chichester, UK.
2. Smith, R. C., 2014, Uncertainty Quantification, SIAM, Philadelphia, PA.
3. Ghanem, R., Higdon, D., and Owhadi, H., eds., 2017, Handbook of Uncertainty Quantification, Springer, Cham, Switzerland.
4. Saltelli, A., Chan, K., and Scott, E. M., eds., 2000, Sensitivity Analysis, Wiley, Chichester, UK.
5. Iooss, B., and Lemaître, P., 2015, "A Review on Global Sensitivity Analysis Methods," Uncertainty Management in Simulation-Optimization of Complex Systems, C. Meloni and G. Dellino, eds., Springer, New York, pp. 101–122.
6. Love, A. L., Pang, A., and Kao, D. L., 2005, "Visualizing Spatial Multivalue Data," IEEE Comput. Graph. Appl., 25(3), pp. 69–79. 10.1109/MCG.2005.71
7. Sanyal, J., Zhang, S., Dyer, J., Mercer, A., Amburn, P., and Moorhead, R., 2010, "Noodles: A Tool for Visualization of Numerical Weather Model Ensemble Uncertainty," IEEE Trans. Visualization Comput. Graph., 16(6), pp. 1421–1430. 10.1109/TVCG.2010.181
8. Potter, K., Wilson, A., Bremer, P. T., Williams, D., Doutriaux, C., Pascucci, V., and Johnson, C. R., 2009, "Ensemble-Vis: A Framework for the Statistical Visualization of Ensemble Data," IEEE International Conference on Data Mining Workshops (ICDMW'09), Miami, FL, Dec. 6, pp. 233–240. 10.1109/ICDMW.2009.55
9. Lodha, S. K., Wilson, C. M., and Sheehan, R. E., 1996, "LISTEN: Sounding Uncertainty Visualization," Proceedings of the Seventh Conference on Visualization '96, IEEE Computer Society Press, San Francisco, CA, Oct. 27–Nov. 1. 10.1109/VISUAL.1996.568105
10. Pang, A. T., Wittenbrink, C. M., and Lodha, S. K., 1997, "Approaches to Uncertainty Visualization," Visual Comput., 13(8), pp. 370–390. 10.1007/s003710050111
11. Johnson, C. R., and Sanderson, A. R., 2003, "A Next Step: Visualizing Errors and Uncertainty," IEEE Comput. Graph. Appl., 23(5), pp. 6–10. 10.1109/MCG.2003.1231171
12. Hyndman, R. J., and Shang, H. L., 2010, "Rainbow Plots, Bagplots, and Boxplots for Functional Data," J. Comput. Graphical Stat., 19(1), pp. 29–45. 10.1198/jcgs.2009.08158
13. Popelin, A.-L., and Iooss, B., 2013, "Visualization Tools for Uncertainty and Sensitivity Analyses on Thermal-Hydraulic Transients," Proceedings of the Joint International Conference on Supercomputing in Nuclear Applications and Monte Carlo 2013 (SNA + MC 2013), Paris, France, Oct. 27–31, p. 03403.
14. Nanty, S., Helbert, C., Marrel, A., Pérot, N., and Prieur, C., 2016, "Uncertainty Quantification for Functional Dependent Random Variables," Comput. Stat., 32(2), pp. 559–583. https://link.springer.com/article/10.1007/s00180-016-0676-0
15. Whitaker, R. T., Mirzargar, M., and Kirby, R. M., 2013, "Contour Boxplots: A Method for Characterizing Uncertainty in Feature Sets From Simulation Ensembles," IEEE Trans. Visualization Comput. Graph., 19(12), pp. 2713–2722. 10.1109/TVCG.2013.143
16. Ferstl, F., Kanzler, M., Rautenhaus, M., and Westermann, R., 2016, "Visual Analysis of Spatial Variability and Global Correlations in Ensembles of Iso-Contours," Comput. Graph. Forum, 35(3), pp. 221–230. 10.1111/cgf.12898
17. Ferstl, F., Bürger, K., and Westermann, R., 2016, "Streamline Variability Plots for Characterizing the Uncertainty in Vector Field Ensembles," IEEE Trans. Visualization Comput. Graph., 22(1), pp. 767–776. 10.1109/TVCG.2015.2467204
18. Sobol', I. M., 1993, "Sensitivity Estimates for Nonlinear Mathematical Models," Math. Modell. Comput. Exp., 1(4), pp. 407–414. https://pdfs.semanticscholar.org/d339/b9cc42d6a7286d96814e6713fd13cdde87e7.pdf
19. Marrel, A., Iooss, B., Jullien, M., Laurent, B., and Volkova, E., 2011, "Global Sensitivity Analysis for Models With Spatially Dependent Output," Environmetrics, 22(3), pp. 383–397. 10.1002/env.1071
20. Marrel, A., Saint-Geours, N., and De Lozzo, M., 2017, "Sensitivity Analysis of Spatial and/or Temporal Phenomena," Handbook of Uncertainty Quantification, R. Ghanem, D. Higdon, and H. Owhadi, eds., Springer, Cham, Switzerland.
21. Terraz, T., Ribés, A., Fournier, Y., Iooss, B., and Raffin, B., 2017, "Melissa: Large Scale in Transit Sensitivity Analysis Avoiding Intermediate Files," The International Conference for High Performance Computing, Networking, Storage and Analysis (Supercomputing), Denver, CO. https://www.researchgate.net/publication/319529077_Melissa_Large_Scale_In_Transit_Sensitivity_Analysis_Avoiding_Intermediate_Files
22. Lampe, O. D., and Hauser, H., 2011, "Curve Density Estimates," Comput. Graph. Forum, 30(3), pp. 633–642. 10.1111/j.1467-8659.2011.01912.x
23. Hochheiser, H., and Shneiderman, B., 2004, "Dynamic Query Tools for Time Series Data Sets: Timebox Widgets for Interactive Exploration," Inf. Visualization, 3(1), pp. 1–18. 10.1057/palgrave.ivs.9500061
24. Konyha, Z., Matkovic, K., Gracanin, D., Jelovic, M., and Hauser, H., 2006, "Interactive Visual Analysis of Families of Function Graphs," IEEE Trans. Visualization Comput. Graph., 12(6), pp. 1373–1385. 10.1109/TVCG.2006.99
25. Muigg, P., Kehrer, J., Oeltze, S., Piringer, H., Doleisch, H., Preim, B., and Hauser, H., 2008, "A Four-Level Focus+Context Approach to Interactive Visual Analysis of Temporal Features in Large Scientific Data," Comput. Graph. Forum, 27(3), pp. 775–782. 10.1111/j.1467-8659.2008.01207.x
26. McLachlan, P., Munzner, T., Koutsofios, E., and North, S., 2008, "LiveRAC: Interactive Visual Exploration of System Management Time-Series Data," Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy, Apr. 5–10, pp. 1483–1492. https://www.cs.ubc.ca/labs/imager/tr/2008/liverac/liverac.pdf
27. Sun, Y., and Genton, M. G., 2011, "Functional Boxplots," J. Comput. Graphical Stat., 20(2), pp. 316–334. 10.1198/jcgs.2011.09224
28. López-Pintado, S., and Romo, J., 2009, "On the Concept of Depth for Functional Data," J. Am. Stat. Assoc., 104(486), pp. 718–734. 10.1198/jasa.2009.0108
29. Sacha, D., Zhang, L., Sedlmair, M., Lee, J. A., Peltonen, J., Weiskopf, D., North, S. C., and Keim, D. A., 2017, "Visual Interaction With Dimensionality Reduction: A Structured Literature Analysis," IEEE Trans. Visualization Comput. Graphics, 23(1), pp. 241–250. 10.1109/TVCG.2016.2598495
30. Auder, B., de Crecy, A., Iooss, B., and Marquès, M., 2012, "Screening and Metamodeling of Computer Experiments With Functional Outputs: Application to Thermal-Hydraulic Computations," Reliab. Eng. Syst. Saf., 107, pp. 122–131. 10.1016/j.ress.2011.10.017
31. Lee, J. A., and Verleysen, M., 2007, Nonlinear Dimensionality Reduction, Springer, Berlin.
32. Becker, R. A., and Cleveland, W. S., 1987, "Brushing Scatterplots," Technometrics, 29(2), pp. 127–142. 10.1080/00401706.1987.10488204
33. Keim, D. A., 2002, "Information Visualization and Visual Data Mining," IEEE Trans. Visualization Comput. Graphics, 8(1), pp. 1–8. 10.1109/2945.981847
34. Matkovic, K., Gracanin, D., Klarin, B., and Hauser, H., 2009, "Interactive Visual Analysis of Complex Scientific Data as Families of Data Surfaces," IEEE Trans. Visualization Comput. Graphics, 15(6), pp. 1351–1358. 10.1109/TVCG.2009.155
35. Piringer, H., Pajer, S., Berger, W., and Teichmann, H., 2012, "Comparative Visual Analysis of 2D Function Ensembles," Comput. Graphics Forum, 31(3 pt. 3), pp. 1195–1204. 10.1111/j.1467-8659.2012.03112.x
36. Demir, I., Dick, C., and Westermann, R., 2014, "Multi-Charts for Comparative 3D Ensemble Visualization," IEEE Trans. Visualization Comput. Graphics, 20(12), pp. 2694–2703. 10.1109/TVCG.2014.2346448
37. Piringer, H., Berger, W., and Krasser, J., 2010, "HyperMoVal: Interactive Visual Validation of Regression Models for Real-Time Simulation," Comput. Graphics Forum, 29(3), pp. 983–992. 10.1111/j.1467-8659.2009.01684.x
38. Theron, R., and De Paz, J. F., 2006, "Visual Sensitivity Analysis for Artificial Neural Networks," International Conference on Intelligent Data Engineering and Automated Learning, Springer, Berlin, pp. 191–198.
39. Torsney-Weir, T., Saad, A., Moller, T., Hege, H., Weber, B., Verbavatz, J., and Bergner, S., 2011, "Tuner: Principled Parameter Finding for Image Segmentation Algorithms Using Visual Response Surface Exploration," IEEE Trans. Visualization Comput. Graphics, 17(12), pp. 1892–1901. 10.1109/TVCG.2011.248
40. Sedlmair, M., Heinzl, C., Bruckner, S., Piringer, H., and Möller, T., 2014, "Visual Parameter Space Analysis: A Conceptual Framework," IEEE Trans. Visualization Comput. Graphics, 20(12), pp. 2161–2170. 10.1109/TVCG.2014.2346321
41. Elmqvist, N., Dragicevic, P., and Fekete, J. D., 2008, "Rolling the Dice: Multidimensional Visual Exploration Using Scatterplot Matrix Navigation," IEEE Trans. Visualization Comput. Graphics, 14(6), pp. 1539–1548. 10.1109/TVCG.2008.153
42. Silverman, B. W., 1981, "Using Kernel Density Estimates to Investigate Multimodality," J. R. Stat. Soc., Ser. B, 43(1), pp. 97–99. 10.1111/j.2517-6161.1981.tb01155.x
43. Campbell, K., McKay, M. D., and Williams, B. J., 2006, "Sensitivity Analysis When Model Outputs Are Functions," Reliab. Eng. Syst. Saf., 91(10), pp. 1468–1472. 10.1016/j.ress.2005.11.049
44. Hervouet, J.-M., 2000, "TELEMAC Modelling System: An Overview," Hydrol. Processes, 14(13), pp. 2209–2210. 10.1002/1099-1085(200009)14:13<2209::AID-HYP23>3.0.CO;2-6
45. Baudin, M., Dutfoy, A., Iooss, B., and Popelin, A.-L., 2017, "OpenTURNS: An Industrial Software for Uncertainty Quantification in Simulation," Handbook of Uncertainty Quantification, R. Ghanem, D. Higdon, and H. Owhadi, eds., Springer, Cham, Switzerland.
46. Ahrens, J., Geveci, B., and Law, C., 2005, "ParaView: An End-User Tool for Large Data Visualization," Visualization Handbook, C. D. Hansen and C. R. Johnson, eds., Butterworth-Heinemann, Oxford, UK, pp. 717–731.
47. Ribés, A., and Bruneton, A., 2014, "Visualizing Results in the SALOME Platform for Large Numerical Simulations: An Integration of ParaView," IEEE Fourth Symposium on Large Data Analysis and Visualization (LDAV), Paris, France, Nov. 9–10, pp. 119–120. 10.1109/LDAV.2014.7013218