The paper by Stern et al. proposes a comprehensive approach to verification and validation of computational fluid dynamics simulations. Although the authors present a new perspective for quantifying verification and validation, I believe there is a conceptual flaw in the proposed approach to validation. Three criticisms follow.

1. The authors define verification as “…a process for assessing simulation numerical uncertainty…” I agree with the authors when they say that their definition of verification does not contradict the broader definitions developed by Roache [1] and the AIAA Guide [2]. However, as pointed out clearly by Roache and others, there are two other important facets of verification: code verification and software quality assurance. Code verification deals with assessing the correctness of the computer program in implementing the discrete form of the partial differential equations, as well as the numerical algorithms needed to solve the discrete equations. Software quality assurance deals with topics such as code robustness, version control, static and dynamic testing, and documentation. In this paper, the authors address only solution verification, neglecting to mention these two other topics of equal importance. For a paper claiming to present a comprehensive approach, this is misleading.

2. In the authors’ equation for the simulation error, *T* is the “truth,” $\delta_{SM}$ is the simulation modeling error, and $\epsilon_{SN}$ is the estimated numerical error from the simulation. Although the authors do not clearly state what the “truth” is in this expression, the only interpretation that makes sense, based on their discussion starting with Eq. (1), is that *T* is the true value resulting from experimental measurement. Discussing verification in terms of experimental measurements and simulation modeling error causes a great deal of confusion when people are trying to understand the fundamental differences between verification and validation. As Roache [1] lucidly puts it, “Verification deals with mathematics, validation deals with physics.”

*M* is defined as the exact, or analytical, solution to the continuum partial differential equations. This equation appears to be consistent with accepted definitions of verification. However, in order to get to Eq. (10), the authors had to define the error in the corrected solution, $\delta_{SC}$, with respect to the experimental truth *T* rather than the exact solution *M*.

3. In the authors’ validation equation, *E* is the comparison error, $U_D$ is the uncertainty in an individual experimental measurement, $U_{SPD}$ is the uncertainty in the simulation model due to the use of previous data, and $U_{SN}$ is the uncertainty in the numerical error estimate.

*E* is defined as $E = D - S$, where *D* is the result obtained from an individual experimental measurement and *S* is the result from a numerical simulation.

The authors’ implementation of validation does not embody their definition of validation because of the way they define the comparison error *E*. First, they define the comparison error using an individual experimental measurement *D*, rather than the true experimental value *T*. As the number of experimental measurement samples increases, the statistical mean converges to the true value (ignoring systematic, or bias, error). That is, as more experimental realizations are obtained, the key issue becomes how the simulation compares with the mean, not with any individual measurement.
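The statistical point can be illustrated with a small sketch. The values below are hypothetical (a true value `T_TRUE` and a random measurement scatter `SIGMA` are assumed purely for illustration): any single realization *D* scatters about the true value, while the sample mean converges toward it.

```python
import random

random.seed(1)

T_TRUE = 10.0  # hypothetical "true" experimental value (assumed for illustration)
SIGMA = 0.5    # random measurement scatter; systematic (bias) error is ignored

def measure():
    """One experimental realization D = true value plus random error."""
    return random.gauss(T_TRUE, SIGMA)

# A single measurement can differ from the true value by on the order of SIGMA...
d1 = measure()

# ...but the mean of many realizations converges toward the true value,
# with a standard error that shrinks like SIGMA / sqrt(n).
n = 10_000
mean_d = sum(measure() for _ in range(n)) / n

print(abs(d1 - T_TRUE))      # typically on the order of SIGMA
print(abs(mean_d - T_TRUE))  # far smaller for large n
```

This is why, as more realizations accumulate, the meaningful comparison is between the simulation and the experimental mean rather than any individual *D*.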

Second, they define the comparison error using an individual numerical simulation result *S*. Since *S* can have an arbitrary magnitude of numerical error, it is not a reflection of the true value from the model, which is *M*. Validation should measure how well the true value from the model compares with the experiment, not how well a simulation value polluted by numerical error compares with the experiment.
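The effect of numerical error on the comparison can likewise be sketched numerically. The numbers below are invented for illustration: with *D* and *M* fixed, the comparison error $E = D - S$ approaches the modeling error $D - M$ only as the numerical error in *S* vanishes.

```python
# Hypothetical values, assumed purely for illustration.
M = 9.5   # exact solution of the continuum model equations
D = 10.0  # an experimental measurement; modeling error D - M = 0.5

# The simulation result S carries a numerical error delta_SN that
# depends on grid resolution; E = D - S mixes the two error sources.
for delta_SN in (0.8, 0.2, 0.05):  # coarse -> fine grid (illustrative)
    S = M + delta_SN
    E = D - S
    print(delta_SN, E)

# E drifts toward the modeling error D - M = 0.5 only as delta_SN -> 0;
# at coarse resolution, E says little about the model itself.
```

In other words, unless the numerical error is driven down (or properly accounted for), *E* measures a mixture of modeling and numerical error, not the fidelity of the model.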

Because of the way in which they define the comparison error *E*, the authors’ implementation of validation is forced into the following situation. The simulation can be declared validated by increasing the right side of their validation equation. The right side can be increased by: (a) increasing the experimental uncertainty; (b) increasing the uncertainty in data used from previous analyses; or (c) increasing the numerical uncertainty in a given simulation. As pointed out by Roache [3] and Oberkampf and Trucano [4], this makes no sense.

The authors responded to Roache’s criticism in a previous Author’s Closure [3], as well as in the subject paper, by saying that what is really important is the magnitude of the validation measure required by the application of the code. In my opinion, this sidesteps the criticism, which is directed at how the validation measure *itself* is defined, not at how it might be used. Resorting to application requirements in defining the validation measure is contrary to the fundamental meaning of validation. This misunderstanding, widespread in the community, presumes that validation means assessing whether the simulation has “passed” or “failed” an application requirement.

Validation, as defined by the AIAA Guide [2], and earlier by the Defense Modeling and Simulation Office of the Department of Defense [5], is: “The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.” Stated differently, validation is *only* a measure of the agreement between simulation and experiment. The magnitude of the measure is not an issue as far as validation is concerned. This may seem contradictory to people who are new to the terminology of verification and validation. Validation is defined in this way for two reasons. First, the required magnitude of the validation measure varies from one application to another; a validation measure that is satisfactory for one application may be a factor of ten or more larger than what is needed for another. Second, in multi-physics simulations one does not know beforehand what level of validation is needed for each component of the physics. There is an interaction among validation-measure requirements, and a trade-off between them, in order to achieve the accuracy required for the particular system response quantity of interest. That is why application requirements cannot be used to defend a particular implementation of a validation measure.

## Acknowledgment

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under Contract DE-AC04-94AL85000.

## References

[1] Roache, P. J., 1998, *Verification and Validation in Computational Science and Engineering*, Hermosa Publishers, Albuquerque, NM.