We begin this expository essay by reviewing, with examples from the materials and fabrication testing literature, what a typical engineer already knows about statistics. We then consider a central question in engineering decision making: given a computer simulation of a high-consequence system, how do we verify and validate (V&V) it, and what are the margins of error of all the important predicted results? To answer this question, we assert that we need three basic tools that already exist in the statistical and metrological sciences: (A) error analysis, (B) experimental design, and (C) uncertainty analysis. These three tools, referred to here as the A-B-C of statistics, were developed through a powerful linkage between the statistical and metrological sciences. By extending the key concepts of this linkage from physical experiments to numerical simulations, we propose a new approach to answering the V&V question posed above. The key concepts are: (1) uncertainty as defined in the ISO Guide to the Expression of Uncertainty in Measurement (1993); (2) design of experiments, prior to data collection, in a randomized or orthogonal scheme to evaluate interactions among model variables; and (3) standard reference benchmarks for calibration, together with inter-laboratory studies to establish a “weighted” consensus mean. To illustrate the need for this metrology-based approach to V&V and to discuss its plausibility, two example problems are presented: (a) the verification of 12 simulations of the deformation of a cantilever beam, and (b) the calculation of a mean time to failure for a uniformly loaded, 100-column, single-floor steel grillage subjected to fire.
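To make tool (C) and the “weighted” consensus mean of concept (3) concrete, the following Python sketch (not from the paper; every formula choice and numeric value here is an illustrative assumption) propagates input uncertainties through the standard Euler-Bernoulli tip-deflection formula for an end-loaded cantilever, delta = P*L^3/(3*E*I), using the first-order GUM combination rule, and then pools hypothetical inter-laboratory results with the simplest inverse-variance weighted mean. The paper's own beam model, data, and consensus estimator may differ.

    import math

    def tip_deflection(P, L, E, I):
        """Euler-Bernoulli tip deflection of an end-loaded cantilever."""
        return P * L**3 / (3.0 * E * I)

    def gum_combined_uncertainty(P, L, E, I, uP, uL, uE, uI):
        """First-order (GUM) propagation: u_c^2 = sum_i (df/dx_i * u_i)^2.

        Sensitivity coefficients are taken analytically from
        delta = P*L^3/(3*E*I).
        """
        d = tip_deflection(P, L, E, I)
        cP = d / P        # d(delta)/dP: linear in P
        cL = 3.0 * d / L  # d(delta)/dL: cubic in L
        cE = -d / E       # d(delta)/dE: inverse in E
        cI = -d / I       # d(delta)/dI: inverse in I
        u_c = math.sqrt((cP * uP)**2 + (cL * uL)**2
                        + (cE * uE)**2 + (cI * uI)**2)
        return d, u_c

    def weighted_consensus_mean(means, std_uncs):
        """Inverse-variance weighted mean of inter-laboratory results.

        Returns the consensus mean and its standard uncertainty; this is
        one simple 'weighted' consensus estimator among several in use.
        """
        weights = [1.0 / u**2 for u in std_uncs]
        wsum = sum(weights)
        mean = sum(w * m for w, m in zip(weights, means)) / wsum
        return mean, math.sqrt(1.0 / wsum)

    # Hypothetical inputs: load (N), length (m), modulus (Pa), second
    # moment of area (m^4), each with an assumed standard uncertainty.
    d, u = gum_combined_uncertainty(P=1000.0, L=2.0, E=200e9, I=8.0e-6,
                                    uP=10.0, uL=0.002, uE=4e9, uI=1.0e-7)
    print(f"tip deflection = {d:.6f} m, combined std. uncertainty = {u:.2e} m")

    # Hypothetical results from several labs (or simulations), each
    # reported as (mean, standard uncertainty), pooled into a consensus:
    m, um = weighted_consensus_mean([1.66e-3, 1.68e-3, 1.64e-3],
                                    [2.0e-5, 3.0e-5, 2.5e-5])
    print(f"consensus mean = {m:.6e} m, u = {um:.2e} m")

The inverse-variance weighting gives results with smaller reported uncertainty more influence on the consensus, which is the sense in which the mean is “weighted”; verifying the 12 cantilever-beam simulations against such a consensus value is one plausible reading of example problem (a).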
