Abstract
Deterministic integrated metrics for quantitative comparison of simulated and experimental images, such as root-mean-square (RMS) error, are agnostic to structures that can emerge in highly nonlinear complex systems. Similarly, simple probabilistic metrics, such as direct comparisons of image data distributions, do not explicitly account for salient structures. Normalizing flow architectures are probabilistic generative deep learning algorithms that combine the nonlinear pattern recognition capacity of neural networks with variational Bayesian methods to assign likelihood values to images with respect to a “target” probability density learned from training images. If a normalizing flow is trained on simulated image data, it can be used to quantify the probability that an experimental image could have been sampled from the unknown high-dimensional distribution that describes the simulated images, and vice versa. We demonstrate this validation method using the real non-volume-preserving (RealNVP) normalizing flow architecture and the MNIST, corrupted MNIST, Wingdings, and blurred Wingdings data sets. Normalizing flows, and consequently our validation method, are not limited to two-dimensional data and may be applied to higher dimensions with appropriate modifications. Applications include, but are not limited to, turbulent flow simulations, proton radiography simulations, multi-phase flow simulations, and medical radiology.
This material is declared a work of the U.S. Government and is not subject to Copyright protection in the United States.
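As an illustration of the likelihood-scoring idea summarized above, the following is a minimal sketch, not the implementation used in this work, of a RealNVP-style flow in PyTorch: a small stack of affine coupling layers is trained by maximum likelihood on stand-in “simulation” vectors, and the trained flow then assigns an exact log-likelihood to a held-out “experiment” vector. The class names (`TinyRealNVP`, `AffineCoupling`), layer sizes, training settings, and the Gaussian toy data are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Affine coupling layer: transforms half the input conditioned on the other half."""

    def __init__(self, dim, hidden=256, flip=False):
        super().__init__()
        self.flip = flip
        half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * half),   # outputs log-scale s and shift t
        )

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        if self.flip:                      # alternate which half is transformed
            x1, x2 = x2, x1
        s, t = self.net(x1).chunk(2, dim=1)
        s = torch.tanh(s)                  # bound the log-scale for numerical stability
        y2 = x2 * torch.exp(s) + t
        y = torch.cat((y2, x1) if self.flip else (x1, y2), dim=1)
        return y, s.sum(dim=1)             # transformed input and log|det Jacobian|


class TinyRealNVP(nn.Module):
    """A small stack of coupling layers with a standard-normal base density."""

    def __init__(self, dim, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList(
            [AffineCoupling(dim, flip=bool(i % 2)) for i in range(n_layers)]
        )
        self.base = torch.distributions.Normal(0.0, 1.0)

    def log_prob(self, x):
        """Exact log-likelihood via the change-of-variables formula."""
        z, log_det = x, torch.zeros(x.shape[0])
        for layer in self.layers:
            z, ld = layer(z)
            log_det = log_det + ld
        return self.base.log_prob(z).sum(dim=1) + log_det


if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 64                                          # e.g., flattened 8x8 toy "images"
    sim_images = torch.randn(512, dim) * 0.5 + 1.0    # stand-in "simulation" data (assumed)

    flow = TinyRealNVP(dim)
    opt = torch.optim.Adam(flow.parameters(), lr=1e-3)
    for _ in range(200):                              # maximize likelihood of the training set
        opt.zero_grad()
        loss = -flow.log_prob(sim_images).mean()
        loss.backward()
        opt.step()

    # Score a held-out "experiment" image: a higher log-likelihood means the image
    # is more plausible under the density learned from the simulation data.
    experiment_image = torch.randn(1, dim) * 0.5 + 1.0
    print("log-likelihood:", flow.log_prob(experiment_image).item())
```

In this sketch the flow's `log_prob` is exact rather than variational, which is what allows a single scalar likelihood to be reported for each candidate image; applying the same scoring to images instead of toy vectors would require the multi-scale convolutional couplings of the full RealNVP architecture.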