Deep neural networks (DNNs) have demonstrated strong performance in learning highly non-linear relationships from large datasets and are therefore considered a promising surrogate-modeling tool for parametric partial differential equations (PDEs). At the same time, quantifying the predictive uncertainty of DNNs remains a challenging problem. The Bayesian neural network (BNN), in which the network weights follow probability distributions rather than taking point values, is regarded as a state-of-the-art method for the uncertainty quantification (UQ) of DNNs. However, it is often too computationally expensive to apply to complicated DNN architectures.
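The BNN idea described above, weights carrying probability distributions rather than point values, can be sketched as follows. This is a minimal NumPy illustration only: the Gaussian "posterior" (the `mu` and `sigma` arrays) is hand-set for demonstration, not inferred by Bayesian training, and the model is a single linear layer rather than the paper's network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative posterior: each weight is an independent Gaussian.
# mu/sigma are placeholders, not a posterior fitted to any data.
mu = rng.normal(size=(3, 1))
sigma = 0.1 * np.ones((3, 1))

x = rng.normal(size=(4, 3))   # 4 hypothetical input points

# Predictive distribution: repeatedly sample a weight vector from the
# posterior, run the (linear) model, and summarize the spread of the
# resulting predictions.
S = 200
preds = np.stack([x @ (mu + sigma * rng.normal(size=mu.shape))
                  for _ in range(S)])
pred_mean = preds.mean(axis=0)  # predictive mean
pred_std = preds.std(axis=0)    # epistemic uncertainty estimate
```

The standard deviation across weight samples is what a BNN reports as predictive uncertainty; the expense comes from needing many such sampled forward passes (and from inferring the posterior in the first place).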

In this work, we investigate two alternative methods for the UQ of complicated DNNs: Monte Carlo dropout and deep ensembles. Both are computationally efficient and scalable compared to BNNs. We apply the two methods to a densely connected convolutional network developed and trained as a coarse-mesh turbulence closure relation for reactor safety analysis. For comparison, a BNN with the same architecture is also developed and trained. The computational cost and uncertainty-evaluation performance of the three UQ methods are comprehensively investigated. We find that the deep ensemble method produces reasonable uncertainty estimates with good scalability and a relatively low computational cost compared to the BNN.
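The two methods compared above can be sketched in a few lines. The sketch below uses a tiny untrained NumPy MLP as a stand-in for the paper's trained turbulence-closure network; the weights, dropout rate, and sample counts are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, W1, W2, drop_rate=0.0, rng=None):
    """One forward pass through a tiny MLP. When drop_rate > 0,
    dropout stays active at inference time (the MC-dropout trick)."""
    h = np.tanh(x @ W1)
    if drop_rate > 0.0:
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)  # inverted-dropout scaling
    return h @ W2

# Placeholder weights -- in the paper these would come from the
# trained coarse-mesh closure network.
W1 = rng.normal(size=(3, 16))
W2 = rng.normal(size=(16, 1))
x = rng.normal(size=(5, 3))   # 5 hypothetical input points

# Monte Carlo dropout: T stochastic passes through ONE network.
T = 100
mc = np.stack([mlp_forward(x, W1, W2, drop_rate=0.1, rng=rng)
               for _ in range(T)])
mc_mean, mc_std = mc.mean(axis=0), mc.std(axis=0)

# Deep ensemble: M independently initialized networks (here simply
# M random weight draws; in practice each member is trained).
M = 5
ens = np.stack([mlp_forward(x,
                            rng.normal(size=(3, 16)),
                            rng.normal(size=(16, 1)))
                for _ in range(M)])
ens_mean, ens_std = ens.mean(axis=0), ens.std(axis=0)
```

In both cases the spread of the stacked predictions serves as the uncertainty estimate; the ensemble's cost scales with the number of members M trained, while MC dropout reuses a single trained network and pays only for the T extra forward passes.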
