Computationally Efficient Variational Approximations for Bayesian Inverse Problems

The major drawback of the Bayesian approach to model calibration is the computational burden of describing the posterior distribution of the unknown model parameters, which arises because typical Markov chain Monte Carlo (MCMC) samplers require thousands of forward model evaluations. In this work, we develop a variational Bayesian approach to model calibration that uses an information-theoretic criterion to recast the posterior problem as an optimization problem. Specifically, we parameterize the posterior using the family of Gaussian mixtures and seek to minimize the information loss incurred by replacing the true posterior with an approximate one. Our approach is of particular importance in underdetermined problems with expensive forward models, in which neither the classical approach of minimizing a (potentially regularized) misfit function nor MCMC is a viable option. We test our methodology on two surrogate-free examples and show that it dramatically outperforms MCMC methods.
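The core idea of the abstract, replacing sampling with optimization by minimizing the information loss (KL divergence) between an approximate posterior and the true one, can be illustrated with a minimal sketch. This is not the paper's method (which uses Gaussian mixtures and expensive forward models); it is a hypothetical one-parameter toy problem with a single-Gaussian approximation, where the KL divergence is estimated by Monte Carlo via the reparameterization trick:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for an expensive forward model: an unnormalized
# log-posterior over one calibration parameter (here N(2, 1) up to a constant).
def log_post(theta):
    return -0.5 * (theta - 2.0) ** 2

rng = np.random.default_rng(0)
eps = rng.standard_normal(2000)  # fixed base samples (reparameterization trick)

def neg_elbo(params):
    """KL(q || p) up to an additive constant, for q = N(mu, sigma^2)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    theta = mu + sigma * eps                              # samples from q
    entropy = log_sigma + 0.5 * np.log(2 * np.pi * np.e)  # Gaussian entropy
    return -(log_post(theta).mean() + entropy)

res = minimize(neg_elbo, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
# mu_hat and sigma_hat approach the true posterior mean 2 and std 1.
```

Because the posterior in this toy case is itself Gaussian, the optimizer recovers it almost exactly; in the underdetermined, expensive-model setting of the paper, the same optimization principle is applied with a richer Gaussian-mixture family.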
Manuscript received September 15, 2015; final manuscript received July 5, 2016; published online July 26, 2016. Editor: Ashley F. Emery.
Tsilifis, P., Bilionis, I., Katsounaros, I., and Zabaras, N. (July 26, 2016). "Computationally Efficient Variational Approximations for Bayesian Inverse Problems." ASME. J. Verif. Valid. Uncert. September 2016; 1(3): 031004. https://doi.org/10.1115/1.4034102