Abstract

Gaussian process (GP) regression is an important scientific machine learning (ML) tool that naturally embeds uncertainty quantification (UQ) in a Bayesian context. Thanks to its rigorous mathematical foundation and relatively few hyperparameters, GP has been one of the most commonly used ML tools for a wide range of engineering applications, including UQ, sensitivity analysis, and design optimization, among others. As with many similar function approximation methods, GP suffers from the curse of dimensionality: its accuracy degrades exponentially as the number of input dimensions grows for a fixed amount of training data. In this paper, we revisit a variant of GP called the additive GP, which employs a high-dimensional model representation to decompose the kernel additively, and we benchmark the additive GP over a range of numerical functions, dimensionalities, and numbers of training data points. A drawback of the additive GP is that the computational cost of constructing the covariance grows with the dimensionality, due to the number of additive terms considered. Numerical results show that for functions that can be additively decomposed into multiple lower-order functions, the additive GP can approximate those functions very well, even at high dimensionality (d > 50).
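As a minimal sketch of the additive kernel idea (assuming a first-order additive construction; the paper's exact formulation may also include higher-order interaction terms), the covariance between inputs \(\mathbf{x}\) and \(\mathbf{x}'\) can be written as
\[
k_{\text{add}}(\mathbf{x}, \mathbf{x}') = \sigma^2 \sum_{i=1}^{d} k_i(x_i, x_i'),
\]
where each \(k_i\) is a one-dimensional base kernel acting on the \(i\)-th input coordinate. The number of additive terms, and hence the cost of assembling the covariance, grows linearly with the dimensionality \(d\), consistent with the drawback noted above.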
