Adaptive Cholesky Gaussian Processes
Research output: Contribution to journal › Conference article › Research › peer-review
Standard
Adaptive Cholesky Gaussian Processes. / Bartels, Simon; Stensbo-Smidt, Kristoffer; Moreno-Muñoz, Pablo; Boomsma, Wouter; Frellsen, Jes; Hauberg, Søren.
In: Proceedings of Machine Learning Research, Vol. 206, 2023, p. 408–452.
Bibtex
@article{bartels2023adaptive,
  title   = {Adaptive Cholesky Gaussian Processes},
  author  = {Bartels, Simon and Stensbo-Smidt, Kristoffer and Moreno-Mu{\~n}oz, Pablo and Boomsma, Wouter and Frellsen, Jes and Hauberg, S{\o}ren},
  journal = {Proceedings of Machine Learning Research},
  volume  = {206},
  pages   = {408--452},
  year    = {2023},
  issn    = {2640-3498},
}
RIS
TY - GEN
T1 - Adaptive Cholesky Gaussian Processes
AU - Bartels, Simon
AU - Stensbo-Smidt, Kristoffer
AU - Moreno-Muñoz, Pablo
AU - Boomsma, Wouter
AU - Frellsen, Jes
AU - Hauberg, Søren
PY - 2023
Y1 - 2023
N2 - We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data. Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead. From an empirical observation that the log-marginal likelihood often exhibits a linear trend once a sufficient subset of a dataset has been observed, we conclude that many large datasets contain redundant information that only slightly affects the posterior. Based on this, we provide probabilistic bounds on the full model evidence that can identify such subsets. Remarkably, these bounds are largely composed of terms that appear in intermediate steps of the standard Cholesky decomposition, allowing us to modify the algorithm to adaptively stop the decomposition once enough data have been observed.
AB - We present a method to approximate Gaussian process regression models for large datasets by considering only a subset of the data. Our approach is novel in that the size of the subset is selected on the fly during exact inference with little computational overhead. From an empirical observation that the log-marginal likelihood often exhibits a linear trend once a sufficient subset of a dataset has been observed, we conclude that many large datasets contain redundant information that only slightly affects the posterior. Based on this, we provide probabilistic bounds on the full model evidence that can identify such subsets. Remarkably, these bounds are largely composed of terms that appear in intermediate steps of the standard Cholesky decomposition, allowing us to modify the algorithm to adaptively stop the decomposition once enough data have been observed.
M3 - Conference article
VL - 206
SP - 408
EP - 452
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
SN - 2640-3498
T2 - 26th International Conference on Artificial Intelligence and Statistics, AISTATS 2023
Y2 - 25 April 2023 through 27 April 2023
ER -
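
The abstract above outlines the algorithmic idea: the standard Cholesky decomposition of the kernel matrix already produces, row by row, the per-datum terms of the Gaussian process log-marginal likelihood, so the factorization can be stopped adaptively once additional data barely change the evidence. The Python sketch below illustrates that idea under stated assumptions only: the stopping rule used here (a small standard deviation of recent increments over a trailing window, followed by linear extrapolation) stands in for the probabilistic bounds derived in the paper, and all names (adaptive_cholesky_lml, window, tol) are hypothetical.

import numpy as np

def adaptive_cholesky_lml(K, y, noise=1e-2, window=20, tol=1e-3):
    """Incremental Cholesky of (K + noise*I) with an illustrative early stop.

    Each row i of the factor L yields one term of the log-marginal
    likelihood, log p(y_i | y_1, ..., y_{i-1}); when these increments
    stabilise (the linear trend noted in the abstract), the remaining
    contribution is extrapolated instead of computed.
    """
    n = K.shape[0]
    A = K + noise * np.eye(n)
    L = np.zeros_like(A)
    alpha = np.zeros(n)          # forward-substitution solve of L @ alpha = y
    incs = []
    for i in range(n):
        for j in range(i):       # off-diagonal entries of row i
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
        L[i, i] = np.sqrt(A[i, i] - L[i, :i] @ L[i, :i])
        alpha[i] = (y[i] - L[i, :i] @ alpha[:i]) / L[i, i]
        # per-datum term of log p(y): -0.5*alpha_i^2 - log L_ii - 0.5*log(2*pi)
        incs.append(-0.5 * alpha[i] ** 2 - np.log(L[i, i]) - 0.5 * np.log(2.0 * np.pi))
        if i + 1 >= window and np.std(incs[-window:]) < tol:
            # increments have flattened out: extrapolate over the unseen rows
            return sum(incs) + (n - i - 1) * np.mean(incs[-window:]), i + 1
    return sum(incs), n

# Usage on synthetic data drawn from a squared-exponential GP prior:
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 1))
K = np.exp(-0.5 * (X - X.T) ** 2)
y = rng.multivariate_normal(np.zeros(500), K + 1e-2 * np.eye(500))
lml_estimate, rows_used = adaptive_cholesky_lml(K, y)
print(f"estimated log-marginal likelihood {lml_estimate:.1f} using {rows_used}/500 rows")

The monitored quantity corresponds to the chain-rule decomposition log p(y) = sum_i log p(y_i | y_1, ..., y_{i-1}), which is why redundant data makes the running sum nearly linear; the paper replaces this ad-hoc variance test with probabilistic bounds on the full evidence.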