Bayesian active learning for maximal information gain on model parameters

Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed

Standard

Bayesian active learning for maximal information gain on model parameters. / Arnavaz, Kasra; Feragen, Aasa; Krause, Oswin; Loog, Marco.

Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition. IEEE, 2020. pp. 10524-10531 9411962.


Harvard

Arnavaz, K, Feragen, A, Krause, O & Loog, M 2020, Bayesian active learning for maximal information gain on model parameters. in Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition., 9411962, IEEE, pp. 10524-10531, 25th International Conference on Pattern Recognition, ICPR 2020, Virtual, Milan, Italy, 10/01/2021. https://doi.org/10.1109/ICPR48806.2021.9411962

APA

Arnavaz, K., Feragen, A., Krause, O., & Loog, M. (2020). Bayesian active learning for maximal information gain on model parameters. In Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition (pp. 10524-10531). [9411962] IEEE. https://doi.org/10.1109/ICPR48806.2021.9411962

Vancouver

Arnavaz K, Feragen A, Krause O, Loog M. Bayesian active learning for maximal information gain on model parameters. In Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition. IEEE. 2020. p. 10524-10531. 9411962 https://doi.org/10.1109/ICPR48806.2021.9411962

Author

Arnavaz, Kasra ; Feragen, Aasa ; Krause, Oswin ; Loog, Marco. / Bayesian active learning for maximal information gain on model parameters. Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition. IEEE, 2020. pp. 10524-10531

Bibtex

@inproceedings{aca808cc94a742c68206fa8e8f2967a6,
title = "Bayesian active learning for maximal information gain on model parameters",
abstract = "The fact that machine learning models, despite their advancements, are still trained on randomly gathered data is proof that a lasting solution to the problem of optimal data gathering has not yet been found. In this paper, we investigate whether a Bayesian approach to the classification problem can provide assumptions under which one is guaranteed to perform at least as well as random sampling. For a logistic regression model, we show that maximal expected information gain on model parameters is a promising criterion for selecting samples, assuming that our classification model is well-matched to the data. Our derived criterion is closely related to the maximum model change. We experiment with data sets which satisfy this assumption to varying degrees to see how sensitive our performance is to the violation of our assumption in practice.",
author = "Kasra Arnavaz and Aasa Feragen and Oswin Krause and Marco Loog",
year = "2020",
doi = "10.1109/ICPR48806.2021.9411962",
language = "English",
pages = "10524--10531",
booktitle = "Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition",
publisher = "IEEE",
note = "25th International Conference on Pattern Recognition, ICPR 2020 ; Conference date: 10-01-2021 Through 15-01-2021",

}
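The selection criterion named in the abstract — choosing the sample whose label is expected to yield the most information about the model parameters — is commonly computed, for a Bayesian logistic regression, as the mutual information between the candidate's label and the parameters. The sketch below is a hypothetical illustration of that general idea using Monte Carlo posterior samples; it is not the authors' code, and the function names are invented for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def binary_entropy(p):
    """Entropy of a Bernoulli(p) variable, in nats."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))

def expected_information_gain(X_pool, w_samples):
    """Score each pool point by I(y; w | x) = H(E_w[p]) - E_w[H(p)].

    X_pool:    (n_pool, d) candidate inputs
    w_samples: (n_samples, d) draws from the parameter posterior
               (e.g. from a Laplace or MCMC approximation)
    """
    probs = sigmoid(X_pool @ w_samples.T)        # (n_pool, n_samples)
    marginal = binary_entropy(probs.mean(axis=1))  # entropy of averaged prediction
    conditional = binary_entropy(probs).mean(axis=1)  # average per-sample entropy
    return marginal - conditional                # mutual information, >= 0

def select_sample(X_pool, w_samples):
    """Index of the pool point with maximal expected information gain."""
    return int(np.argmax(expected_information_gain(X_pool, w_samples)))
```

By concavity of the entropy, each score is non-negative, and it is largest where posterior parameter draws disagree most about the label — the intuition behind preferring such points over random sampling.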

RIS

TY - GEN

T1 - Bayesian active learning for maximal information gain on model parameters

AU - Arnavaz, Kasra

AU - Feragen, Aasa

AU - Krause, Oswin

AU - Loog, Marco

PY - 2020

Y1 - 2020

N2 - The fact that machine learning models, despite their advancements, are still trained on randomly gathered data is proof that a lasting solution to the problem of optimal data gathering has not yet been found. In this paper, we investigate whether a Bayesian approach to the classification problem can provide assumptions under which one is guaranteed to perform at least as well as random sampling. For a logistic regression model, we show that maximal expected information gain on model parameters is a promising criterion for selecting samples, assuming that our classification model is well-matched to the data. Our derived criterion is closely related to the maximum model change. We experiment with data sets which satisfy this assumption to varying degrees to see how sensitive our performance is to the violation of our assumption in practice.

AB - The fact that machine learning models, despite their advancements, are still trained on randomly gathered data is proof that a lasting solution to the problem of optimal data gathering has not yet been found. In this paper, we investigate whether a Bayesian approach to the classification problem can provide assumptions under which one is guaranteed to perform at least as well as random sampling. For a logistic regression model, we show that maximal expected information gain on model parameters is a promising criterion for selecting samples, assuming that our classification model is well-matched to the data. Our derived criterion is closely related to the maximum model change. We experiment with data sets which satisfy this assumption to varying degrees to see how sensitive our performance is to the violation of our assumption in practice.

U2 - 10.1109/ICPR48806.2021.9411962

DO - 10.1109/ICPR48806.2021.9411962

M3 - Article in proceedings

AN - SCOPUS:85110513578

SP - 10524

EP - 10531

BT - Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition

PB - IEEE

T2 - 25th International Conference on Pattern Recognition, ICPR 2020

Y2 - 10 January 2021 through 15 January 2021

ER -

ID: 286999036