Confidence measures for deep learning in domain adaptation

Publication: Contribution to journal › Journal article › Research › peer-reviewed

Standard

Confidence measures for deep learning in domain adaptation. / Bonechi, Simone; Andreini, Paolo; Bianchini, Monica; Pai, Akshay; Scarselli, Franco.

In: Applied Sciences, Vol. 9, No. 11, 2192, 2019.


Harvard

Bonechi, S, Andreini, P, Bianchini, M, Pai, A & Scarselli, F 2019, 'Confidence measures for deep learning in domain adaptation', Applied Sciences, vol. 9, no. 11, 2192. https://doi.org/10.3390/app9112192

APA

Bonechi, S., Andreini, P., Bianchini, M., Pai, A., & Scarselli, F. (2019). Confidence measures for deep learning in domain adaptation. Applied Sciences, 9(11), [2192]. https://doi.org/10.3390/app9112192

Vancouver

Bonechi S, Andreini P, Bianchini M, Pai A, Scarselli F. Confidence measures for deep learning in domain adaptation. Applied Sciences. 2019;9(11):2192. https://doi.org/10.3390/app9112192

Author

Bonechi, Simone ; Andreini, Paolo ; Bianchini, Monica ; Pai, Akshay ; Scarselli, Franco. / Confidence measures for deep learning in domain adaptation. In: Applied Sciences. 2019 ; Vol. 9, No. 11.

Bibtex

@article{48c68ec2a6e44bdd9b897106761dd7fd,
title = "Confidence measures for deep learning in domain adaptation",
abstract = "In recent years, Deep Neural Networks (DNNs) have led to impressive results in a wide variety of machine learning tasks, typically relying on the existence of a huge amount of supervised data. However, in many applications (e.g., bio-medical image analysis), gathering large sets of labeled data can be very difficult and costly. Unsupervised domain adaptation exploits data from a source domain, where annotations are available, to train a model able to generalize also to a target domain, where labels are unavailable. Recent research has shown that Generative Adversarial Networks (GANs) can be successfully employed for domain adaptation, although deciding when to stop learning is a major concern for GANs. In this work, we propose some confidence measures that can be used to early stop the GAN training, also showing how such measures can be employed to predict the reliability of the network output. The effectiveness of the proposed approach has been tested in two domain adaptation tasks, with very promising results.",
keywords = "Confidence measures, Generative Adversarial Networks, Uncertainty estimation, Unsupervised domain adaptation",
author = "Simone Bonechi and Paolo Andreini and Monica Bianchini and Akshay Pai and Franco Scarselli",
year = "2019",
doi = "10.3390/app9112192",
language = "English",
volume = "9",
journal = "Applied Sciences",
issn = "1454-5101",
publisher = "Politechnica University of Bucharest",
number = "11",

}

RIS

TY - JOUR

T1 - Confidence measures for deep learning in domain adaptation

AU - Bonechi, Simone

AU - Andreini, Paolo

AU - Bianchini, Monica

AU - Pai, Akshay

AU - Scarselli, Franco

PY - 2019

Y1 - 2019

N2 - In recent years, Deep Neural Networks (DNNs) have led to impressive results in a wide variety of machine learning tasks, typically relying on the existence of a huge amount of supervised data. However, in many applications (e.g., bio-medical image analysis), gathering large sets of labeled data can be very difficult and costly. Unsupervised domain adaptation exploits data from a source domain, where annotations are available, to train a model able to generalize also to a target domain, where labels are unavailable. Recent research has shown that Generative Adversarial Networks (GANs) can be successfully employed for domain adaptation, although deciding when to stop learning is a major concern for GANs. In this work, we propose some confidence measures that can be used to early stop the GAN training, also showing how such measures can be employed to predict the reliability of the network output. The effectiveness of the proposed approach has been tested in two domain adaptation tasks, with very promising results.

AB - In recent years, Deep Neural Networks (DNNs) have led to impressive results in a wide variety of machine learning tasks, typically relying on the existence of a huge amount of supervised data. However, in many applications (e.g., bio-medical image analysis), gathering large sets of labeled data can be very difficult and costly. Unsupervised domain adaptation exploits data from a source domain, where annotations are available, to train a model able to generalize also to a target domain, where labels are unavailable. Recent research has shown that Generative Adversarial Networks (GANs) can be successfully employed for domain adaptation, although deciding when to stop learning is a major concern for GANs. In this work, we propose some confidence measures that can be used to early stop the GAN training, also showing how such measures can be employed to predict the reliability of the network output. The effectiveness of the proposed approach has been tested in two domain adaptation tasks, with very promising results.

KW - Confidence measures

KW - Generative Adversarial Networks

KW - Uncertainty estimation

KW - Unsupervised domain adaptation

UR - http://www.scopus.com/inward/record.url?scp=85067242027&partnerID=8YFLogxK

U2 - 10.3390/app9112192

DO - 10.3390/app9112192

M3 - Journal article

AN - SCOPUS:85067242027

VL - 9

JO - Applied Sciences

JF - Applied Sciences

SN - 2076-3417

IS - 11

M1 - 2192

ER -
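
The abstract above describes computing confidence measures on the unlabeled target domain and using them both to early-stop GAN-based domain-adaptation training and to estimate the reliability of the network output. The record does not reproduce the paper's actual measures, so the following is only a minimal, hypothetical Python sketch of one plausible measure (mean max-softmax probability) combined with a patience-based stopping rule; the function names, the choice of measure, and the patience value are illustrative assumptions, not the authors' method.

# Hedged sketch: NOT the paper's exact formulation (not given in this record).
# Illustrates one plausible confidence measure (mean max-softmax probability
# on unlabeled target data) and how it could serve as an early-stopping
# signal while adversarially adapting a classifier to the target domain.

import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits, shape (n_samples, n_classes)."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_max_confidence(logits):
    """Average of the highest class probability per sample.

    High values suggest the classifier is confident on the (unlabeled)
    target domain; a sustained drop can signal that adversarial training
    has started to degrade the learned mapping.
    """
    probs = softmax(logits)
    return float(probs.max(axis=1).mean())

def should_stop(history, patience=5):
    """Stop if the confidence has not improved for `patience` checks."""
    if len(history) <= patience:
        return False
    best_recent = max(history[-patience:])
    best_before = max(history[:-patience])
    return best_recent <= best_before

# Hypothetical usage inside the GAN training loop: periodically score a
# target-domain batch with the current classifier and track the measure.
#     confidence_history.append(mean_max_confidence(target_logits))
#     if should_stop(confidence_history):
#         break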
