Latent Multi-Task Architecture Learning
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review
Standard
Latent Multi-Task Architecture Learning. / Ruder, Sebastian; Bingel, Joachim; Augenstein, Isabelle; Søgaard, Anders.
Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI 2019. AAAI Press, 2019. p. 4822-4829.
RIS
TY - GEN
T1 - Latent Multi-Task Architecture Learning
AU - Ruder, Sebastian
AU - Bingel, Joachim
AU - Augenstein, Isabelle
AU - Søgaard, Anders
PY - 2019
Y1 - 2019
N2 - Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers or subspaces that benefit from sharing, (b) the appropriate amount of sharing, and (c) the appropriate relative weights of the different task losses. Recent work has addressed each of the above problems in isolation. In this work we present an approach that learns a latent multi-task architecture that jointly addresses (a)–(c). We present experiments on synthetic data and data from OntoNotes 5.0, including four different tasks and seven different domains. Our extension consistently outperforms previous approaches to learning latent architectures for multi-task problems and achieves up to 15% average error reductions over common approaches to MTL.
U2 - 10.1609/aaai.v33i01.33014822
DO - 10.1609/aaai.v33i01.33014822
M3 - Article in proceedings
SP - 4822
EP - 4829
BT - Proceedings of the 33rd AAAI Conference on Artificial Intelligence, AAAI 2019
PB - AAAI Press
T2 - 33rd AAAI Conference on Artificial Intelligence - AAAI 2019
Y2 - 27 January 2019 through 1 February 2019
ER -
ID: 240627841
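
The abstract's points (a) and (b) describe learning where and how much to share between task networks, rather than fixing the sharing by hand. As a rough illustration, here is a minimal PyTorch sketch of a cross-stitch-style sharing unit in which the degree of sharing is itself a trainable parameter. This is not the paper's actual architecture, and the names (CrossStitchLayer, proj_a, alpha) are hypothetical.

import torch
import torch.nn as nn

class CrossStitchLayer(nn.Module):
    """One layer of learned sharing between two task networks: each task has
    its own linear transform, and a trainable 2x2 matrix alpha mixes the two
    tasks' hidden states, so the amount of sharing is learned from data."""
    def __init__(self, dim):
        super().__init__()
        self.proj_a = nn.Linear(dim, dim)
        self.proj_b = nn.Linear(dim, dim)
        # Identity initialization: training starts with no cross-task sharing.
        self.alpha = nn.Parameter(torch.eye(2))

    def forward(self, h_a, h_b):
        z_a = torch.relu(self.proj_a(h_a))
        z_b = torch.relu(self.proj_b(h_b))
        # Learned linear combination of the two tasks' representations.
        mixed_a = self.alpha[0, 0] * z_a + self.alpha[0, 1] * z_b
        mixed_b = self.alpha[1, 0] * z_a + self.alpha[1, 1] * z_b
        return mixed_a, mixed_b

# Tiny usage example with random inputs for two tasks.
layer = CrossStitchLayer(dim=8)
h_a, h_b = torch.randn(4, 8), torch.randn(4, 8)
out_a, out_b = layer(h_a, h_b)
# The relative task-loss weights (point (c) in the abstract) can likewise be
# made trainable, e.g. total_loss = w_a * loss_a + w_b * loss_b with w_a and
# w_b as nn.Parameter values instead of fixed hyperparameters.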