Why is unsupervised alignment of English embeddings from different algorithms so hard?
Publication: Contribution to book/anthology/report › Article in proceedings › Research › peer-reviewed
Standard
Why is unsupervised alignment of English embeddings from different algorithms so hard? / Hartmann, Mareike; Kementchedjhieva, Yova Radoslavova; Søgaard, Anders.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 2018. pp. 582–586.
RIS
TY - GEN
T1 - Why is unsupervised alignment of English embeddings from different algorithms so hard?
AU - Hartmann, Mareike
AU - Kementchedjhieva, Yova Radoslavova
AU - Søgaard, Anders
PY - 2018
Y1 - 2018
N2 - This paper presents a challenge to the community: Generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for two different embedding algorithms. Why is that? We believe understanding why is key to understanding both modern word embedding algorithms and the limitations and instability dynamics of GANs. This paper shows that (a) in all the cases where alignment fails, there exists a linear transform between the two embeddings (so algorithm biases do not lead to non-linear differences), and (b) similar effects cannot easily be obtained by varying hyper-parameters. One plausible suggestion based on our initial experiments is that the differences in the inductive biases of the embedding algorithms lead to an optimization landscape that is riddled with local optima, leading to a very small basin of convergence; but we present this more as a challenge paper than a technical contribution.
AB - This paper presents a challenge to the community: Generative adversarial networks (GANs) can perfectly align independent English word embeddings induced using the same algorithm, based on distributional information alone, but fail to do so for two different embedding algorithms. Why is that? We believe understanding why is key to understanding both modern word embedding algorithms and the limitations and instability dynamics of GANs. This paper shows that (a) in all the cases where alignment fails, there exists a linear transform between the two embeddings (so algorithm biases do not lead to non-linear differences), and (b) similar effects cannot easily be obtained by varying hyper-parameters. One plausible suggestion based on our initial experiments is that the differences in the inductive biases of the embedding algorithms lead to an optimization landscape that is riddled with local optima, leading to a very small basin of convergence; but we present this more as a challenge paper than a technical contribution.
M3 - Article in proceedings
SP - 582
EP - 586
BT - Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
PB - Association for Computational Linguistics
Y2 - 31 October 2018 through 4 November 2018
ER -
ID: 214760789
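Note: as a hedged illustration of claim (a) in the abstract, that embedding spaces which GANs fail to align may still admit a linear transform, the sketch below fits the best orthogonal map between two row-aligned embedding matrices via orthogonal Procrustes. It is not the authors' code; the function names, the synthetic data, and the use of an orthogonal (rather than general linear) map are assumptions made here for illustration. In a real experiment, X and Y would be embeddings for a shared vocabulary trained with two different algorithms.

import numpy as np

def procrustes_align(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    # Solve min_W ||XW - Y||_F over orthogonal W via SVD of X^T Y.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

def alignment_residual(X: np.ndarray, Y: np.ndarray) -> float:
    # Mean per-word residual after the best orthogonal map;
    # a value near zero suggests the two spaces differ only linearly.
    W = procrustes_align(X, Y)
    return float(np.mean(np.linalg.norm(X @ W - Y, axis=1)))

# Toy usage: Y is an exact rotation of X, so the residual is ~0.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 300))                   # embeddings from algorithm A
R, _ = np.linalg.qr(rng.normal(size=(300, 300)))   # random orthogonal map
Y = X @ R                                          # stand-in for algorithm B
print(alignment_residual(X, Y))                    # ~0.0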