Lessons learned in multilingual grounded language learning

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

  • Ákos Kádár
  • Desmond Elliott
  • Marc-Alexandre Côté
  • Grzegorz Chrupala
  • Afra Alishahi
Recent work has shown how to learn better visual-semantic embeddings by leveraging image descriptions in more than one language. Here, we investigate in detail which conditions affect the performance of this type of grounded language learning model. We show that multilingual training improves over bilingual training, and that low-resource languages benefit from training with higher-resource languages. We demonstrate that a multilingual model can be trained equally well on either translations or comparable sentence pairs, and that annotating the same set of images in multiple languages enables further improvements via an additional caption-caption ranking objective.
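The caption-caption ranking objective mentioned in the abstract pairs captions of the same image across languages. As a minimal sketch, assuming the standard bidirectional max-margin ranking loss common in visual-semantic embedding models (the margin value of 0.2, the cosine scoring, and the function name are illustrative assumptions, not details confirmed by this record), it could look like the following PyTorch snippet:

    import torch

    def ranking_loss(a, b, margin=0.2):
        # Bidirectional max-margin ranking loss over a batch of paired
        # embeddings: row i of `a` matches row i of `b`; all other rows
        # in the batch act as negatives. (Sketch; hyperparameters assumed.)
        a = a / a.norm(dim=1, keepdim=True)   # L2-normalise so that
        b = b / b.norm(dim=1, keepdim=True)   # dot products are cosines
        scores = a @ b.t()                    # pairwise similarity matrix
        pos = scores.diag().view(-1, 1)       # matched-pair similarities
        # Hinge on every mismatched pair, in both retrieval directions.
        cost_ab = (margin + scores - pos).clamp(min=0)
        cost_ba = (margin + scores - pos.t()).clamp(min=0)
        eye = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
        return cost_ab.masked_fill(eye, 0).sum() + cost_ba.masked_fill(eye, 0).sum()

The same loss form applies to image-caption pairs; for the caption-caption objective, `a` and `b` would hold encoder outputs for translations or comparable captions of the same image.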
Original language: English
Title: Proceedings of the 22nd Conference on Computational Natural Language Learning
Number of pages: 11
Publisher: Association for Computational Linguistics
Publication date: 2018
Pages: 402-412
ISBN (Print): 978-1-948087-72-8
Status: Published - 2018
Event: 22nd Conference on Computational Natural Language Learning (CoNLL 2018) - Brussels, Belgium
Duration: 31 Oct 2018 - 1 Nov 2018

Conference

Conference: 22nd Conference on Computational Natural Language Learning (CoNLL 2018)
Country: Belgium
City: Brussels
Period: 31/10/2018 - 01/11/2018
