Spurious Correlations in Cross-Topic Argument Mining

Publication: Contribution to book/anthology/report › Conference contribution in proceedings › Research › peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 1.55 MB, PDF document

Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining, through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics.
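The closed-class-word baseline mentioned in the abstract can be illustrated with a minimal sketch: a bag-of-words classifier whose feature space is restricted to function words, compared against an unrestricted model on input from an unseen topic. This is not the authors' code; the word list, toy sentences, labels, and scikit-learn pipeline below are illustrative assumptions only.

```python
# Sketch (illustrative only) of a vocabulary-restricted argument classifier:
# features are limited to closed-class (function) words, mimicking the kind of
# ablation described in the abstract, and compared with a full-vocabulary model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical closed-class word list; the paper's actual list may differ.
CLOSED_CLASS = ["should", "not", "because", "if", "that", "we", "is",
                "the", "a", "of", "to", "do", "they"]

# Toy training sentences and pro/con labels from source topics.
train_sents = [
    "We should ban smoking because it harms others.",
    "Smoking is a personal choice that the state should not restrict.",
    "School uniforms should be mandatory because they reduce bullying.",
    "Uniforms do not improve learning outcomes.",
]
train_labels = ["pro", "con", "pro", "con"]

# Baseline restricted to closed-class words only.
restricted = make_pipeline(
    CountVectorizer(vocabulary=CLOSED_CLASS, lowercase=True),
    LogisticRegression(max_iter=1000),
).fit(train_sents, train_labels)

# Full-vocabulary model for comparison.
full = make_pipeline(
    CountVectorizer(lowercase=True),
    LogisticRegression(max_iter=1000),
).fit(train_sents, train_labels)

# Evaluate both on a sentence from a distant, unseen target topic.
test_sents = ["We should not subsidise nuclear power because it is unsafe."]
print("restricted:", restricted.predict(test_sents))
print("full:      ", full.predict(test_sents))
```

If the restricted model matches or beats the full model on distant topics, that is consistent with the paper's finding that apparent cross-topic generalisation largely reflects topic-independent (and thus potentially spurious) surface cues.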
Original language: English
Title: Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
Publisher: Association for Computational Linguistics
Publication date: 2021
Pages: 263-277
DOI
Status: Published - 2021
Event: Tenth Joint Conference on Lexical and Computational Semantics - *SEM 2021 - Online
Duration: 5 Aug 2021 - 6 Aug 2021

Conference

Conference: Tenth Joint Conference on Lexical and Computational Semantics - *SEM 2021
City: Online
Period: 05/08/2021 - 06/08/2021
