Can AMR Assist Legal and Logical Reasoning?
Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research
Can AMR Assist Legal and Logical Reasoning? / Schrack, Nikolaus; Cui, Ruixiang; López, Hugo A.; Hershcovich, Daniel.
Findings of the Association for Computational Linguistics: EMNLP 2022. Association for Computational Linguistics, 2022. p. 1555-1568.
RIS
TY - GEN
T1 - Can AMR Assist Legal and Logical Reasoning?
AU - Schrack, Nikolaus
AU - Cui, Ruixiang
AU - López, Hugo A.
AU - Hershcovich, Daniel
N1 - Publisher Copyright: © 2022 Association for Computational Linguistics.
PY - 2022
Y1 - 2022
AB - Abstract Meaning Representation (AMR) has been shown to be useful for many downstream tasks. In this work, we explore the use of AMR for legal and logical reasoning. Specifically, we investigate if AMR can help capture logical relationships on multiple choice question answering (MCQA) tasks. We propose neural architectures that utilize linearised AMR graphs in combination with pre-trained language models. While these models are not able to outperform text-only baselines, they correctly solve different instances than the text models, suggesting complementary abilities. Error analysis further reveals that AMR parsing quality is the most prominent challenge, especially regarding inputs with multiple sentences. We conduct a theoretical analysis of how logical relations are represented in AMR and conclude it might be helpful in some logical statements but not for others.
UR - http://www.scopus.com/inward/record.url?scp=85149815433&partnerID=8YFLogxK
M3 - Article in proceedings
AN - SCOPUS:85149815433
SP - 1555
EP - 1568
BT - Findings of the Association for Computational Linguistics: EMNLP 2022
PB - Association for Computational Linguistics
T2 - 2022 Findings of the Association for Computational Linguistics: EMNLP 2022
Y2 - 7 December 2022 through 11 December 2022
ER -