Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning
Research output: Working paper › Preprint › Research
Standard
Prompt, Condition, and Generate: Classification of Unsupported Claims with In-Context Learning. / Christensen, Peter Ebert; Yadav, Srishti; Belongie, Serge.
arXiv.org, 2023.
RIS
TY - UNPB
T1 - Prompt, Condition, and Generate
T2 - Classification of Unsupported Claims with In-Context Learning
AU - Christensen, Peter Ebert
AU - Yadav, Srishti
AU - Belongie, Serge
PY - 2023
Y1 - 2023
N2 - Unsupported and unfalsifiable claims we encounter in our daily lives can influence our view of the world. Characterizing, summarizing, and -- more generally -- making sense of such claims, however, can be challenging. In this work, we focus on fine-grained debate topics and formulate a new task of distilling, from such claims, a countable set of narratives. We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label. We further investigate how large language models (LLMs) can be used to synthesise claims using In-Context Learning. We find that generated claims with supported evidence can be used to improve the performance of narrative classification models and, additionally, that the same model can infer the stance and aspect using a few training examples. Such a model can be useful in applications which rely on narratives, e.g., fact-checking.
AB - Unsupported and unfalsifiable claims we encounter in our daily lives can influence our view of the world. Characterizing, summarizing, and -- more generally -- making sense of such claims, however, can be challenging. In this work, we focus on fine-grained debate topics and formulate a new task of distilling, from such claims, a countable set of narratives. We present a crowdsourced dataset of 12 controversial topics, comprising more than 120k arguments, claims, and comments from heterogeneous sources, each annotated with a narrative label. We further investigate how large language models (LLMs) can be used to synthesise claims using In-Context Learning. We find that generated claims with supported evidence can be used to improve the performance of narrative classification models and, additionally, that the same model can infer the stance and aspect using a few training examples. Such a model can be useful in applications which rely on narratives, e.g., fact-checking.
M3 - Preprint
BT - Prompt, Condition, and Generate
PB - arXiv.org
ER -