What makes the difference? An empirical comparison of fusion strategies for multimodal language analysis

Publication: Contribution to journal › Journal article › Research › peer-reviewed


  • Full text

    Submitted manuscript, 1.1 MB, PDF document

Multimodal video sentiment analysis is a rapidly growing area. It combines verbal (i.e., linguistic) and non-verbal (i.e., visual, acoustic) modalities to predict the sentiment of utterances. A recent trend has been geared towards modality fusion models utilizing various attention, memory and recurrent components. However, a systematic investigation of how these different components contribute to solving the problem, and of their limitations, is still lacking. This paper aims to fill that gap with the following key contributions. We present the first large-scale and comprehensive empirical comparison of eleven state-of-the-art (SOTA) modality fusion approaches on two video sentiment analysis tasks, using three SOTA benchmark corpora. An in-depth analysis of the results shows, first, that attention mechanisms are the most effective for modelling crossmodal interactions, yet they are computationally expensive. Second, additional levels of crossmodal interaction decrease performance. Third, positive sentiment utterances are the most challenging cases for all approaches. Finally, integrating context and utilizing the linguistic modality as a pivot for the non-verbal modalities improve performance. We expect these findings to provide helpful insights and guidance for the development of more effective modality fusion models.
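The idea of attention-based crossmodal interaction with the linguistic modality as a pivot can be illustrated with a minimal, self-contained sketch of scaled dot-product attention, where linguistic features act as queries over acoustic (or visual) features. This is a generic illustration under simplified assumptions, not the specific models compared in the paper; the function and variable names are hypothetical.

```python
import math

def crossmodal_attention(queries, keys, values):
    """Scaled dot-product attention across modalities (pure-Python sketch).

    queries: linguistic feature vectors (the pivot modality)
    keys/values: feature vectors from a non-verbal modality (e.g., acoustic)
    Returns one attended summary vector per query.
    """
    d = len(keys[0])  # key dimensionality, used for score scaling
    attended = []
    for q in queries:
        # Similarity of this linguistic query to every acoustic key
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        # Softmax over the scores (numerically stabilised)
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # Weighted combination of the value vectors
        attended.append([sum(w * v[j] for w, v in zip(weights, values))
                         for j in range(len(values[0]))])
    return attended

# Hypothetical toy example: one linguistic query attending over two
# acoustic key/value pairs.
out = crossmodal_attention([[1.0, 0.0]],
                           [[1.0, 0.0], [0.0, 1.0]],
                           [[1.0], [0.0]])
```

The quadratic number of query-key score computations in the inner loop is one way to see why attention-based fusion, while effective, is computationally expensive compared with simple concatenation of modality features.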

Journal: Information Fusion
Pages (from-to): 184-197
Status: Published - 2021

Bibliographic note

Funding Information:
This study is supported by the Quantum Information Access and Retrieval Theory (QUARTZ) project, which has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 721321, and by the Natural Science Foundation of China (grant No. U1636203).

Publisher Copyright:
© 2020 Elsevier B.V.
