Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up the Baselines

Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Erik Cambria, Alexander Gelbukh, Amir Hussain

Research output: Contribution to journal › Article › peer-review

141 Scopus citations

Abstract

We compile baselines, along with dataset splits, for multimodal sentiment analysis. In this paper, we explore three different deep-learning-based architectures for multimodal sentiment classification, each improving upon the previous one. Further, we evaluate these architectures on multiple datasets with fixed train/test partitions. We also discuss several major issues frequently ignored in multimodal sentiment analysis research, e.g., the role of speaker-exclusive models, the importance of the different modalities, and generalizability. This framework illustrates the different facets of analysis to be considered while performing multimodal sentiment analysis and, hence, serves as a new benchmark for future research in this emerging field.
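The abstract highlights speaker-exclusive models, i.e., evaluation splits in which no speaker appears in both the training and the test set. The sketch below illustrates that idea in minimal Python; the utterance records, field names, and speaker IDs are illustrative assumptions, not data or code from the paper.

```python
# Hypothetical sketch of a speaker-exclusive (speaker-independent) split:
# every utterance by a held-out speaker goes to the test set, so the model
# cannot exploit speaker-specific cues at test time.
# The records and speaker IDs below are illustrative only.

def speaker_exclusive_split(utterances, test_speakers):
    """Partition utterances so that train and test share no speakers."""
    train = [u for u in utterances if u["speaker"] not in test_speakers]
    test = [u for u in utterances if u["speaker"] in test_speakers]
    return train, test

utterances = [
    {"speaker": "A", "text": "great movie", "label": "positive"},
    {"speaker": "B", "text": "waste of time", "label": "negative"},
    {"speaker": "C", "text": "loved every minute", "label": "positive"},
    {"speaker": "C", "text": "would watch again", "label": "positive"},
]

train, test = speaker_exclusive_split(utterances, test_speakers={"C"})
```

A speaker-overlapping split, by contrast, can inflate reported accuracy, since models may memorize how a particular speaker expresses sentiment rather than learning transferable multimodal cues.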

Original language: English
Article number: 8636432
Pages (from-to): 17-25
Number of pages: 9
Journal: IEEE Intelligent Systems
Volume: 33
Issue number: 6
DOIs
State: Published - 1 Nov 2018
