Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up the Baselines

Soujanya Poria, Navonil Majumder, Devamanyu Hazarika, Erik Cambria, Alexander Gelbukh, Amir Hussain

Research output: Contribution to journal › Article

8 Citations (Scopus)

Abstract

We compile baselines, along with dataset split, for multimodal sentiment analysis. In this paper, we explore three different deep-learning-based architectures for multimodal sentiment classification, each improving upon the previous. Further, we evaluate these architectures with multiple datasets with fixed train/test partition. We also discuss some major issues, frequently ignored in multimodal sentiment analysis research, e.g., the role of speaker-exclusive models, the importance of different modalities, and generalizability. This framework illustrates the different facets of analysis to be considered while performing multimodal sentiment analysis and, hence, serves as a new benchmark for future research in this emerging field.
Original language: American English
Pages (from-to): 17-25
Number of pages: 14
Journal: IEEE Intelligent Systems
ISSN: 1541-1672
Publisher: Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/MIS.2018.2882362
State: Published - 1 Nov 2018


Cite this

Poria, S., Majumder, N., Hazarika, D., Cambria, E., Gelbukh, A., & Hussain, A. (2018). Multimodal Sentiment Analysis: Addressing Key Issues and Setting Up the Baselines. IEEE Intelligent Systems, 17-25. https://doi.org/10.1109/MIS.2018.2882362

