Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets

Grigori Sidorov, Fazlourrahman Balouchzahi, Sabur Butt, Alexander Gelbukh

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, we analyzed the performance of different transformer models for regret and hope speech detection on two novel datasets. For the regret detection task, we compared the averaged macro F1-scores of the transformer models with previous state-of-the-art results and found that the transformers outperformed the earlier approaches. Specifically, the RoBERTa-base model achieved the highest averaged macro F1-score of 0.83, beating the previous state-of-the-art score of 0.76. For the hope speech detection task, the BERT-base uncased model achieved the highest averaged macro F1-score, 0.72, among the transformer models. The performance of each model, however, varied slightly with the task and dataset. Our findings highlight the effectiveness of transformer models for regret and hope speech detection, and the importance of considering the effects of context, specific transformer architectures, and pre-training on their performance.
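
The abstract describes fine-tuned transformer checkpoints evaluated with averaged macro F1. As a rough illustration of that evaluation setup (a minimal sketch, not the authors' code), the snippet below loads a RoBERTa-base classification head via HuggingFace transformers and scores predictions with scikit-learn's macro F1. The example texts, the binary label scheme, and the omission of an actual fine-tuning loop are all illustrative assumptions.

```python
# Minimal sketch of a transformer-based classification + macro-F1 setup,
# assuming a HuggingFace-style pipeline. Texts, labels, and the lack of a
# fine-tuning loop are illustrative assumptions, not the paper's pipeline.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import f1_score

MODEL_NAME = "roberta-base"  # e.g., "bert-base-uncased" for hope speech

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Classification head is freshly initialized here; in practice it would be
# fine-tuned on the regret or hope speech training split first.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Hypothetical held-out examples: (text, label) pairs for regret detection.
texts = ["I wish I had never sold that house.", "Looking forward to tomorrow!"]
labels = [1, 0]  # 1 = regret, 0 = no regret

# Tokenize and run a forward pass to get class predictions.
enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits
preds = logits.argmax(dim=-1).tolist()

# Macro F1 averages per-class F1 scores with equal weight per class.
print("macro F1:", f1_score(labels, preds, average="macro"))
```

Macro-averaged F1 weights every class equally, so a minority class (e.g., the rarer regret or hope label) is not swamped by the majority class, which is why it is the headline metric above.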

Original language: English
Article number: 3983
Journal: Applied Sciences (Switzerland)
Volume: 13
Issue number: 6
DOIs
State: Published - Mar 2023

Keywords

  • contextual embedding
  • hope speech detection
  • regret detection
  • text classification
  • transformers

