Regret and Hope on Transformers: An Analysis of Transformers on Regret and Hope Speech Detection Datasets

Grigori Sidorov, Fazlourrahman Balouchzahi, Sabur Butt, Alexander Gelbukh

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

In this paper, we analyzed the performance of different transformer models for regret and hope speech detection on two novel datasets. For the regret detection task, we compared the averaged macro F1-scores of the transformer models with the previous state-of-the-art results and found that the transformers outperformed the earlier approaches. Specifically, the RoBERTa-base model achieved the highest averaged macro F1-score of 0.83, beating the previous state-of-the-art score of 0.76. For the hope speech detection task, the BERT-base uncased model achieved the highest averaged macro F1-score of 0.72 among the transformer models, although the performance of each model varied slightly depending on the task and dataset. Our findings highlight the effectiveness of transformer models for regret and hope speech detection and the importance of considering the effects of context, specific transformer architectures, and pre-training on their performance.
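To make the setup concrete, the following is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: fine-tuning a pre-trained transformer (here roberta-base from the Hugging Face hub) for binary text classification and scoring it with macro F1. The example texts, labels, and training settings are placeholders for illustration, not the authors' datasets or reported configuration.

# Hypothetical sketch: fine-tune roberta-base for binary classification
# and evaluate with macro F1, as in the abstract. Placeholder data only.
import torch
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

texts = ["I wish I had studied harder.", "Things will get better soon."]  # placeholder examples
labels = [1, 0]  # hypothetical binary labels (e.g., regret vs. no regret)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

class TextDataset(torch.utils.data.Dataset):
    """Wraps tokenized texts and labels for the Trainer."""
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

def macro_f1(eval_pred):
    # eval_pred holds model logits and gold label ids as numpy arrays
    logits, gold = eval_pred
    preds = logits.argmax(axis=-1)
    return {"macro_f1": f1_score(gold, preds, average="macro")}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=TextDataset(texts, labels),
    eval_dataset=TextDataset(texts, labels),  # a held-out split would be used in practice
    compute_metrics=macro_f1,
)
trainer.train()
print(trainer.evaluate())  # reports eval_macro_f1, the metric compared in the abstract

Swapping "roberta-base" for "bert-base-uncased" (or another hub checkpoint) reproduces the kind of per-architecture comparison the paper reports.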

Original language: English
Article number: 3983
Journal: Applied Sciences (Switzerland)
Volume: 13
Issue: 6
DOI
Status: Published - Mar 2023
