Topic-Based Image Caption Generation

Sandeep Kumar Dash, Shantanu Acharya, Partha Pakray, Ranjita Das, Alexander Gelbukh

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

Image captioning is the task of generating captions for a given image based on its content. Describing an image well requires extracting as much information from it as possible. Beyond detecting the objects present and their relative orientation, the topic of the image is another vital piece of information that can be incorporated into the model to improve the caption generation system. The aim is to put extra emphasis on the context of the image, imitating the human approach, since objects that are unrelated to the context of the image should not appear in the generated caption. In this work, the focus is on detecting the topic of the image so as to guide a novel deep learning-based encoder–decoder framework that generates captions for the image. The method is compared with earlier state-of-the-art models on results obtained from the MSCOCO 2017 training data set. BLEU, CIDEr, ROUGE-L, and METEOR scores are used to measure the efficacy of the model and show an improvement in the performance of the caption generation process.
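The abstract evaluates captions with BLEU, among other metrics. As a minimal illustration of how such a score works (this is not the paper's model or its evaluation code; the function name, single-reference simplification, and add-one smoothing are assumptions for this sketch), sentence-level BLEU can be computed as a brevity-penalized geometric mean of n-gram precisions:

```python
import math
from collections import Counter

def sentence_bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU against a single reference (a simplification;
    the official metric supports multiple references), with uniform
    n-gram weights and add-one smoothing."""
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(candidate[i:i + n])
                              for i in range(len(candidate) - n + 1))
        ref_ngrams = Counter(tuple(reference[i:i + n])
                             for i in range(len(reference) - n + 1))
        # Clipped overlap: each reference n-gram can be matched at most
        # as many times as it occurs in the reference.
        overlap = sum((cand_ngrams & ref_ngrams).values())
        total = max(sum(cand_ngrams.values()), 1)
        # Add-one smoothing avoids a zero score when a higher-order
        # n-gram has no match (common for short captions).
        precisions.append((overlap + 1) / (total + 1))
    # Brevity penalty discourages overly short captions.
    bp = (1.0 if len(candidate) >= len(reference)
          else math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A candidate identical to the reference scores 1.0, while a short, mostly unrelated caption scores close to 0, which is the behavior the abstract's evaluation relies on when comparing caption generators.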

Original language: English
Pages (from-to): 3025-3034
Number of pages: 10
Journal: Arabian Journal for Science and Engineering
Volume: 45
Issue number: 4
DOIs
State: Published - 1 Apr 2020

Keywords

  • Deep learning
  • Image caption generation
  • Topic modelling
