Topic-Based Image Caption Generation

Sandeep Kumar Dash, Shantanu Acharya, Partha Pakray, Ranjita Das, Alexander Gelbukh

Research output: Contribution to journal › Article › peer-review

12 Citations (Scopus)

Abstract

Image captioning is the task of generating captions for a given image based on its content. Describing an image effectively requires extracting as much information from it as possible. Apart from detecting the presence of objects and their relative orientation, the topic of the image is another vital piece of information that can be incorporated into the model to improve the efficiency of the caption generation system. The aim is to place extra emphasis on the context of the image, imitating the human approach, since objects that are unrelated to the context of the image should not appear in the generated caption. In this work, the focus is on detecting the topic of the image in order to guide a novel deep learning-based encoder–decoder framework that generates captions for the image. The method is compared with some earlier state-of-the-art models based on results obtained on the MSCOCO 2017 training data set. BLEU, CIDEr, ROUGE-L, and METEOR scores are used to measure the efficacy of the model and show an improvement in the performance of the caption generation process.
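The abstract does not spell out how the detected topic is injected into the encoder–decoder pipeline, so the following is only a minimal sketch of one common way to condition a caption decoder on a topic vector: CNN image features and a topic distribution (e.g., from an LDA-style topic model) are concatenated and used to initialize an LSTM decoder. The class name, dimensions, and the concatenation-based fusion are assumptions for illustration, not the authors' published architecture.

```python
# Minimal sketch of a topic-conditioned encoder-decoder captioner.
# Assumptions: pooled CNN image features and a precomputed topic
# distribution are both available as dense vectors.
import torch
import torch.nn as nn

class TopicGuidedCaptioner(nn.Module):
    def __init__(self, vocab_size, feat_dim=2048, topic_dim=80,
                 embed_dim=256, hidden_dim=512):
        super().__init__()
        # Project the concatenated image features and topic vector into the
        # decoder's initial hidden and cell states.
        self.init_h = nn.Linear(feat_dim + topic_dim, hidden_dim)
        self.init_c = nn.Linear(feat_dim + topic_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, topic_vec, captions):
        # image_feats: (B, feat_dim), topic_vec: (B, topic_dim),
        # captions: (B, T) token ids used for teacher forcing.
        ctx = torch.cat([image_feats, topic_vec], dim=1)
        h0 = torch.tanh(self.init_h(ctx)).unsqueeze(0)    # (1, B, hidden_dim)
        c0 = torch.tanh(self.init_c(ctx)).unsqueeze(0)
        emb = self.embed(captions)                        # (B, T, embed_dim)
        out, _ = self.lstm(emb, (h0, c0))
        return self.fc(out)                               # (B, T, vocab_size)

# Toy usage with random tensors; all dimensions are illustrative only.
model = TopicGuidedCaptioner(vocab_size=10000)
feats = torch.randn(4, 2048)                          # e.g., pooled CNN features
topics = torch.softmax(torch.randn(4, 80), dim=1)     # e.g., topic mixture
caps = torch.randint(0, 10000, (4, 12))               # ground-truth caption ids
logits = model(feats, topics, caps)                   # (4, 12, 10000)
```

At inference time such a decoder would generate tokens step by step (greedy or beam search) instead of using teacher forcing, and the generated captions would then be scored with BLEU, CIDEr, ROUGE-L, and METEOR as described above.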

Original language: English
Pages (from-to): 3025-3034
Number of pages: 10
Journal: Arabian Journal for Science and Engineering
Volume: 45
Issue: 4
DOI
Status: Published - 1 Apr. 2020
