Semantic segmentation in egocentric video frames with deep learning for recognition of activities of daily living

José A. Zamorano Raya, Mireya S. García Vázquez, Juan C. Jaimes Méndez, Abraham Montoya Obeso, Jorge L. Compean Aguirre, Alejandro A. Ramírez Acosta

Research output: Contribution to a conference › Article

Abstract

The analysis of video for the recognition of Instrumental Activities of Daily Living (IADL) through object detection and context analysis, applied to evaluating the capacities of patients with Alzheimer's disease and age-related dementia, has recently gained considerable interest. Incorporating human perception into recognition, search, detection, and visual content understanding tasks has become one of the main tools for developing systems and technologies that support people in their daily living activities. In this paper, we propose a model for the automatic segmentation of the salient region where the objects of interest are found in egocentric video, using fully convolutional networks (FCN). The segmentation is performed using information related to human perception, yielding a better segmentation at the pixel level. This segmentation covers the objects of interest and the salient region in egocentric video, providing precise information to object detection and automatic video indexing systems, which improves their performance in the recognition of IADL. To measure the model's segmentation performance on the salient region, we benchmark on two databases: the Georgia Tech Egocentric Activity database and our own database. Results show that the method achieves significantly better precision in the semantic segmentation of the region where the objects of interest are located, compared with the GBVS (Graph-Based Visual Saliency) method.
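For illustration, the sketch below shows the general idea of pixel-wise salient-region segmentation with a fully convolutional network, as described in the abstract. It is a minimal PyTorch example, not the authors' architecture: the layer sizes, the two-class labeling (salient region vs. background), and the input resolution are all illustrative assumptions.

```python
# Minimal sketch of FCN-style pixel-wise salient-region segmentation (assumed setup,
# not the paper's exact model): a small convolutional encoder, a 1x1 classifier,
# and bilinear upsampling back to the input resolution for dense prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyFCN(nn.Module):
    def __init__(self, num_classes: int = 2):  # 2 classes: salient region / background
        super().__init__()
        # Encoder: strided convolutions reduce spatial resolution by a factor of 8.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolution produces per-pixel class scores at the reduced resolution.
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        logits = self.classifier(self.encoder(x))
        # Upsample the score map to the frame resolution for pixel-level segmentation.
        return F.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = TinyFCN()
    frames = torch.randn(4, 3, 224, 224)        # batch of egocentric video frames
    masks = torch.randint(0, 2, (4, 224, 224))  # 1 = salient region, 0 = background
    logits = model(frames)                      # shape: (4, 2, 224, 224)
    loss = F.cross_entropy(logits, masks)       # per-pixel supervision
    print(logits.shape, loss.item())
```

In practice the encoder would typically be a pretrained backbone rather than the three small layers used here; the sketch only illustrates the dense, per-pixel prediction scheme.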
Original language: English (US)
DOI
Status: Published - 1 Jan 2019
Event: Proceedings of SPIE - The International Society for Optical Engineering
Duration: 1 Jan 2019 → …

Conference

Conference: Proceedings of SPIE - The International Society for Optical Engineering
Period: 1/01/19 → …
