Semantic segmentation in egocentric video frames with deep learning for recognition of activities of daily living

José A. Zamorano Raya, Mireya S. García Vázquez, Juan C. Jaimes Méndez, Abraham Montoya Obeso, Jorge L. Compean Aguirre, Alejandro A. Ramírez Acosta

Research output: Contribution to conference › Paper

Abstract

The analysis of videos for the recognition of Instrumental Activities of Daily Living (IADL) through object detection and context analysis, applied to the evaluation of the capacities of patients with Alzheimer's disease and age-related dementia, has recently gained considerable interest. The incorporation of human perception into recognition, search, detection, and visual content understanding tasks has become one of the main tools for the development of systems and technologies that support people in their daily life activities. In this paper we propose a model for automatic segmentation of the salient region where the objects of interest are found in egocentric video, using fully convolutional networks (FCN). The segmentation is performed with information related to human perception, yielding a better segmentation at the pixel level. This segmentation covers both the objects of interest and the salient region in egocentric videos, providing precise information to object detection and automatic video indexing systems, which in turn improves their performance in the recognition of IADL. To measure the model's segmentation performance on the salient region, we benchmark on two databases: first, the Georgia-Tech-Egocentric-Activity database, and second, our own database. Results show that the method achieves significantly better precision in the semantic segmentation of the region where the objects of interest are located, compared with the GBVS (Graph-Based Visual Saliency) method.
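The abstract does not specify the FCN variant, training data, or class definitions used in the paper, so the following is only a minimal sketch of the general idea: running an off-the-shelf fully convolutional network on an egocentric video frame to obtain a per-pixel mask, and scoring that mask with a pixel-level precision measure of the kind one would use to compare against GBVS. The names fcn_resnet50, segment_frame, and pixel_precision are illustrative stand-ins, not the authors' implementation.

# Minimal sketch (assumed setup, not the paper's exact pipeline):
# pixel-level segmentation of one egocentric video frame with a
# torchvision FCN, plus a precision score against a ground-truth mask.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import fcn_resnet50

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Stand-in backbone; the paper's FCN architecture and training set are not given.
model = fcn_resnet50(weights="DEFAULT").eval()

def segment_frame(frame: Image.Image) -> torch.Tensor:
    """Return an (H, W) tensor of per-pixel class indices for one frame."""
    x = preprocess(frame).unsqueeze(0)       # (1, 3, H, W)
    with torch.no_grad():
        logits = model(x)["out"]             # (1, C, H, W)
    return logits.argmax(dim=1).squeeze(0)   # (H, W)

def pixel_precision(pred_mask: torch.Tensor, gt_mask: torch.Tensor) -> float:
    """Pixel-level precision of a predicted binary salient-region mask."""
    pred, gt = pred_mask.bool(), gt_mask.bool()
    tp = (pred & gt).sum().item()
    fp = (pred & ~gt).sum().item()
    return tp / (tp + fp) if (tp + fp) > 0 else 0.0

In a benchmark of the kind described above, one would iterate this over the frames of each sequence and compute the same precision score for both the FCN masks and the GBVS saliency maps (thresholded to binary masks) against the annotated salient regions.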
Original language: American English
DOIs
State: Published - 1 Jan 2019
Event: Proceedings of SPIE - The International Society for Optical Engineering
Duration: 1 Jan 2019 → …

Conference

Conference: Proceedings of SPIE - The International Society for Optical Engineering
Period: 1/01/19 → …
