Semi-automatic annotation with predicted visual saliency maps for object recognition in wearable video

J. Benois-Pineau, M. S. García Vázquez, L. A. Oropesa Morales, A. A. Ramirez Acosta

Research output: Contribution to conference › Paper

Abstract

Recognition of objects of a given category in visual content is one of the key problems in computer vision and multimedia. It is strongly needed in wearable video shooting for a wide range of socially important applications. Supervised learning approaches have proved to be the most efficient for this task, but they require ground truth for training the models. This is specifically true for deep convolutional networks, but it also holds for other popular models such as SVMs on visual signatures. Annotating ground truth by drawing bounding boxes (BB) is a very tedious task requiring significant human resources. Research on the prediction of visual attention in images and videos has reached maturity, specifically with regard to bottom-up visual attention modeling. Hence, instead of annotating the ground truth manually with BBs, we propose to use automatically predicted salient areas as object locators for annotation. Such saliency prediction is nevertheless not perfect, so active contour models are applied to the saliency maps in order to isolate the most prominent areas covering the objects. The approach is tested in the framework of a well-studied supervised learning model: an SVM with psycho-visually weighted Bag-of-Words. The egocentric GTEA dataset was used in the experiments. The difference in mAP (mean average precision) is less than 10 percent, while the mean annotation time is 36% lower.
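To make the annotation step concrete, the following is a minimal sketch, not the authors' exact pipeline, of how a predicted saliency map could be turned into a bounding-box annotation. It assumes a precomputed 2-D saliency map with values in [0, 1] and uses scikit-image's morphological Chan-Vese active contour as a stand-in for the active contour model described above; the function saliency_to_bbox and its parameters are illustrative.

import numpy as np
from skimage.segmentation import morphological_chan_vese

def saliency_to_bbox(saliency, n_iter=50):
    """Isolate the most salient region and return its bounding box (x0, y0, x1, y1)."""
    # Evolve an active contour on the saliency map; the level set
    # converges towards a two-phase segmentation of the map.
    mask = morphological_chan_vese(
        saliency, n_iter, init_level_set="checkerboard"
    ).astype(bool)
    # Chan-Vese labels the two phases arbitrarily; keep the brighter
    # (more salient) phase as foreground.
    if mask.any() and (~mask).any() and saliency[mask].mean() < saliency[~mask].mean():
        mask = ~mask
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # no salient area was isolated
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

The resulting box can then be stored as a candidate ground-truth annotation and, in the semi-automatic setting described above, corrected by a human only when the contour misses the object.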
Original language: American English
Pages: 10-14
Number of pages: 8
DOI: https://doi.org/10.1145/3080538.3080541
State: Published - 6 Jun 2017
Event: WearMMe 2017 - Proceedings of the 2017 Workshop on Wearable MultiMedia, co-located with ICMR 2017
Duration: 6 Jun 2017 → …

Conference

Conference: WearMMe 2017 - Proceedings of the 2017 Workshop on Wearable MultiMedia, co-located with ICMR 2017
Period: 6/06/17 → …

Cite this

Benois-Pineau, J., García Vázquez, M. S., Oropesa Morales, L. A., & Ramirez Acosta, A. A. (2017). Semi-automatic annotation with predicted visual saliency maps for object recognition in wearable video (pp. 10-14). Paper presented at WearMMe 2017 - Workshop on Wearable MultiMedia, co-located with ICMR 2017. https://doi.org/10.1145/3080538.3080541