Semi-automatic annotation with predicted visual saliency maps for object recognition in wearable video

J. Benois-Pineau, M. S. García Vázquez, L. A. Oropesa Morales, A. A. Ramirez Acosta

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Recognition of objects of a given category in visual content is one of the key problems in computer vision and multimedia, and it is strongly needed in wearable video shooting for a wide range of socially important applications. Supervised learning approaches have proved to be the most efficient for this task, but they require ground truth for training models. This is especially true for deep convolutional networks, but it also holds for other popular models such as SVMs on visual signatures. Annotating ground truth by drawing bounding boxes (BB) is a very tedious task requiring substantial human effort. Research on the prediction of visual attention in images and videos has attained maturity, particularly with regard to bottom-up visual attention modeling. Hence, instead of annotating the ground truth manually with BBs, we propose to use automatically predicted salient areas as object locators for annotation. Such saliency predictions are nevertheless imperfect, so active contour models are applied to the saliency maps in order to isolate the most prominent areas covering the objects. The approach is tested in the framework of a well-studied supervised learning model: an SVM with a psycho-visually weighted Bag-of-Words. The egocentric GTEA dataset was used in the experiments. The difference in mAP (mean average precision) is less than 10 percent, while the mean annotation time is 36% lower.
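The core idea of the abstract, turning a predicted saliency map into an object locator for annotation, can be illustrated with a minimal sketch. The paper uses active contour models on the saliency maps; the simpler threshold-based stand-in below (function name and threshold are hypothetical, not from the paper) only shows how a salient region could be converted into a bounding box:

```python
import numpy as np

def saliency_to_bbox(saliency, thresh_ratio=0.6):
    """Derive a bounding box (x_min, y_min, x_max, y_max) from a
    predicted saliency map by thresholding at a fraction of the
    map's maximum and taking the tight box around the salient pixels.

    A stand-in for the active-contour step described in the paper.
    """
    mask = saliency >= thresh_ratio * saliency.max()
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no salient region found
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy saliency map: a bright blob in rows 3..5, columns 4..6.
sal = np.zeros((10, 10))
sal[3:6, 4:7] = 1.0
print(saliency_to_bbox(sal))  # (4, 3, 6, 5)
```

In the semi-automatic pipeline described above, such a box would replace the manually drawn BB as the training annotation, trading some localization accuracy (the reported mAP gap is under 10 percent) for a 36% reduction in mean annotation time.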

Original language: English
Title of host publication: WearMMe 2017 - Proceedings of the 2017 Workshop on Wearable MultiMedia, co-located with ICMR 2017
Publisher: Association for Computing Machinery, Inc
Pages: 10-14
Number of pages: 5
ISBN (Electronic): 9781450350334
State: Published - 6 Jun 2017
Event: 2017 Workshop on Wearable Multimedia, WearMMe 2017 - Bucharest, Romania
Duration: 6 Jun 2017 → …

Publication series

Name: WearMMe 2017 - Proceedings of the 2017 Workshop on Wearable MultiMedia, co-located with ICMR 2017

Conference

Conference: 2017 Workshop on Wearable Multimedia, WearMMe 2017
Country/Territory: Romania
City: Bucharest
Period: 6/06/17 → …

Keywords

  • Active contour
  • Object recognition
  • Saliency maps
  • Visual object annotation
