TY - GEN
T1 - Video annotations of Mexican nature in a collaborative environment
AU - Morales, Lester Arturo Oropesa
AU - Obeso, Abraham Montoya
AU - García, Rosaura Hernández
AU - Almeda, Sara Ivonne Cocolán
AU - Vázquez, Mireya Saraí García
AU - Benois-Pineau, Jenny
AU - Fuentes, Luis Miguel Zamudio
AU - Nuño, Jesús A. Martinez
AU - Acosta, Alejandro Alvaro Ramírez
N1 - Publisher Copyright:
© 2015 SPIE.
PY - 2015
Y1 - 2015
N2 - Multimedia content production and storage in repositories are now an increasingly widespread practice. Indexing concepts for search in multimedia libraries are very useful for users of the repositories. However, content-based retrieval and automatic video tagging tools still lack consistency. Regardless of how these systems are implemented, it is of vital importance to possess a large number of videos whose concepts are tagged with ground truth (training and testing sets). This paper describes a novel methodology for making complex annotations on video resources with the ELAN software. The annotated concepts are related to Mexican nature and drawn from the High-Level Features (HLF) development set of TRECVID 2014, in a collaborative environment. Based on this set, each observed nature concept is tagged on each video shot using concepts of the TRECVID 2014 dataset. We also propose new concepts, such as tropical settings, urban scenes, actions, events, weather, and places, to name a few, as well as specific concepts that best describe video content of Mexican culture. We have been careful to tag the database with nature concepts and ground truth. It is evident that a collaborative environment is more suitable for annotating nature-related concepts with ground truth. As a result, a Mexican nature database was built; it also provides the testing and training sets needed to automatically classify new multimedia content of Mexican nature.
AB - Multimedia content production and storage in repositories are now an increasingly widespread practice. Indexing concepts for search in multimedia libraries are very useful for users of the repositories. However, content-based retrieval and automatic video tagging tools still lack consistency. Regardless of how these systems are implemented, it is of vital importance to possess a large number of videos whose concepts are tagged with ground truth (training and testing sets). This paper describes a novel methodology for making complex annotations on video resources with the ELAN software. The annotated concepts are related to Mexican nature and drawn from the High-Level Features (HLF) development set of TRECVID 2014, in a collaborative environment. Based on this set, each observed nature concept is tagged on each video shot using concepts of the TRECVID 2014 dataset. We also propose new concepts, such as tropical settings, urban scenes, actions, events, weather, and places, to name a few, as well as specific concepts that best describe video content of Mexican culture. We have been careful to tag the database with nature concepts and ground truth. It is evident that a collaborative environment is more suitable for annotating nature-related concepts with ground truth. As a result, a Mexican nature database was built; it also provides the testing and training sets needed to automatically classify new multimedia content of Mexican nature.
KW - Mexican nature
KW - annotation
KW - database
KW - multimedia indexing
KW - semantic indexing
KW - video
UR - http://www.scopus.com/inward/record.url?scp=84951310020&partnerID=8YFLogxK
U2 - 10.1117/12.2186138
DO - 10.1117/12.2186138
M3 - Conference contribution
AN - SCOPUS:84951310020
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Optics and Photonics for Information Processing IX
A2 - Matin, Mohammad A.
A2 - Awwal, Abdul A. S.
A2 - Marquez, Andres
A2 - Vazquez, Mireya Garcia
A2 - Iftekharuddin, Khan M.
PB - SPIE
T2 - 9th Conference of Optics and Photonics for Information Processing
Y2 - 10 August 2015 through 12 August 2015
ER -