Dropping Activations in Convolutional Neural Networks with Visual Attention Maps

Abraham Montoya Obeso, Jenny Benois-Pineau, Mireya Sarai Garcia Vazquez, Alejandro A.Ramirez Acosta

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

The introduction of visual attention models for data selection and feature selection in CNNs for image classification is an intensive and interesting research topic. In CNNs, dropping activations after the feature-extraction layers has been shown to reduce the generalization gap on large-scale datasets and to avoid over-fitting. In the literature, dropout has been studied as a fully randomized way of taking down activations during training. In this paper, we introduce a saliency-based dropping strategy to take down activations in our AlexNet-like architecture. Our experiments address the specific task of Mexican architectural recognition over 67 categories. The results are promising: the proposed approach outperformed other models, reducing training time and reaching higher accuracy.
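The saliency-guided dropping idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the proportional keep-probability weighting, and the inverted-dropout rescaling are all assumptions.

```python
import numpy as np

def saliency_dropout(activations, saliency, drop_rate=0.5, rng=None):
    """Drop activations preferentially at low-saliency spatial locations.

    activations: (C, H, W) feature maps from a convolutional layer.
    saliency:    (H, W) visual attention map with non-negative values.
    drop_rate:   average fraction of spatial locations to drop.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Keep probability grows with saliency, so salient regions survive
    # more often; normalize so the mean keep rate is (1 - drop_rate).
    keep_prob = (1.0 - drop_rate) * saliency / max(saliency.mean(), 1e-8)
    keep_prob = np.clip(keep_prob, 0.0, 1.0)
    # Sample one binary mask per spatial location, shared across channels.
    mask = rng.random(saliency.shape) < keep_prob
    # Inverted-dropout rescaling keeps the expected activation unchanged.
    return activations * mask / np.maximum(keep_prob, 1e-8)
```

At test time no mask is applied; as with standard dropout, the rescaling during training keeps expected activations consistent between the two phases.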

Original language: English
Title of host publication: 2019 International Conference on Content-Based Multimedia Indexing, CBMI 2019 - Proceedings
Publisher: IEEE Computer Society
ISBN (Electronic): 9781728146737
DOIs
State: Published - Sep 2019
Event: 17th International Conference on Content-Based Multimedia Indexing, CBMI 2019 - Dublin, Ireland
Duration: 4 Sep 2019 - 6 Sep 2019

Publication series

Name: Proceedings - International Workshop on Content-Based Multimedia Indexing
Volume: 2019-September
ISSN (Print): 1949-3991

Conference

Conference: 17th International Conference on Content-Based Multimedia Indexing, CBMI 2019
Country/Territory: Ireland
City: Dublin
Period: 4/09/19 - 6/09/19

Keywords

  • Cultural Heritage
  • Deep Learning
  • Dropping Activations
