TY - GEN
T1 - The Role of the Number of Examples in Convolutional Neural Networks with Hebbian Learning
AU - Aguilar-Canto, Fernando
AU - Calvo, Hiram
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Both synaptic plasticity rules (the so-called Hebbian rules) and Convolutional Neural Networks are based on, or inspired by, well-established models of Computational Neuroscience concerning mammalian vision. These frameworks offer theoretical advantages, including online learning in the case of Hebbian Learning. For Convolutional Neural Networks, such advantages have translated into remarkable image classification results over the last decade. Nevertheless, this success has not been shared by Hebbian Learning. In this paper, we explore the hypothesis that a wider dataset is necessary for the classification of mono-instantiated objects, that is, objects that can be represented as a single cluster in the feature space. Using 15 mono-instantiated classes, the Adam optimizer reaches maximum accuracy with fewer examples but requires more epochs. In comparison, the Hebbian BCM rule demands more examples but retains real-time learning. This result supports the principal hypothesis and highlights how Hebbian learning can find a niche in the mainstream of Deep Learning.
UR - http://www.scopus.com/inward/record.url?scp=85142605123&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-19493-1_19
DO - 10.1007/978-3-031-19493-1_19
M3 - Conference contribution
AN - SCOPUS:85142605123
SN - 9783031194924
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 225
EP - 238
BT - Advances in Computational Intelligence - 21st Mexican International Conference on Artificial Intelligence, MICAI 2022, Proceedings
A2 - Pichardo Lagunas, Obdulia
A2 - Martínez Seis, Bella
A2 - Martínez-Miranda, Juan
PB - Springer Science and Business Media Deutschland GmbH
T2 - 21st Mexican International Conference on Artificial Intelligence, MICAI 2022
Y2 - 24 October 2022 through 29 October 2022
ER -