Hybrid neural networks for big data classification

Gerardo Hernández, Erik Zamora, Humberto Sossa, Germán Téllez, Federico Furlán

Research output: Contribution to journal › Article › peer review

90 Citations (Scopus)

Abstract

Two new hybrid neural architectures combining morphological neurons and perceptrons are introduced in this paper. The first, called Morphological-Linear Neural Network (MLNN), consists of a hidden layer of morphological neurons, which acts as a feature extractor, and an output layer of classical perceptrons. The second, called Linear-Morphological Neural Network (LMNN), is composed of one or several perceptron layers acting as a feature extractor, followed by an output layer of morphological neurons for non-linear classification. Both architectures are trained by stochastic gradient descent. One of the main contributions of this paper is to show that the morphological layer offers a greater capacity to extract features than the perceptron layer. This claim is supported both theoretically and experimentally. We prove that the morphological layer has a greater capacity per computation unit to segment the 2D input space than the perceptron layer: adding more hyper-boxes produces more response regions than adding more hyperplanes. Empirically, we test the two new models on 25 standard low-dimensional datasets and one big data dataset. MLNN requires fewer learning parameters than the other tested architectures while achieving better accuracies.
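To make the MLNN idea concrete, the following is a minimal sketch of a forward pass, assuming a common hyper-box formulation of morphological neurons (each hidden unit defines a box `[low, high]` and its activation is positive inside the box, negative outside); the function names, the signed-distance activation, and the specific box parameterization are illustrative assumptions, not the paper's exact definitions:

```python
import numpy as np

def morph_layer(X, low, high):
    """Hyper-box morphological hidden layer (assumed formulation).

    Each hidden unit k defines a hyper-box [low[k], high[k]] in input space.
    The activation is the minimum over dimensions of the signed distance to
    the nearest box face: positive inside the box, negative outside.
    X: (n, d), low/high: (h, d)  ->  activations of shape (n, h)
    """
    d_low = X[:, None, :] - low[None, :, :]    # distance to lower faces
    d_high = high[None, :, :] - X[:, None, :]  # distance to upper faces
    return np.minimum(d_low, d_high).min(axis=2)

def mlnn_forward(X, low, high, W, b):
    """MLNN sketch: morphological hidden layer + linear (perceptron) output."""
    H = morph_layer(X, low, high)
    return H @ W + b

# Usage: one hidden hyper-box [0,1]^2, two output classes.
X = np.array([[0.5, 0.5],   # inside the box
              [2.0, 2.0]])  # outside the box
low = np.array([[0.0, 0.0]])
high = np.array([[1.0, 1.0]])
W = np.array([[1.0, -1.0]])
b = np.zeros(2)
scores = mlnn_forward(X, low, high, W, b)
```

In a full implementation the box parameters `low`, `high` and the linear weights `W`, `b` would all be updated by stochastic gradient descent, as the abstract describes; the min operations are subdifferentiable, so gradients flow to the active box face only.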

Original language: English
Pages (from-to): 327-340
Number of pages: 14
Journal: Neurocomputing
Volume: 390
DOI
Status: Published - 21 May 2020

