© 2018 IEEE. This paper proposes a neural network model called the Morphological-Linear Neural Network (MLNN). The model consists of merging two different types of neural layers: a hidden layer of morphological neurons and an output layer of classical perceptrons. The model can separate patterns with both hyperboxes (via the morphological neurons) and hyperplanes (via the perceptrons). This hybrid model is trained end-to-end by stochastic gradient descent. We compared multilayer perceptrons and dendrite morphological neurons with this hybrid architecture. Experimental results over 25 real datasets show that the hybrid model achieves classification accuracy about 7.5% higher than multilayer perceptrons while requiring 5.25 times fewer learning parameters, and about 2% higher than dendrite morphological neurons while requiring 3.39 times fewer learning parameters.
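The abstract describes a hidden layer of morphological neurons (hyperbox tests) feeding a classical linear output layer. The sketch below is a minimal NumPy illustration of that idea, not the authors' implementation: each hidden unit's activation is the min over dimensions of the distances to its hyperbox bounds (`w_min`, `w_max` are hypothetical parameter names), so the activation is positive inside the box and negative outside; a linear layer then combines these activations.

```python
import numpy as np

def morphological_layer(X, w_min, w_max):
    """Hyperbox activations for a morphological hidden layer (sketch).

    X: (n, d) inputs; w_min, w_max: (k, d) box bounds for k neurons.
    Returns (n, k): min over dimensions of
    min(x_d - w_min_d, w_max_d - x_d), positive iff x lies inside the box.
    """
    lower = X[:, None, :] - w_min[None, :, :]   # slack above the lower bound
    upper = w_max[None, :, :] - X[:, None, :]   # slack below the upper bound
    return np.minimum(lower, upper).min(axis=2)

def mlnn_forward(X, w_min, w_max, W, b):
    """Hybrid forward pass: morphological hidden layer, then a linear
    (perceptron) output layer, as in the MLNN description."""
    H = morphological_layer(X, w_min, w_max)
    return H @ W + b

# Usage: one hyperbox [0, 1]^2; a point inside vs. a point outside.
w_min = np.array([[0.0, 0.0]])
w_max = np.array([[1.0, 1.0]])
X = np.array([[0.5, 0.5],    # inside the box
              [2.0, 2.0]])   # outside the box
H = morphological_layer(X, w_min, w_max)
# H[0, 0] is positive (inside), H[1, 0] is negative (outside)
```

Because min and max are piecewise linear, the whole composition is subdifferentiable, which is what makes training the hybrid model end-to-end with stochastic gradient descent feasible.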
|Original language||American English|
|State||Published - 12 Oct 2018|
|Event||IEEE International Conference on Fuzzy Systems|
Duration: 12 Oct 2018 → …