TY - JOUR
T1 - Efficient training for dendrite morphological neural networks
AU - Sossa, Humberto
AU - Guevara, Elizabeth
N1 - Funding Information:
H. Sossa would like to thank SIP-IPN, CONACYT and ICYTDF under Grants 20121311, 20131182, 15014 and 325/2011 for the financial support to carry out this research. E. Guevara thanks CONACYT for the scholarship granted to pursue her doctoral studies.
PY - 2014/5/5
Y1 - 2014/5/5
N2 - This paper introduces an efficient training algorithm for a dendrite morphological neural network (DMNN). Given p classes of patterns, C_k, k = 1, 2, ..., p, the algorithm takes the patterns of all the classes and opens a hyper-cube HC^n (with n dimensions) of a size such that all the class elements remain inside HC^n. The size of HC^n can be chosen so that the border elements lie on some of the faces of HC^n, or it can be chosen larger. The latter choice allows the trained DMNN to be a very efficient classification machine in the presence of noise at testing time. In a second step, the algorithm divides HC^n into 2^n smaller hyper-cubes and verifies whether each hyper-cube encloses patterns of only one class. If this is the case, the learning process stops and the DMNN is designed. If at least one hyper-cube encloses patterns of more than one class, it is divided into 2^n smaller hyper-cubes, and the verification process is repeated iteratively on each smaller hyper-cube until the stopping criterion is satisfied. At that moment the DMNN is designed. The algorithm was tested on benchmark problems and its performance compared against several reported algorithms, showing its superiority.
AB - This paper introduces an efficient training algorithm for a dendrite morphological neural network (DMNN). Given p classes of patterns, C_k, k = 1, 2, ..., p, the algorithm takes the patterns of all the classes and opens a hyper-cube HC^n (with n dimensions) of a size such that all the class elements remain inside HC^n. The size of HC^n can be chosen so that the border elements lie on some of the faces of HC^n, or it can be chosen larger. The latter choice allows the trained DMNN to be a very efficient classification machine in the presence of noise at testing time. In a second step, the algorithm divides HC^n into 2^n smaller hyper-cubes and verifies whether each hyper-cube encloses patterns of only one class. If this is the case, the learning process stops and the DMNN is designed. If at least one hyper-cube encloses patterns of more than one class, it is divided into 2^n smaller hyper-cubes, and the verification process is repeated iteratively on each smaller hyper-cube until the stopping criterion is satisfied. At that moment the DMNN is designed. The algorithm was tested on benchmark problems and its performance compared against several reported algorithms, showing its superiority.
KW - Classification
KW - Dendrite morphological neural network
KW - Efficient training
KW - Pattern recognition
UR - http://www.scopus.com/inward/record.url?scp=84894083994&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2013.10.031
DO - 10.1016/j.neucom.2013.10.031
M3 - Article
SN - 0925-2312
VL - 131
SP - 132
EP - 142
JO - Neurocomputing
JF - Neurocomputing
ER -