Back-propagation vs particle swarm optimization algorithm: Which algorithm is better to adjust the synaptic weights of a feed-forward ANN?

Beatriz A. Garro, Humberto Sossa, Roberto A. Vázquez

Research output: Contribution to journal › Article › peer-review


Abstract

Bio-inspired algorithms have shown their usefulness in different non-linear optimization problems. Due to their efficiency and adaptability, these algorithms have been applied to a wide range of problems. In this paper we compare two ways of training an artificial neural network (ANN): the Particle Swarm Optimization (PSO) algorithm against classical training algorithms such as back-propagation (BP) and the Levenberg-Marquardt method. The main contribution of this paper is to answer the following question: is PSO really better than classical training algorithms at adjusting the synaptic weights of an ANN? First, we explain how the ANN training phase can be seen as an optimization problem. Then, we explain how PSO can be applied to find the best synaptic weights of the ANN. Finally, we compare several classical methods and the PSO approach when an ANN is applied to different non-linear problems and to a real object recognition problem.
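
To make concrete how the weight-adjustment task maps onto PSO, the sketch below encodes all weights and biases of a small feed-forward network as a particle's position vector and minimizes mean squared error with a standard global-best PSO. This is a minimal illustration, not the code used in the paper: the 2-3-1 topology, the XOR task, and the swarm parameters (inertia 0.7, c1 = c2 = 1.5, 30 particles) are assumptions chosen for brevity.

```python
# Minimal sketch: training a small feed-forward ANN with PSO.
# Not the authors' implementation; topology (2-3-1), MSE fitness,
# and swarm parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy non-linear problem: XOR.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

N_IN, N_HID, N_OUT = 2, 3, 1
DIM = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total weights + biases


def forward(weights, X):
    """Decode the flat weight vector and run the 2-3-1 network."""
    i = 0
    W1 = weights[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = weights[i:i + N_HID]; i += N_HID
    W2 = weights[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = weights[i:i + N_OUT]
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))    # sigmoid output


def mse(weights):
    """Fitness: mean squared error over the training set."""
    return float(np.mean((forward(weights, X) - y) ** 2))


# Standard global-best PSO with inertia weight.
N_PARTICLES, ITERS = 30, 500
W_INERTIA, C1, C2 = 0.7, 1.5, 1.5

pos = rng.uniform(-1, 1, (N_PARTICLES, DIM))
vel = np.zeros((N_PARTICLES, DIM))
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()
gbest_err = pbest_err.min()

for _ in range(ITERS):
    r1, r2 = rng.random((N_PARTICLES, DIM)), rng.random((N_PARTICLES, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos += vel
    err = np.array([mse(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    if pbest_err.min() < gbest_err:
        gbest_err = pbest_err.min()
        gbest = pbest[np.argmin(pbest_err)].copy()

print("final MSE:", gbest_err)
print("outputs:", forward(gbest, X).ravel())
```

Because the fitness function only requires forward passes, this formulation needs no gradient information, which is why PSO can be swapped in where BP or Levenberg-Marquardt would normally be used.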

Original language: English
Pages (from-to): 208-218
Number of pages: 11
Journal: International Journal of Artificial Intelligence
Volume: 7
Issue number: 11 A
State: Published - Oct 2011

Keywords

  • Artificial neural networks
  • Particle swarm intelligence
  • Pattern recognition
