Lyapunov stable learning laws for multilayer recurrent neural networks

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

This study develops stable learning laws (in the sense of Lyapunov) for a general class of multilayer recurrent neural network (RNN) non-parametric identifiers. The application of control Lyapunov functions in the discrete-time domain ensures ultimate boundedness of the identification error, forcing it into a bounded set whose size depends on the power of the parametric uncertainties and the modeling error. This work presents a general algorithm for designing RNNs with n layers: one input layer, n-2 hidden layers, and one output layer. Numerical simulations support the theoretical results and show the advantages of increasing the number of layers in an RNN structure. A first example identifies the unknown states of the Van der Pol oscillator. A second example demonstrates the behavior of an RNN on a third-order system with highly nonlinear dynamics describing the ozonation of a single contaminant. In both cases, the developed learning laws succeed in estimating the uncertain dynamics, and the simulations show that the identification error decreases as the number of layers increases.
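The abstract does not reproduce the paper's equations, but the following minimal Python sketch illustrates the general idea under stated assumptions: a discrete-time, series-parallel RNN identifier with a single hidden layer, trained by gradient-type learning laws damped with a sigma-modification term, which is one standard device for obtaining ultimate boundedness of the identification error. The plant is the Van der Pol oscillator from the first example. The layer sizes, the gains (a, eta, sigma_mod), and the single-hidden-layer structure are illustrative choices, not the paper's algorithm or values.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact learning laws): a discrete-time
# series-parallel RNN identifier
#     x_hat[k+1] = x_hat[k] + dt * (-a * e[k] + W1 @ tanh(W2 @ x[k]))
# with gradient-type weight updates damped by a sigma-modification term,
# aiming at ultimate boundedness of the identification error e = x_hat - x.

def van_der_pol(x, mu=1.0):
    """Van der Pol oscillator: x1' = x2, x2' = mu*(1 - x1^2)*x2 - x1."""
    return np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])

dt, steps = 1e-3, 50_000
n, h = 2, 8                        # state dimension, hidden-layer width (assumed)
a = 5.0                            # stabilizing error-feedback gain (assumed)
eta, sigma_mod = 2.0, 1e-3         # learning rate and damping term (assumed)

rng = np.random.default_rng(0)
W1 = 0.1 * rng.standard_normal((n, h))   # output-layer weights (adapted)
W2 = 0.1 * rng.standard_normal((h, n))   # hidden-layer weights (adapted)

x = np.array([2.0, 0.0])           # plant state (treated as measurable)
x_hat = np.zeros(n)                # identifier state

for k in range(steps):
    e = x_hat - x                          # identification error
    phi = np.tanh(W2 @ x)                  # bounded hidden-layer activation
    # Identifier dynamics (explicit Euler step)
    x_hat = x_hat + dt * (-a * e + W1 @ phi)
    # Gradient-type learning laws with sigma-modification damping
    delta = (W1.T @ e) * (1.0 - phi ** 2)  # error backpropagated through tanh
    W1 += dt * (-eta * np.outer(e, phi) - sigma_mod * W1)
    W2 += dt * (-eta * np.outer(delta, x) - sigma_mod * W2)
    # Plant dynamics (unknown to the identifier in the real setting)
    x = x + dt * van_der_pol(x)

print("final identification error:", np.linalg.norm(x_hat - x))
```

The damping term -sigma_mod * W in each update plays the role of the leakage that keeps the weights, and hence the identification error, inside a bounded set even under modeling error; the paper's discrete-time Lyapunov analysis makes that boundary set explicit.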

Original language: English
Pages (from-to): 644-657
Number of pages: 14
Journal: Neurocomputing
Volume: 491
DOI
Status: Published - 28 Jun 2022

