Recurrent neural networks training with stable risk-sensitive Kalman filter algorithm

Wen Yu, José De Jesús Rubio, Xiaoou Li

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations

Abstract

Compared with standard learning algorithms such as backpropagation, Kalman filter-based algorithms have better properties, such as faster convergence. In this paper, the Kalman filter is modified with a risk-sensitive cost criterion; we call the result the risk-sensitive Kalman filter. This new algorithm is applied to train recurrent neural networks for nonlinear system identification. Input-to-state stability is used to prove that risk-sensitive Kalman filter training is stable. The contributions of this paper are: 1) the risk-sensitive Kalman filter is used to train state-space recurrent neural networks, and 2) the stability of the risk-sensitive Kalman filter is proved.
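To illustrate the general idea of Kalman filter-based training described in the abstract, the sketch below trains the output weights of a small recurrent network with a standard extended Kalman filter, treating the weight vector as the filter state and the network output as the measurement. The network structure, parameter names, and noise settings are illustrative assumptions, not the paper's model; the risk-sensitive modification of the covariance update is only indicated by a comment, since its exact form is specific to the paper.

```python
import numpy as np

# Minimal sketch (assumed, not the authors' exact algorithm): EKF-style
# training of the output weights of a toy state-space recurrent network
# for nonlinear system identification.  The weight vector w is the filter
# state; the network output is the "measurement".
rng = np.random.default_rng(0)

n_hidden = 5                         # recurrent state dimension (assumed)
A = 0.5 * rng.standard_normal((n_hidden, n_hidden)) / np.sqrt(n_hidden)
B = rng.standard_normal(n_hidden)    # fixed input weights (untrained here)
w = np.zeros(n_hidden)               # trainable output weights (EKF state)
P = 10.0 * np.eye(n_hidden)          # weight covariance
R = 0.1                              # measurement noise variance (assumed)
Q = 1e-4 * np.eye(n_hidden)          # small process noise on the weights

def rnn_step(x, u):
    """One step of the recurrent hidden state."""
    return np.tanh(A @ x + B * u)

def plant(y_prev, u):
    """Illustrative nonlinear system to be identified."""
    return 0.6 * np.sin(y_prev) + u

x = np.zeros(n_hidden)
y_plant = 0.0
for k in range(500):
    u = np.sin(0.05 * k)
    y_plant = plant(y_plant, u)

    x = rnn_step(x, u)
    y_hat = w @ x                    # network output, linear in w

    # EKF measurement update for the weights.
    H = x                            # Jacobian d(y_hat)/dw
    S = H @ P @ H + R                # innovation variance (scalar)
    K = (P @ H) / S                  # Kalman gain
    e = y_plant - y_hat              # identification error
    w = w + K * e
    P = P - np.outer(K, H @ P) + Q
    # A risk-sensitive cost criterion would further modify this covariance
    # (Riccati) update with a risk parameter; that detail is in the paper.

print("final identification error:", abs(e))
```

The key design point this sketch shows is that, unlike gradient-based backpropagation, the Kalman filter update weights the error by a gain computed from the covariance P, which is what gives filter-based training its typically faster convergence.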

Original language: English
Title of host publication: Proceedings of the International Joint Conference on Neural Networks, IJCNN 2005
Pages: 700-705
Number of pages: 6
DOIs
State: Published - 2005
Externally published: Yes
Event: International Joint Conference on Neural Networks, IJCNN 2005 - Montreal, QC, Canada
Duration: 31 Jul 2005 - 4 Aug 2005

Publication series

Name: Proceedings of the International Joint Conference on Neural Networks
Volume: 2

Conference

Conference: International Joint Conference on Neural Networks, IJCNN 2005
Country/Territory: Canada
City: Montreal, QC
Period: 31/07/05 - 4/08/05
