Projectional Learning Laws for Differential Neural Networks Based on Double-Averaged Sub-Gradient Descent Technique

Isaac Chairez, Alexander Poznyak, Alexander Nazin, Tatyana Poznyak

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

© 2019, Springer Nature Switzerland AG.

A new method for designing learning laws for neural networks with continuous dynamics is proposed in this study. The learning method is based on the so-called double-averaged sub-gradient descent technique (DASGDT), a variant of the gradient-descent method. The learning law implements a double-averaging algorithm that filters the effect of uncertainties in the states, which are continuously measurable. The learning law overcomes the classical assumption of strict convexity of the functional with respect to the weights. The photocatalytic ozonation process of a single contaminant is estimated using the learning law designed in this study.
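The paper's learning law operates in continuous time on the weights of a differential neural network; as a rough discrete-time illustration of the underlying double-averaging idea, the sketch below combines a running average of sub-gradients (dual-averaging step with a projection) with a running average of the iterates. All names (`project_ball`, `dasgd`, the ball radius, the step scaling `beta`) are hypothetical stand-ins, not the authors' actual construction, and the projection set is a simple Euclidean ball rather than the paper's weight-constraint set.

```python
import numpy as np

def project_ball(w, radius=10.0):
    """Euclidean projection onto a ball (stand-in for the paper's projection set)."""
    n = np.linalg.norm(w)
    return w if n <= radius else w * (radius / n)

def dasgd(subgrad, w0, steps=1000, beta=1.0):
    """Illustrative double-averaged sub-gradient descent.

    First averaging: accumulate sub-gradients and step from the anchor w0
    (dual-averaging style). Second averaging: output the running average
    of the projected iterates, which damps the non-smooth oscillations.
    """
    g_sum = np.zeros_like(w0)       # accumulated sub-gradients
    w_avg = np.zeros_like(w0)       # running average of iterates
    w = w0.copy()
    for t in range(1, steps + 1):
        g_sum += subgrad(w)
        # projected dual-averaging step with a sqrt(t) regularization schedule
        w = project_ball(w0 - g_sum / (beta * np.sqrt(t)))
        w_avg += (w - w_avg) / t    # incremental running average
    return w_avg

# usage: minimise the non-smooth convex f(w) = |w1| + |w2|, where strict
# convexity fails, starting from (3, -4); a sub-gradient of |.| is sign(.)
f_sub = lambda w: np.sign(w)
w_star = dasgd(f_sub, np.array([3.0, -4.0]))
```

Note that the objective is convex but not strictly convex, which is exactly the regime the abstract says the method is meant to handle; the averaged iterate still settles near the minimiser at the origin.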
Original language: American English
Title of host publication: Projectional Learning Laws for Differential Neural Networks Based on Double-Averaged Sub-Gradient Descent Technique
Pages: 28-38
Number of pages: 11
ISBN (Electronic): 9783030227951
DOIs
State: Published - 1 Jan 2019
Event: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) -
Duration: 1 Jan 2019 → …

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 11554 LNCS
ISSN (Print): 0302-9743

Conference

Conference: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Period: 1/01/19 → …
