Continuous-time gradient-like descent algorithm for constrained convex unknown functions: Penalty method application

Cesar U. Solis, Julio B. Clempner, Alexander S. Poznyak

Research output: Contribution to journal › Article › peer-review

2 Scopus citations

Abstract

This paper suggests a novel continuous-time gradient-descent algorithm for minimizing a constrained, unknown convex function when the observed data are corrupted by stochastic noise. A penalty-function approach is employed to incorporate the system constraints and to guide the optimization process. The solution is restricted to a static scheme for the class of strongly convex functions subject to a set of constraints. To estimate the stochastic gradient, we employ a modified version of the synchronous detection method. All parameters of the proposed approach decrease over time, both to compensate for the noise in the observations and to ensure the mean-square convergence of the suggested approach. We present two numerical examples to validate the contributions of the paper.
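As a rough illustration of the scheme described in the abstract, the sketch below combines a quadratic penalty for a single constraint with a dither-based (synchronous-detection style) estimate of the unknown objective's gradient, and uses time-decreasing gains. This is a minimal sketch under assumed settings: the test problem, the gain and dither schedules, the dither frequencies, and all function names are illustrative and are not taken from the paper.

```python
# Hypothetical sketch of continuous-time gradient-like descent with a penalty
# term and synchronous-detection gradient estimation (extremum-seeking style).
# All schedules and the test problem are assumptions, not the authors' scheme.
import numpy as np

rng = np.random.default_rng(0)

def f_unknown(x):
    # Strongly convex objective, available only through noisy measurements.
    return 0.5 * (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2

def g_constraint(x):
    # Inequality constraint g(x) <= 0 (here: x1 + x2 <= 0, active at the optimum).
    return x[0] + x[1]

def noisy_measurement(x, sigma=0.05):
    # Only noisy function values are observed by the algorithm.
    return f_unknown(x) + sigma * rng.standard_normal()

def run(T=300.0, dt=1e-3):
    x = np.array([3.0, 2.0])         # initial (infeasible) point
    omegas = np.array([20.0, 31.0])  # distinct dither frequencies per coordinate
    t = 0.0
    for _ in range(int(T / dt)):
        t += dt
        # Time-decreasing parameters (assumed schedules):
        gamma = 0.2 / (1.0 + t) ** 0.6  # step-size gain
        a = 0.5 / (1.0 + t) ** 0.3      # dither amplitude
        mu = 1.0 + t                    # growing penalty weight

        # Synchronous detection: perturb, measure, demodulate.
        s = np.sin(omegas * t)
        y = noisy_measurement(x + a * s)
        grad_f_est = (2.0 / a) * y * s  # averages to grad f(x), up to O(a) bias

        # Gradient of the known quadratic penalty max(0, g(x))^2.
        viol = max(0.0, g_constraint(x))
        grad_pen = 2.0 * viol * np.array([1.0, 1.0])  # since dg/dx = (1, 1)

        # Euler step of the continuous-time gradient-like flow.
        x = x - dt * gamma * (grad_f_est + mu * grad_pen)
    return x

if __name__ == "__main__":
    # For this test problem the result should land roughly near (4/3, -4/3).
    print("approximate constrained minimizer:", run())
```

Using a distinct dither frequency per coordinate lets the demodulated products for the two components average out each other's cross terms, which is the usual way synchronous detection recovers a multivariate gradient from scalar measurements.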

Original language: English
Pages (from-to): 268-282
Number of pages: 15
Journal: Journal of Computational and Applied Mathematics
Volume: 355
DOIs
State: Published - 1 Aug 2019

Keywords

  • Gradient descent
  • Penalty function
  • Real-time optimization
  • Synchronous detection method

