A Lyapunov approach for stable reinforcement learning

Translated title of the contribution: Un enfoque de Lyapunov para el aprendizaje por refuerzo estable

Research output: Contribution to a journal › Article › peer review

2 Citations (Scopus)

Abstract

Our strategy is based on a novel Lyapunov methodology for reinforcement learning (RL). We propose a method for constructing Lyapunov-like functions using a feed-forward Markov decision process. These functions are important for ensuring the stability of a behavior policy throughout the learning process. We show that the cost sequence corresponding to the best approach is frequently non-monotonic, implying that convergence cannot be guaranteed. For any Markov-ergodic process, our technique generates a Lyapunov-like function, establishing a one-to-one correspondence between the current cost function and the suggested function and yielding monotonically non-increasing behavior along the trajectories under the optimal strategy realization. We show that the system's dynamics and trajectories converge, and we explain how to apply the Lyapunov method to solve RL problems. We test the proposed approach to demonstrate its efficacy.
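The key property described above is that the optimal cost-to-go can serve as a Lyapunov-like function: it is monotonically non-increasing along trajectories generated by the optimal policy. The following is a minimal sketch of that idea on a hypothetical deterministic shortest-path MDP (the states, costs, and value-iteration solver below are illustrative assumptions, not the paper's construction):

```python
import numpy as np

# Hypothetical deterministic MDP: states 0..4, absorbing goal state 4 with zero cost.
n_states, n_actions = 5, 2
next_state = np.array([[1, 2], [2, 3], [3, 4], [4, 4], [4, 4]])  # next_state[s, a]
cost = np.array([[1.0, 2.0], [1.0, 1.5], [1.0, 1.0], [1.0, 1.0], [0.0, 0.0]])

# Value iteration for the optimal cost-to-go V*, used here as the
# Lyapunov-like candidate: V*(s) = min_a [ c(s, a) + V*(s') ].
V = np.zeros(n_states)
for _ in range(100):
    V = (cost + V[next_state]).min(axis=1)

policy = (cost + V[next_state]).argmin(axis=1)  # greedy (optimal) policy

# Along any trajectory under the optimal policy, V* is monotonically
# non-increasing (strictly decreasing here until the goal is reached).
s, traj = 0, [0]
while s != 4:
    s = next_state[s, policy[s]]
    traj.append(s)
values = [float(V[s]) for s in traj]
assert all(values[i + 1] <= values[i] for i in range(len(values) - 1))
print(traj, values)
```

In this toy instance the trajectory visits states 0 → 1 → 2 → 4 with candidate values 3, 2, 1, 0, so the Lyapunov-like descent condition holds at every step; for stochastic transitions the analogous property holds in expectation.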

Original language: English
Article number: 279
Journal: Computational and Applied Mathematics
Volume: 41
Issue: 6
DOI
Status: Published - Sep 2022
