Mono-objective function analysis using an optimization approach

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

In this paper we propose an evolutionary technique based on a Lyapunov method (instead of Pareto) for mono-objective optimization, which associates with every ergodic Markov process a Lyapunov-like mono-objective function. We show that for a class of controllable finite Markov chains supplied with a given objective function, the system and trajectory dynamics converge. To represent the trajectory-dynamics properties, local-optimal policies are defined that minimize the one-step decrement of the cost function. We propose a non-converging state-value function that increases and decreases between states of the decision process. We then show that a Lyapunov mono-objective function, which can only decrease (or remain the same) over time, can be built for these Markov decision processes. The Lyapunov mono-objective functions analyzed in this paper represent the most frequent type of behavior applied in practice in evolutionary and real-coded genetic algorithms within the Artificial Intelligence research area. They are naturally related to the so-called fixed-local-optimal actions, that is, to the one-step-ahead optimization algorithms widely used in modern optimization theory. For illustration purposes, we present a simulated experiment that demonstrates the validity of the suggested method. © 2003-2012 IEEE.
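The two ideas in the abstract can be sketched in code. This is a minimal, hypothetical illustration, not the paper's construction: the Markov chain, costs, and value function below are randomly generated toy data. It shows (a) a one-step-ahead ("fixed-local-optimal") policy that picks the action minimizing immediate cost plus expected next-state value, and (b) a Lyapunov-like sequence obtained as the running minimum of a state-value function along the trajectory, which by construction can only decrease or remain the same even when the state values themselves go up and down.

```python
import numpy as np

# Hypothetical toy instance: a controllable finite Markov chain with
# n_states states and n_actions actions per state.
rng = np.random.default_rng(0)
n_states, n_actions = 4, 2
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)        # make each P[a, s, :] a distribution
cost = rng.random((n_states, n_actions)) # one-step cost c(s, a)

# A (possibly non-monotone) state-value function; here simply the
# expected one-step cost of the cheapest action in each state.
V = cost.min(axis=1)

def local_optimal_action(s):
    """One-step-ahead optimization: minimize immediate cost plus
    expected value of the successor state."""
    q = cost[s] + P[:, s, :] @ V     # q[a] = c(s,a) + sum_s' P(s'|s,a) V(s')
    return int(np.argmin(q))

# Simulate a trajectory under the local-optimal policy. V(s_k) may
# oscillate, but L_k = min_{j<=k} V(s_j) is non-increasing.
s = 0
values, lyapunov = [], []
best = np.inf
for _ in range(20):
    a = local_optimal_action(s)
    values.append(V[s])
    best = min(best, V[s])
    lyapunov.append(best)
    s = rng.choice(n_states, p=P[a, s])

# Lyapunov-like property: the sequence never increases.
assert all(l1 >= l2 for l1, l2 in zip(lyapunov, lyapunov[1:]))
```

The running-minimum construction is only one way to obtain a monotone sequence from a non-converging value function; the paper builds its Lyapunov function from the dynamics of the decision process itself.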
Original language: American English
Pages (from-to): 300-305
Number of pages: 6
Journal: IEEE Latin America Transactions
DOIs: 10.1109/TLA.2014.6749552
State: Published - 1 Mar 2014

Fingerprint

Markov Chains
Artificial Intelligence
Markov processes
Trajectories
Lyapunov methods
Costs and Cost Analysis
Cost functions
Research
Genetic algorithms
Experiments

Cite this

@article{f1c5d99108ec4052a1b55a64693fa0d3,
title = "Mono-objective function analysis using an optimization approach",
abstract = "In this paper we propose an evolutionary technique based on a Lyapunov method (instead of Pareto) for mono-objective optimization, which associates with every ergodic Markov process a Lyapunov-like mono-objective function. We show that for a class of controllable finite Markov chains supplied with a given objective function, the system and trajectory dynamics converge. To represent the trajectory-dynamics properties, local-optimal policies are defined that minimize the one-step decrement of the cost function. We propose a non-converging state-value function that increases and decreases between states of the decision process. We then show that a Lyapunov mono-objective function, which can only decrease (or remain the same) over time, can be built for these Markov decision processes. The Lyapunov mono-objective functions analyzed in this paper represent the most frequent type of behavior applied in practice in evolutionary and real-coded genetic algorithms within the Artificial Intelligence research area. They are naturally related to the so-called fixed-local-optimal actions, that is, to the one-step-ahead optimization algorithms widely used in modern optimization theory. For illustration purposes, we present a simulated experiment that demonstrates the validity of the suggested method. {\circledC} 2003-2012 IEEE.",
author = "Clempner, {J. B.}",
year = "2014",
month = "3",
day = "1",
doi = "10.1109/TLA.2014.6749552",
language = "American English",
pages = "300--305",
journal = "IEEE Latin America Transactions",
issn = "1548-0992",
publisher = "IEEE Computer Society"
}

Mono-objective function analysis using an optimization approach. / Clempner, J. B.

In: IEEE Latin America Transactions, 01.03.2014, p. 300-305.

Research output: Contribution to journal › Article

TY - JOUR

T1 - Mono-objective function analysis using an optimization approach

AU - Clempner, J. B.

PY - 2014/3/1

Y1 - 2014/3/1

N2 - In this paper we propose an evolutionary technique based on a Lyapunov method (instead of Pareto) for mono-objective optimization, which associates with every ergodic Markov process a Lyapunov-like mono-objective function. We show that for a class of controllable finite Markov chains supplied with a given objective function, the system and trajectory dynamics converge. To represent the trajectory-dynamics properties, local-optimal policies are defined that minimize the one-step decrement of the cost function. We propose a non-converging state-value function that increases and decreases between states of the decision process. We then show that a Lyapunov mono-objective function, which can only decrease (or remain the same) over time, can be built for these Markov decision processes. The Lyapunov mono-objective functions analyzed in this paper represent the most frequent type of behavior applied in practice in evolutionary and real-coded genetic algorithms within the Artificial Intelligence research area. They are naturally related to the so-called fixed-local-optimal actions, that is, to the one-step-ahead optimization algorithms widely used in modern optimization theory. For illustration purposes, we present a simulated experiment that demonstrates the validity of the suggested method. © 2003-2012 IEEE.

AB - In this paper we propose an evolutionary technique based on a Lyapunov method (instead of Pareto) for mono-objective optimization, which associates with every ergodic Markov process a Lyapunov-like mono-objective function. We show that for a class of controllable finite Markov chains supplied with a given objective function, the system and trajectory dynamics converge. To represent the trajectory-dynamics properties, local-optimal policies are defined that minimize the one-step decrement of the cost function. We propose a non-converging state-value function that increases and decreases between states of the decision process. We then show that a Lyapunov mono-objective function, which can only decrease (or remain the same) over time, can be built for these Markov decision processes. The Lyapunov mono-objective functions analyzed in this paper represent the most frequent type of behavior applied in practice in evolutionary and real-coded genetic algorithms within the Artificial Intelligence research area. They are naturally related to the so-called fixed-local-optimal actions, that is, to the one-step-ahead optimization algorithms widely used in modern optimization theory. For illustration purposes, we present a simulated experiment that demonstrates the validity of the suggested method. © 2003-2012 IEEE.

UR - https://www.scopus.com/inward/record.uri?partnerID=HzOxMe3b&scp=84900659782&origin=inward

UR - https://www.scopus.com/inward/citedby.uri?partnerID=HzOxMe3b&scp=84900659782&origin=inward

U2 - 10.1109/TLA.2014.6749552

DO - 10.1109/TLA.2014.6749552

M3 - Article

SP - 300

EP - 305

JO - IEEE Latin America Transactions

JF - IEEE Latin America Transactions

SN - 1548-0992

ER -