TY - JOUR
T1 - A Dynamic Mechanism Design for Controllable and Ergodic Markov Games
AU - Clempner, Julio B.
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2023/3
Y1 - 2023/3
N2 - This paper suggests an analytical method for computing Bayesian incentive-compatible mechanisms where the private information is revealed following a class of controllable Markov games. We consider a dynamic environment where decisions are made over a finite number of periods. Our method incorporates a new variable that represents the product of the mechanism design, the strategies, and the distribution vector. We derive the relations needed to compute the variables of interest analytically. The introduction of this variable makes the problem computationally tractable. The method employs a Reinforcement Learning approach that computes a near-optimal mechanism in equilibrium with the resulting strategy of the game while maximizing profit. We use the standard notion of Bayesian–Nash equilibrium as the equilibrium concept for our game. An interesting challenge is that, for the objective of profit maximization, there is no single optimal mechanism because there are multiple equilibria. We apply Tikhonov’s method, which provides a regularization parameter, to resolve this problem. We demonstrate the game’s equilibrium and its convergence to a single incentive-compatible mechanism. This yields novel and considerably improved results for many game-theoretic problem areas, as well as incentive-compatible mechanisms that match the game’s equilibrium. We present a numerical example in the setting of a dynamic public finance model with partial information to demonstrate the suggested technique.
AB - This paper suggests an analytical method for computing Bayesian incentive-compatible mechanisms where the private information is revealed following a class of controllable Markov games. We consider a dynamic environment where decisions are made over a finite number of periods. Our method incorporates a new variable that represents the product of the mechanism design, the strategies, and the distribution vector. We derive the relations needed to compute the variables of interest analytically. The introduction of this variable makes the problem computationally tractable. The method employs a Reinforcement Learning approach that computes a near-optimal mechanism in equilibrium with the resulting strategy of the game while maximizing profit. We use the standard notion of Bayesian–Nash equilibrium as the equilibrium concept for our game. An interesting challenge is that, for the objective of profit maximization, there is no single optimal mechanism because there are multiple equilibria. We apply Tikhonov’s method, which provides a regularization parameter, to resolve this problem. We demonstrate the game’s equilibrium and its convergence to a single incentive-compatible mechanism. This yields novel and considerably improved results for many game-theoretic problem areas, as well as incentive-compatible mechanisms that match the game’s equilibrium. We present a numerical example in the setting of a dynamic public finance model with partial information to demonstrate the suggested technique.
KW - Bayesian equilibrium
KW - Dynamic mechanism design
KW - Incentive-compatible mechanisms
KW - Markov games with private information
KW - Regularization
UR - http://www.scopus.com/inward/record.url?scp=85124886747&partnerID=8YFLogxK
U2 - 10.1007/s10614-022-10240-y
DO - 10.1007/s10614-022-10240-y
M3 - Article
AN - SCOPUS:85124886747
SN - 0927-7099
VL - 61
SP - 1151
EP - 1171
JO - Computational Economics
JF - Computational Economics
IS - 3
ER -