A Dynamic Mechanism Design for Controllable and Ergodic Markov Games

Research output: Contribution to journal › Article › peer-review

Abstract

This paper suggests an analytical method for computing Bayesian incentive-compatible mechanisms in which private information is revealed following a class of controllable Markov games. We consider a dynamic environment in which decisions are taken over a finite number of periods. Our method introduces a new variable that represents the product of the mechanism design, the strategies, and the distribution vector, and we derive the relations needed to compute the variables of interest analytically. The introduction of this variable makes the problem computationally tractable. The method employs a reinforcement-learning approach that computes a near-optimal mechanism in equilibrium with the resulting strategy of the game while maximizing profit. We use the standard notion of Bayesian–Nash equilibrium as the equilibrium concept for our game. An interesting challenge is that, under the objective of profit maximization, there is no single optimal mechanism because multiple equilibria exist. We apply Tikhonov's regularization method, with a regularization parameter, to resolve this non-uniqueness. We demonstrate the game's equilibrium and convergence to a single incentive-compatible mechanism. This yields novel and considerably improved results for many areas of game theory, as well as incentive-compatible mechanisms that match the game's equilibrium. We present a numerical example of a dynamic public-finance model with partial information to demonstrate the proposed technique.
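The role of the Tikhonov regularization parameter described in the abstract can be illustrated with a minimal toy sketch (not the paper's model): when several candidate mechanisms tie at the maximal profit, subtracting a small quadratic penalty selects the minimum-norm maximizer, yielding a unique choice. The candidate vectors, profit values, and the parameter `delta` below are all illustrative assumptions.

```python
import numpy as np

# Toy illustration (assumed data, not the paper's model): three candidate
# mechanisms, each attaining the same maximal expected profit, so the
# profit objective alone does not pin down a unique optimum.
candidates = {
    "A": np.array([1.0, 3.0]),
    "B": np.array([2.0, 2.0]),
    "C": np.array([3.0, 1.0]),
}
profit = {name: 10.0 for name in candidates}  # all tie at the maximum

def regularized_value(name, delta):
    # Tikhonov-style objective: profit minus a small quadratic penalty
    # delta * ||x||^2, which breaks the tie among the maximizers.
    x = candidates[name]
    return profit[name] - delta * float(np.dot(x, x))

delta = 1e-3  # regularization parameter; as delta -> 0 the selection is stable
best = max(candidates, key=lambda n: regularized_value(n, delta))
print(best)  # selects the minimum-norm maximizer "B"
```

Because ||B||² = 8 is strictly smaller than ||A||² = ||C||² = 10, the regularized objective is uniquely maximized at "B" for any small positive `delta`, mirroring how the regularization singles out one incentive-compatible mechanism among multiple equilibria.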

Original language: English
Journal: Computational Economics
DOIs
State: Accepted/In press - 2022

Keywords

  • Bayesian equilibrium
  • Dynamic mechanism design
  • Incentive-compatible mechanisms
  • Markov games with private information
  • Regularization

