
Article

Keywords:
zero-sum stopping game; equality of the upper and lower value functions; contractive operator; hitting time; stationary strategy
Summary:
This work concerns a class of discrete-time, two-player, zero-sum games with Markov transitions on a denumerable state space. At each decision time player II can stop the system, paying a terminal reward to player I; if the system is not halted, player I selects an action to drive the system and receives a running reward from player II. The performance of a pair of decision strategies is measured by the total expected discounted reward. Under standard continuity-compactness conditions it is shown that this stopping game has a value function, which is characterized by an equilibrium equation, and this result is used to establish the existence of a Nash equilibrium. In addition, the method of successive approximations is used to construct approximate Nash equilibria for the game.
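As a rough illustration of the successive-approximations scheme mentioned in the summary, the following Python sketch iterates a contractive operator of the assumed form (T v)(x) = min{ R(x), max_a [ r(x,a) + alpha * sum_y p(y|x,a) v(y) ] } on a finite state space, where R is the terminal reward paid on stopping and r the running reward while the game continues. The operator form, the function names, and the toy data are illustrative assumptions, not taken from the paper.

    import numpy as np

    def bellman_operator(v, R, r, P, alpha):
        """Apply the (assumed) equilibrium operator T to a value vector v.

        R : (S,)       terminal reward paid to player I when player II stops
        r : (S, A)     running reward to player I for action a in state x
        P : (A, S, S)  P[a, x, y] = transition probability p(y | x, a)
        """
        # Continuation value: r(x, a) + alpha * E[v(next state)]
        continuation = r + alpha * np.einsum("axy,y->xa", P, v)
        # Player II (minimizer) chooses between stopping and continuing;
        # player I (maximizer) chooses the action in the continuation part.
        return np.minimum(R, continuation.max(axis=1))

    def successive_approximations(R, r, P, alpha, tol=1e-8, max_iter=10_000):
        """Iterate v_{n+1} = T v_n from v_0 = 0; contraction gives convergence."""
        v = np.zeros_like(R)
        for _ in range(max_iter):
            v_new = bellman_operator(v, R, r, P, alpha)
            if np.max(np.abs(v_new - v)) < tol:
                v = v_new
                break
            v = v_new
        # Approximate stationary strategies: player I takes a maximizing action,
        # player II stops where the stopping cost does not exceed continuation.
        continuation = r + alpha * np.einsum("axy,y->xa", P, v)
        player_I_action = continuation.argmax(axis=1)
        player_II_stops = R <= continuation.max(axis=1)
        return v, player_I_action, player_II_stops

    # Hypothetical toy data: two states, two actions for player I.
    R = np.array([1.0, 0.5])
    r = np.array([[0.2, 0.0], [0.1, 0.3]])
    P = np.array([[[0.9, 0.1], [0.5, 0.5]],
                  [[0.2, 0.8], [0.6, 0.4]]])
    v, a_star, stop_set = successive_approximations(R, r, P, alpha=0.9)
    print(v, a_star, stop_set)

Because the assumed operator is a contraction with modulus alpha, the iterates converge geometrically, which is what makes the extracted stationary strategies approximate Nash equilibria.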
References:
[1] Altman, E., Shwartz, A.: Constrained Markov Games: Nash Equilibria. In: Annals of Dynamic Games (V. Gaitsgory, J. Filar and K. Mizukami, eds.) 6 (2000), pp. 213-221, Birkhäuser, Boston. MR 1764491 | Zbl 0957.91014
[2] Atar, R., Budhiraja, A.: A stochastic differential game for the inhomogeneous infinity-Laplace equation. Ann. Probab. 38 (2010), 498-531. DOI 10.1214/09-AOP494 | MR 2642884
[3] Bielecki, T., Hernández-Hernández, D., Pliska, S. R.: Risk sensitive control of finite state Markov chains in discrete time, with applications to portfolio management. Math. Methods Oper. Res. 50 (1999), 167-188. DOI 10.1007/s001860050094 | MR 1732397 | Zbl 0959.91029
[4] Dynkin, E. B.: The optimum choice of the instant for stopping a Markov process. Soviet Math. Dokl. 4 (1963), 627-629.
[5] Kolokoltsov, V. N., Malafeyev, O. A.: Understanding Game Theory. World Scientific, Singapore 2010. MR 2666863 | Zbl 1189.91001
[6] Peskir, G.: On the American option problem. Math. Finance 15 (2005), 169-181. DOI 10.1111/j.0960-1627.2005.00214.x | MR 2116800 | Zbl 1109.91028
[7] Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Birkhäuser, Basel 2006. MR 2256030 | Zbl 1115.60001
[8] Puterman, M.: Markov Decision Processes. Wiley, New York 1994. MR 1270015 | Zbl 1184.90170
[9] Shiryaev, A.: Optimal Stopping Rules. Springer, New York 1978. MR 0468067 | Zbl 1138.60008
[10] Sladký, K.: Ramsey Growth model under uncertainty. In: Proc. 27th International Conference Mathematical Methods in Economics (H. Brozová, ed.), Kostelec nad Černými lesy 2009, pp. 296-300.
[11] Sladký, K.: Risk-sensitive Ramsey Growth model. In: Proc. of 28th International Conference on Mathematical Methods in Economics (M. Houda and J. Friebelová, eds.) České Budějovice 2010.
[12] Shapley, L. S.: Stochastic games. Proc. Nat. Acad. Sci. U.S.A. 39 (1953), 1095-1100. DOI 10.1073/pnas.39.10.1095 | MR 0061807 | Zbl 1180.91042
[13] Wal, J. van der: Discounted Markov games: Successive approximation and stopping times. Internat. J. Game Theory 6 (1977), 11-22. DOI 10.1007/BF01770870 | MR 0456797
[14] Wal, J. van der: Discounted Markov games: Generalized policy iteration method. J. Optim. Theory Appl. 25 (1978), 125-138. DOI 10.1007/BF00933260 | MR 0526244
[15] White, D. J.: Real applications of Markov decision processes. Interfaces 15 (1985), 73-83. DOI 10.1287/inte.15.6.73
[16] White, D. J.: Further real applications of Markov decision processes. Interfaces 18 (1988), 55-61. DOI 10.1287/inte.18.5.55
[17] Zachrisson, L. E.: Markov games. In: Advances in Game Theory (M. Dresher, L. S. Shapley and A. W. Tucker, eds.), Princeton Univ. Press, Princeton 1964, pp. 211-253. MR 0170729 | Zbl 0126.36507