Concave game
In game theory, a concave game is a generalization of the normal-form game, defined by J. B. Rosen in 1965. He extended the theorem on the existence of a Nash equilibrium, which John Nash originally proved for normal-form games, to concave games.
From normal-form games to concave games
We describe the generalization step by step. In a normal-form game, each player i can choose one of mi pure actions. The set of strategies available to each player is the set of lotteries over the pure actions, which is a simplex in Rmi.
- In a concave game, the set of strategies available to each player may be any convex set in Rmi.
- In a concave game, the set of strategy profiles may be any convex set in Rm. This means that the constraints on each player's choice are coupled - each player's available strategies may depend on what other players choose.
- In a concave game, each ui can be any continuous function that is concave in the strategy of player i.
- To state this property more explicitly, fix some strategies for all players except i; denote them by x1,...,xi-1,xi+1,...,xn, or by the shorthand x-i. Fix two possible strategies for player i; denote them by xi and yi, such that both (xi, x-i) and (yi, x-i) are in S. Choose some value t in [0,1]. Note that, since S is convex, every convex combination of xi and yi is feasible too; in particular, the profile ((1-t)·xi + t·yi, x-i) is in S. The concavity requirement is that ui((1-t)·xi + t·yi, x-i) ≥ (1-t)·ui(xi, x-i) + t·ui(yi, x-i).
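As a concrete illustration, the following sketch checks the concavity requirement numerically for a hypothetical two-player game; the quadratic utilities and the strategy sets [0,1] are invented for illustration, not taken from Rosen's paper.

```python
import numpy as np

# Hypothetical two-player game: each player i picks x_i in [0, 1];
# each utility is concave in the player's own strategy.
def u1(x1, x2):
    return -(x1 - 0.5) ** 2 + x1 * x2      # concave in x1 for any fixed x2

def u2(x1, x2):
    return -2 * (x2 - 0.5) ** 2 + x1 * x2  # concave in x2 for any fixed x1

# Sample the concavity requirement for player 1:
# u1((1-t)*x + t*y, x2) >= (1-t)*u1(x, x2) + t*u1(y, x2)
rng = np.random.default_rng(0)
for _ in range(1000):
    x, y, x2 = rng.uniform(0, 1, size=3)
    t = rng.uniform(0, 1)
    lhs = u1((1 - t) * x + t * y, x2)
    rhs = (1 - t) * u1(x, x2) + t * u1(y, x2)
    assert lhs >= rhs - 1e-12  # tolerance for floating-point error
print("concavity requirement holds at all sampled points")
```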
Existence of equilibrium
If the above conditions hold, then an equilibrium exists. The proof uses the Kakutani fixed-point theorem.

Uniqueness of equilibrium
Rosen also proved that, under certain technical conditions which include strict concavity, the equilibrium is unique. The conditions are explained below.

Assume that the space S of allowed strategy profiles is defined by k constraint functions, that is, S = { x in Rm : hj(x) ≥ 0 for all j = 1,...,k }. In a normal-form game, the constraint functions are linear functions defining the simplices of the players; in a concave game, each hj can be any concave function of x. For the uniqueness proof, it is also required that each hj has continuous first derivatives at every x in S.
Assume also that, for each player i, the utility function ui has continuous first derivatives with respect to the components of xi.
For any fixed positive vector r > 0 in Rn, define the r-weighted sum of the players' payoffs:

f(x, r) = r1·u1(x) + ... + rn·un(x)
Define the pseudogradient of f:

g(x, r) = ( r1·∇1u1(x), ..., rn·∇nun(x) )

where ∇iui is the gradient of ui with respect to the xi components. So each ri·∇iui(x) is a vector in Rmi, and g(x, r) is a vector in Rm.
We say that f is diagonally-strictly-concave at r if, for every two distinct x, y in S:

(y - x)T·g(x, r) + (x - y)T·g(y, r) > 0
- For orthogonal constraint sets - that is, when S is a Cartesian product of per-player strategy sets - if there exists some r>0 for which f is diagonally-strictly-concave at r, then the concave game has a unique equilibrium.
- For coupled constraint sets, if there exists a convex subset Q of Rm such that, for every r in Q, f is diagonally-strictly-concave at r, then for each r in Q, the game has a unique r-normalized equilibrium.
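To make the pseudogradient and the diagonal strict concavity condition concrete, here is a minimal numerical check for an invented quadratic game on [0,1]^2 (u1 = -(x1-1/2)^2 + x1·x2, u2 = -2(x2-1/2)^2 + x1·x2) with weights r = (1, 1); the game and weights are illustrative assumptions.

```python
import numpy as np

# Pseudogradient g(x, r) = (r1 * grad_{x1} u1, r2 * grad_{x2} u2)
# for the illustrative quadratic game described above.
def pseudogradient(x, r=(1.0, 1.0)):
    x1, x2 = x
    return np.array([r[0] * (-2 * x1 + 1 + x2),   # r1 * d u1 / d x1
                     r[1] * (x1 - 4 * x2 + 2)])   # r2 * d u2 / d x2

def dsc_gap(x, y, r=(1.0, 1.0)):
    # (y - x).g(x, r) + (x - y).g(y, r); positive for x != y means
    # f is diagonally strictly concave at r.
    return (y - x) @ pseudogradient(x, r) + (x - y) @ pseudogradient(y, r)

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.uniform(0, 1, size=(2, 2))
    assert dsc_gap(x, y) > 0 or np.allclose(x, y)
print("f is diagonally strictly concave at r = (1, 1) on sampled points")
```

For this game the Jacobian of g is constant, and its symmetrized form is negative definite, which is why the sampled gap is always positive.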
Convergence to equilibrium
Consider a dynamic model of a concave game in which each player changes his strategy such that the strategy profile remains in S and, subject to that, his utility increases. Each player i changes his strategy at rate ri in the direction of the gradient, i.e., the direction of maximum utility increase subject to the constraints. This yields a system of n differential equations. For any concave game and any starting point in S, this system has a continuous solution x(t) that remains in S for all t>0.

If, in addition, the matrix G(x, r) + G(x, r)T is negative definite for all x in S, where G(x, r) is the Jacobian of the pseudogradient g(x, r), then the solution x(t) converges to an equilibrium point as t approaches infinity; in the coupled-constraint case it converges to the unique r-normalized equilibrium point.
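The gradient dynamics can be sketched with a simple Euler discretization. The quadratic game below is an invented example whose unique equilibrium is interior, so clipping back into S rarely binds; both the game and the step size are assumptions for illustration.

```python
import numpy as np

# Euler discretization of the dynamics dx_i/dt = r_i * grad_i u_i for an
# illustrative quadratic game on S = [0,1]^2:
#   u1 = -(x1 - 1/2)^2 + x1*x2,   u2 = -2*(x2 - 1/2)^2 + x1*x2
def pseudogradient(x, r=(1.0, 1.0)):
    x1, x2 = x
    return np.array([r[0] * (-2 * x1 + 1 + x2),
                     r[1] * (x1 - 4 * x2 + 2)])

x = np.array([0.1, 0.9])      # starting point in S
dt = 0.01                     # assumed step size
for _ in range(10_000):
    x = np.clip(x + dt * pseudogradient(x), 0.0, 1.0)  # stay inside S

# The stationary point solves -2*x1 + 1 + x2 = 0 and x1 - 4*x2 + 2 = 0,
# giving x = (6/7, 5/7); the trajectory converges there because the
# symmetrized Jacobian of g is negative definite.
print(x)  # close to [0.857, 0.714]
```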
Correlated equilibrium
Takashi Ui shows that the same condition that guarantees uniqueness of a Nash equilibrium also guarantees uniqueness of a correlated equilibrium. Moreover, an even weaker condition guarantees the uniqueness of a correlated equilibrium - a generalization of a condition proved by Abraham Neyman.

Computation of equilibrium
Based on the above results, it is possible to compute equilibrium points of concave games using gradient methods for convex optimization.

Theory
Papadimitriou, Vlatakis-Gkaragkounis and Zampetakis prove that computing an equilibrium of a concave game is PPAD-complete. In fact, they prove that the problem is in PPAD even for general concave games, and that it is PPAD-hard even in the special case of strongly-concave utilities that can be expressed as multivariate polynomials of constant degree with axis-aligned box constraints.

Practice
Arrow and Hurwicz presented gradient methods for solving two-player zero-sum games with non-linear utilities. Krawczyk and Uryasev studied infinite games with nonlinear utilities and coupled constraints. Starting from an existing relaxation method, they improved it by adding steepest-descent step-size control and other refinements, and proved that the resulting algorithm converges to an equilibrium. They tested it numerically on several applications, such as pollution of a river basin, and showed that it converges quickly over a wide range of parameters.
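The relaxation iteration that such methods start from can be sketched as follows. This is the basic fixed-step version (without the steepest-descent step-size control described above), applied to an invented quadratic game whose best responses have a closed form; the game, the step alpha, and the iteration count are illustrative assumptions.

```python
import numpy as np

# Basic relaxation iteration x_{k+1} = (1 - a)*x_k + a*Z(x_k), where Z(x)
# maximizes the Nikaido-Isoda function over y, i.e. collects best responses.
# Illustrative quadratic game on [0,1]^2:
#   u1 = -(x1 - 1/2)^2 + x1*x2,   u2 = -2*(x2 - 1/2)^2 + x1*x2
def Z(x):
    x1, x2 = x
    br1 = np.clip(0.5 + x2 / 2.0, 0.0, 1.0)  # argmax over y1 of u1(y1, x2)
    br2 = np.clip(0.5 + x1 / 4.0, 0.0, 1.0)  # argmax over y2 of u2(x1, y2)
    return np.array([br1, br2])

x = np.array([0.0, 0.0])
alpha = 0.5   # assumed fixed relaxation step
for _ in range(200):
    x = (1 - alpha) * x + alpha * Z(x)

print(x)  # approaches the equilibrium (6/7, 5/7)
```

The fixed point of the iteration solves x1 = 1/2 + x2/2 and x2 = 1/2 + x1/4 simultaneously, i.e. it is the Nash equilibrium (6/7, 5/7) of this game.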
Krawczyk explains numerical methods converging to an equilibrium, focusing on the case of coupled constraints. He presents several application examples using a Matlab suite called NIRA.
Chernov presents two numerical search approaches for computing equilibrium points that have guaranteed convergence without additional requirements on the objective functions: one using the Hooke-Jeeves method for residual-function minimization, and one intermediate between the relaxation algorithm and the Hooke-Jeeves method of configurations. Convergence is proved for one-dimensional sets of player strategies. The approaches are tested in numerical experiments.
Variants
Flåm and Ruszczyński define a convex-concave game. This is a game in which the space S of strategy profiles is convex, as in Rosen's definition. But instead of requiring smoothness and other conditions on the pseudogradient g, they allow non-smooth data, and only require that the Nikaido-Isoda function Ψ(x, y) = Σi [ui(yi, x-i) - ui(x)] is convex in x and concave in y. For such convex-concave games, they present two algorithms for finding equilibria, both using partial regularizations and relaxed subgradient projections, and prove that these algorithms converge.