Fixed-point computation
Fixed-point computation refers to the process of computing an exact or approximate fixed point of a given function. In its most common form, the given function f satisfies the conditions of the Brouwer fixed-point theorem: that is, f is continuous and maps the unit d-cube [0,1]^d to itself. The Brouwer fixed-point theorem guarantees that f has a fixed point, but the proof is not constructive. Various algorithms have been devised for computing an approximate fixed point. Such algorithms are used in various tasks, such as
- Nash equilibrium computation,
- Market equilibrium computation,
- Dynamic system analysis.
Definitions
A fixed point of f is a point x in [0,1]^d such that f(x) = x. By the Brouwer fixed-point theorem, any continuous function from [0,1]^d to itself has a fixed point. But for general functions, it is impossible to compute a fixed point precisely, since it can be an arbitrary real number. Fixed-point computation algorithms therefore look for approximate fixed points. Several common criteria for an approximate fixed point are:
- The residual criterion: given an approximation parameter ε > 0, an ε-residual fixed point of f is a point x in [0,1]^d such that |f(x) − x| ≤ ε, where |·| denotes the maximum norm. That is, all coordinates of the difference f(x) − x should be at most ε in absolute value.
- The absolute criterion: given an approximation parameter δ > 0, a δ-absolute fixed point of f is a point x in [0,1]^d such that |x − x₀| ≤ δ, where x₀ is any fixed point of f.
- The relative criterion: given an approximation parameter δ > 0, a δ-relative fixed point of f is a point x in [0,1]^d such that |x − x₀| / |x₀| ≤ δ, where x₀ is any fixed point of f (a sketch for checking these criteria numerically appears below).
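The residual and absolute criteria can be checked directly. A minimal sketch in Python, assuming f is given as a NumPy-compatible callable and, for the absolute criterion, a fixed point x_star is known (all names are illustrative):

```python
import numpy as np

def is_residual_fixed_point(f, x, eps):
    """Residual criterion: every coordinate of f(x) - x is at most eps in absolute value."""
    return np.max(np.abs(f(x) - x)) <= eps

def is_absolute_fixed_point(x, x_star, delta):
    """Absolute criterion: x is within delta of a known fixed point x_star (max norm)."""
    return np.max(np.abs(x - x_star)) <= delta

# Example: f(x) = (x + 0.5) / 2 on [0, 1] has the unique fixed point 0.5.
f = lambda x: (x + 0.5) / 2
print(is_residual_fixed_point(f, np.array([0.49]), eps=0.01))                   # True
print(is_absolute_fixed_point(np.array([0.49]), np.array([0.5]), delta=0.02))   # True
```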
The most basic step of a fixed-point computation algorithm is a value query: given any x in [0,1]^d, the algorithm is provided with an oracle for f that returns the value f(x). The accuracy of the approximate fixed point depends upon the error in the oracle.
The function f is accessible only via such evaluation queries: for any x, the algorithm can evaluate f(x). The run-time complexity of an algorithm is usually given by the number of required evaluations.
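Since complexity is measured in evaluations, implementations typically wrap the oracle so that queries can be counted. A minimal sketch, assuming f is a Python callable (the wrapper name is illustrative):

```python
class CountingOracle:
    """Wrap a function f so that each value query f(x) is counted."""
    def __init__(self, f):
        self.f = f
        self.evaluations = 0

    def __call__(self, x):
        self.evaluations += 1
        return self.f(x)

# Usage: pass a CountingOracle to any evaluation-based algorithm,
# then read .evaluations to obtain its query count.
oracle = CountingOracle(lambda x: (x + 0.5) / 2)
_ = oracle(0.3)
print(oracle.evaluations)  # 1
```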
Contractive functions
A Lipschitz-continuous function with constant L is called contractive if L < 1; it is called weakly contractive if L ≤ 1. Every contractive function satisfying Brouwer's conditions has a unique fixed point. Moreover, fixed-point computation for contractive functions is easier than for general functions.

The first algorithm for fixed-point computation was the fixed-point iteration algorithm of Banach. Banach's fixed-point theorem implies that, when fixed-point iteration is applied to a contraction mapping, the error after t iterations is in O(L^t). Therefore, the number of evaluations required for a δ-relative fixed point is approximately log(1/δ)/log(1/L). Sikorski and Wozniakowski showed that Banach's algorithm is optimal when the dimension is large: specifically, when d ≥ log(1/δ)/log(1/L), the number of evaluations required by any algorithm for a δ-relative fixed point is more than 50% of the number required by the iteration algorithm. Note that as L approaches 1, the number of required evaluations approaches infinity; no finite algorithm can compute a δ-absolute fixed point for all functions with L = 1.
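A minimal sketch of Banach's fixed-point iteration in Python, assuming f is a contraction given as a NumPy-compatible callable (parameter names and tolerances are illustrative):

```python
import numpy as np

def fixed_point_iteration(f, x0, eps=1e-8, max_iter=10_000):
    """Iterate x <- f(x) until the residual |f(x) - x| (max norm) is at most eps.

    For a contraction with Lipschitz constant L < 1, the error shrinks like O(L^t),
    so roughly log(1/eps)/log(1/L) iterations suffice.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx - x)) <= eps:
            return fx
        x = fx
    return x

# Example: f(x) = 0.5*x + 0.25 is a contraction (L = 0.5) with fixed point 0.5.
print(fixed_point_iteration(lambda x: 0.5 * x + 0.25, np.array([0.0])))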
When L < 1 and d = 1, the optimal algorithm is the Fixed Point Envelope algorithm of Sikorski and Wozniakowski. It finds a δ-relative fixed point, and a δ-absolute fixed point, using fewer queries than the fixed-point iteration algorithm.
When d is larger than 1 but not too large, and L ≤ 1, the optimal algorithm is the interior-ellipsoid algorithm. It finds an ε-residual fixed point using O(d·log(1/ε)) evaluations. When L < 1, it finds a δ-absolute fixed point using O(d·(log(1/δ) + log(1/(1−L)))) evaluations.
Shellman and Sikorski presented an algorithm called BEFix for computing an ε-residual fixed point of a two-dimensional function with L ≤ 1, using only 2⌈log₂(1/ε)⌉ + 1 queries. They later presented an improvement called BEDFix, with the same worst-case guarantee but better empirical performance. When L < 1, BEDFix can also compute a δ-absolute fixed point, using O(log(1/δ) + log(1/(1−L))) queries.
Shellman and Sikorski presented an algorithm called PFix for computing an ε-residual fixed point of a d-dimensional function with L ≤ 1, using O(log^d(1/ε)) queries. When L < 1, PFix can be executed with ε = δ·(1−L), and in that case it computes a δ-absolute fixed point: an ε-residual point x then satisfies |x − x₀| ≤ ε/(1−L) = δ, where x₀ is the unique fixed point. It is more efficient than the iteration algorithm when L is close to 1. The algorithm is recursive: it handles a d-dimensional function by recursive calls on (d−1)-dimensional functions.
Algorithms for differentiable functions
When the function f is differentiable and the algorithm can evaluate its derivative, the Newton method can be used, and it is much faster.
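A minimal sketch of Newton's method for the fixed-point equation, solving g(x) = f(x) − x = 0; it assumes the caller can supply the Jacobian of f, and all names are illustrative:

```python
import numpy as np

def newton_fixed_point(f, jac_f, x0, eps=1e-12, max_iter=100):
    """Find x with f(x) = x by Newton's method applied to g(x) = f(x) - x.

    jac_f(x) must return the Jacobian matrix of f at x. Convergence is quadratic
    near the fixed point, assuming the Jacobian of g there is invertible.
    """
    x = np.asarray(x0, dtype=float)
    identity = np.eye(x.size)
    for _ in range(max_iter):
        g = f(x) - x
        if np.max(np.abs(g)) <= eps:
            return x
        # Newton step: solve (J_f(x) - I) * step = -g
        step = np.linalg.solve(jac_f(x) - identity, -g)
        x = x + step
    return x

# Example: f(x) = cos(x) in one dimension; its fixed point is about 0.739085.
f = lambda x: np.cos(x)
jac_f = lambda x: np.array([[-np.sin(x[0])]])
print(newton_fixed_point(f, jac_f, np.array([0.5])))
```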
General functions: one dimension

For functions with Lipschitz constant L > 1, computing a fixed point is much harder.

For a 1-dimensional function (d = 1), a δ-absolute fixed point can be found using O(log(1/δ)) queries using the bisection method: start with the interval [0,1]; at each iteration, let x be the center of the current interval, and compute f(x); if f(x) > x then recurse on the sub-interval to the right of x; otherwise, recurse on the sub-interval to the left of x. The current interval always contains a fixed point, so after O(log(1/δ)) queries, any point in the remaining interval is a δ-absolute fixed point of f. Setting δ := ε/(L+1), where L is the Lipschitz constant, gives an ε-residual fixed point, using O(log(L/ε)) queries.
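A minimal sketch of this bisection method, assuming f maps [0,1] into itself (names are illustrative):

```python
def bisection_fixed_point(f, delta):
    """Find a delta-absolute fixed point of f: [0,1] -> [0,1] by bisection.

    Invariant: g(x) = f(x) - x satisfies g(lo) >= 0 and g(hi) <= 0,
    so the interval [lo, hi] always contains a fixed point.
    """
    lo, hi = 0.0, 1.0   # g(0) >= 0 and g(1) <= 0 because f maps [0,1] into itself
    while hi - lo > delta:
        mid = (lo + hi) / 2
        if f(mid) > mid:
            lo = mid    # a fixed point lies to the right of mid
        else:
            hi = mid    # a fixed point lies to the left of mid (or at mid)
    return (lo + hi) / 2

# Example: f(x) = cos(x) restricted to [0,1]; fixed point near 0.739085.
import math
print(bisection_fixed_point(math.cos, delta=1e-6))
```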
General functions: two or more dimensions
For functions in two or more dimensions, the problem is much more challenging. Shellman and Sikorski proved that for any integers d ≥ 2 and L > 1, finding a δ-absolute fixed point of d-dimensional L-Lipschitz functions might require infinitely many evaluations. The proof idea is as follows. For any integer T > 1 and any sequence of T evaluation queries, one can construct two functions that are Lipschitz-continuous with constant L and yield the same answers to all these queries, but whose unique fixed points are more than 2δ apart. Any algorithm using T evaluations cannot distinguish between these functions, so it cannot find a δ-absolute fixed point. This is true for any finite integer T.

Several algorithms based on function evaluations have been developed for finding an ε-residual fixed point.
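For reference, an ε-residual fixed point can always be found by exhaustive search over a sufficiently fine grid, using roughly (L/ε)^d evaluations, matching the exponential upper bound discussed under query complexity below. The following sketch is only this naive baseline, not one of the algorithms described in this section; it assumes f is L-Lipschitz in the maximum norm, and all names are illustrative:

```python
import itertools
import numpy as np

def brute_force_residual_fixed_point(f, d, L, eps):
    """Scan a grid of spacing 2*eps/(L+1) over [0,1]^d.

    The grid point nearest to a true fixed point x0 has residual at most
    L*h/2 + h/2 = eps, so some grid point is an eps-residual fixed point.
    Cost: about ((L+1)/(2*eps))^d evaluations.
    """
    h = 2 * eps / (L + 1)
    ticks = np.arange(0.0, 1.0 + h, h)
    for z in itertools.product(ticks, repeat=d):
        z = np.array(z)
        if np.max(np.abs(f(z) - z)) <= eps:
            return z
    return None  # cannot happen for an L-Lipschitz f mapping [0,1]^d to itself

# Example: a 2-dimensional map with fixed point (0.5, 0.25) and Lipschitz constant below 1.
f = lambda z: np.array([0.5 * z[1] + 0.375, 0.5 * z[0]])
print(brute_force_residual_fixed_point(f, d=2, L=1.0, eps=0.01))
```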
Simplicial method
The first algorithm to approximate a fixed point of a general function was developed by Herbert Scarf in 1967. Scarf's algorithm finds an ε-residual fixed point by finding a fully labeled "primitive set", in a construction similar to Sperner's lemma. A later algorithm by Harold Kuhn used simplices and simplicial partitions instead of primitive sets.
Developing the simplicial approach further, Orin Harrison Merrill presented the restart algorithm.
Homotopy method
B. Curtis Eaves presented the homotopy method, based on the concept of homotopy. Given a function f, for which we want to find a fixed point, the algorithm works by starting with an affine function that approximates f, and deforming it towards f while following the fixed point.
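A minimal sketch of the homotopy idea, not Eaves' actual construction: it assumes f is weakly contractive (L ≤ 1), uses the straight-line homotopy H(x, t) = (1 − t)·x_start + t·f(x), and follows the fixed point with plain inner iterations (all names and the step schedule are illustrative):

```python
import numpy as np

def homotopy_fixed_point(f, x_start, steps=20, inner_eps=1e-10):
    """Follow the fixed point of H(x, t) = (1 - t) * x_start + t * f(x) as t goes from 0 to 1.

    At t = 0 the map is constant, with obvious fixed point x_start; at t = 1 it equals f.
    If f has Lipschitz constant L <= 1, then H(., t) is a contraction for t < 1, so each
    intermediate fixed point can be found by plain iteration, warm-started from the previous one.
    """
    x0 = np.asarray(x_start, dtype=float)
    x = x0.copy()
    for t in np.linspace(0.0, 1.0, steps + 1):
        for _ in range(1000):                      # inner fixed-point iterations
            hx = (1 - t) * x0 + t * f(x)
            if np.max(np.abs(hx - x)) <= inner_eps:
                break
            x = hx
    return x

# Example: f(x) = cos(x) on [0, 1]; the path ends near the fixed point 0.739085.
print(homotopy_fixed_point(lambda x: np.cos(x), x_start=np.array([0.0])))
```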
The homotopy method has been used for market equilibrium computation.
The method is further explained in a book by Michael Todd, which surveys various algorithms developed until 1976.
Other algorithms
- David Gale showed that computing a fixed point of a d-dimensional function is equivalent to deciding the winner in a d-dimensional game of Hex. Given the desired accuracy ε, the reduction works as follows (a labeling sketch follows this list):
- * Construct a Hex board of size k^d, where k depends on the desired accuracy ε. Each vertex z corresponds to the point z/k in the unit d-cube.
- * Compute the difference f(z/k) − z/k; note that the difference is a d-vector.
- * Label the vertex z by a label in 1,..., d, denoting the largest coordinate in the difference vector.
- * The resulting labeling corresponds to a possible play of the d-dimensional Hex game among d players. This game must have a winner, and Gale presents an algorithm for constructing the winning path.
- * Along the winning path, there must be a vertex z at which the i-th coordinate of f(z/k) − z/k is positive, and an adjacent vertex at which it is negative. This means that there is an approximate fixed point of f between these two vertices.
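A sketch of the labeling step of this reduction (the construction of the Hex board and the game analysis are omitted); the names are illustrative, and f is assumed to map the unit d-cube to itself:

```python
import itertools
import numpy as np

def hex_labels(f, d, k):
    """Label every grid vertex z in {0, ..., k}^d by the index (1..d) of the largest
    coordinate of the displacement f(z/k) - z/k, as in the reduction above."""
    labels = {}
    for z in itertools.product(range(k + 1), repeat=d):
        point = np.array(z) / k
        displacement = f(point) - point
        labels[z] = int(np.argmax(displacement)) + 1   # labels are 1, ..., d
    return labels

# Example: a 2-dimensional map; each vertex gets label 1 or 2 depending on which
# coordinate of its displacement vector is largest.
f = lambda p: np.array([0.5 * p[1] + 0.375, 0.5 * p[0]])
print(hex_labels(f, d=2, k=2))
```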
Query complexity
Hirsch, Papadimitriou and Vavasis proved that any algorithm based on function evaluations that finds an ε-residual fixed point of f requires Ω(L′/ε) function evaluations, where L′ is the Lipschitz constant of the function f(x) − x. More precisely:
- For a 2-dimensional function (d = 2), they prove a tight bound Θ(L′/ε).
- For any d ≥ 3, finding an ε-residual fixed point of a d-dimensional function requires Ω((L′/ε)^(d−2)) queries and can be done using O((L′/ε)^d) queries.
Discrete fixed-point computation
A discrete function is a function defined on a subset of ℤ^d (the d-dimensional integer grid). There are several discrete fixed-point theorems, stating conditions under which a discrete function has a fixed point. For example, the Iimura-Murota-Tamura theorem states that if f is a function from a rectangle subset of ℤ^d to itself, and f is hypercubic direction-preserving, then f has a fixed point.

Let f be a direction-preserving function from the integer cube {1, ..., n}^d to itself. Chen and Deng prove that, for any d ≥ 2 and n > 48d, computing such a fixed point requires Ω(n^(d−1)) function evaluations.
Chen and Deng define a different discrete fixed-point problem, which they call 2D-BROUWER. It considers a discrete function f on {0, ..., n}^2 such that, for every x on the grid, f(x) − x is either (0, 1) or (1, 0) or (−1, −1). The goal is to find a square in the grid in which all three labels occur. The function f must map the square {0, ..., n}^2 to itself, so it must map the lines x = 0 and y = 0 to either (0, 1) or (1, 0); the line x = n to either (0, 1) or (−1, −1); and the line y = n to either (1, 0) or (−1, −1). The problem can be reduced to 2D-SPERNER, and therefore it is PPAD-complete. This implies that computing an approximate fixed point is PPAD-complete even for very simple functions.
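The object sought in 2D-BROUWER can be illustrated with a small sketch that scans for a unit grid square whose corners carry all three displacement labels; the brute-force scan is only illustrative, since the hardness result concerns functions represented succinctly, where such scanning is infeasible:

```python
import itertools

LABELS = {(0, 1), (1, 0), (-1, -1)}

def find_trichromatic_square(displacement, n):
    """displacement[(x, y)] holds f(x, y) - (x, y), one of (0,1), (1,0) or (-1,-1).

    Return the lower-left corner of a unit grid square whose corners carry
    all three labels, or None if there is no such square."""
    for x, y in itertools.product(range(n), repeat=2):
        corners = [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]
        if {displacement[c] for c in corners} == LABELS:
            return (x, y)
    return None

# A minimal 2x2 grid (n = 1) whose single square contains all three labels:
disp = {(0, 0): (1, 0), (1, 0): (0, 1), (0, 1): (1, 0), (1, 1): (-1, -1)}
print(find_trichromatic_square(disp, n=1))   # (0, 0)
```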