Center-of-gravity method
The center-of-gravity method is a theoretical algorithm for convex optimization. It can be seen as a generalization of the bisection method from one-dimensional functions to multi-dimensional functions. It is theoretically important as it attains the optimal convergence rate. However, it has little practical value, as each step requires computing the center of gravity of a convex body, which is very computationally expensive.
Input
Our goal is to solve a convex optimization problem of the form:

minimize f(x) subject to x in G,

where f is a convex function, and G is a convex subset of the Euclidean space R^n.
We assume that we have a "subgradient oracle": a routine that can compute a subgradient of f at any given point.
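As an illustration (not part of the original description), a subgradient oracle for the convex function f(x) = |x − 3| might look like the following Python sketch; the choice of function is hypothetical, and at the non-differentiable point x = 3 any value in [−1, 1] is a valid subgradient:

```python
def subgradient_oracle(x):
    """Return a subgradient of the example function f(x) = |x - 3| at x.

    f is convex but not differentiable at x = 3; there, any value in
    [-1, 1] is a valid subgradient, and this sketch returns 0.
    """
    if x > 3:
        return 1.0
    elif x < 3:
        return -1.0
    return 0.0
```

Note that the oracle only needs to return one valid subgradient, not the full subdifferential.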
Method
The method is iterative. At each iteration t, we keep a convex region G_t, which surely contains the desired minimum. Initially we have G_0 = G. Then, each iteration t proceeds as follows.
- Let x_t be the center of gravity of G_t.
- Compute a subgradient of f at x_t, denoted f'(x_t).
  - By definition of a subgradient, the graph of f lies above the hyperplane defined by the subgradient, so for all x in G_t: f(x) − f(x_t) ≥ (x − x_t)^T f'(x_t).
- If f'(x_t) = 0, then the above inequality implies that x_t is an exact minimum point, so we terminate and return x_t.
- Otherwise, let G_{t+1} := {x in G_t : (x − x_t)^T f'(x_t) ≤ 0}. Every point discarded in this step satisfies (x − x_t)^T f'(x_t) > 0 and hence f(x) > f(x_t), so the minimum of f over G is never discarded.
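In one dimension, the center of gravity of an interval is its midpoint, so the iteration above reduces to the bisection method mentioned in the introduction. A minimal Python sketch under that assumption (function names and the example objective are illustrative):

```python
def cog_minimize_1d(subgrad, a, b, tol=1e-10, max_iter=200):
    """Center-of-gravity method on G = [a, b]; in 1D it is bisection.

    subgrad(x) must return a subgradient of the convex objective at x.
    """
    for _ in range(max_iter):
        x = (a + b) / 2.0          # center of gravity of [a, b]
        g = subgrad(x)             # subgradient oracle call
        if g == 0.0:               # exact minimum found
            return x
        # Keep only the half where (y - x) * g <= 0 holds.
        if g > 0.0:
            b = x
        else:
            a = x
        if b - a < tol:
            break
    return (a + b) / 2.0

# Example: minimize f(x) = (x - 3)**2 on [0, 10]; f'(x) = 2*(x - 3).
x_min = cog_minimize_1d(lambda x: 2.0 * (x - 3.0), 0.0, 10.0)
```

Each iteration halves the interval, which matches the volume-shrinkage behavior described in the convergence analysis below for n = 1.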
Convergence
It can be proved that

Vol(G_{t+1}) ≤ [1 − (n/(n+1))^n] · Vol(G_t).

Therefore,

f(x_t) − min_{x in G} f(x) ≤ [1 − (n/(n+1))^n]^{t/n} · [max_{x in G} f(x) − min_{x in G} f(x)].

In other words, the method has linear convergence of the residual objective value, with convergence rate [1 − (n/(n+1))^n]^{1/n} ≤ (1 − 1/e)^{1/n}. To get an ε-approximation to the objective value, the number of required steps is O(n log(1/ε)).
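The rate can be checked numerically. The sketch below (assuming the volume-shrinkage bound stated above; function names are illustrative) computes the per-step shrink factor and an upper bound on the number of steps for a target accuracy eps, relative to the initial objective range:

```python
import math

def shrink_factor(n):
    """Per-step volume shrink factor c(n) = 1 - (n/(n+1))**n.

    c(1) = 0.5 (bisection halves the interval); as the dimension n
    grows, c(n) decreases toward 1 - 1/e ≈ 0.632.
    """
    return 1.0 - (n / (n + 1)) ** n

def steps_needed(n, eps):
    """Smallest t with c(n)**(t/n) <= eps, i.e. the residual bound
    from the convergence analysis falls below eps."""
    c = shrink_factor(n)
    return math.ceil(n * math.log(1 / eps) / math.log(1 / c))
```

For n = 1 and eps = 1e-6 this gives 20 steps, consistent with bisection needing about log2(1/eps) iterations; for larger n the step count grows linearly in n, as the O(n log(1/ε)) bound predicts.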