Luus–Jaakola
In computational engineering, Luus–Jaakola (LJ) denotes a heuristic for global optimization of a real-valued function. In engineering use, LJ is not an algorithm that terminates with an optimal solution, nor is it an iterative method that generates a sequence of points converging to an optimal solution. However, when applied to a twice continuously differentiable function, the LJ heuristic is a proper iterative method that generates a sequence with a convergent subsequence; for this class of problems, Newton's method is recommended and enjoys a quadratic rate of convergence, while no convergence-rate analysis has been given for the LJ heuristic. In practice, the LJ heuristic has been recommended for functions that need be neither convex nor differentiable nor locally Lipschitz: the LJ heuristic does not use a gradient or subgradient even when one is available, which allows its application to non-differentiable and non-convex problems.
Proposed by Luus and Jaakola, LJ generates a sequence of iterates. The next iterate is selected at random from a neighborhood of the current position, using a uniform distribution. With each iteration, the neighborhood shrinks, which forces a subsequence of iterates to converge to a cluster point.
Luus has applied LJ in optimal control, transformer design, metallurgical processes, and chemical engineering.
Motivation
At each step, the LJ heuristic maintains a box from which it samples points randomly, using a uniform distribution on the box. For a unimodal function, the probability of reducing the objective function decreases as the box approaches a minimum.
Heuristic
Let f: R^n → R be the fitness or cost function which must be minimized, and let x ∈ R^n designate a position or candidate solution in the search-space. The LJ heuristic iterates the following steps (a code sketch follows the list):
- Initialize x ~ U(blo, bup) with a random uniform position in the search-space, where blo and bup are the lower and upper boundaries, respectively.
- Set the initial sampling range to cover the entire search-space: d = bup − blo
- Until a termination criterion is met, repeat the following:
- * Pick a random vector a ~ U(−d, d)
- * Add this to the current position x to create the new potential position y = x + a
- * If f(y) < f(x), then move to the new position by setting x = y; otherwise decrease the sampling-range: d = 0.95 d
- Now x holds the best-found position.
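A minimal sketch of these steps in Python is given below, assuming NumPy. The function name luus_jaakola, the fixed iteration budget max_iters, and the rng argument are illustrative choices; the original description leaves the termination criterion open.

```python
import numpy as np

def luus_jaakola(f, b_lo, b_up, max_iters=10_000, contraction=0.95, rng=None):
    """Minimize f over the box [b_lo, b_up] with the basic LJ heuristic.

    f          : callable mapping a NumPy vector to a float (the cost function)
    b_lo, b_up : array-like lower and upper boundaries of the search-space
    Returns the best-found position x and its cost f(x).
    """
    rng = np.random.default_rng() if rng is None else rng
    b_lo, b_up = np.asarray(b_lo, float), np.asarray(b_up, float)

    # Initialize x ~ U(b_lo, b_up); the sampling range d spans the whole box.
    x = rng.uniform(b_lo, b_up)
    fx = f(x)
    d = b_up - b_lo

    for _ in range(max_iters):
        # Pick a ~ U(-d, d) and form the trial point y = x + a.
        a = rng.uniform(-d, d)
        y = x + a
        fy = f(y)
        if fy < fx:
            x, fx = y, fy        # accept the improving move
        else:
            d = contraction * d  # otherwise shrink the sampling range
    return x, fx
```

For example, luus_jaakola(lambda v: np.sum(v**2), [-5.0, -5.0], [5.0, 5.0]) searches the box [−5, 5]² for the minimum of the sphere function.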
Variations
- Procedure for generating random trial points.
- Number of internal loops.
- Number of cycles.
- Contraction coefficient of the search region size.
- * Whether the region reduction rate is the same for all variables or a different rate for each variable.
- * Whether the region reduction rate is a constant or follows another distribution.
- Whether to incorporate a line search.
- Whether to treat constraint satisfaction at the random trial points as an acceptance criterion, or to incorporate violations through a quadratic penalty (two of these variations are sketched below).
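As an illustration, the following sketch combines two of the listed variations: a per-variable contraction rate and a quadratic penalty for constraint violations. The name lj_penalized, the penalty weight mu, the rates vector, and the constraint form g_i(x) ≤ 0 are assumptions made for this example, not part of the original formulation.

```python
import numpy as np

def lj_penalized(f, g_list, b_lo, b_up, rates, mu=100.0, max_iters=10_000, rng=None):
    """LJ variant with a per-variable contraction rate and a quadratic
    penalty for inequality constraints of the assumed form g(x) <= 0."""
    rng = np.random.default_rng() if rng is None else rng
    b_lo, b_up = np.asarray(b_lo, float), np.asarray(b_up, float)
    rates = np.asarray(rates, float)  # one contraction rate per variable

    def penalized(x):
        # Quadratic penalty: add mu * sum(max(0, g_i(x))^2) to the cost.
        viol = np.array([max(0.0, g(x)) for g in g_list])
        return f(x) + mu * np.sum(viol ** 2)

    x = rng.uniform(b_lo, b_up)
    fx = penalized(x)
    d = b_up - b_lo
    for _ in range(max_iters):
        y = x + rng.uniform(-d, d)
        fy = penalized(y)
        if fy < fx:
            x, fx = y, fy
        else:
            d = rates * d  # elementwise shrink: each variable has its own rate
    return x, fx
```

For instance, minimizing f(v) = v₁² + v₂² subject to 1 − v₁ ≤ 0 could be written as lj_penalized(lambda v: np.sum(v**2), [lambda v: 1 - v[0]], [-5, -5], [5, 5], rates=[0.95, 0.90]).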
Convergence
According to the analysis of Yudin and Nemirovsky, the worst-case complexity of minimization on the class of unimodal functions grows exponentially in the dimension of the problem. The Yudin–Nemirovsky analysis implies that no method can be fast on high-dimensional problems that lack convexity:
"The catastrophic growth as shows that it is meaningless to pose the question of constructing universal methods of solving... problems of any appreciable dimensionality 'generally'. It is interesting to note that the same holds for... problems generated by uni-extremal functions."
When applied to twice continuously differentiable problems, the LJ heuristic's rate of convergence decreases as the number of dimensions increases.