Luus–Jaakola Explained

In computational engineering, Luus–Jaakola (LJ) denotes a heuristic for global optimization of a real-valued function. In engineering use, LJ is not an algorithm that terminates with an optimal solution, nor an iterative method that generates a sequence of points converging to an optimal solution (when one exists). However, when applied to a twice continuously differentiable function, the LJ heuristic is a proper iterative method that generates a sequence with a convergent subsequence; for this class of problems, Newton's method is recommended and enjoys a quadratic rate of convergence, whereas no convergence-rate analysis has been given for the LJ heuristic. In practice, the LJ heuristic has been recommended for functions that need be neither convex nor differentiable nor locally Lipschitz: it does not use a gradient or subgradient even when one is available, which allows its application to non-differentiable and non-convex problems.

Proposed by Luus and Jaakola, LJ generates a sequence of iterates. The next iterate is selected from a sample drawn, under a uniform distribution, from a neighborhood of the current position. With each iteration, the neighborhood shrinks, which forces a subsequence of the iterates to converge to a cluster point.

Luus has applied LJ in optimal control, transformer design, metallurgical processes, and chemical engineering.

Motivation

At each step, the LJ heuristic maintains a box from which it samples points randomly, using a uniform distribution on the box. For a unimodal function, the probability of reducing the objective function decreases as the box approaches a minimum.
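To make this concrete, here is a small Monte Carlo sketch in Python (an illustration of ours, not from the original paper): for the unimodal function f(t) = t², it estimates the probability that a uniform sample from a fixed-width box around the current point improves on it. The function and parameter names are illustrative. The estimated probability stays near 1/2 while the current point is far from the minimum and falls toward zero as it closes in.

    import random

    def improvement_probability(x, half_width, trials=100_000):
        """Estimate P(f(y) < f(x)) for f(t) = t**2, where y is drawn
        uniformly from the box [x - half_width, x + half_width]."""
        f = lambda t: t * t
        hits = sum(1 for _ in range(trials)
                   if f(x + random.uniform(-half_width, half_width)) < f(x))
        return hits / trials

    # With a fixed box half-width of 1, the chance of improving drops
    # once the box straddles the minimum at 0:
    for x in (2.0, 0.5, 0.2, 0.05):
        print(f"x = {x:4.2f}  ->  P(improve) ~ {improvement_probability(x, 1.0):.3f}")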

Heuristic

Let

f\colon \mathbb{R}^n \to \mathbb{R}

be the fitness or cost function which must be minimized. Let

\mathbf{x} \in \mathbb{R}^n

designate a position or candidate solution in the search-space. The LJ heuristic iterates the following steps (a Python sketch follows the list):

  1. Initialize x at a uniformly random position in the search-space, and set the sampling range d to the full width of the search-space.
  2. Until a termination criterion is met (for example, a fixed budget of iterations), repeat:
     - Pick a uniformly random vector a from the box [-d, d].
     - Form the candidate y = x + a.
     - If f(y) < f(x), move to the better position: x = y.
     - Otherwise, shrink the sampling range: d = q d, where 0 < q < 1 is a contraction coefficient (a typical value is 0.95).
  3. Return x as the best position found.
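The following Python sketch implements these steps under the assumptions above; the function name, parameter names, and the 0.95 contraction coefficient are illustrative choices, not a reference implementation by Luus and Jaakola.

    import random

    def luus_jaakola(f, lower, upper, iterations=10_000, contraction=0.95, seed=None):
        """Minimize f over the box [lower, upper] (per-coordinate bounds) with
        the Luus-Jaakola heuristic: sample uniformly around the current point
        and shrink the sampling range whenever a sample fails to improve."""
        rng = random.Random(seed)
        n = len(lower)
        # Start at a uniformly random point; the sampling range starts
        # as the full width of the search-space.
        x = [rng.uniform(lower[i], upper[i]) for i in range(n)]
        d = [upper[i] - lower[i] for i in range(n)]
        fx = f(x)
        for _ in range(iterations):
            # Candidate: a uniform random step within [-d, d] per coordinate.
            y = [x[i] + rng.uniform(-d[i], d[i]) for i in range(n)]
            fy = f(y)
            if fy < fx:
                x, fx = y, fy                       # accept the improvement
            else:
                d = [contraction * di for di in d]  # shrink the neighborhood
        return x, fx

    # Example: minimize the Rosenbrock function on [-5, 5]^2.
    rosenbrock = lambda v: (1 - v[0])**2 + 100 * (v[1] - v[0]**2)**2
    best, value = luus_jaakola(rosenbrock, [-5.0, -5.0], [5.0, 5.0], seed=1)
    print(best, value)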

Variations

Luus notes that the ARS (Adaptive Random Search) algorithms proposed to date differ in many respects, for example in how the random trial points are generated, how many points are sampled per cycle, how quickly the search region contracts, and whether the contraction rate is the same for every variable.[1]

Convergence

Nair gave a convergence analysis: for twice continuously differentiable functions, the LJ heuristic generates a sequence of iterates having a convergent subsequence. For this class of problems, Newton's method is the usual optimization method, and it has quadratic convergence (regardless of the dimension of the space, which can be a Banach space, according to Kantorovich's analysis).
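Quadratic convergence can be seen in a one-dimensional sketch (our example, not from the cited analyses): minimizing g(x) = exp(x) − 2x, whose minimizer is ln 2, each Newton step roughly squares the error, so the number of correct digits roughly doubles per iteration.

    import math

    # Newton's method for minimizing g(x) = exp(x) - 2x, a twice continuously
    # differentiable function with minimizer x* = ln 2. The update is
    # x <- x - g'(x)/g''(x); the error is roughly squared at each step.
    x = 2.0                                      # starting guess
    for step in range(1, 6):
        x -= (math.exp(x) - 2.0) / math.exp(x)   # g'(x) / g''(x)
        print(step, x, abs(x - math.log(2.0)))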

However, the worst-case complexity of minimization on the class of unimodal functions grows exponentially in the dimension of the problem, according to the analysis of Nemirovsky and Yudin. Their analysis implies that no method can be fast on high-dimensional problems that lack convexity:

"The catastrophic growth [in the number of iterations needed to reach an approximate solution of a given accuracy] as [the number of dimensions increases to infinity] shows that it is meaningless to pose the question of constructing universal methods of solving ... problems of any appreciable dimensionality 'generally'. It is interesting to note that the same [conclusion] holds for ... problems generated by uni-extremal [that is, unimodal] (but not convex) functions."[2]
When applied to twice continuously differentiable problems, the LJ heuristic's rate of convergence decreases as the number of dimensions increases.
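The dimension effect can be illustrated with a small experiment (ours, not part of the cited analysis): for the sphere function, the probability that a uniform random step from a fixed point improves the objective shrinks as the dimension grows, so ever more samples are wasted per accepted move.

    import random

    def step_improves(n, trials=100_000, radius=1.0):
        """Estimate P(f(x + a) < f(x)) for the sphere function f(v) = sum(v_i^2),
        with x the all-ones point in R^n and a uniform on [-radius, radius]^n."""
        f = lambda v: sum(t * t for t in v)
        x = [1.0] * n
        fx = f(x)
        hits = 0
        for _ in range(trials):
            y = [xi + random.uniform(-radius, radius) for xi in x]
            if f(y) < fx:
                hits += 1
        return hits / trials

    # The chance that a random step improves the objective falls with dimension:
    for n in (1, 2, 5, 10, 20):
        print(f"n = {n:2d}  ->  P(improve) ~ {step_improves(n):.3f}")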

Notes and References

  1. Luus, Rein (2010). "Formulation and Illustration of Luus–Jaakola Optimization Procedure". In Rangaiah, Gade Pandu (ed.). Stochastic Global Optimization: Techniques and Applications in Chemical Engineering. World Scientific. pp. 17–56. ISBN 978-9814299206.
  2. Nemirovsky, A. S.; Yudin, D. B. (1983). Problem Complexity and Method Efficiency in Optimization. Wiley-Interscience Series in Discrete Mathematics. Translated by E. R. Dawson from the 1979 Russian original (Moscow: Nauka). New York: John Wiley & Sons. p. 7. ISBN 0-471-10345-4. MR 702836. Page 7 summarizes the later discussion.