Wolfe conditions explained

In the unconstrained minimization problem, the Wolfe conditions are a set of inequalities for performing inexact line search, especially in quasi-Newton methods, first published by Philip Wolfe in 1969.[1] [2]

These conditions concern methods for minimizing a smooth objective function $f\colon\mathbb{R}^n\to\mathbb{R}$. Each step often involves approximately solving the subproblem

$$\min_{\alpha} f(\mathbf{x}_k + \alpha \mathbf{p}_k),$$

where $\mathbf{x}_k$ is the current best guess, $\mathbf{p}_k \in \mathbb{R}^n$ is a search direction, and $\alpha \in \mathbb{R}$ is the step length.

An inexact line search provides an efficient way of computing an acceptable step length $\alpha$ that reduces the objective function 'sufficiently', rather than minimizing the objective function over $\alpha \in \mathbb{R}^+$ exactly. A line search algorithm can use the Wolfe conditions as a requirement for any guessed $\alpha$, before finding a new search direction $\mathbf{p}_k$.
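
As a concrete illustration of this subproblem, the following minimal Python sketch (the quadratic objective and the names f, grad_f, make_phi are chosen here for illustration only, not taken from the cited sources) restricts a multivariate function to a single search direction, yielding the univariate function a line search works with:

    import numpy as np

    def f(x):
        # Illustrative objective: a simple convex quadratic.
        return 0.5 * x @ x

    def grad_f(x):
        # Gradient of the quadratic above.
        return x

    def make_phi(x_k, p_k):
        # Restrict f to the ray x_k + alpha * p_k, giving the univariate
        # function phi(alpha) that the line search tries to decrease.
        def phi(alpha):
            return f(x_k + alpha * p_k)
        def dphi(alpha):
            # Chain rule: phi'(alpha) = p_k^T grad f(x_k + alpha * p_k).
            return p_k @ grad_f(x_k + alpha * p_k)
        return phi, dphi

    x_k = np.array([1.0, -2.0])
    p_k = -grad_f(x_k)          # steepest-descent direction
    phi, dphi = make_phi(x_k, p_k)
    print(phi(0.0), dphi(0.0))  # dphi(0) < 0, so p_k is a descent direction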

Armijo rule and curvature

A step length $\alpha_k$ is said to satisfy the Wolfe conditions, restricted to the direction $\mathbf{p}_k$, if the following two inequalities hold:

i) $f(\mathbf{x}_k + \alpha_k \mathbf{p}_k) \le f(\mathbf{x}_k) + c_1 \alpha_k\, \mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k)$,

ii) $\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k + \alpha_k \mathbf{p}_k) \ge c_2\, \mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k)$,

with $0 < c_1 < c_2 < 1$.

(In examining condition (ii), recall that to ensure that $\mathbf{p}_k$ is a descent direction, we have $\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k) < 0$, as in the case of gradient descent, where $\mathbf{p}_k = -\nabla f(\mathbf{x}_k)$, or Newton–Raphson, where $\mathbf{p}_k = -H^{-1} \nabla f(\mathbf{x}_k)$ with $H$ positive definite.)

$c_1$ is usually chosen to be quite small while $c_2$ is much larger; Nocedal and Wright give example values of $c_1 = 10^{-4}$ and $c_2 = 0.9$ for Newton or quasi-Newton methods and $c_2 = 0.1$ for the nonlinear conjugate gradient method.[3] Inequality i) is known as the Armijo rule[4] and ii) as the curvature condition; i) ensures that the step length $\alpha_k$ decreases $f$ 'sufficiently', and ii) ensures that the slope has been reduced sufficiently. Conditions i) and ii) can be interpreted as respectively providing an upper and lower bound on the admissible step length values.
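
The following Python sketch (an illustrative check only; the name satisfies_wolfe and the default parameters, taken from the example values above, are ours) tests whether a candidate step length satisfies both inequalities:

    def satisfies_wolfe(f, grad_f, x_k, p_k, alpha, c1=1e-4, c2=0.9):
        # Condition i), the Armijo (sufficient decrease) rule:
        #   f(x_k + alpha p_k) <= f(x_k) + c1 * alpha * p_k^T grad f(x_k)
        # Condition ii), the curvature condition:
        #   p_k^T grad f(x_k + alpha p_k) >= c2 * p_k^T grad f(x_k)
        g0 = p_k @ grad_f(x_k)        # phi'(0); negative for a descent direction
        x_new = x_k + alpha * p_k
        armijo = f(x_new) <= f(x_k) + c1 * alpha * g0
        curvature = p_k @ grad_f(x_new) >= c2 * g0
        return armijo and curvature

A bracketing or backtracking line search would typically try candidate values of $\alpha$ until such a check succeeds.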

Strong Wolfe condition on curvature

Denote a univariate function $\varphi$ restricted to the direction $\mathbf{p}_k$ as $\varphi(\alpha) = f(\mathbf{x}_k + \alpha \mathbf{p}_k)$. The Wolfe conditions can result in a value for the step length that is not close to a minimizer of $\varphi$. If we modify the curvature condition to the following,

iii) $\big|\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k + \alpha_k \mathbf{p}_k)\big| \le c_2 \big|\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k)\big|$,

then i) and iii) together form the so-called strong Wolfe conditions, and force $\alpha_k$ to lie close to a critical point of $\varphi$.
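
Mirroring the earlier sketch, a check for the strong conditions would only change the curvature test to use absolute values (again, the name and defaults here are illustrative):

    def satisfies_strong_wolfe(f, grad_f, x_k, p_k, alpha, c1=1e-4, c2=0.9):
        # Same Armijo test as before, but the curvature condition iii)
        # bounds the magnitude of the new directional derivative.
        g0 = p_k @ grad_f(x_k)
        x_new = x_k + alpha * p_k
        armijo = f(x_new) <= f(x_k) + c1 * alpha * g0
        strong_curvature = abs(p_k @ grad_f(x_new)) <= c2 * abs(g0)
        return armijo and strong_curvature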

Rationale

The principal reason for imposing the Wolfe conditions in an optimization algorithm where $\mathbf{x}_{k+1} = \mathbf{x}_k + \alpha_k \mathbf{p}_k$ is to ensure convergence of the gradient to zero. In particular, if the cosine of the angle between $\mathbf{p}_k$ and the negative gradient,

$$\cos \theta_k = \frac{-\nabla f(\mathbf{x}_k)^{\mathrm{T}} \mathbf{p}_k}{\|\nabla f(\mathbf{x}_k)\|\,\|\mathbf{p}_k\|},$$

is bounded away from zero and conditions i) and ii) hold, then $\nabla f(\mathbf{x}_k) \to 0$.

An additional motivation, in the case of a quasi-Newton method, is that if $\mathbf{p}_k = -B_k^{-1} \nabla f(\mathbf{x}_k)$, where the matrix $B_k$ is updated by the BFGS or DFP formula, then if $B_k$ is positive definite, condition ii) implies that $B_{k+1}$ is also positive definite.
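
To see why, here is a short sketch of the standard argument (not spelled out in the sources cited here): writing $\mathbf{s}_k = \mathbf{x}_{k+1} - \mathbf{x}_k = \alpha_k \mathbf{p}_k$ and $\mathbf{y}_k = \nabla f(\mathbf{x}_{k+1}) - \nabla f(\mathbf{x}_k)$, condition ii) gives

$$\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k = \alpha_k \left( \mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_{k+1}) - \mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k) \right) \ge \alpha_k (c_2 - 1)\, \mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k) > 0,$$

since $0 < c_2 < 1$ and $\mathbf{p}_k^{\mathrm{T}} \nabla f(\mathbf{x}_k) < 0$. The inequality $\mathbf{y}_k^{\mathrm{T}} \mathbf{s}_k > 0$ is precisely the curvature condition under which the BFGS and DFP updates preserve positive definiteness.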

Comments

Wolfe's conditions are more complicated than Armijo's condition, and a gradient descent algorithm based on Armijo's condition has a better theoretical guarantee than one based on Wolfe conditions (see the sections on "Upper bound for learning rates" and "Theoretical guarantee" in the Backtracking line search article).

Notes and References

  1. Wolfe, P. (1969). "Convergence Conditions for Ascent Methods". SIAM Review. 11 (2): 226–235. doi:10.1137/1011036. JSTOR 2028111.
  2. Wolfe, P. (1971). "Convergence Conditions for Ascent Methods. II: Some Corrections". SIAM Review. 13 (2): 185–188. doi:10.1137/1013035. JSTOR 2028821.
  3. Nocedal, Jorge; Wright, Stephen (1999). Numerical Optimization. Springer. p. 38.
  4. Armijo, Larry (1966). "Minimization of functions having Lipschitz continuous first partial derivatives". Pacific J. Math. 16 (1): 1–3. doi:10.2140/pjm.1966.16.1.