Broyden–Fletcher–Goldfarb–Shanno algorithm explained

In numerical optimization, the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm is an iterative method for solving unconstrained nonlinear optimization problems. Like the related Davidon–Fletcher–Powell method, BFGS determines the descent direction by preconditioning the gradient with curvature information. It does so by gradually improving an approximation to the Hessian matrix of the loss function, obtained only from gradient evaluations (or approximate gradient evaluations) via a generalized secant method.

Since the updates of the BFGS curvature matrix do not require matrix inversion, its computational complexity is only \mathcal{O}(n^2), compared to \mathcal{O}(n^3) in Newton's method. Also in common use is L-BFGS, which is a limited-memory version of BFGS that is particularly suited to problems with very large numbers of variables (e.g., >1000). The BFGS-B variant handles simple box constraints.
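As an illustration of how these variants are typically called in practice, the following sketch uses SciPy's optimizer interface (one of the implementations cited below); the Rosenbrock test function, starting point, and bounds are assumptions chosen only for the example:

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])     # standard starting point for the Rosenbrock test function

# Plain BFGS: builds a dense approximation of the (inverse) Hessian.
res_bfgs = minimize(rosen, x0, jac=rosen_der, method="BFGS")

# L-BFGS-B: the limited-memory variant, which also accepts simple box constraints.
res_lbfgsb = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B",
                      bounds=[(-2.0, 2.0), (-2.0, 2.0)])

print(res_bfgs.x, res_lbfgsb.x)    # both approach the minimizer (1, 1)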

The algorithm is named after Charles George Broyden, Roger Fletcher, Donald Goldfarb and David Shanno.

Rationale

The optimization problem is to minimize f(x), where x is a vector in \mathbb{R}^n, and f is a differentiable scalar function. There are no constraints on the values that x can take.

The algorithm begins at an initial estimate x_0 for the optimal value and proceeds iteratively to get a better estimate at each stage.

The search direction p_k at stage k is given by the solution of the analogue of the Newton equation:

B_k p_k = -\nabla f(x_k),

where B_k is an approximation to the Hessian matrix at x_k, which is updated iteratively at each stage, and \nabla f(x_k) is the gradient of the function evaluated at x_k. A line search in the direction p_k is then used to find the next point x_{k+1} by minimizing f(x_k + \gamma p_k) over the scalar \gamma > 0.
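As a concrete illustration of a single iteration, the following sketch computes the search direction from B_k p_k = -\nabla f(x_k) and then minimizes over the step length \gamma; the quadratic objective, A, and b are assumptions made only for the example:

import numpy as np
from scipy.optimize import minimize_scalar

# Toy quadratic f(x) = 0.5 x^T A x - b^T x, so that grad f(x) = A x - b (example data).
A = np.array([[3.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_k = np.zeros(2)
B_k = np.eye(2)                              # current approximation to the Hessian

p_k = np.linalg.solve(B_k, -grad(x_k))       # search direction from B_k p_k = -grad f(x_k)

# Line search: minimize f(x_k + gamma * p_k) over the scalar gamma > 0.
phi = lambda gamma: f(x_k + gamma * p_k)
gamma_k = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded").x

x_next = x_k + gamma_k * p_k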

The quasi-Newton condition imposed on the update of B_k is

B_{k+1}(x_{k+1} - x_k) = \nabla f(x_{k+1}) - \nabla f(x_k).

Let y_k = \nabla f(x_{k+1}) - \nabla f(x_k) and s_k = x_{k+1} - x_k; then B_{k+1} satisfies

B_{k+1} s_k = y_k,

which is the secant equation.

The curvature condition s_k^\top y_k > 0 should be satisfied for B_{k+1} to be positive definite, which can be verified by pre-multiplying the secant equation with s_k^\top. If the function is not strongly convex, then the condition has to be enforced explicitly, e.g. by finding a point x_{k+1} satisfying the Wolfe conditions, which entail the curvature condition, using line search.
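To make this concrete, the sketch below runs SciPy's Wolfe line search on the Rosenbrock test function (the function and starting point are assumptions for the example) and checks that the accepted step satisfies the curvature condition:

import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

x_k = np.array([-1.2, 1.0])
p_k = -rosen_der(x_k)                  # a descent direction, here steepest descent

# line_search returns a step length satisfying the (strong) Wolfe conditions.
alpha = line_search(rosen, rosen_der, x_k, p_k)[0]
assert alpha is not None, "line search failed to find a Wolfe step"

x_next = x_k + alpha * p_k
s_k = x_next - x_k
y_k = rosen_der(x_next) - rosen_der(x_k)

# The Wolfe curvature condition implies s_k^T y_k > 0, so the BFGS update stays well defined.
assert s_k @ y_k > 0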

Instead of requiring the full Hessian matrix at the point x_{k+1} to be computed as B_{k+1}, the approximate Hessian at stage k is updated by the addition of two matrices:

B_{k+1} = B_k + U_k + V_k.

Both U_k and V_k are symmetric rank-one matrices, but their sum is a rank-two update matrix. Both the BFGS and DFP updating matrices differ from their predecessor by a rank-two matrix. A simpler rank-one method is the symmetric rank-one (SR1) method, which does not guarantee positive definiteness. In order to maintain the symmetry and positive definiteness of B_{k+1}, the update form can be chosen as

B_{k+1} = B_k + \alpha u u^\top + \beta v v^\top.

Imposing the secant condition B_{k+1} s_k = y_k and choosing u = y_k and v = B_k s_k, we obtain:

\alpha = \frac{1}{y_k^\top s_k}, \qquad \beta = -\frac{1}{s_k^\top B_k s_k}.

Finally, we substitute \alpha and \beta into B_{k+1} = B_k + \alpha u u^\top + \beta v v^\top and get the update equation of B_{k+1}:

B_{k+1} = B_k + \frac{y_k y_k^\top}{y_k^\top s_k} - \frac{B_k s_k s_k^\top B_k^\top}{s_k^\top B_k s_k}.
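As a quick numerical sanity check of this update (on randomly generated data, an assumption made for the example), the following snippet verifies that B_{k+1} remains symmetric, satisfies the secant equation, and stays positive definite whenever s_k^\top y_k > 0:

import numpy as np

rng = np.random.default_rng(0)
n = 5

B = np.eye(n)                            # current symmetric positive-definite B_k
s = rng.standard_normal(n)               # step s_k
y = s + 0.1 * rng.standard_normal(n)     # chosen so that the curvature s^T y is positive
assert s @ y > 0

Bs = B @ s
B_next = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)

assert np.allclose(B_next, B_next.T)            # symmetry is preserved
assert np.allclose(B_next @ s, y)               # secant equation B_{k+1} s_k = y_k holds
assert np.all(np.linalg.eigvalsh(B_next) > 0)   # positive definiteness is preserved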

Algorithm

Consider the following unconstrained optimization problem:

\min_{x \in \mathbb{R}^n} f(x),

where f : \mathbb{R}^n \to \mathbb{R} is a nonlinear objective function.

From an initial guess x_0 \in \mathbb{R}^n and an initial guess of the Hessian matrix B_0 \in \mathbb{R}^{n \times n}, the following steps are repeated as x_k converges to the solution:
  1. Obtain a direction p_k by solving B_k p_k = -\nabla f(x_k).
  2. Perform a one-dimensional optimization (line search) to find an acceptable stepsize \alpha_k in the direction found in the first step. If an exact line search is performed, then \alpha_k = \arg\min_\alpha f(x_k + \alpha p_k). In practice, an inexact line search usually suffices, with an acceptable \alpha_k satisfying the Wolfe conditions.
  3. Set s_k = \alpha_k p_k and update x_{k+1} = x_k + s_k.
  4. y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
  5. B_{k+1} = B_k + \frac{y_k y_k^\top}{y_k^\top s_k} - \frac{B_k s_k s_k^\top B_k^\top}{s_k^\top B_k s_k}.

Convergence can be determined by observing the norm of the gradient; given some \epsilon > 0, one may stop the algorithm when

\|\nabla f(x_k)\| \leq \epsilon.

If B_0 is initialized with B_0 = I, the first step will be equivalent to a gradient descent, but further steps are more and more refined by B_k, the approximation to the Hessian.
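Putting these steps together, a minimal sketch of this form of the algorithm, assuming a simple backtracking Armijo line search rather than a full Wolfe line search, might look like this:

import numpy as np

def bfgs_direct(f, grad, x0, tol=1e-6, max_iter=200):
    """Sketch of BFGS that maintains the Hessian approximation B_k itself."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)                         # B_0 = I, so the first step is gradient descent
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:           # stop when the gradient norm is small enough
            break
        p = np.linalg.solve(B, -g)             # step 1: solve B_k p_k = -grad f(x_k)
        # Step 2 (simplified): backtracking line search enforcing only an Armijo condition.
        alpha, fx = 1.0, f(x)
        while f(x + alpha * p) > fx + 1e-4 * alpha * (g @ p) and alpha > 1e-12:
            alpha *= 0.5
        s = alpha * p                          # step 3
        x_new = x + s
        y = grad(x_new) - g                    # step 4
        if s @ y > 1e-10:                      # step 5, applied only if the curvature is positive
            Bs = B @ s
            B = B + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
        x = x_new
    return x

Because Armijo backtracking alone does not guarantee the curvature condition s_k^\top y_k > 0, the update is skipped when that condition fails; a Wolfe line search, as in the statement of the algorithm, removes the need for this guard.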

The first step of the algorithm is carried out using the inverse of the matrix B_k, which can be obtained efficiently by applying the Sherman–Morrison formula to step 5 of the algorithm, giving

B_{k+1}^{-1} = \left(I - \frac{s_k y_k^\top}{y_k^\top s_k}\right) B_k^{-1} \left(I - \frac{y_k s_k^\top}{y_k^\top s_k}\right) + \frac{s_k s_k^\top}{y_k^\top s_k}.

This can be computed efficiently without temporary matrices, recognizing that B_k^{-1} is symmetric, and that y_k^\top B_k^{-1} y_k and s_k^\top y_k are scalars, using an expansion such as

B_{k+1}^{-1} = B_k^{-1} + \frac{(s_k^\top y_k + y_k^\top B_k^{-1} y_k)(s_k s_k^\top)}{(s_k^\top y_k)^2} - \frac{B_k^{-1} y_k s_k^\top + s_k y_k^\top B_k^{-1}}{s_k^\top y_k}.
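The following short check, on randomly generated data assumed for the example, confirms numerically that this expansion agrees both with the product form above and with inverting the direct update of B_{k+1}:

import numpy as np

rng = np.random.default_rng(1)
n = 4

M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)        # random symmetric positive-definite B_k
H = np.linalg.inv(B)               # its inverse H_k = B_k^{-1}

s = rng.standard_normal(n)
y = s + 0.1 * rng.standard_normal(n)   # ensures s^T y > 0
sy = s @ y
Hy = H @ y
I = np.eye(n)

# Product (Sherman–Morrison) form.
H_prod = (I - np.outer(s, y) / sy) @ H @ (I - np.outer(y, s) / sy) + np.outer(s, s) / sy

# Expanded form used in the text.
H_exp = (H + ((sy + y @ Hy) * np.outer(s, s)) / sy**2
         - (np.outer(Hy, s) + np.outer(s, Hy)) / sy)

# Inverse of the direct Hessian update.
Bs = B @ s
B_next = B + np.outer(y, y) / sy - np.outer(Bs, Bs) / (s @ Bs)

assert np.allclose(H_prod, H_exp)
assert np.allclose(H_exp, np.linalg.inv(B_next))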

Therefore, in order to avoid any matrix inversion, the inverse of the Hessian can be approximated instead of the Hessian itself: H_k \overset{\operatorname{def}}{=} B_k^{-1}.

From an initial guess x_0 and an approximate inverse Hessian matrix H_0, the following steps are repeated as x_k converges to the solution:

  1. Obtain a direction p_k by solving p_k = -H_k \nabla f(x_k).
  2. Perform a one-dimensional optimization (line search) to find an acceptable stepsize \alpha_k in the direction found in the first step. If an exact line search is performed, then \alpha_k = \arg\min_\alpha f(x_k + \alpha p_k). In practice, an inexact line search usually suffices, with an acceptable \alpha_k satisfying the Wolfe conditions.
  3. Set s_k = \alpha_k p_k and update x_{k+1} = x_k + s_k.
  4. y_k = \nabla f(x_{k+1}) - \nabla f(x_k).
  5. H_{k+1} = H_k + \frac{(s_k^\top y_k + y_k^\top H_k y_k)(s_k s_k^\top)}{(s_k^\top y_k)^2} - \frac{H_k y_k s_k^\top + s_k y_k^\top H_k}{s_k^\top y_k}.
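A compact sketch of this inverse-Hessian form, which is how BFGS is usually implemented in practice, is given below; the Wolfe line search from scipy.optimize is used as an assumed convenience and could be replaced by any routine enforcing the Wolfe conditions:

import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def bfgs_inverse(f, grad, x0, tol=1e-6, max_iter=200):
    """Sketch of BFGS that maintains the inverse Hessian approximation H_k."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                             # H_0 = I
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        p = -H @ g                                 # step 1: p_k = -H_k grad f(x_k)
        alpha = line_search(f, grad, x, p, gfk=g)[0]   # step 2: Wolfe line search
        if alpha is None:
            alpha = 1e-3                           # crude fallback if the line search fails
        s = alpha * p                              # step 3
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                              # step 4
        sy = s @ y
        if sy > 1e-10:                             # step 5: rank-two update of H_k
            Hy = H @ y
            H = (H + ((sy + y @ Hy) * np.outer(s, s)) / sy**2
                 - (np.outer(Hy, s) + np.outer(s, Hy)) / sy)
        x, g = x_new, g_new
    return x

# Example usage on the Rosenbrock test function; the iterates approach the minimizer (1, 1).
print(bfgs_inverse(rosen, rosen_der, np.array([-1.2, 1.0])))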

In statistical estimation problems (such as maximum likelihood or Bayesian inference), credible intervals or confidence intervals for the solution can be estimated from the inverse of the final Hessian matrix. However, these quantities are technically defined by the true Hessian matrix, and the BFGS approximation may not converge to the true Hessian matrix.[1]
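For instance, SciPy's BFGS returns its final inverse-Hessian approximation as res.hess_inv, which is sometimes used in exactly this way, subject to the caveat above; the Gaussian model and synthetic data below are assumptions made for the illustration:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=1.5, size=500)     # synthetic sample for the example

def neg_log_likelihood(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                       # parametrize sigma > 0 through its logarithm
    return 0.5 * np.sum(((data - mu) / sigma) ** 2) + data.size * log_sigma

res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), method="BFGS")

# res.hess_inv is the final BFGS approximation of the inverse Hessian; its diagonal is often
# used for approximate standard errors, but it need not equal the inverse of the true Hessian.
approx_se = np.sqrt(np.diag(res.hess_inv))
print(res.x, approx_se)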

Further developments

The BFGS update formula heavily relies on the curvature s_k^\top y_k being strictly positive and bounded away from zero. This condition is satisfied when we perform a line search with Wolfe conditions on a convex target. However, some real-life applications (like Sequential Quadratic Programming methods) routinely produce negative or nearly-zero curvatures. This can occur when optimizing a nonconvex target or when employing a trust-region approach instead of a line search. It is also possible to produce spurious values due to noise in the target.

In such cases, one of the so-called damped BFGS updates can be used, which modify s_k and/or y_k in order to obtain a more robust update.
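One common choice is Powell's damping, which replaces y_k by an interpolation between y_k and B_k s_k whenever the curvature is too small; the sketch below follows the usual textbook form of that rule (the 0.2 threshold is the conventional choice), under the assumption that the Hessian approximation B_k is available:

import numpy as np

def powell_damped_y(B, s, y, threshold=0.2):
    """Return a damped replacement for y_k that keeps the BFGS update positive definite.

    If s^T y is at least `threshold` times s^T B s, y is returned unchanged; otherwise
    y is blended with B s just enough to restore sufficiently positive curvature.
    """
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= threshold * sBs:
        return y
    theta = (1.0 - threshold) * sBs / (sBs - sy)    # interpolation weight in (0, 1)
    return theta * y + (1.0 - theta) * Bs

The damped vector r_k = \theta_k y_k + (1 - \theta_k) B_k s_k then replaces y_k in the ordinary BFGS update, which guarantees s_k^\top r_k = 0.2\, s_k^\top B_k s_k > 0.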

Notable implementations

Notable open source implementations are:
  1. The GNU Scientific Library, which implements BFGS in its multidimensional minimization module.[2]
  2. R's general-purpose optim function, which supports the BFGS method (and L-BFGS-B for box constraints).[3]
  3. SciPy, which provides BFGS through scipy.optimize, for example scipy.optimize.fmin_bfgs and scipy.optimize.minimize with method="BFGS".[4]
  4. Julia's Optim.jl package, which offers BFGS and L-BFGS as solver options.[5]

Notable proprietary implementations include:

See also

Notes and References

  1. Ge, Ren-pu; Powell, M. J. D. (1983). "The Convergence of Variable Metric Matrices in Unconstrained Optimization". Mathematical Programming. 27 (2): 123. doi:10.1007/BF02591941. S2CID 8113073.
  2. "GNU Scientific Library — GSL 2.6 documentation". www.gnu.org. Retrieved 2020-11-22.
  3. "R: General-purpose Optimization". stat.ethz.ch. Retrieved 2020-11-22.
  4. "scipy.optimize.fmin_bfgs — SciPy v1.5.4 Reference Guide". docs.scipy.org. Retrieved 2020-11-22.
  5. "Optim.jl Configurable options". julianlsolvers.