In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept, a lower bound for a minimization problem can be found via the weak duality principle.[1]
For a minimization problem with inequality constraints,
\begin{align} &\underset{x}{\operatorname{minimize}} && f(x)\\ &\operatorname{subject\ to} && g_i(x)\leq 0,\quad i=1,\ldots,m \end{align}
the Lagrangian dual problem is
\begin{align} &\underset{u}{\operatorname{maximize}} && \inf_x\left(f(x)+\sum_{j=1}^m u_j g_j(x)\right)\\ &\operatorname{subject\ to} && u_i\geq 0,\quad i=1,\ldots,m \end{align}
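As a small numerical sketch (this example problem is not from the article), weak duality says that the Lagrange dual function never exceeds the primal optimal value. For the hypothetical problem of minimizing $f(x)=x^2$ subject to $g(x)=1-x\leq 0$, the Lagrangian $L(x,u)=x^2+u(1-x)$ is minimized over $x$ at $x=u/2$, and every dual value stays below the primal optimum $f(1)=1$:

```python
# Weak duality check for a hypothetical example (not taken from the article):
# minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0  (primal optimum x* = 1, f* = 1).

def dual(u):
    """Lagrange dual function q(u) = inf_x [x^2 + u*(1 - x)].

    For fixed u >= 0 the Lagrangian is convex in x and minimized at x = u/2.
    """
    x = u / 2.0
    return x**2 + u * (1 - x)

f_star = 1.0  # primal optimal value

# Every dual value is a lower bound on the primal optimum (weak duality).
samples = [0.0, 0.5, 1.0, 2.0, 5.0]
assert all(dual(u) <= f_star + 1e-12 for u in samples)

print(max(dual(u) for u in samples))  # best sampled bound: 1.0, attained at u = 2
```

Here the best dual value actually equals the primal optimum, since this convex example satisfies strong duality; in general, weak duality only guarantees a lower bound.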
where the objective function is the Lagrange dual function. Provided that the functions $f$ and $g_1,\ldots,g_m$ are convex and continuously differentiable, the infimum occurs where the gradient is equal to zero. The problem
\begin{align} &\underset{x,u}{\operatorname{maximize}} && f(x)+\sum_{j=1}^m u_j g_j(x)\\ &\operatorname{subject\ to} && \nabla f(x)+\sum_{j=1}^m u_j\nabla g_j(x)=0\\ &&& u_i\geq 0,\quad i=1,\ldots,m \end{align}
is called the Wolfe dual problem.[2] This problem employs the KKT conditions as a constraint. Also, the equality constraint
$\nabla f(x)+\sum_{j=1}^m u_j\nabla g_j(x)=0$ is nonlinear in general, so the Wolfe dual problem is typically a nonconvex optimization problem. In any case, weak duality holds.
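To make the construction concrete, here is a worked Wolfe dual for a small illustrative problem (this example is an assumption, not part of the original article): minimize $f(x)=x^2$ subject to $g_1(x)=1-x\leq 0$.

```latex
% Worked example (illustration only): f(x) = x^2, g_1(x) = 1 - x.
% The Wolfe dual problem is
\begin{align}
&\underset{x,u}{\operatorname{maximize}} && x^2 + u(1-x)\\
&\operatorname{subject\ to} && 2x - u = 0,\\
&&& u \geq 0.
\end{align}
% Eliminating u = 2x turns the objective into 2x - x^2, which is maximized
% at x = 1, giving u = 2 and dual optimal value 1 = f(1), the primal optimum.
```

The stationarity constraint $2x-u=0$ is exactly the gradient condition $\nabla f(x)+u_1\nabla g_1(x)=0$ for this problem, illustrating how the KKT conditions enter the Wolfe dual as constraints.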