Karush–Kuhn–Tucker conditions
In mathematical optimization, the Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–Tucker conditions, are first derivative tests (sometimes called first-order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.
Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers, which allows only equality constraints. Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a global maximum or minimum over the domain of the choice variables and a global minimum (maximum) over the multipliers. The Karush–Kuhn–Tucker theorem is sometimes referred to as the saddle-point theorem.[1]
The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951.[2] Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.[3][4]
Nonlinear optimization problem
Consider the following nonlinear optimization problem in standard form:

minimize f(x)
subject to g_i(x) \le 0, \quad i = 1, \ldots, m,
           h_j(x) = 0, \quad j = 1, \ldots, \ell,

where x ∈ X is the optimization variable chosen from a convex subset X of ℝ^n, f is the objective or utility function, g_i (i = 1, …, m) are the inequality constraint functions and h_j (j = 1, …, ℓ) are the equality constraint functions. The numbers of inequalities and equalities are denoted by m and ℓ respectively. Corresponding to the constrained optimization problem one can form the Lagrangian function

L(x, \mu, \lambda) = f(x) + \mu^\top g(x) + \lambda^\top h(x) = f(x) + \sum_{i=1}^{m} \mu_i g_i(x) + \sum_{j=1}^{\ell} \lambda_j h_j(x),

where g(x) = (g_1(x), \ldots, g_m(x))^\top, h(x) = (h_1(x), \ldots, h_\ell(x))^\top, \mu = (\mu_1, \ldots, \mu_m)^\top and \lambda = (\lambda_1, \ldots, \lambda_\ell)^\top.
The Karush–Kuhn–Tucker theorem then states the following. If (x^*, \mu^*) is a saddle point of L(x, \mu) in x ∈ X, \mu \ge 0, then x^* is an optimal vector for the above optimization problem. Suppose that f and the g_i, i = 1, …, m, are convex in x and that there exists x_0 ∈ X such that g(x_0) < 0. Then with an optimal vector x^* for the above optimization problem there is associated a non-negative vector \mu^* such that L(x^*, \mu^*) is a saddle point of L(x, \mu).
Since the idea of this approach is to find a supporting hyperplane on the feasible set

\Gamma = \{ x \in X : g_i(x) \le 0, \; i = 1, \ldots, m \},

the proof of the Karush–Kuhn–Tucker theorem makes use of the hyperplane separation theorem.[5]

The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically. In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.[6]
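For instance, a general-purpose solver can be asked for a point at which the KKT system holds approximately. The sketch below is a toy instance with illustrative data (the objective, constraint, and use of SciPy's SLSQP method are assumptions, not part of the theory): it solves a small problem and then recovers the multiplier from the stationarity equation.

```python
# A minimal sketch (assuming SciPy is available): minimize
#   f(x, y) = (x - 1)^2 + (y - 2)^2  subject to  g(x, y) = x + y - 2 <= 0.
import numpy as np
from scipy.optimize import minimize

f = lambda z: (z[0] - 1.0) ** 2 + (z[1] - 2.0) ** 2
grad_f = lambda z: np.array([2.0 * (z[0] - 1.0), 2.0 * (z[1] - 2.0)])
g = lambda z: z[0] + z[1] - 2.0          # inequality constraint g(z) <= 0
grad_g = np.array([1.0, 1.0])            # constant gradient of g

# SciPy's convention is fun(z) >= 0, so pass -g for our g(z) <= 0.
res = minimize(f, x0=[0.0, 0.0], method="SLSQP",
               constraints=[{"type": "ineq", "fun": lambda z: -g(z)}])
z_star = res.x                           # expected optimum: (0.5, 1.5)

# Estimate the multiplier mu from stationarity: grad f + mu * grad g = 0.
mu = np.linalg.lstsq(grad_g.reshape(-1, 1), -grad_f(z_star), rcond=None)[0][0]
print(z_star, mu)                        # mu should be ~1 and non-negative
print(np.allclose(grad_f(z_star) + mu * grad_g, 0.0, atol=1e-6))
```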
Necessary conditions
Suppose that the objective function f : ℝ^n → ℝ and the constraint functions g_i : ℝ^n → ℝ and h_j : ℝ^n → ℝ have subderivatives at a point x^* ∈ ℝ^n. If x^* is a local optimum and the optimization problem satisfies some regularity conditions (see below), then there exist constants \mu_i (i = 1, …, m) and \lambda_j (j = 1, …, ℓ), called KKT multipliers, such that the following four groups of conditions hold:[7]
- Stationarity
  For minimizing f(x): \partial f(x^*) + \sum_{j=1}^{\ell} \lambda_j \partial h_j(x^*) + \sum_{i=1}^{m} \mu_i \partial g_i(x^*) \ni 0
  For maximizing f(x): -\partial f(x^*) + \sum_{j=1}^{\ell} \lambda_j \partial h_j(x^*) + \sum_{i=1}^{m} \mu_i \partial g_i(x^*) \ni 0
- Primal feasibility
  g_i(x^*) \le 0, for i = 1, \ldots, m
  h_j(x^*) = 0, for j = 1, \ldots, \ell
- Dual feasibility
  \mu_i \ge 0, for i = 1, \ldots, m
- Complementary slackness
  \sum_{i=1}^{m} \mu_i g_i(x^*) = 0

The last condition is sometimes written in the equivalent form:

\mu_i g_i(x^*) = 0, \quad \text{for } i = 1, \ldots, m.
In the particular case m = 0, i.e., when there are no inequality constraints, the KKT conditions turn into the Lagrange conditions, and the KKT multipliers are called Lagrange multipliers.
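To make the four groups concrete, the following sketch evaluates each of them at a candidate point and candidate multipliers in the smooth case, where the subdifferentials reduce to ordinary gradients. The problem data and tolerances are illustrative assumptions.

```python
# A minimal sketch: check the four KKT condition groups at a candidate point
# for  minimize f  s.t.  g_i(x) <= 0, h_j(x) = 0  (smooth minimization case).
import numpy as np

def kkt_check(grad_f, gs, grad_gs, hs, grad_hs, x, mu, lam, tol=1e-8):
    g_vals = np.array([g(x) for g in gs]) if gs else np.zeros(0)
    h_vals = np.array([h(x) for h in hs]) if hs else np.zeros(0)
    # Stationarity: grad f + sum_j lam_j grad h_j + sum_i mu_i grad g_i = 0
    r = np.array(grad_f(x), dtype=float)
    for lam_j, gh in zip(lam, grad_hs):
        r = r + lam_j * np.asarray(gh(x), dtype=float)
    for mu_i, gg in zip(mu, grad_gs):
        r = r + mu_i * np.asarray(gg(x), dtype=float)
    return {
        "stationarity": bool(np.all(np.abs(r) <= tol)),
        "primal feasibility": bool(np.all(g_vals <= tol)
                                   and np.all(np.abs(h_vals) <= tol)),
        "dual feasibility": bool(np.all(np.asarray(mu) >= -tol)),
        "complementary slackness": abs(float(np.dot(mu, g_vals))) <= tol,
    }

# Same toy problem as above: f = (x-1)^2 + (y-2)^2, g = x + y - 2 <= 0.
checks = kkt_check(
    grad_f=lambda z: np.array([2 * (z[0] - 1), 2 * (z[1] - 2)]),
    gs=[lambda z: z[0] + z[1] - 2], grad_gs=[lambda z: np.array([1.0, 1.0])],
    hs=[], grad_hs=[],
    x=np.array([0.5, 1.5]), mu=[1.0], lam=[],
)
print(checks)  # all four groups should report True
```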
Interpretation: KKT conditions as balancing constraint-forces in state space

The primal problem can be interpreted as moving a particle in the space of x, and subjecting it to three kinds of force fields:
- f is a potential field that the particle is minimizing. The force generated by f is -\partial f.
- g_i are one-sided constraint surfaces. The particle is allowed to move inside the region g_i(x) \le 0, but whenever it touches the boundary g_i(x) = 0, it is pushed inwards.
- h_j are two-sided constraint surfaces. The particle is allowed to move only on the surface h_j(x) = 0.

Primal stationarity states that the "force" of \partial f(x^*) is exactly balanced by a linear sum of the forces \partial h_j(x^*) and \partial g_i(x^*). Dual feasibility additionally states that all the \partial g_i(x^*) forces must be one-sided, pointing inwards into the feasible set for x. Complementary slackness states that if g_i(x^*) < 0, then the force coming from \partial g_i(x^*) must be zero, i.e., \mu_i = 0: since the particle is not on the boundary, the one-sided constraint force cannot activate.
Matrix representation

The necessary conditions can be written with Jacobian matrices of the constraint functions. Let g : ℝ^n → ℝ^m be defined as g(x) = (g_1(x), \ldots, g_m(x))^\top and let h : ℝ^n → ℝ^ℓ be defined as h(x) = (h_1(x), \ldots, h_\ell(x))^\top. Let \mu = (\mu_1, \ldots, \mu_m)^\top and \lambda = (\lambda_1, \ldots, \lambda_\ell)^\top. Then the necessary conditions can be written as:

- Stationarity
  For maximizing f(x): \partial f(x^*) - Dg(x^*)^\top \mu - Dh(x^*)^\top \lambda = 0
  For minimizing f(x): \partial f(x^*) + Dg(x^*)^\top \mu + Dh(x^*)^\top \lambda = 0
- Primal feasibility
  g(x^*) \le 0
  h(x^*) = 0
- Dual feasibility
  \mu \ge 0
- Complementary slackness
  \mu^\top g(x^*) = 0.
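In the smooth case this matrix form lends itself to a compact residual computation. The sketch below, with illustrative problem data, evaluates the minimization-case stationarity residual directly from the Jacobians.

```python
# A minimal sketch of the matrix form (minimization case): the stationarity
# residual grad f(x) + Dg(x)^T mu + Dh(x)^T lam, with illustrative data.
import numpy as np

def stationarity_residual(grad_f, Dg, Dh, mu, lam):
    # grad_f: (n,), Dg: (m, n) Jacobian of g, Dh: (l, n) Jacobian of h
    return grad_f + Dg.T @ mu + Dh.T @ lam

# Toy data: n = 2, one inequality (g = x + y - 2), one equality (h = x - y + 1),
# evaluated at the candidate point (0.5, 1.5).
grad_f = np.array([-1.0, -1.0])      # gradient of f at the candidate point
Dg = np.array([[1.0, 1.0]])
Dh = np.array([[1.0, -1.0]])
mu = np.array([1.0])
lam = np.array([0.0])
print(stationarity_residual(grad_f, Dg, Dh, mu, lam))  # ~[0, 0] at a KKT point
```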
Regularity conditions (or constraint qualifications)
One can ask whether a minimizer point x^* of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer x^* of a function f(x) in an unconstrained problem has to satisfy the condition \nabla f(x^*) = 0. For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions. Some common examples for conditions that guarantee this are tabulated in the following, with the LICQ the most frequently used one:

| Constraint | Acronym | Statement |
|---|---|---|
| Linearity constraint qualification | LCQ | If g_i and h_j are affine functions, then no other condition is needed. |
| Linear independence constraint qualification | LICQ | The gradients of the active inequality constraints and the gradients of the equality constraints are linearly independent at x^*. |
| Mangasarian–Fromovitz constraint qualification | MFCQ | The gradients of the equality constraints are linearly independent at x^* and there exists a vector d ∈ ℝ^n such that \nabla g_i(x^*)^\top d < 0 for all active inequality constraints and \nabla h_j(x^*)^\top d = 0 for all equality constraints.[8] |
| Constant rank constraint qualification | CRCQ | For each subset of the gradients of the active inequality constraints and the gradients of the equality constraints, the rank in a vicinity of x^* is constant. |
| Constant positive linear dependence constraint qualification | CPLD | For each subset of gradients of active inequality constraints and gradients of equality constraints, if the subset of vectors is linearly dependent at x^* with non-negative scalars associated with the inequality constraints, then it remains linearly dependent in a neighborhood of x^*. |
| Quasi-normality constraint qualification | QNCQ | If the gradients of the active inequality constraints and the gradients of the equality constraints are linearly dependent at x^* with associated multipliers \lambda_j for equalities and \mu_i \ge 0 for inequalities, then there is no sequence x_k \to x^* such that \lambda_j \ne 0 \Rightarrow \lambda_j h_j(x_k) > 0 and \mu_i \ne 0 \Rightarrow \mu_i g_i(x_k) > 0. |
| Slater's condition | SC | For a convex problem (i.e., assuming minimization, f and g_i are convex and h_j is affine), there exists a point x such that h(x) = 0 and g_i(x) < 0. |
The following strict implications can be shown:

LICQ ⇒ MFCQ ⇒ CPLD ⇒ QNCQ

and

LICQ ⇒ CRCQ ⇒ CPLD ⇒ QNCQ.

In practice, weaker constraint qualifications are preferred since they apply to a broader selection of problems.
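LICQ in particular is easy to test numerically: stack the gradients of the active inequality constraints and of the equality constraints at the candidate point and check that they are linearly independent. The sketch below does this with a matrix-rank test; the problem data and the tolerance used to decide activity are illustrative assumptions.

```python
# A minimal sketch: check LICQ at a point by testing whether the stacked
# gradients of active inequality constraints and all equality constraints
# have full row rank.
import numpy as np

def licq_holds(x, gs, grad_gs, grad_hs, tol=1e-8):
    active = [gg(x) for g, gg in zip(gs, grad_gs) if abs(g(x)) <= tol]
    rows = active + [gh(x) for gh in grad_hs]
    if not rows:
        return True  # no active constraints: LICQ holds vacuously
    A = np.vstack(rows)
    return np.linalg.matrix_rank(A) == A.shape[0]

# Example: g1 = x + y - 2 <= 0 and g2 = 2x + 2y - 4 <= 0 are both active
# at (0.5, 1.5) and have parallel gradients, so LICQ fails there.
gs = [lambda z: z[0] + z[1] - 2, lambda z: 2 * z[0] + 2 * z[1] - 4]
grad_gs = [lambda z: np.array([1.0, 1.0]), lambda z: np.array([2.0, 2.0])]
print(licq_holds(np.array([0.5, 1.5]), gs, grad_gs, grad_hs=[]))  # False
```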
Sufficient conditions
In some cases, the necessary conditions are also sufficient for optimality. In general, the necessary conditions are not sufficient for optimality and additional information is required, such as the Second Order Sufficient Conditions (SOSC). For smooth functions, SOSC involve second derivatives, which explains the name.
The necessary conditions are sufficient for optimality if the objective function f of a maximization problem is a differentiable concave function, the inequality constraints g_i are differentiable convex functions, the equality constraints h_j are affine functions, and Slater's condition holds.[9] Similarly, if the objective function f of a minimization problem is a differentiable convex function, the necessary conditions are also sufficient for optimality.

It was shown by Martin in 1985 that the broader class of functions in which KKT conditions guarantee global optimality are the so-called Type 1 invex functions.[10][11]
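In the convex case just described, the KKT multipliers certifying optimality can be read off directly from a convex solver's dual variables. A minimal sketch follows, assuming the CVXPY package (not mentioned in the cited sources) and the same toy problem used earlier.

```python
# A minimal sketch (assuming the cvxpy package): for a convex problem
# satisfying Slater's condition, the KKT conditions are both necessary
# and sufficient, and the solver's dual values are the KKT multipliers.
import cvxpy as cp
import numpy as np

x = cp.Variable(2)
constraints = [cp.sum(x) <= 2]                        # g(x) = x1 + x2 - 2 <= 0
objective = cp.Minimize(cp.sum_squares(x - np.array([1.0, 2.0])))
prob = cp.Problem(objective, constraints)
prob.solve()

print(x.value)                    # ~[0.5, 1.5], the global minimizer
print(constraints[0].dual_value)  # ~1.0, the KKT multiplier mu >= 0
```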
Second-order sufficient conditions
For smooth, non-linear optimization problems, a second order sufficient condition is given as follows.
The solution x^*, \lambda^*, \mu^* found in the above section is a constrained local minimum if for the Lagrangian,

L(x, \lambda, \mu) = f(x) + \sum_{i=1}^{m} \mu_i g_i(x) + \sum_{j=1}^{\ell} \lambda_j h_j(x),

we have

s^\top \nabla_{xx}^2 L(x^*, \lambda^*, \mu^*) s \ge 0,

where s \ne 0 is any vector satisfying

\left[ \nabla_x g_i(x^*), \nabla_x h_j(x^*) \right]^\top s = 0,

where only those active inequality constraints g_i(x) corresponding to strict complementarity (i.e., where \mu_i > 0) are applied. The solution is a strict constrained local minimum in the case the inequality is also strict.

If s^\top \nabla_{xx}^2 L(x^*, \lambda^*, \mu^*) s = 0, the third-order Taylor expansion of the Lagrangian should be used to verify if x^* is a local minimum. The minimization of f(x_1, x_2) = (x_2 - x_1^2)(x_2 - 3 x_1^2) is a good counter-example; see also Peano surface.
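Numerically, this test amounts to checking that the Hessian of the Lagrangian is positive semidefinite on the null space of the active constraint gradients. A minimal sketch follows, using SciPy's null_space helper; the Hessian and constraint data are illustrative assumptions.

```python
# A minimal sketch: second-order check via the null space of the active
# constraint Jacobian. H is the Hessian of the Lagrangian at (x*, lam*, mu*);
# A stacks the gradients of equality and strictly-active inequality constraints.
import numpy as np
from scipy.linalg import null_space

def second_order_ok(H, A, tol=1e-10):
    Z = null_space(A) if A.size else np.eye(H.shape[0])  # basis of {s : A s = 0}
    if Z.size == 0:
        return True        # no admissible directions s != 0
    Hp = Z.T @ H @ Z       # projected (reduced) Hessian
    return bool(np.all(np.linalg.eigvalsh(Hp) >= -tol))

# Toy data: H = Hessian of L, A = gradient of the single active constraint.
H = np.array([[2.0, 0.0], [0.0, 2.0]])
A = np.array([[1.0, 1.0]])
print(second_order_ok(H, A))   # True: the candidate is a local minimum
```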
Economics

See also: Profit maximization. Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results. For example,[12] consider a firm that maximizes its sales revenue subject to a minimum profit constraint. Letting Q be the quantity of output produced (to be chosen), R(Q) be sales revenue with a positive first derivative and with a zero value at zero output, C(Q) be production costs with a positive first derivative and with a non-negative value at zero output, and G_min be the positive minimal acceptable level of profit, then the problem is a meaningful one if the revenue function levels off so it eventually is less steep than the cost function. The problem expressed in the previously given minimization form is

Minimize -R(Q)
subject to G_{\min} \le R(Q) - C(Q)
           Q \ge 0,

and the KKT conditions are

\begin{align}
&\left(\frac{dR}{dQ}\right)(1 + \mu) - \mu \left(\frac{dC}{dQ}\right) \le 0, \\
&Q \ge 0, \\
&Q \left[ \left(\frac{dR}{dQ}\right)(1 + \mu) - \mu \left(\frac{dC}{dQ}\right) \right] = 0, \\
&R(Q) - C(Q) - G_{\min} \ge 0, \\
&\mu \ge 0, \\
&\mu [R(Q) - C(Q) - G_{\min}] = 0.
\end{align}

Since Q = 0 would violate the minimum profit constraint, we have Q > 0 and hence the third condition implies that the first condition holds with equality. Solving that equality gives

\frac{dR}{dQ} = \frac{\mu}{1 + \mu} \frac{dC}{dQ}.

Because it was given that dR/dQ and dC/dQ are strictly positive, this relation along with the non-negativity condition on \mu guarantees that \mu is positive, and so the revenue-maximizing firm operates at a level of output at which marginal revenue dR/dQ is less than marginal cost dC/dQ, a result that is of interest because it contrasts with the behavior of a profit-maximizing firm, which operates at a level at which they are equal.
Value function

If we reconsider the optimization problem as a maximization problem with constant inequality constraints:

maximize f(x)
subject to g_i(x) \le a_i, \quad h_j(x) = 0,

the value function is defined as

V(a_1, \ldots, a_m) = \sup_{x} \left\{ f(x) : g_i(x) \le a_i, \; h_j(x) = 0, \; j \in \{1, \ldots, \ell\}, \; i \in \{1, \ldots, m\} \right\},

so the domain of V is

\{ a \in \mathbb{R}^m \mid \text{for some } x \in X, \; g_i(x) \le a_i, \; i \in \{1, \ldots, m\} \}.

Given this definition, each coefficient \mu_i is the rate at which the value function increases as a_i increases. Thus if each a_i is interpreted as a resource constraint, the coefficients tell you how much increasing a resource will increase the optimum value of our function f. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.
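This shadow-price interpretation can be checked numerically by perturbing a_i and differencing the value function. The sketch below uses an illustrative toy instance whose multiplier at a = 2 is known to be 1 from the earlier examples.

```python
# A minimal sketch: estimate the shadow price mu = dV/da by finite
# differences on the toy problem  maximize -(x-1)^2 - (y-2)^2  s.t.  x + y <= a.
import numpy as np
from scipy.optimize import minimize

def V(a):
    # value function: maximum of f, computed by minimizing -f
    res = minimize(lambda z: (z[0] - 1) ** 2 + (z[1] - 2) ** 2,
                   x0=[0.0, 0.0], method="SLSQP",
                   constraints=[{"type": "ineq",
                                 "fun": lambda z: a - z[0] - z[1]}])
    return -res.fun

a, eps = 2.0, 1e-4
shadow_price = (V(a + eps) - V(a - eps)) / (2 * eps)
print(shadow_price)   # ~1.0, matching the KKT multiplier mu at a = 2
```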
Generalizations

With an extra multiplier \mu_0 \ge 0, which may be zero (as long as (\mu_0, \mu, \lambda) \ne 0), in front of \nabla f(x^*) the KKT stationarity conditions turn into

\begin{align}
&\mu_0 \nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{\ell} \lambda_j \nabla h_j(x^*) = 0, \\
&\mu_i g_i(x^*) = 0, \quad i = 1, \ldots, m,
\end{align}

which are called the Fritz John conditions. These optimality conditions hold without constraint qualifications, and they are equivalent to the optimality condition "KKT or (not-MFCQ)".
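A standard example of why the extra multiplier is needed (not taken from the cited sources) is minimizing x subject to x^2 \le 0. The only feasible point is x^* = 0, where \nabla g(x^*) = 0, so the KKT stationarity condition 1 + \mu \cdot 0 = 0 has no solution; the Fritz John conditions, however, hold with \mu_0 = 0 and \mu = 1.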
The KKT conditions belong to a wider class of the first-order necessary conditions (FONC), which allow for non-smooth functions using subderivatives.
Notes and References
1. Tabak, Daniel; Kuo, Benjamin C. (1971). Optimal Control by Mathematical Programming. Englewood Cliffs, NJ: Prentice-Hall. pp. 19–20. ISBN 0-13-638106-5.
2. Kuhn, H. W.; Tucker, A. W. (1951). "Nonlinear programming". Proceedings of 2nd Berkeley Symposium. Berkeley: University of California Press. pp. 481–492.
3. Karush, W. (1939). Minima of Functions of Several Variables with Inequalities as Side Constraints. M.Sc. thesis. Dept. of Mathematics, Univ. of Chicago, Chicago, Illinois.
4. Kjeldsen, Tinne Hoff (2000). "A contextualized historical analysis of the Kuhn-Tucker theorem in nonlinear programming: the impact of World War II". Historia Mathematica. 27 (4): 331–361. doi:10.1006/hmat.2000.2289.
5. Kemp, Murray C.; Kimura, Yoshio (1978). Introduction to Mathematical Economics. New York: Springer. pp. 38–44. ISBN 0-387-90304-6.
6. Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. p. 244. ISBN 0-521-83378-7.
7. Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. ISBN 978-0691119151.
8. Bertsekas, Dimitri (1999). Nonlinear Programming (2nd ed.). Athena Scientific. pp. 329–330. ISBN 9781886529007.
9. Boyd, Stephen; Vandenberghe, Lieven (2004). Convex Optimization. Cambridge: Cambridge University Press. p. 244. ISBN 0-521-83378-7.
10. Martin, D. H. (1985). "The Essence of Invexity". Journal of Optimization Theory and Applications. 47 (1): 65–76. doi:10.1007/BF00941316.
11. Hanson, M. A. (1999). "Invexity and the Kuhn-Tucker Theorem". Journal of Mathematical Analysis and Applications. 236 (2): 594–604. doi:10.1006/jmaa.1999.6484.
12. Chiang, Alpha C. (1984). Fundamental Methods of Mathematical Economics (3rd ed.). pp. 750–752.