Nonlinear programming explained

In mathematics, nonlinear programming (NLP) is the process of solving an optimization problem in which some of the constraints are not linear equalities or the objective function is not a linear function. An optimization problem is the calculation of the extrema (maxima, minima, or stationary points) of an objective function over a set of unknown real variables, subject to a system of equalities and inequalities, collectively termed constraints. Nonlinear programming is the sub-field of mathematical optimization that deals with problems that are not linear.

Definition and discussion

Let n, m, and p be positive integers. Let X be a subset of Rn (usually a box-constrained one), and let f, gi, and hj be real-valued functions on X for each i in {1, ..., m} and each j in {1, ..., p}, with at least one of f, the gi, and the hj being nonlinear.

A nonlinear programming problem is an optimization problem of the form

\begin{align}
\text{minimize } & f(x) \\
\text{subject to } & g_i(x) \leq 0 \text{ for each } i \in \{1, \dotsc, m\} \\
& h_j(x) = 0 \text{ for each } j \in \{1, \dotsc, p\} \\
& x \in X.
\end{align}

Depending on the constraint set, the problem may be feasible (at least one point satisfies all the constraints), infeasible (no point satisfies all the constraints simultaneously), or unbounded (the objective can be made better than any given finite value over the feasible set).

Most realistic applications feature feasible problems, with infeasible or unbounded problems seen as a failure of an underlying model. In some cases, infeasible problems are handled by minimizing a sum of feasibility violations.

Some special cases of nonlinear programming have specialized solution methods; for example, convex programs, quadratic programs, and geometric programs can be solved with algorithms that exploit their particular structure.

Applicability

A typical non-convex problem is that of optimizing transportation costs by selection from a set of transportation methods, one or more of which exhibit economies of scale, with various connectivities and capacity constraints. An example would be petroleum product transport given a selection or combination of pipeline, rail tanker, road tanker, river barge, or coastal tankship. Owing to economic batch size the cost functions may have discontinuities in addition to smooth changes.

In experimental science, some simple data analysis (such as fitting a spectrum with a sum of peaks of known location and shape but unknown magnitude) can be done with linear methods, but in general these problems are also nonlinear. Typically, one has a theoretical model of the system under study with variable parameters in it and a model of the experiment or experiments, which may also have unknown parameters. One tries to find a best fit numerically. In this case one often wants a measure of the precision of the result, as well as the best fit itself.
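
As a concrete illustration, the following is a minimal sketch of such a nonlinear least-squares fit (assuming NumPy and SciPy are available); the single-peak model, the synthetic data, and all parameter values are assumptions made for the example:

    # Fit a Gaussian peak whose height, center, and width are all unknown,
    # so the least-squares problem is nonlinear in the parameters.
    import numpy as np
    from scipy.optimize import curve_fit

    def peak(x, height, center, width):
        # hypothetical model of the experiment: a single Gaussian peak
        return height * np.exp(-((x - center) / width) ** 2)

    rng = np.random.default_rng(0)
    x = np.linspace(-5.0, 5.0, 200)
    y = peak(x, 2.0, 1.0, 0.8) + 0.05 * rng.standard_normal(x.size)  # synthetic data

    params, cov = curve_fit(peak, x, y, p0=[1.0, 0.0, 1.0])
    print(params)                  # best-fit height, center, width
    print(np.sqrt(np.diag(cov)))   # rough precision estimate of the fitted parameters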

Methods for solving a general nonlinear program

Analytic methods

Under differentiability and constraint qualifications, the Karush–Kuhn–Tucker (KKT) conditions provide necessary conditions for a solution to be optimal. If some of the functions are non-differentiable, subdifferential versions of the KKT conditions are available.[1]

Under convexity, the KKT conditions are sufficient for a global optimum. Without convexity, they are only necessary, and a point satisfying them is merely a candidate that need not even be a local optimum. In some cases, the number of such candidates is small, and one can check each of them analytically and select the one for which the objective value is smallest.[2]
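
For the smooth problem stated above, the KKT conditions at a candidate point x^*, with multipliers \mu_i \geq 0 for the inequality constraints and \lambda_j for the equality constraints, read

\begin{align}
\nabla f(x^*) + \sum_{i=1}^{m} \mu_i \nabla g_i(x^*) + \sum_{j=1}^{p} \lambda_j \nabla h_j(x^*) &= 0 && \text{(stationarity)} \\
g_i(x^*) \leq 0, \qquad h_j(x^*) &= 0 && \text{(primal feasibility)} \\
\mu_i &\geq 0 && \text{(dual feasibility)} \\
\mu_i \, g_i(x^*) &= 0 && \text{(complementary slackness)}
\end{align}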

Numeric methods

In most realistic cases, it is very hard to solve the KKT conditions analytically, and so the problems are solved using numerical methods. These methods are iterative: they start with an initial point, and then proceed to points that are supposed to be closer to the optimal point, using some update rule. There are three kinds of update rules: zero-order routines use only the values of the objective and constraint functions at the current point; first-order routines also use the values of the gradients of these functions; and second-order routines additionally use the values of the Hessians of these functions.

Third-order routines (and higher) are theoretically possible, but they are not used in practice because of the higher computational load and little theoretical benefit.
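
As a minimal sketch of a first-order update rule, consider plain gradient descent on a smooth unconstrained objective; the objective, step size, and iteration count below are arbitrary choices made for illustration:

    # First-order update rule: repeatedly step against the gradient.
    import numpy as np

    def grad(x):
        # gradient of f(x) = (x1 - 1)^2 + 2*(x2 + 3)^2
        return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)])

    x = np.zeros(2)    # initial point
    step = 0.1         # fixed step size, assumed small enough for stability
    for _ in range(200):
        x = x - step * grad(x)
    print(x)           # approaches the minimizer [1., -3.]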

Branch and bound

Another method involves the use of branch and bound techniques, where the program is divided into subclasses to be solved with convex (minimization problem) or linear approximations that form a lower bound on the overall cost within the subdivision. With subsequent divisions, at some point an actual solution will be obtained whose cost is equal to the best lower bound obtained for any of the approximate solutions. This solution is optimal, although possibly not unique. The algorithm may also be stopped early, with the assurance that the best possible solution is within a tolerance from the best point found; such points are called ε-optimal. Terminating to ε-optimal points is typically necessary to ensure finite termination. This is especially useful for large, difficult problems and problems with uncertain costs or values where the uncertainty can be estimated with an appropriate reliability estimation.
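
The following is a minimal one-dimensional sketch of the idea (using only the Python standard library): subintervals are branched, a Lipschitz estimate supplies the lower bound, and the search stops at an ε-optimal incumbent. The objective function and its Lipschitz bound are assumptions made for the example.

    # Branch and bound for min f(x) on [a, b] with a Lipschitz lower bound.
    import heapq, math

    def f(x):
        return math.sin(x) + 0.1 * x * x   # a simple non-convex objective

    def lower_bound(lo, hi):
        # |f'(x)| = |cos x + 0.2 x| <= 1 + 0.2 * max(|lo|, |hi|) on [lo, hi],
        # so f cannot drop below f(midpoint) - L * width / 2 on the interval.
        lipschitz = 1.0 + 0.2 * max(abs(lo), abs(hi))
        return f((lo + hi) / 2) - lipschitz * (hi - lo) / 2

    def branch_and_bound(a, b, eps=1e-6):
        best_x, best_val = (a + b) / 2, f((a + b) / 2)
        heap = [(lower_bound(a, b), a, b)]      # min-heap keyed on lower bound
        while heap:
            lb, lo, hi = heapq.heappop(heap)
            if lb >= best_val - eps:            # no interval can improve by > eps,
                break                           # so the incumbent is eps-optimal
            mid = (lo + hi) / 2
            if f(mid) < best_val:               # midpoint updates the incumbent
                best_x, best_val = mid, f(mid)
            heapq.heappush(heap, (lower_bound(lo, mid), lo, mid))
            heapq.heappush(heap, (lower_bound(mid, hi), mid, hi))
        return best_x, best_val

    print(branch_and_bound(-10.0, 10.0))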

Implementations

There exist numerous nonlinear programming solvers, including open-source ones such as Ipopt, NLopt, and the optimization routines in SciPy and GNU Octave.

Numerical examples

2-dimensional example

A simple problem can be defined by the constraints

\begin{align}
x_1 &\geq 0 \\
x_2 &\geq 0 \\
x_1^2 + x_2^2 &\geq 1 \\
x_1^2 + x_2^2 &\leq 2
\end{align}

with an objective function to be maximized

f(\mathbf{x}) = x_1 + x_2

where \mathbf{x} = (x_1, x_2).
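
A minimal numerical check of this example (assuming NumPy and SciPy; the starting point is an arbitrary feasible guess) negates the objective, since the solver minimizes:

    # Solve the 2-D example with a sequential quadratic programming routine.
    # SLSQP expects inequality constraints in the form c(x) >= 0.
    import numpy as np
    from scipy.optimize import minimize

    objective = lambda x: -(x[0] + x[1])            # negated for maximization
    constraints = [
        {"type": "ineq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 1.0},  # >= 1
        {"type": "ineq", "fun": lambda x: 2.0 - x[0] ** 2 - x[1] ** 2},  # <= 2
    ]
    bounds = [(0.0, None), (0.0, None)]             # x1 >= 0, x2 >= 0

    res = minimize(objective, x0=[1.0, 0.5], bounds=bounds,
                   constraints=constraints, method="SLSQP")
    print(res.x, -res.fun)   # approximately [1. 1.] and 2.0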

3-dimensional example

Another simple problem can be defined by the constraints

\begin{align}
x_1^2 - x_2^2 + x_3^2 &\leq 2 \\
x_1^2 + x_2^2 + x_3^2 &\leq 10
\end{align}

with an objective function to be maximized

f(\mathbf{x}) = x_1 x_2 + x_2 x_3

where \mathbf{x} = (x_1, x_2, x_3).
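
The same pattern applies here; a minimal sketch (again assuming SciPy, with an arbitrary feasible starting point):

    # Solve the 3-D example; constraints are rewritten as c(x) >= 0 for SLSQP.
    from scipy.optimize import minimize

    res = minimize(
        lambda x: -(x[0] * x[1] + x[1] * x[2]),     # negated for maximization
        x0=[1.0, 1.0, 1.0],
        constraints=[
            {"type": "ineq", "fun": lambda x: 2.0 - x[0]**2 + x[1]**2 - x[2]**2},
            {"type": "ineq", "fun": lambda x: 10.0 - x[0]**2 - x[1]**2 - x[2]**2},
        ],
        method="SLSQP",
    )
    print(res.x, -res.fun)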

Notes and references

  1. Ruszczyński, Andrzej (2006). Nonlinear Optimization. Princeton, NJ: Princeton University Press. ISBN 978-0691119151.
  2. Nemirovsky, A.; Ben-Tal, A. (2023). "Optimization III: Convex Optimization".