Pontryagin's maximum principle is used in optimal control theory to find the best possible control for taking a dynamical system from one state to another, especially in the presence of constraints on the state or input controls. It states that any optimal control, along with the optimal state trajectory, must solve the so-called Hamiltonian system, which is a two-point boundary value problem, together with a maximum condition on the control Hamiltonian. These necessary conditions become sufficient under certain convexity conditions on the objective and constraint functions.[1][2]
The maximum principle was formulated in 1956 by the Russian mathematician Lev Pontryagin and his students,[3] [4] and its initial application was to the maximization of the terminal speed of a rocket.[5] The result was derived using ideas from the classical calculus of variations.[6] After a slight perturbation of the optimal control, one considers the first-order term of a Taylor expansion with respect to the perturbation; sending the perturbation to zero leads to a variational inequality from which the maximum principle follows.[7]
Widely regarded as a milestone in optimal control theory, the maximum principle owes its significance to the fact that maximizing the Hamiltonian is much easier than solving the original infinite-dimensional control problem: rather than maximizing over a function space, the problem is converted to a pointwise optimization.[8] A similar logic leads to Bellman's principle of optimality, a related approach to optimal control problems which states that the optimal trajectory remains optimal at intermediate points in time.[9] The resulting Hamilton–Jacobi–Bellman equation provides a necessary and sufficient condition for an optimum, and admits a straightforward extension to stochastic optimal control problems, whereas the maximum principle does not. However, the Hamilton–Jacobi–Bellman equation needs to hold over the entire state space to be valid, while the maximum principle is potentially more computationally efficient because the conditions it specifies only need to hold over a particular trajectory.
For set $\mathcal{U}$ and functions

$$\Psi:\mathbb{R}^n\to\mathbb{R},\qquad H:\mathbb{R}^n\times\mathcal{U}\times\mathbb{R}^n\times\mathbb{R}\to\mathbb{R},\qquad L:\mathbb{R}^n\times\mathcal{U}\to\mathbb{R},\qquad f:\mathbb{R}^n\times\mathcal{U}\to\mathbb{R}^n,$$

we use the following notation:

$$\Psi_T(x(T))=\left.\frac{\partial\Psi(x)}{\partial T}\right|_{x=x(T)}$$

$$\Psi_x(x(T))=\begin{bmatrix}\left.\dfrac{\partial\Psi(x)}{\partial x_1}\right|_{x=x(T)}&\cdots&\left.\dfrac{\partial\Psi(x)}{\partial x_n}\right|_{x=x(T)}\end{bmatrix}$$

$$H_x(x^*,u^*,\lambda^*,t)=\begin{bmatrix}\left.\dfrac{\partial H}{\partial x_1}\right|_{x=x^*,u=u^*,\lambda=\lambda^*}&\cdots&\left.\dfrac{\partial H}{\partial x_n}\right|_{x=x^*,u=u^*,\lambda=\lambda^*}\end{bmatrix}$$

$$L_x(x^*,u^*)=\begin{bmatrix}\left.\dfrac{\partial L}{\partial x_1}\right|_{x=x^*,u=u^*}&\cdots&\left.\dfrac{\partial L}{\partial x_n}\right|_{x=x^*,u=u^*}\end{bmatrix}$$

$$f_x(x^*,u^*)=\begin{bmatrix}\left.\dfrac{\partial f_1}{\partial x_1}\right|_{x=x^*,u=u^*}&\cdots&\left.\dfrac{\partial f_1}{\partial x_n}\right|_{x=x^*,u=u^*}\\\vdots&\ddots&\vdots\\\left.\dfrac{\partial f_n}{\partial x_1}\right|_{x=x^*,u=u^*}&\cdots&\left.\dfrac{\partial f_n}{\partial x_n}\right|_{x=x^*,u=u^*}\end{bmatrix}$$
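As a concrete illustration of this notation (a minimal sketch using hypothetical functions $\Psi(x)=x_1^2+x_2^2$ and $f(x,u)=(x_2,u)$ with $n=2$; not part of the formal statement), the row vector $\Psi_x$ and the Jacobian $f_x$ can be computed symbolically in Python:

import sympy as sp

# Hypothetical functions for demonstration: Psi(x) = x1**2 + x2**2,
# f(x, u) = (x2, u), with n = 2 states.
x1, x2, u = sp.symbols('x1 x2 u')
x = sp.Matrix([x1, x2])

Psi = x1**2 + x2**2
Psi_x = sp.Matrix([Psi]).jacobian(x)   # row vector [dPsi/dx1, dPsi/dx2]
print(Psi_x)                           # Matrix([[2*x1, 2*x2]])

f = sp.Matrix([x2, u])
f_x = f.jacobian(x)                    # n-by-n Jacobian of f with respect to x
print(f_x)                             # Matrix([[0, 1], [0, 0]])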
Here the necessary conditions are shown for minimization of a functional.
Consider an $n$-dimensional dynamical system, with state variable $x\in\mathbb{R}^n$ and control variable $u\in\mathcal{U}$, where $\mathcal{U}$ is the set of admissible controls. The evolution of the system is determined by the state and the control according to the differential equation $\dot{x}=f(x,u)$. Let the system's initial state be $x_0$ and let the system's evolution be controlled over the time period $t\in[0,T]$; that is,

$$\dot{x}=f(x,u),\qquad x(0)=x_0,\qquad u(t)\in\mathcal{U},\qquad t\in[0,T].$$

The control trajectory $u:[0,T]\to\mathcal{U}$ is to be chosen according to an objective. The objective is a functional $J$ defined by

$$J=\Psi(x(T))+\int_0^T L(x(t),u(t))\,dt,$$

where $L(x,u)$ is the running cost, which depends on the state $x$ and the control $u$, and $\Psi(x)$ is the terminal cost, which depends on the final state $x(T)$; both $L$ and $\Psi$ are scalar-valued, as in the notation above.
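As a concrete illustration (a minimal sketch; the specific system, costs, and values are hypothetical assumptions, not part of the general statement), this setup can be encoded in Python for a double integrator with quadratic costs:

import numpy as np

# Hypothetical instance of the problem above: a double integrator
#   x1' = x2,  x2' = u,
# with running cost L = u**2 / 2, terminal cost Psi = ||x(T)||**2 / 2,
# unconstrained control set U = R, and fixed final time T.
T = 1.0
x0 = np.array([1.0, 0.0])        # initial state x(0)

def f(x, u):
    """Dynamics x' = f(x, u)."""
    return np.array([x[1], u])

def L(x, u):
    """Running cost L(x, u)."""
    return 0.5 * u**2

def Psi(xT):
    """Terminal cost Psi(x(T))."""
    return 0.5 * xT @ xT

def J(xs, us, dt):
    """Objective J = Psi(x(T)) + integral of L (Riemann-sum approximation)."""
    return Psi(xs[-1]) + sum(L(x, u) for x, u in zip(xs, us)) * dt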
The constraints on the system dynamics can be adjoined to the Lagrangian $L$ by introducing a time-varying Lagrange multiplier vector $\lambda$, whose elements are called the costates of the system. This motivates the construction of the Hamiltonian $H$, defined for all $t\in[0,T]$ by

$$H(x(t),u(t),\lambda(t),t)=\lambda^{\mathsf{T}}(t)\,f(x(t),u(t))+L(x(t),u(t)),$$

where $\lambda^{\mathsf{T}}$ is the transpose of $\lambda$.
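Continuing the hypothetical double-integrator sketch, the Hamiltonian and the derivative $H_x$ used in the costate equation below can be formed symbolically; with the unconstrained control set $\mathcal{U}=\mathbb{R}$ assumed there, minimizing $H$ over $u$ reduces to $\partial H/\partial u=0$:

import sympy as sp

# Symbolic Hamiltonian for the hypothetical double-integrator example:
# H = lambda^T f + L, with f = (x2, u) and L = u**2 / 2.
x1, x2, u, lam1, lam2 = sp.symbols('x1 x2 u lambda1 lambda2')

f = sp.Matrix([x2, u])
L = u**2 / 2
lam = sp.Matrix([lam1, lam2])
H = (lam.T * f)[0] + L

# H_x, the row vector that enters the costate equation below:
H_x = sp.Matrix([[sp.diff(H, x1), sp.diff(H, x2)]])
print(H_x)                        # Matrix([[0, lambda1]])

# With U = R, minimizing H over u reduces to dH/du = 0:
u_star = sp.solve(sp.diff(H, u), u)[0]
print(u_star)                     # -lambda2; d2H/du2 = 1 > 0 confirms a minimum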
Pontryagin's minimum principle states that the optimal state trajectory $x^*$, the optimal control $u^*$, and the corresponding Lagrange multiplier vector $\lambda^*$ must minimize the Hamiltonian $H$, so that

$$(1)\qquad H(x^*(t),u^*(t),\lambda^*(t),t)\leq H(x^*(t),u,\lambda^*(t),t)$$

for all time $t\in[0,T]$ and for all permissible control inputs $u\in\mathcal{U}$. Here, the trajectory of the Lagrange multiplier vector $\lambda$ is the solution to the costate equation

$$(2)\qquad -\dot{\lambda}^{\mathsf{T}}(t)=H_x(x^*(t),u^*(t),\lambda(t),t)=\lambda^{\mathsf{T}}(t)\,f_x(x^*(t),u^*(t))+L_x(x^*(t),u^*(t)).$$

If the final state $x(T)$ is fixed, the two-point boundary value problem formed by the state and costate equations is closed by the boundary conditions $x(0)=x_0$ and the prescribed value of $x(T)$. If the final state $x(T)$ is not fixed (i.e., its differential variation is not zero), the terminal costates must instead satisfy the transversality condition

$$(3)\qquad \lambda^{\mathsf{T}}(T)=\Psi_x(x(T)),$$

and, if the terminal time $T$ is free as well,

$$(4)\qquad \Psi_T(x(T))+H(T)=0.$$
These four conditions in (1)-(4) are the necessary conditions for an optimal control.
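As an illustration of how these conditions are used in practice, the following sketch (again for the hypothetical double-integrator example, with fixed $T$ and free $x(T)$) assembles the two-point boundary value problem implied by conditions (1)-(3) and solves it numerically with SciPy's solve_bvp:

import numpy as np
from scipy.integrate import solve_bvp

# Two-point boundary value problem from conditions (1)-(3) for the
# hypothetical double-integrator example (fixed T, free x(T)):
#   x1' = x2,       x2' = u* = -lambda2     (from dH/du = 0)
#   lambda1' = 0,   lambda2' = -lambda1     (costate equation (2))
# with x(0) = x0 and the transversality condition (3), which for
# Psi = ||x(T)||**2 / 2 reads lambda(T) = x(T).
T = 1.0
x0 = np.array([1.0, 0.0])

def odes(t, y):
    x1, x2, lam1, lam2 = y
    return np.vstack([x2, -lam2, np.zeros_like(lam1), -lam1])

def bc(ya, yb):
    return np.array([ya[0] - x0[0],   # x1(0) = 1
                     ya[1] - x0[1],   # x2(0) = 0
                     yb[2] - yb[0],   # lambda1(T) = x1(T)
                     yb[3] - yb[1]])  # lambda2(T) = x2(T)

t = np.linspace(0.0, T, 50)
sol = solve_bvp(odes, bc, t, np.zeros((4, t.size)))
u_star = -sol.y[3]                   # optimal control u*(t) = -lambda2(t)
print(sol.status, sol.y[:2, -1])     # 0 on success; final state x(T)

In this sketch the minimum condition (1) supplies the control law, the costate equation (2) propagates $\lambda$, and the transversality condition (3) provides the terminal boundary conditions that close the two-point boundary value problem.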