Differential dynamic programming
Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne[1] and subsequently analysed in Jacobson and Mayne's eponymous book.[2] The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.[3][4]
Finite-horizon discrete-time problems
The dynamics

\mathbf{x}_{i+1} = \mathbf{f}(\mathbf{x}_i,\mathbf{u}_i) \qquad (1)

describe the evolution of the state \mathbf{x} given the control \mathbf{u} from time i to time i+1. The total cost J_0 is the sum of running costs \ell and final cost \ell_f, incurred when starting from state \mathbf{x} and applying the control sequence \mathbf{U} until the horizon is reached:

J_0(\mathbf{x},\mathbf{U}) = \sum_{i=0}^{N-1} \ell(\mathbf{x}_i,\mathbf{u}_i) + \ell_f(\mathbf{x}_N),
where \mathbf{x}_0 \equiv \mathbf{x}, and the \mathbf{x}_i for i > 0 are given by (1). The solution of the optimal control problem is the minimizing control sequence

\mathbf{U}^*(\mathbf{x}) \equiv \operatorname{argmin}_{\mathbf{U}} J_0(\mathbf{x},\mathbf{U}).

Trajectory optimization means finding \mathbf{U}^*(\mathbf{x}) for a particular \mathbf{x}_0, rather than for all possible initial states.
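For concreteness, J_0 can be evaluated for any candidate control sequence by rolling the dynamics forward. The scalar dynamics and costs below are illustrative assumptions, not taken from the text:

```python
# Hypothetical scalar example: dynamics f(x, u) = 0.9 x + u with
# running cost l(x, u) = x^2 + u^2 and final cost l_f(x) = x^2.
def f(x, u):
    return 0.9 * x + u

def running_cost(x, u):
    return x ** 2 + u ** 2

def final_cost(x):
    return x ** 2

def total_cost(x0, U):
    """J_0(x, U): roll out the dynamics from x0 under the sequence U,
    summing the running costs and adding the final cost at the horizon."""
    x, J = x0, 0.0
    for u in U:
        J += running_cost(x, u)
        x = f(x, u)
    return J + final_cost(x)

J = total_cost(1.0, [0.0, 0.0, 0.0])
```

Doing nothing from x_0 = 1 costs about 3.0 here; a control sequence that drives the state toward zero does better, which is exactly what the minimization over \mathbf{U} searches for.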
Dynamic programming
Let \mathbf{U}_i be the partial control sequence

\mathbf{U}_i \equiv \{\mathbf{u}_i, \mathbf{u}_{i+1}, \dots, \mathbf{u}_{N-1}\}

and define the cost-to-go J_i as the partial sum of costs from i to N:

J_i(\mathbf{x},\mathbf{U}_i) = \sum_{j=i}^{N-1} \ell(\mathbf{x}_j,\mathbf{u}_j) + \ell_f(\mathbf{x}_N).

The optimal cost-to-go or value function at time i is the cost-to-go given the minimizing control sequence:

V(\mathbf{x},i) \equiv \min_{\mathbf{U}_i} J_i(\mathbf{x},\mathbf{U}_i).

Setting V(\mathbf{x},N) \equiv \ell_f(\mathbf{x}_N), the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

V(\mathbf{x},i) = \min_{\mathbf{u}} \left[\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}), i+1)\right]. \qquad (2)

This is the Bellman equation.
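The backward recursion can be made concrete on a small discretized problem, where the minimization over \mathbf{u} at each step is a search over a finite control grid. Dynamics, costs, and grid sizes below are all illustrative assumptions:

```python
import numpy as np

# Discretized sketch of the Bellman recursion: finite state and control
# grids so that each minimization is an explicit search.
states = np.linspace(-2.0, 2.0, 41)
controls = np.linspace(-1.0, 1.0, 21)
N = 10                                     # horizon

def f(x, u):                               # assumed dynamics
    return np.clip(x + 0.1 * u, states[0], states[-1])

def ell(x, u):                             # assumed running cost
    return x ** 2 + 0.1 * u ** 2

def ell_f(x):                              # assumed final cost
    return x ** 2

# V[i, j] approximates V(states[j], i); initialized with the final cost.
V = np.empty((N + 1, states.size))
V[N] = ell_f(states)
for i in range(N - 1, -1, -1):
    for j, x in enumerate(states):
        # V(x, i) = min_u [ l(x, u) + V(f(x, u), i + 1) ], with the
        # next state snapped to its nearest grid point.
        nxt = np.abs(states[None, :] - f(x, controls)[:, None]).argmin(axis=1)
        V[i, j] = np.min(ell(x, controls) + V[i + 1, nxt])
```

Each sweep over i turns the minimization over a whole control sequence into N single-control minimizations, which is the point of the recursion; DDP replaces the grid search with a local quadratic model.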
Differential dynamic programming
DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

\ell(\mathbf{x},\mathbf{u}) + V(\mathbf{f}(\mathbf{x},\mathbf{u}), i+1)

is the argument of the \min operator in (2), let Q be the variation of this quantity around the i-th (\mathbf{x},\mathbf{u}) pair:

\begin{align}
Q(\delta\mathbf{x},\delta\mathbf{u}) \equiv{} & \ell(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u}) + V(\mathbf{f}(\mathbf{x}+\delta\mathbf{x},\mathbf{u}+\delta\mathbf{u}), i+1) \\
 & {} - \ell(\mathbf{x},\mathbf{u}) - V(\mathbf{f}(\mathbf{x},\mathbf{u}), i+1)
\end{align}
and expand to second order:

Q(\delta\mathbf{x},\delta\mathbf{u}) \approx \frac{1}{2}
\begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix}^{\mathsf{T}}
\begin{bmatrix} 0 & Q_\mathbf{x}^{\mathsf{T}} & Q_\mathbf{u}^{\mathsf{T}} \\ Q_\mathbf{x} & Q_{\mathbf{x}\mathbf{x}} & Q_{\mathbf{x}\mathbf{u}} \\ Q_\mathbf{u} & Q_{\mathbf{u}\mathbf{x}} & Q_{\mathbf{u}\mathbf{u}} \end{bmatrix}
\begin{bmatrix} 1 \\ \delta\mathbf{x} \\ \delta\mathbf{u} \end{bmatrix}. \qquad (3)

The Q notation used here is a variant of the notation of Morimoto, where subscripts denote differentiation in denominator layout.[5] Dropping the index i for readability and using primes to denote the next time-step, V' \equiv V(i+1), the expansion coefficients are

\begin{align}
Q_\mathbf{x} &= \ell_\mathbf{x} + \mathbf{f}_\mathbf{x}^{\mathsf{T}} V'_\mathbf{x} \\
Q_\mathbf{u} &= \ell_\mathbf{u} + \mathbf{f}_\mathbf{u}^{\mathsf{T}} V'_\mathbf{x} \\
Q_{\mathbf{x}\mathbf{x}} &= \ell_{\mathbf{x}\mathbf{x}} + \mathbf{f}_\mathbf{x}^{\mathsf{T}} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_\mathbf{x} + V'_\mathbf{x} \cdot \mathbf{f}_{\mathbf{x}\mathbf{x}} \\
Q_{\mathbf{u}\mathbf{u}} &= \ell_{\mathbf{u}\mathbf{u}} + \mathbf{f}_\mathbf{u}^{\mathsf{T}} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_\mathbf{u} + V'_\mathbf{x} \cdot \mathbf{f}_{\mathbf{u}\mathbf{u}} \\
Q_{\mathbf{u}\mathbf{x}} &= \ell_{\mathbf{u}\mathbf{x}} + \mathbf{f}_\mathbf{u}^{\mathsf{T}} V'_{\mathbf{x}\mathbf{x}} \mathbf{f}_\mathbf{x} + V'_\mathbf{x} \cdot \mathbf{f}_{\mathbf{u}\mathbf{x}}
\end{align}
The last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation (3) with respect to \delta\mathbf{u} we have

\delta\mathbf{u}^* = \operatorname{argmin}_{\delta\mathbf{u}} Q(\delta\mathbf{x},\delta\mathbf{u}) = -Q_{\mathbf{u}\mathbf{u}}^{-1}(Q_\mathbf{u} + Q_{\mathbf{u}\mathbf{x}}\,\delta\mathbf{x}), \qquad (4)

giving an open-loop term \mathbf{k} = -Q_{\mathbf{u}\mathbf{u}}^{-1} Q_\mathbf{u} and a feedback gain term \mathbf{K} = -Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}\mathbf{x}}. Plugging the result back into (3), we now have a quadratic model of the value at time i:
\begin{align}
\Delta V(i) &= -\tfrac{1}{2} Q_\mathbf{u}^{\mathsf{T}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_\mathbf{u} \\
V_\mathbf{x}(i) &= Q_\mathbf{x} - Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_\mathbf{u} \\
V_{\mathbf{x}\mathbf{x}}(i) &= Q_{\mathbf{x}\mathbf{x}} - Q_{\mathbf{x}\mathbf{u}} Q_{\mathbf{u}\mathbf{u}}^{-1} Q_{\mathbf{u}\mathbf{x}}.
\end{align}
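The coefficient, gain, and value-update formulas can be exercised numerically for a single time-step. All matrices below are made-up placeholders for a linear system, in which case the vector-tensor contraction terms vanish; the point is only the shape of the computation:

```python
import numpy as np

# Placeholder derivatives at one (x, u) pair; the dynamics are taken to
# be linear, so the contraction terms f_xx, f_uu, f_ux drop out.
n, m = 2, 1                                # state / control dimensions
fx = np.array([[1.0, 0.1], [0.0, 1.0]])    # f_x
fu = np.array([[0.0], [0.1]])              # f_u
lx, lu = np.zeros(n), np.zeros(m)          # cost gradients
lxx, luu = np.eye(n), 0.1 * np.eye(m)      # cost Hessians
lux = np.zeros((m, n))
Vx, Vxx = np.ones(n), np.eye(n)            # V'_x, V'_xx from step i+1

# Expansion coefficients:
Qx = lx + fx.T @ Vx
Qu = lu + fu.T @ Vx
Qxx = lxx + fx.T @ Vxx @ fx
Quu = luu + fu.T @ Vxx @ fu
Qux = lux + fu.T @ Vxx @ fx

# Gains and value-function update for this step:
k = -np.linalg.solve(Quu, Qu)              # open-loop term
K = -np.linalg.solve(Quu, Qux)             # feedback gains
dV = -0.5 * Qu @ np.linalg.solve(Quu, Qu)
Vx_i = Qx - Qux.T @ np.linalg.solve(Quu, Qu)
Vxx_i = Qxx - Qux.T @ np.linalg.solve(Quu, Qux)
```

Note that dV is never positive: the quadratic model always predicts a cost decrease whenever Q_u is nonzero and Q_uu is positive definite.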
Recursively computing the local quadratic models of V(i) and the control modifications \{\mathbf{k}(i), \mathbf{K}(i)\}, from i = N-1 down to i = 1, constitutes the backward pass. As above, the value function is initialized with V(\mathbf{x},N) \equiv \ell_f(\mathbf{x}_N). Once the backward pass is completed, a forward pass computes a new trajectory:

\begin{align}
\hat{\mathbf{x}}(1) &= \mathbf{x}(1) \\
\hat{\mathbf{u}}(i) &= \mathbf{u}(i) + \mathbf{k}(i) + \mathbf{K}(i)(\hat{\mathbf{x}}(i) - \mathbf{x}(i)) \\
\hat{\mathbf{x}}(i+1) &= \mathbf{f}(\hat{\mathbf{x}}(i), \hat{\mathbf{u}}(i))
\end{align}
The backward passes and forward passes are iterated until convergence.
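Putting the two passes together, here is a minimal sketch of one backward/forward sweep on an assumed scalar linear-quadratic problem. For such a problem the quadratic expansion is exact, so a single sweep already lowers the cost:

```python
import numpy as np

N = 20
a, b = 0.95, 0.1                 # assumed dynamics x' = a x + b u
q, r, qf = 1.0, 0.1, 1.0         # assumed costs l = q x^2 + r u^2, l_f = qf x^2

def cost(xs, us):
    """Total cost of a trajectory under the assumed quadratic costs."""
    return q * np.sum(xs[:-1] ** 2) + r * np.sum(us ** 2) + qf * xs[-1] ** 2

# Nominal trajectory: start at x = 1 and apply zero controls.
u_nom = np.zeros(N)
x_nom = np.zeros(N + 1)
x_nom[0] = 1.0
for i in range(N):
    x_nom[i + 1] = a * x_nom[i] + b * u_nom[i]

# Backward pass: propagate V_x, V_xx and store the gains k(i), K(i).
k, K = np.zeros(N), np.zeros(N)
Vx, Vxx = 2 * qf * x_nom[N], 2 * qf
for i in range(N - 1, -1, -1):
    Qx = 2 * q * x_nom[i] + a * Vx
    Qu = 2 * r * u_nom[i] + b * Vx
    Qxx = 2 * q + a * Vxx * a
    Quu = 2 * r + b * Vxx * b
    Qux = b * Vxx * a
    k[i] = -Qu / Quu                 # open-loop term
    K[i] = -Qux / Quu                # feedback gain
    Vx = Qx - Qux * Qu / Quu
    Vxx = Qxx - Qux * Qux / Quu

# Forward pass: roll out the dynamics with the updated controls.
x_new, u_new = np.zeros(N + 1), np.zeros(N)
x_new[0] = x_nom[0]
for i in range(N):
    u_new[i] = u_nom[i] + k[i] + K[i] * (x_new[i] - x_nom[i])
    x_new[i + 1] = a * x_new[i] + b * u_new[i]
```

For a general nonlinear problem this pair of passes would be repeated, re-linearizing around each new nominal trajectory, until the cost stops decreasing.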
Regularization and line-search
Differential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.[6][7] Regularization in the DDP context means ensuring that the Q_{\mathbf{u}\mathbf{u}} matrix in (4) is positive definite. Line-search in DDP amounts to scaling the open-loop control modification \mathbf{k} by some 0 < \alpha \le 1.
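Both fixes can be sketched with assumed shapes as follows: a Levenberg-Marquardt-style shift \mu I keeps Q_uu positive definite, and the step length \alpha scales only the open-loop term:

```python
import numpy as np

def regularized_gains(Quu, Qu, Qux, mu=1e-6):
    """Compute (k, K) from Quu + mu*I, increasing mu until the shifted
    matrix is positive definite (checked via a Cholesky attempt)."""
    n = Quu.shape[0]
    while True:
        Quu_reg = Quu + mu * np.eye(n)
        try:
            np.linalg.cholesky(Quu_reg)   # succeeds iff positive definite
            break
        except np.linalg.LinAlgError:
            mu *= 10.0
    k = -np.linalg.solve(Quu_reg, Qu)
    K = -np.linalg.solve(Quu_reg, Qux)
    return k, K

def line_searched_control(u_nom, k, K, dx, alpha=1.0):
    # Only the open-loop modification k is scaled by alpha; the
    # feedback term K dx is left untouched.
    return u_nom + alpha * k + K @ dx

Quu = np.diag([-1.5, 2.0])                # indefinite on purpose
k, K = regularized_gains(Quu, np.ones(2), np.zeros((2, 2)))
u_half = line_searched_control(np.zeros(2), k, K, np.zeros(2), alpha=0.5)
```

In practice \mu is adapted between iterations and \alpha is backtracked until the forward pass actually reduces the cost; the snippet only shows where the two parameters enter.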
Monte Carlo version
Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.[8] [9] [10] It is based on treating the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. In this way the quantities of DDP can be matched to the statistics of a multidimensional normal distribution, and the statistics can be recomputed from sampled trajectories without differentiation.
Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.[11] This creates a link between differential dynamic programming and path integral control,[12] which is a framework of stochastic optimal control.
Constrained problems
Interior Point Differential dynamic programming (IPDDP) is an interior-point method generalization of DDP that can address the optimal control problem with nonlinear state and input constraints.[13]
Notes and References
- Mayne, D. Q. (1966). "A second-order gradient method of optimizing non-linear discrete time systems". International Journal of Control 3: 85–95. doi:10.1080/00207176608921369.
- Jacobson, David H.; Mayne, David Q. (1970). Differential Dynamic Programming. New York: American Elsevier Pub. Co. ISBN 978-0-444-00070-5.
- de O. Pantoja, J. F. A. (1988). "Differential dynamic programming and Newton's method". International Journal of Control 47 (5): 1539–1553. doi:10.1080/00207178808906114.
- Liao, L. Z.; Shoemaker, C. A. (1992). "Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems". Cornell University. hdl:1813/5474.
- Morimoto, J.; Zeglin, G.; Atkeson, C. G. (2003). "Minimax differential dynamic programming: Application to a biped walking robot". Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) 2: 1927–1932.
- Liao, L. Z.; Shoemaker, C. A. (1991). "Convergence in unconstrained discrete-time differential dynamic programming". IEEE Transactions on Automatic Control 36 (6): 692. doi:10.1109/9.86943.
- Tassa, Y. (2011). Theory and implementation of bio-mimetic motor controllers. Hebrew University. Archived at https://web.archive.org/web/20160304023026/http://icnc.huji.ac.il/phd/theses/files/YuvalTassa.pdf.
- "Sampled differential dynamic programming". 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). doi:10.1109/IROS.2016.7759229.
- Rajamäki, Joose; Hämäläinen, Perttu (June 2018). "Regularizing Sampled Differential Dynamic Programming". 2018 Annual American Control Conference (ACC): 2182–2189. doi:10.23919/ACC.2018.8430799.
- Rajamäki, Joose (2018). Random Search Algorithms for Optimal Control. Aalto University. ISBN 978-952-60-8156-4.
- Lefebvre, Tom; Crevecoeur, Guillaume (July 2019). "Path Integral Policy Improvement with Differential Dynamic Programming". 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM): 739–745. doi:10.1109/AIM.2019.8868359.
- Theodorou, Evangelos; Buchli, Jonas; Schaal, Stefan (May 2010). "Reinforcement learning of motor skills in high dimensions: A path integral approach". 2010 IEEE International Conference on Robotics and Automation: 2397–2403. doi:10.1109/ROBOT.2010.5509336.
- Pavlov, Andrei; Shames, Iman; Manzie, Chris (2020). "Interior Point Differential Dynamic Programming". arXiv:2004.12710 [math.OC].