Differential dynamic programming explained

Differential dynamic programming (DDP) is an optimal control algorithm of the trajectory optimization class. The algorithm was introduced in 1966 by Mayne[1] and subsequently analysed in Jacobson and Mayne's eponymous book.[2] The algorithm uses locally-quadratic models of the dynamics and cost functions, and displays quadratic convergence. It is closely related to Pantoja's step-wise Newton's method.[3] [4]

Finite-horizon discrete-time problems

The dynamics

x_{i+1} = f(x_i, u_i)

describe the evolution of the state x_i given the control u_i from time i to time i+1. The total cost J_0 is the sum of running costs \ell and final cost \ell_f, incurred when starting from state x and applying the control sequence U \equiv \{u_0, u_1, \dots, u_{N-1}\} until the horizon is reached:

J_0(x, U) = \sum_{i=0}^{N-1} \ell(x_i, u_i) + \ell_f(x_N),

where x_0 \equiv x, and the x_i for i > 0 are given by the dynamics above. The solution of the optimal control problem is the minimizing control sequence

U^*(x) \equiv \operatorname{argmin}_U J_0(x, U).

Trajectory optimization means finding U^*(x) for a particular x_0, rather than for all possible initial states.
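As a concrete illustration, the following is a minimal sketch of evaluating J_0 by rolling out a control sequence through the dynamics. The damped point-mass dynamics and quadratic costs here are placeholder assumptions chosen purely for illustration, not part of DDP itself.

    import numpy as np

    # Placeholder problem (assumptions for illustration only):
    # a damped point mass with quadratic running and final costs.
    def f(x, u, dt=0.05):
        pos, vel = x
        return np.array([pos + dt * vel, vel + dt * (u[0] - 0.1 * vel)])

    def running_cost(x, u):
        return 0.5 * (x @ x) + 0.05 * (u @ u)      # \ell(x_i, u_i)

    def final_cost(x):
        return 5.0 * (x @ x)                       # \ell_f(x_N)

    def total_cost(x0, U):
        """J_0(x, U): roll the controls through the dynamics and sum the costs."""
        x, J = np.array(x0, dtype=float), 0.0
        for u in U:                                # i = 0, ..., N-1
            J += running_cost(x, u)
            x = f(x, u)                            # x_{i+1} = f(x_i, u_i)
        return J + final_cost(x)                   # add the final cost at x_N

    U = [np.zeros(1) for _ in range(50)]           # a nominal control sequence
    print(total_cost([1.0, 0.0], U))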

Dynamic programming

Let U_i be the partial control sequence U_i \equiv \{u_i, u_{i+1}, \dots, u_{N-1}\} and define the cost-to-go J_i as the partial sum of costs from i to N:

J_i(x, U_i) = \sum_{j=i}^{N-1} \ell(x_j, u_j) + \ell_f(x_N).

The optimal cost-to-go or value function at time i is the cost-to-go given the minimizing control sequence:

V(x, i) \equiv \min_{U_i} J_i(x, U_i).

Setting V(x, N) \equiv \ell_f(x_N), the dynamic programming principle reduces the minimization over an entire sequence of controls to a sequence of minimizations over a single control, proceeding backwards in time:

V(x, i) = \min_u \left[ \ell(x, u) + V(f(x, u), i+1) \right].

This is the Bellman equation.
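The recursion is easiest to see on a finite problem. Below is a toy sketch over a small discretized state and control set; the random transition and cost tables are assumptions made purely for illustration.

    import numpy as np

    # Toy tabular problem: states 0..S-1, controls 0..A-1,
    # deterministic next-state table and cost tables (illustrative only).
    S, A, N = 5, 3, 10
    rng = np.random.default_rng(0)
    nxt   = rng.integers(0, S, size=(S, A))   # f(x, u): index of the next state
    cost  = rng.random((S, A))                # \ell(x, u)
    final = rng.random(S)                     # \ell_f(x)

    V = np.zeros((N + 1, S))
    V[N] = final                              # V(x, N) = \ell_f(x_N)
    policy = np.zeros((N, S), dtype=int)
    for i in range(N - 1, -1, -1):            # proceed backwards in time
        # Bellman equation: V(x, i) = min_u [ \ell(x, u) + V(f(x, u), i+1) ]
        Q = cost + V[i + 1][nxt]              # shape (S, A)
        policy[i] = np.argmin(Q, axis=1)      # minimizing control at each state
        V[i] = np.min(Q, axis=1)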

Differential dynamic programming

DDP proceeds by iteratively performing a backward pass on the nominal trajectory to generate a new control sequence, and then a forward pass to compute and evaluate a new nominal trajectory. We begin with the backward pass. If

\ell(x, u) + V(f(x, u), i+1)

is the argument of the \min operator in the Bellman equation above, let Q be the variation of this quantity around the i-th (x, u) pair:

Q(\delta x, \delta u) \equiv \ell(x + \delta x, u + \delta u) + V(f(x + \delta x, u + \delta u), i+1) - \ell(x, u) - V(f(x, u), i+1)

and expand to second order:

Q(\delta x, \delta u) \approx \frac{1}{2} \begin{bmatrix} 1 \\ \delta x \\ \delta u \end{bmatrix}^{\mathsf T} \begin{bmatrix} 0 & Q_x^{\mathsf T} & Q_u^{\mathsf T} \\ Q_x & Q_{xx} & Q_{xu} \\ Q_u & Q_{ux} & Q_{uu} \end{bmatrix} \begin{bmatrix} 1 \\ \delta x \\ \delta u \end{bmatrix}.

The Q notation used here is a variant of the notation of Morimoto, where subscripts denote differentiation in denominator layout.[5] Dropping the index i for readability and using primes to denote the next time-step, V' \equiv V(i+1), the expansion coefficients are

Q_x = \ell_x + f_x^{\mathsf T} V'_x
Q_u = \ell_u + f_u^{\mathsf T} V'_x
Q_{xx} = \ell_{xx} + f_x^{\mathsf T} V'_{xx} f_x + V'_x \cdot f_{xx}
Q_{uu} = \ell_{uu} + f_u^{\mathsf T} V'_{xx} f_u + V'_x \cdot f_{uu}
Q_{ux} = \ell_{ux} + f_u^{\mathsf T} V'_{xx} f_x + V'_x \cdot f_{ux}.

The last terms in the last three equations denote contraction of a vector with a tensor. Minimizing the quadratic approximation with respect to \delta u we have

\delta u^* = \operatorname{argmin}_{\delta u} Q(\delta x, \delta u) = -Q_{uu}^{-1}(Q_u + Q_{ux}\,\delta x),

giving an open-loop term k = -Q_{uu}^{-1} Q_u and a feedback gain term K = -Q_{uu}^{-1} Q_{ux}. Plugging this result back into the quadratic expansion of Q, we now have a quadratic model of the value at time i:

\Delta V(i) = -\tfrac{1}{2} Q_u^{\mathsf T} Q_{uu}^{-1} Q_u
V_x(i) = Q_x - Q_{xu} Q_{uu}^{-1} Q_u
V_{xx}(i) = Q_{xx} - Q_{xu} Q_{uu}^{-1} Q_{ux}.
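As an illustration, below is a minimal sketch of a single step of this computation in the iLQR (Gauss-Newton) style, which drops the second-order dynamics terms f_{xx}, f_{ux}, f_{uu}; the derivative arrays passed in are assumed to have been computed elsewhere.

    import numpy as np

    def backward_step(l_x, l_u, l_xx, l_ux, l_uu, f_x, f_u, V_x, V_xx):
        """One step of the backward pass, given the derivatives of the running
        cost and dynamics at (x, u) and the quadratic value model (V_x, V_xx)
        of the next time-step. Full DDP would also add the tensor terms."""
        Q_x  = l_x  + f_x.T @ V_x
        Q_u  = l_u  + f_u.T @ V_x
        Q_xx = l_xx + f_x.T @ V_xx @ f_x
        Q_ux = l_ux + f_u.T @ V_xx @ f_x
        Q_uu = l_uu + f_u.T @ V_xx @ f_u

        Q_uu_inv = np.linalg.inv(Q_uu)           # assumes Q_uu is positive definite
        k = -Q_uu_inv @ Q_u                      # open-loop term
        K = -Q_uu_inv @ Q_ux                     # feedback gain

        # Quadratic model of the value at the current time-step
        dV       = -0.5 * Q_u @ Q_uu_inv @ Q_u
        V_x_new  = Q_x  - Q_ux.T @ Q_uu_inv @ Q_u
        V_xx_new = Q_xx - Q_ux.T @ Q_uu_inv @ Q_ux
        return k, K, dV, V_x_new, V_xx_new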

Recursively computing the local quadratic models of V(i) and the control modifications \{k(i), K(i)\}, from i = N-1 down to i = 1, constitutes the backward pass. As above, the value is initialized with V(x, N) \equiv \ell_f(x_N). Once the backward pass is completed, a forward pass computes a new trajectory:

\hat{x}(1) = x(1)
\hat{u}(i) = u(i) + k(i) + K(i)(\hat{x}(i) - x(i))
\hat{x}(i+1) = f(\hat{x}(i), \hat{u}(i)).

The backward passes and forward passes are iterated until convergence.
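A matching sketch of the forward pass, assuming f is the dynamics function and that x_nom, u_nom, k, K hold the nominal trajectory and the gains produced by a backward pass such as the one above:

    import numpy as np

    def forward_pass(f, x_nom, u_nom, k, K):
        """Roll out the updated controls along the new trajectory.
        x_nom: (N+1, n) nominal states, u_nom: (N, m) nominal controls,
        k: (N, m) open-loop terms, K: (N, m, n) feedback gains."""
        N = len(u_nom)
        x_new = np.zeros_like(x_nom)
        u_new = np.zeros_like(u_nom)
        x_new[0] = x_nom[0]                      # the new trajectory starts at the same state
        for i in range(N):
            # u_hat(i) = u(i) + k(i) + K(i) (x_hat(i) - x(i))
            u_new[i] = u_nom[i] + k[i] + K[i] @ (x_new[i] - x_nom[i])
            x_new[i + 1] = f(x_new[i], u_new[i])
        return x_new, u_new

An outer loop alternates these passes, accepting the new trajectory when it lowers the total cost.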

Regularization and line-search

Differential dynamic programming is a second-order algorithm like Newton's method. It therefore takes large steps toward the minimum and often requires regularization and/or line-search to achieve convergence.[6] [7] Regularization in the DDP context means ensuring that the Q_{uu} matrix in the backward pass is positive definite. Line-search in DDP amounts to scaling the open-loop control modification k by some 0 < \alpha < 1.
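Both ideas are sketched below under the assumption of a Levenberg-Marquardt-style diagonal shift of Q_{uu} (other regularization schemes exist); the line-search scales only the open-loop term in the forward pass.

    import numpy as np

    def regularized_gains(Q_u, Q_ux, Q_uu, mu=1e-6):
        """Compute k and K after shifting Q_uu by mu * I,
        increasing mu until the shifted matrix is positive definite."""
        m = Q_uu.shape[0]
        while True:
            Q_uu_reg = Q_uu + mu * np.eye(m)
            try:
                np.linalg.cholesky(Q_uu_reg)   # raises LinAlgError if not positive definite
                break
            except np.linalg.LinAlgError:
                mu *= 10.0                     # increase the shift and retry
        k = -np.linalg.solve(Q_uu_reg, Q_u)    # open-loop term
        K = -np.linalg.solve(Q_uu_reg, Q_ux)   # feedback gain
        return k, K

    # Line-search: in the forward pass, scale only the open-loop term,
    #     u_new[i] = u_nom[i] + alpha * k[i] + K[i] @ (x_new[i] - x_nom[i]),
    # and shrink alpha until the total cost of the new trajectory decreases.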

Monte Carlo version

Sampled differential dynamic programming (SaDDP) is a Monte Carlo variant of differential dynamic programming.[8] [9] [10] It is based on treating the quadratic cost of differential dynamic programming as the energy of a Boltzmann distribution. In this way the quantities of DDP can be matched to the statistics of a multidimensional normal distribution, and those statistics can be recomputed from sampled trajectories without differentiation.
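As a loose illustration of the underlying idea (this is not the SaDDP algorithm itself), the local gradient and Hessian of a cost can be estimated from sampled perturbations by least squares, without differentiating the cost:

    import numpy as np

    # Estimate the terms of a local quadratic model,
    #   cost(u0 + du) ~ c + g.T du + 0.5 du.T H du,
    # purely from sampled perturbations (derivative-free).
    def fit_local_quadratic(cost, u0, sigma=0.1, n_samples=300, seed=0):
        rng = np.random.default_rng(seed)
        m = u0.size
        dU = rng.normal(scale=sigma, size=(n_samples, m))      # sampled perturbations
        y = np.array([cost(u0 + du) for du in dU])             # sampled costs
        quad = np.stack([0.5 * np.outer(du, du).ravel() for du in dU])
        A = np.hstack([np.ones((n_samples, 1)), dU, quad])     # design matrix
        theta, *_ = np.linalg.lstsq(A, y, rcond=None)
        g = theta[1:1 + m]                                     # gradient estimate
        H = theta[1 + m:].reshape(m, m)
        H = 0.5 * (H + H.T)                                    # symmetrized Hessian estimate
        return g, H

    # Example: approximately recover the gradient and Hessian of a known quadratic.
    g, H = fit_local_quadratic(lambda u: u @ u + u[0], np.zeros(2))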

Sampled differential dynamic programming has been extended to Path Integral Policy Improvement with Differential Dynamic Programming.[11] This creates a link between differential dynamic programming and path integral control,[12] which is a framework of stochastic optimal control.

Constrained problems

Interior Point Differential Dynamic Programming (IPDDP) is an interior-point method generalization of DDP that can address the optimal control problem with nonlinear state and input constraints.[13]


Notes and References

  1. Mayne, D. Q. (1966). "A second-order gradient method of optimizing non-linear discrete time systems". Int J Control. 3: 85–95. doi:10.1080/00207176608921369.
  2. Mayne, David Q.; Jacobson, David H. (1970). Differential dynamic programming. New York: American Elsevier Pub. Co. ISBN 978-0-444-00070-5.
  3. de O. Pantoja, J. F. A. (1988). "Differential dynamic programming and Newton's method". International Journal of Control. 47 (5): 1539–1553. doi:10.1080/00207178808906114. ISSN 0020-7179.
  4. Liao, L. Z.; Shoemaker, C. A. (1992). "Advantages of differential dynamic programming over Newton's method for discrete-time optimal control problems". Cornell University. hdl:1813/5474.
  5. Morimoto, J.; Zeglin, G.; Atkeson, C. G. (2003). "Minimax differential dynamic programming: Application to a biped walking robot". Intelligent Robots and Systems, 2003 (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on. Vol. 2, pp. 1927–1932.
  6. Liao, L. Z.; Shoemaker, C. A. (1991). "Convergence in unconstrained discrete-time differential dynamic programming". IEEE Transactions on Automatic Control. 36 (6): 692. doi:10.1109/9.86943.
  7. Tassa, Y. (2011). Theory and implementation of bio-mimetic motor controllers. Hebrew University. Archived 2016-03-04; retrieved 2012-02-27. https://web.archive.org/web/20160304023026/http://icnc.huji.ac.il/phd/theses/files/YuvalTassa.pdf
  8. "Sampled differential dynamic programming". 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). doi:10.1109/IROS.2016.7759229. S2CID 1338737.
  9. Rajamäki, Joose; Hämäläinen, Perttu (June 2018). "Regularizing Sampled Differential Dynamic Programming". 2018 Annual American Control Conference (ACC). pp. 2182–2189. doi:10.23919/ACC.2018.8430799. S2CID 243932441.
  10. Rajamäki, Joose (2018). Random Search Algorithms for Optimal Control. Aalto University. ISBN 978-952-60-8156-4. ISSN 1799-4942.
  11. Lefebvre, Tom; Crevecoeur, Guillaume (July 2019). "Path Integral Policy Improvement with Differential Dynamic Programming". 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM). pp. 739–745. doi:10.1109/AIM.2019.8868359. hdl:1854/LU-8623968. ISBN 978-1-7281-2493-3. S2CID 204816072. https://ieeexplore.ieee.org/document/8868359
  12. Theodorou, Evangelos; Buchli, Jonas; Schaal, Stefan (May 2010). "Reinforcement learning of motor skills in high dimensions: A path integral approach". 2010 IEEE International Conference on Robotics and Automation. pp. 2397–2403. doi:10.1109/ROBOT.2010.5509336. ISBN 978-1-4244-5038-1. S2CID 15116370. https://ieeexplore.ieee.org/document/5509336
  13. Pavlov, Andrei; Shames, Iman; Manzie, Chris (2020). "Interior Point Differential Dynamic Programming". arXiv:2004.12710 [math.OC].