Stochastic dynamic programming explained

Originally introduced by Richard E. Bellman, stochastic dynamic programming is a technique for modelling and solving problems of decision making under uncertainty. Closely related to stochastic programming and dynamic programming, stochastic dynamic programming represents the problem under scrutiny in the form of a Bellman equation. The aim is to compute a policy prescribing how to act optimally in the face of uncertainty.

A motivating example: Gambling game

A gambler has $2. She is allowed to play a game of chance 4 times, and her goal is to maximize her probability of ending up with at least $6. If the gambler bets $b on a play of the game, then with probability 0.4 she wins the game, recoups the initial bet, and increases her capital position by $b; with probability 0.6, she loses the bet amount $b; all plays are pairwise independent. On any play of the game, the gambler may not bet more money than she has available at the beginning of that play.[1]

Stochastic dynamic programming can be employed to model this problem and determine a betting strategy that, for instance, maximizes the gambler's probability of attaining a wealth of at least $6 by the end of the betting horizon.

Note that if there is no limit to the number of games that can be played, the problem becomes a variant of the well known St. Petersburg paradox.

Formal background

Consider a discrete system defined on $n$ stages in which each stage $t=1,\ldots,n$ is characterized by

- an initial state $s_t \in S_t$, where $S_t$ is the set of feasible states at the beginning of stage $t$;
- a decision variable $x_t \in X_t$, where $X_t$ is the set of feasible actions at stage $t$ – note that $X_t$ may be a function of the initial state $s_t$;
- an immediate cost/reward function $p_t(s_t,x_t)$, representing the cost/reward at stage $t$ if $s_t$ is the initial state and $x_t$ the action selected;
- a state transition function $g_t(s_t,x_t)$ that leads the system towards state $s_{t+1}=g_t(s_t,x_t)$.

Let $f_t(s_t)$ represent the optimal cost/reward obtained by following an optimal policy over stages $t,t+1,\ldots,n$. Without loss of generality, in what follows we will consider a reward maximisation setting. In deterministic dynamic programming one usually deals with functional equations taking the following structure

$$f_t(s_t)=\max_{x_t\in X_t}\{p_t(s_t,x_t)+f_{t+1}(s_{t+1})\}$$

where $s_{t+1}=g_t(s_t,x_t)$ and the boundary condition of the system is

$$f_n(s_n)=\max_{x_n\in X_n}\{p_n(s_n,x_n)\}.$$

The aim is to determine the set of optimal actions that maximise $f_1(s_1)$. Given the current state $s_t$ and the current action $x_t$, we know with certainty the reward secured during the current stage and – thanks to the state transition function $g_t$ – the future state towards which the system transitions.

In practice, however, even if we know the state of the system at the beginning of the current stage as well as the decision taken, the state of the system at the beginning of the next stage and the current period reward are often random variables that can be observed only at the end of the current stage.

Stochastic dynamic programming deals with problems in which the current period reward and/or the next period state are random, i.e. with multi-stage stochastic systems. The decision maker's goal is to maximise expected (discounted) reward over a given planning horizon.

In their most general form, stochastic dynamic programs deal with functional equations taking the following structure

$$f_t(s_t)=\max_{x_t\in X_t(s_t)}\left\{(\text{expected reward during stage } t \mid s_t,x_t)+\alpha\sum_{s_{t+1}}\Pr(s_{t+1}\mid s_t,x_t)f_{t+1}(s_{t+1})\right\}$$

where

- $f_t(s_t)$ is the maximum expected reward that can be attained during stages $t,t+1,\ldots,n$, given state $s_t$ at the beginning of stage $t$;
- $x_t$ belongs to the set $X_t(s_t)$ of feasible actions at stage $t$ given initial state $s_t$;
- $\alpha$ is the discount factor;
- $\Pr(s_{t+1}\mid s_t,x_t)$ is the conditional probability that the state at the end of stage $t$ is $s_{t+1}$ given current state $s_t$ and selected action $x_t$.
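As an illustration, this functional equation can be evaluated by a backward pass over a finite state space. The following is a minimal Python sketch, assuming finite state and action sets supplied as plain Python collections; the names (solve_sdp, states, actions, reward, transition) are illustrative rather than a standard API.

from typing import Callable, Dict, Iterable, List, Tuple

def solve_sdp(
    n: int,                                           # number of stages
    states: Callable[[int], Iterable],                # t -> S_t
    actions: Callable[[int, object], Iterable],       # (t, s_t) -> X_t(s_t)
    reward: Callable[[int, object, object], float],   # expected reward during stage t
    transition: Callable[[int, object, object], List[Tuple[object, float]]],
    alpha: float = 1.0,                               # discount factor
):
    # transition(t, s_t, x_t) returns pairs (s_{t+1}, Pr(s_{t+1} | s_t, x_t))
    f: Dict[Tuple[int, object], float] = {}           # f[t, s]: optimal expected reward
    policy: Dict[Tuple[int, object], object] = {}     # policy[t, s]: an optimal action
    for t in range(n, 0, -1):                         # stages n, n-1, ..., 1
        for s in states(t):
            best_value, best_action = float("-inf"), None
            for x in actions(t, s):
                value = reward(t, s, x)
                if t < n:                             # no future term at the boundary stage n
                    value += alpha * sum(p * f[t + 1, s_next]
                                         for s_next, p in transition(t, s, x))
                if value > best_value:
                    best_value, best_action = value, x
            f[t, s], policy[t, s] = best_value, best_action
    return f, policy

The guard on the last stage reflects the boundary condition introduced above: at stage $n$ only the expected stage reward is maximised.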

Markov decision processes represent a special class of stochastic dynamic programs in which the underlying stochastic process is a stationary process that features the Markov property.

Gambling game as a stochastic dynamic program

The gambling game can be formulated as a stochastic dynamic program as follows: there are $n=4$ games (i.e. stages) in the planning horizon;

- the state $s$ in period $t$ represents the initial wealth at the beginning of period $t$;
- the action given state $s$ in period $t$ is the bet amount $b$;
- the transition probability $p^{a}_{i,j}$ from state $i$ to state $j$ when action $a$ is taken in state $i$ is easily derived from the probability of winning (0.4) or losing (0.6) a game.
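Explicitly, if the gambler's wealth at the beginning of a period is $s$ and she bets $b$, her wealth at the beginning of the next period is $s+b$ with probability 0.4 and $s-b$ with probability 0.6:

$$\Pr(s_{t+1}=s+b \mid s_t=s,\,b)=0.4, \qquad \Pr(s_{t+1}=s-b \mid s_t=s,\,b)=0.6.$$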

Let

ft(s)

be the probability that, by the end of game 4, the gambler has at least $6, given that she has $

s

at the beginning of game

t

.

b

is taken in state

s

is given by the expected value

pt(s,b)=0.4ft+1(s+b)+0.6ft+1(s-b)

.
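For instance, using the stage-4 boundary values derived below ($f_4(3)=0.4$ and $f_4(1)=0$), betting $1 with a wealth of $2 at the beginning of game 3 yields

$$p_3(2,1)=0.4f_4(3)+0.6f_4(1)=0.4(0.4)+0.6(0)=0.16,$$

which is the value appearing in the row $b=1$ of the computation of $f_3(2)$ in the worked example below.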

To derive the functional equation, define $b_t(s)$ as a bet that attains $f_t(s)$. Then, at the beginning of game $t=4$:

- if $s<3$, it is impossible to attain the goal, i.e. $f_4(s)=0$ for $s<3$;
- if $s\geq 6$, the goal is attained, i.e. $f_4(s)=1$ for $s\geq 6$;
- if $3\leq s\leq 5$, the gambler should bet enough to attain the goal, i.e. $f_4(s)=0.4$ for $3\leq s\leq 5$.
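These boundary conditions can be encoded compactly; the following is a minimal Python sketch, assuming integer wealth:

def f4(s: int) -> float:
    # boundary condition f_4(s) for a target wealth of $6
    if s >= 6:
        return 1.0  # the goal is already attained
    if s >= 3:
        return 0.4  # bet enough to reach $6 and win with probability 0.4
    return 0.0      # the goal cannot be attained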

For $t<4$, the functional equation is

$$f_t(s)=\max_{b_t(s)}\{0.4f_{t+1}(s+b)+0.6f_{t+1}(s-b)\},$$

where $b_t(s)$ ranges in $0,\ldots,s$; the aim is to find $f_1(2)$.

Given the functional equation, an optimal betting policy can be obtained via forward recursion or backward recursion algorithms, as outlined below.

Solution methods

Stochastic dynamic programs can be solved to optimality by using backward recursion or forward recursion algorithms. Memoization is typically employed to enhance performance. However, like its deterministic counterpart, stochastic dynamic programming suffers from the curse of dimensionality. For this reason, approximate solution methods are typically employed in practical applications.

Backward recursion

Given a bounded state space, backward recursion begins by tabulating $f_n(k)$ for every possible state $k$ belonging to the final stage $n$. Once these values are tabulated, together with the associated optimal state-dependent actions $x_n(k)$, it is possible to move to stage $n-1$ and tabulate $f_{n-1}(k)$ for all possible states belonging to stage $n-1$. The process continues by considering, in a backward fashion, all remaining stages up to the first one. Once this tabulation process is complete, $f_1(s)$ – the value of an optimal policy given initial state $s$ – as well as the associated optimal action $x_1(s)$ can be easily retrieved from the table. Since the computation proceeds in a backward fashion, it is clear that backward recursion may lead to computation of a large number of states that are not necessary for the computation of $f_1(s)$.

Example: Gambling game
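The following is a minimal Python sketch of backward recursion (tabulation) applied to the gambling game instance above; it assumes integer wealth and bounds the state space at $32, the largest wealth reachable starting from $2 in four games.

N = 4                  # number of games (stages)
TARGET = 6             # target wealth
P_WIN, P_LOSE = 0.4, 0.6
MAX_WEALTH = 32        # largest wealth reachable starting from $2

# f[t][s]: maximum probability of ending with at least TARGET dollars,
# given wealth s at the beginning of game t; stage N + 1 marks the end of the horizon.
f = [[0.0] * (MAX_WEALTH + 1) for _ in range(N + 2)]
best_bet = [[0] * (MAX_WEALTH + 1) for _ in range(N + 2)]

# boundary condition: after the last game, success if and only if wealth >= TARGET
for s in range(MAX_WEALTH + 1):
    f[N + 1][s] = 1.0 if s >= TARGET else 0.0

# tabulate stages N, N - 1, ..., 1, considering every state and every feasible bet
for t in range(N, 0, -1):
    for s in range(MAX_WEALTH + 1):
        for b in range(0, s + 1):
            value = (P_WIN * f[t + 1][min(s + b, MAX_WEALTH)]
                     + P_LOSE * f[t + 1][s - b])
            if value > f[t][s]:
                f[t][s], best_bet[t][s] = value, b

print(f[1][2], best_bet[1][2])   # optimal success probability (0.1984) and an optimal first bet

Running this sketch reproduces the boundary values $f_4(\cdot)$ and the stage values $f_3(\cdot)$, $f_2(\cdot)$ and $f_1(2)=0.1984$ tabulated in the forward recursion example below.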

Forward recursion

Given the initial state $s$ of the system at the beginning of period 1, forward recursion computes $f_1(s)$ by progressively expanding the functional equation (forward pass). This involves recursive calls for all $f_{t+1}(\cdot),f_{t+2}(\cdot),\ldots$ that are necessary for computing a given $f_t(\cdot)$. The value of an optimal policy and its structure are then retrieved via a backward pass in which these suspended recursive calls are resolved. A key difference from backward recursion is the fact that $f_t$ is computed only for states that are relevant for the computation of $f_1(s)$. Memoization is employed to avoid recomputation of states that have already been considered.

Example: Gambling game

We shall illustrate forward recursion in the context of the Gambling game instance previously discussed. We begin the forward pass by considering

$$f_1(2)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }1,2,3,4\\ \hline 0 & 0.4f_2(2+0)+0.6f_2(2-0)\\ 1 & 0.4f_2(2+1)+0.6f_2(2-1)\\ 2 & 0.4f_2(2+2)+0.6f_2(2-2)\\ \end{array}\right.$$

At this point we have not yet computed $f_2(4),f_2(3),f_2(2),f_2(1),f_2(0)$, which are needed to compute $f_1(2)$; we proceed and compute these items. Note that $f_2(2+0)=f_2(2-0)=f_2(2)$, therefore one can leverage memoization and perform the necessary computations only once.

Computation of $f_2(4),f_2(3),f_2(2),f_2(1),f_2(0)$

$$f_2(0)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }2,3,4\\ \hline 0 & 0.4f_3(0+0)+0.6f_3(0-0)\\ \end{array}\right.$$

$$f_2(1)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }2,3,4\\ \hline 0 & 0.4f_3(1+0)+0.6f_3(1-0)\\ 1 & 0.4f_3(1+1)+0.6f_3(1-1)\\ \end{array}\right.$$

$$f_2(2)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }2,3,4\\ \hline 0 & 0.4f_3(2+0)+0.6f_3(2-0)\\ 1 & 0.4f_3(2+1)+0.6f_3(2-1)\\ 2 & 0.4f_3(2+2)+0.6f_3(2-2)\\ \end{array}\right.$$

$$f_2(3)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }2,3,4\\ \hline 0 & 0.4f_3(3+0)+0.6f_3(3-0)\\ 1 & 0.4f_3(3+1)+0.6f_3(3-1)\\ 2 & 0.4f_3(3+2)+0.6f_3(3-2)\\ 3 & 0.4f_3(3+3)+0.6f_3(3-3)\\ \end{array}\right.$$

$$f_2(4)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }2,3,4\\ \hline 0 & 0.4f_3(4+0)+0.6f_3(4-0)\\ 1 & 0.4f_3(4+1)+0.6f_3(4-1)\\ 2 & 0.4f_3(4+2)+0.6f_3(4-2)\\ \end{array}\right.$$

We have now computed $f_2(k)$ for all $k$ that are needed to compute $f_1(2)$. However, this has led to additional suspended recursions involving $f_3(4),f_3(3),f_3(2),f_3(1),f_3(0)$. We proceed and compute these values.

Computation of $f_3(4),f_3(3),f_3(2),f_3(1),f_3(0)$

$$f_3(0)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }3,4\\ \hline 0 & 0.4f_4(0+0)+0.6f_4(0-0)\\ \end{array}\right.$$

$$f_3(1)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }3,4\\ \hline 0 & 0.4f_4(1+0)+0.6f_4(1-0)\\ 1 & 0.4f_4(1+1)+0.6f_4(1-1)\\ \end{array}\right.$$

$$f_3(2)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }3,4\\ \hline 0 & 0.4f_4(2+0)+0.6f_4(2-0)\\ 1 & 0.4f_4(2+1)+0.6f_4(2-1)\\ 2 & 0.4f_4(2+2)+0.6f_4(2-2)\\ \end{array}\right.$$

$$f_3(3)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }3,4\\ \hline 0 & 0.4f_4(3+0)+0.6f_4(3-0)\\ 1 & 0.4f_4(3+1)+0.6f_4(3-1)\\ 2 & 0.4f_4(3+2)+0.6f_4(3-2)\\ 3 & 0.4f_4(3+3)+0.6f_4(3-3)\\ \end{array}\right.$$

$$f_3(4)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }3,4\\ \hline 0 & 0.4f_4(4+0)+0.6f_4(4-0)\\ 1 & 0.4f_4(4+1)+0.6f_4(4-1)\\ 2 & 0.4f_4(4+2)+0.6f_4(4-2)\\ \end{array}\right.$$

$$f_3(5)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }3,4\\ \hline 0 & 0.4f_4(5+0)+0.6f_4(5-0)\\ 1 & 0.4f_4(5+1)+0.6f_4(5-1)\\ \end{array}\right.$$

Since stage 4 is the last stage in our system, the values $f_4(\cdot)$ represent boundary conditions that are easily computed as follows.

Boundary conditions

$$\begin{array}{ll} f_4(0)=0 & b_4(0)=0\\ f_4(1)=0 & b_4(1)=\{0,1\}\\ f_4(2)=0 & b_4(2)=\{0,1,2\}\\ f_4(3)=0.4 & b_4(3)=\{3\}\\ f_4(4)=0.4 & b_4(4)=\{2,3,4\}\\ f_4(5)=0.4 & b_4(5)=\{1,2,3,4,5\}\\ f_4(d)=1 & b_4(d)=\{0,\ldots,d-6\}\ \text{for}\ d\geq 6 \end{array}$$

At this point it is possible to proceed and recover the optimal policy and its value via a backward pass involving, at first, stage 3

Backward pass involving $f_3(\cdot)$

$$f_3(0)=\max\left\{\begin{array}{rr} b & \text{success probability in periods }3,4\\ \hline 0 & 0.4(0)+0.6(0)=0\\ \end{array}\right.$$

$$f_3(1)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }3,4 & \max\\ \hline 0 & 0.4(0)+0.6(0)=0 & \leftarrow b_3(1)=0\\ 1 & 0.4(0)+0.6(0)=0 & \leftarrow b_3(1)=1\\ \end{array}\right.$$

$$f_3(2)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }3,4 & \max\\ \hline 0 & 0.4(0)+0.6(0)=0\\ 1 & 0.4(0.4)+0.6(0)=0.16 & \leftarrow b_3(2)=1\\ 2 & 0.4(0.4)+0.6(0)=0.16 & \leftarrow b_3(2)=2\\ \end{array}\right.$$

$$f_3(3)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }3,4 & \max\\ \hline 0 & 0.4(0.4)+0.6(0.4)=0.4 & \leftarrow b_3(3)=0\\ 1 & 0.4(0.4)+0.6(0)=0.16\\ 2 & 0.4(0.4)+0.6(0)=0.16\\ 3 & 0.4(1)+0.6(0)=0.4 & \leftarrow b_3(3)=3\\ \end{array}\right.$$

$$f_3(4)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }3,4 & \max\\ \hline 0 & 0.4(0.4)+0.6(0.4)=0.4 & \leftarrow b_3(4)=0\\ 1 & 0.4(0.4)+0.6(0.4)=0.4 & \leftarrow b_3(4)=1\\ 2 & 0.4(1)+0.6(0)=0.4 & \leftarrow b_3(4)=2\\ \end{array}\right.$$

$$f_3(5)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }3,4 & \max\\ \hline 0 & 0.4(0.4)+0.6(0.4)=0.4\\ 1 & 0.4(1)+0.6(0.4)=0.64 & \leftarrow b_3(5)=1\\ \end{array}\right.$$

and, then, stage 2.

Backward pass involving $f_2(\cdot)$

$$f_2(0)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }2,3,4 & \max\\ \hline 0 & 0.4(0)+0.6(0)=0 & \leftarrow b_2(0)=0\\ \end{array}\right.$$

$$f_2(1)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }2,3,4 & \max\\ \hline 0 & 0.4(0)+0.6(0)=0\\ 1 & 0.4(0.16)+0.6(0)=0.064 & \leftarrow b_2(1)=1\\ \end{array}\right.$$

$$f_2(2)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }2,3,4 & \max\\ \hline 0 & 0.4(0.16)+0.6(0.16)=0.16 & \leftarrow b_2(2)=0\\ 1 & 0.4(0.4)+0.6(0)=0.16 & \leftarrow b_2(2)=1\\ 2 & 0.4(0.4)+0.6(0)=0.16 & \leftarrow b_2(2)=2\\ \end{array}\right.$$

$$f_2(3)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }2,3,4 & \max\\ \hline 0 & 0.4(0.4)+0.6(0.4)=0.4 & \leftarrow b_2(3)=0\\ 1 & 0.4(0.4)+0.6(0.16)=0.256\\ 2 & 0.4(0.64)+0.6(0)=0.256\\ 3 & 0.4(1)+0.6(0)=0.4 & \leftarrow b_2(3)=3\\ \end{array}\right.$$

$$f_2(4)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }2,3,4 & \max\\ \hline 0 & 0.4(0.4)+0.6(0.4)=0.4\\ 1 & 0.4(0.64)+0.6(0.4)=0.496 & \leftarrow b_2(4)=1\\ 2 & 0.4(1)+0.6(0.16)=0.496 & \leftarrow b_2(4)=2\\ \end{array}\right.$$

We finally recover the value $f_1(2)$ of an optimal policy:

$$f_1(2)=\max\left\{\begin{array}{rrr} b & \text{success probability in periods }1,2,3,4 & \max\\ \hline 0 & 0.4(0.16)+0.6(0.16)=0.16\\ 1 & 0.4(0.4)+0.6(0.064)=0.1984 & \leftarrow b_1(2)=1\\ 2 & 0.4(0.496)+0.6(0)=0.1984 & \leftarrow b_1(2)=2\\ \end{array}\right.$$

This is the value of an optimal policy for the problem at hand. Note that there are multiple optimal policies leading to the same optimal value $f_1(2)=0.1984$; for instance, in the first game one may bet either $1 or $2.

Python implementation. The one that follows is a complete Python implementation of this example.

from typing import List, Tuple
import functools


class memoize:
    def __init__(self, func):
        self.func = func
        self.memoized = {}
        self.method_cache = {}

    def __call__(self, *args):
        return self.cache_get(self.memoized, args, lambda: self.func(*args))

    def __get__(self, obj, objtype):
        return self.cache_get(
            self.method_cache,
            obj,
            lambda: self.__class__(functools.partial(self.func, obj)),
        )

    def cache_get(self, cache, key, func):
        try:
            return cache[key]
        except KeyError:
            cache[key] = func()
            return cache[key]

    def reset(self):
        self.memoized = {}
        self.method_cache = {}


class State:
    """the state of the gambler's ruin problem"""

    def __init__(self, t: int, wealth: float):
        """state constructor

        Arguments:
            t -- time period
            wealth -- initial wealth
        """
        self.t, self.wealth = t, wealth

    def __eq__(self, other):
        return self.__dict__ == other.__dict__

    def __str__(self):
        return str(self.t) + " " + str(self.wealth)

    def __hash__(self):
        return hash(str(self))


class GamblersRuin:
    def __init__(
        self,
        bettingHorizon: int,
        targetWealth: float,
        pmf: List[List[Tuple[int, float]]],
    ):
        """the gambler's ruin problem

        Arguments:
            bettingHorizon -- betting horizon
            targetWealth -- target wealth
            pmf -- probability mass function
        """

        # initialize instance variables
        self.bettingHorizon, self.targetWealth, self.pmf = (
            bettingHorizon,
            targetWealth,
            pmf,
        )

        # lambdas
        self.ag = lambda s: [
            i for i in range(0, min(self.targetWealth // 2, s.wealth) + 1)
        ]  # action generator
        self.st = lambda s, a, r: State(s.t + 1, s.wealth - a + a * r)  # state transition
        self.iv = (
            lambda s, a, r: 1 if s.wealth - a + a * r >= self.targetWealth else 0
        )  # immediate value function

        self.cache_actions = {}  # cache with optimal state/action pairs

    def f(self, wealth: float) -> float:
        s = State(0, wealth)
        return self._f(s)

    def q(self, t: int, wealth: float) -> float:
        s = State(t, wealth)
        return self.cache_actions[str(s)]

    @memoize
    def _f(self, s: State) -> float:
        # Forward recursion
        values = [
            sum(
                [
                    p[1]
                    * (
                        self._f(self.st(s, a, p[0]))
                        if s.t < self.bettingHorizon - 1
                        else self.iv(s, a, p[0])
                    )  # value function
                    for p in self.pmf[s.t]
                ]
            )  # bet realisations
            for a in self.ag(s)
        ]  # actions

        v = max(values)
        try:
            self.cache_actions[str(s)] = self.ag(s)[values.index(v)]  # store best action
        except ValueError:
            self.cache_actions[str(s)] = None
            print("Error in retrieving best action")
        return v  # return expected total cost


instance = {
    "bettingHorizon": 4,  # four games
    "targetWealth": 6,  # target wealth of $6
    "pmf": [[(0, 0.6), (2, 0.4)] for i in range(4)],
    # on each game the bet is lost (multiplier 0) with probability 0.6
    # or doubled (multiplier 2) with probability 0.4
}
gr, initial_wealth = GamblersRuin(**instance), 2

# f_1(x) is gambler's probability of attaining $targetWealth at the end of bettingHorizon
print("f_1(" + str(initial_wealth) + "): " + str(gr.f(initial_wealth)))

# Recover optimal action for period 2 when initial wealth at the beginning of period 2 is $1.
t, initial_wealth = 1, 1
print("b_" + str(t + 1) + "(" + str(initial_wealth) + "): " + str(gr.q(t, initial_wealth)))
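For the instance above, the script computes f_1(2) = 0.1984 (up to floating-point rounding) and b_2(1) = 1, in agreement with the values obtained in the tabulation above.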

Java implementation. GamblersRuin.java is a standalone Java 8 implementation of the above example.

Approximate dynamic programming

An introduction to approximate dynamic programming is provided by .


Notes and References

  1. This problem is adapted from W. L. Winston, Operations Research: Applications and Algorithms (7th Edition), Duxbury Press, 2003, chap. 19, example 3.