In mathematics, a Markov decision process (MDP) is a discrete-time stochastic control process. It provides a mathematical framework for modeling decision making in situations where outcomes are partly random and partly under the control of a decision maker. MDPs are useful for studying optimization problems solved via dynamic programming. MDPs were known at least as early as the 1950s;[1] a core body of research on Markov decision processes resulted from Ronald Howard's 1960 book, Dynamic Programming and Markov Processes.[2] They are used in many disciplines, including robotics, automatic control, economics and manufacturing. The name of MDPs comes from the Russian mathematician Andrey Markov as they are an extension of Markov chains.
At each time step, the process is in some state s, and the decision maker may choose any action a that is available in state s. The process responds at the next time step by randomly moving into a new state s' and giving the decision maker a corresponding reward R_a(s,s').

The probability that the process moves into its new state s' is influenced by the chosen action. Specifically, it is given by the state transition function P_a(s,s'). Thus, the next state s' depends on the current state s and the decision maker's action a. But given s and a, it is conditionally independent of all previous states and actions; in other words, the state transitions of an MDP satisfy the Markov property.
Markov decision processes are an extension of Markov chains; the difference is the addition of actions (allowing choice) and rewards (giving motivation). Conversely, if only one action exists for each state (e.g. "wait") and all rewards are the same (e.g. "zero"), a Markov decision process reduces to a Markov chain.
A Markov decision process is a 4-tuple (S, A, P_a, R_a), where:

- S is a set of states called the state space. The state space may be discrete or continuous, like the set of real numbers.
- A is a set of actions called the action space (alternatively, A_s is the set of actions available from state s). As with the states, this set may be discrete or continuous.
- P_a(s,s') is, on an intuitive level, the probability that action a in state s at time t will lead to state s' at time t+1. In general, the transition kernel is defined so that
  \Pr(s_{t+1} \in S' \mid s_t = s, a_t = a) = \int_{S'} P_a(s,s')\,ds',
  for every measurable S' \subseteq S. If the state space is discrete, the integral is taken with respect to the counting measure, so the expression simplifies to P_a(s,s') = \Pr(s_{t+1} = s' \mid s_t = s, a_t = a); if S \subseteq \mathbb{R}^d, the integral is usually taken with respect to the Lebesgue measure.
- R_a(s,s') is the immediate reward (or expected immediate reward) received after transitioning from state s to state s' due to action a.
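As a concrete illustration, the sketch below encodes a small finite MDP as plain Python dictionaries; the two states, two actions, and all probabilities and rewards are invented purely for illustration and are not taken from the article.

```python
# A toy finite MDP (S, A, P_a, R_a); all numbers are made up for illustration.
S = ["s0", "s1"]                      # state space
A = ["stay", "go"]                    # action space

# P[a][s][s2] = probability of moving to s2 when taking action a in state s
P = {
    "stay": {"s0": {"s0": 0.9, "s1": 0.1}, "s1": {"s0": 0.0, "s1": 1.0}},
    "go":   {"s0": {"s0": 0.2, "s1": 0.8}, "s1": {"s0": 0.7, "s1": 0.3}},
}

# R[a][s][s2] = immediate reward for the transition s -> s2 under action a
R = {
    "stay": {"s0": {"s0": 0.0, "s1": 1.0}, "s1": {"s0": 0.0, "s1": 0.0}},
    "go":   {"s0": {"s0": 0.0, "s1": 2.0}, "s1": {"s0": 1.0, "s1": -1.0}},
}

# Each row of P must be a probability distribution over next states.
for a in A:
    for s in S:
        assert abs(sum(P[a][s].values()) - 1.0) < 1e-9
```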
A policy function \pi is a (potentially probabilistic) mapping from the state space S to the action space A.
The goal in a Markov decision process is to find a good "policy" for the decision maker: a function \pi that specifies the action \pi(s) that the decision maker will choose when in state s. Once a Markov decision process is combined with a policy in this way, the action for each state is fixed, and the resulting combination behaves like a Markov chain, since the action chosen in state s is completely determined by \pi(s).
The objective is to choose a policy \pi that will maximize some cumulative function of the random rewards, typically the expected discounted sum over a potentially infinite horizon:

E\left[\sum_{t=0}^{\infty} \gamma^t R_{a_t}(s_t, s_{t+1})\right]

(where we choose a_t = \pi(s_t), i.e. actions given by the policy, and the expectation is taken over s_{t+1} \sim P_{a_t}(s_t, s_{t+1})),

where \gamma is the discount factor, satisfying 0 \le \gamma \le 1. It is usually close to 1 (for example, \gamma = 1/(1+r) for some discount rate r). A lower discount factor motivates the decision maker to favor taking actions early rather than postponing them indefinitely.
A policy that maximizes the function above is called an optimal policy and is usually denoted \pi^*.
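A minimal sketch of the objective above: given a sampled trajectory of rewards, the discounted return is the sum \sum_t \gamma^t r_t. The reward values and the discount rate below are invented for illustration.

```python
# Discounted return of a sampled reward trajectory: sum_t gamma^t * r_t.
def discounted_return(rewards, gamma):
    return sum(gamma**t * r for t, r in enumerate(rewards))

r_rate = 0.05                      # an assumed per-step discount rate
gamma = 1.0 / (1.0 + r_rate)       # gamma = 1/(1+r), as in the text
print(discounted_return([1.0, 0.0, 2.0, -1.0], gamma))
```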
In many cases, it is difficult to represent the transition probability distributions P_a(s,s') explicitly. In such cases, a simulator can be used to model the MDP implicitly by providing samples from the transition distributions. One common form of implicit MDP model is an episodic environment simulator that can be started from an initial state and yields a subsequent state and reward every time it receives an action input. In this manner, trajectories of states, actions, and rewards, often called episodes, may be produced.
Another form of simulator is a generative model, a single-step simulator that can generate samples of the next state and reward given any state and action.[3] (Note that this is a different meaning from the term generative model in the context of statistical classification.) In algorithms that are expressed using pseudocode, G is often used to represent a generative model. For example, the expression s', r \gets G(s,a) might denote the action of sampling from the generative model, where s and a are the current state and action, and s' and r are the new state and reward.
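As an illustration, a generative model for the toy MDP sketched earlier can be written as a one-step sampling function; the dictionaries P and R are the hypothetical ones defined in that sketch, not anything prescribed by the article.

```python
import random

# Generative model G: given (s, a), sample a next state s2 ~ P_a(s, .)
# and return it together with the reward R_a(s, s2).
# Assumes the P and R dictionaries from the earlier toy-MDP sketch.
def G(s, a, P=P, R=R):
    next_states = list(P[a][s].keys())
    probs = list(P[a][s].values())
    s_next = random.choices(next_states, weights=probs, k=1)[0]
    return s_next, R[a][s][s_next]

s_next, r = G("s0", "go")   # one sampled transition: s', r <- G(s, a)
```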
These model classes form a hierarchy of information content: an explicit model trivially yields a generative model through sampling from the distributions, and repeated application of a generative model yields an episodic simulator. In the opposite direction, it is only possible to learn approximate models through regression. The type of model available for a particular MDP plays a significant role in determining which solution algorithms are appropriate. For example, the dynamic programming algorithms described in the next section require an explicit model, and Monte Carlo tree search requires a generative model (or an episodic simulator that can be copied at any state), whereas most reinforcement learning algorithms require only an episodic simulator.
Solutions for MDPs with finite state and action spaces may be found through a variety of methods such as dynamic programming. The algorithms in this section apply to MDPs with finite state and action spaces and explicitly given transition probabilities and reward functions, but the basic concepts may be extended to handle other problem classes, for example using function approximation.
The standard family of algorithms to calculate optimal policies for finite state and action MDPs requires storage for two arrays indexed by state: value V, which contains real values, and policy \pi, which contains actions. At the end of the algorithm, \pi will contain the solution and V(s) will contain the discounted sum of the rewards to be earned (on average) by following that solution from state s.
The algorithm has two steps, (1) a value update and (2) a policy update, which are repeated in some order for all the states until no further changes take place. Both recursively update a new estimation of the optimal policy and state value using an older estimation of those values.
V(s) := \sum_{s'} P_{\pi(s)}(s,s') \left( R_{\pi(s)}(s,s') + \gamma V(s') \right)

\pi(s) := \operatorname{argmax}_a \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V(s') \right) \right\}
Their order depends on the variant of the algorithm; one can also do them for all states at once or state by state, and more often to some states than others. As long as no state is permanently excluded from either of the steps, the algorithm will eventually arrive at the correct solution.[4]
In value iteration, which is also called backward induction, the \pi array is not used; instead, the value of \pi(s) is calculated within V(s) whenever it is needed. Substituting the calculation of \pi(s) into the calculation of V(s) gives the combined step:
V_{i+1}(s) := \max_a \left\{ \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V_i(s') \right) \right\},
where i is the iteration number. Value iteration starts at i = 0 with V_0 as a guess of the value function. It then iterates, repeatedly computing V_{i+1} for all states s, until V converges with the left-hand side equal to the right-hand side (which is the Bellman equation for this problem). Lloyd Shapley's 1953 paper on stochastic games included as a special case the value iteration method for MDPs,[5] but this was recognized only later on.[6]
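A minimal sketch of value iteration for a finite MDP, written against the hypothetical S, A, P, R dictionaries introduced in the earlier sketches; the discount factor and tolerance are arbitrary choices.

```python
def value_iteration(S, A, P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a finite MDP given as nested dicts P[a][s][s2], R[a][s][s2]."""
    V = {s: 0.0 for s in S}                                   # V_0: initial guess
    while True:
        V_new = {
            s: max(
                sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in S)
                for a in A
            )
            for s in S
        }
        delta = max(abs(V_new[s] - V[s]) for s in S)
        V = V_new
        if delta < tol:                                       # left side ~ right side: converged
            break
    # Extract a greedy policy from the (near-)optimal value function.
    pi = {
        s: max(A, key=lambda a: sum(P[a][s][s2] * (R[a][s][s2] + gamma * V[s2]) for s2 in S))
        for s in S
    }
    return V, pi
```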
In policy iteration, step one is repeated until it converges for the current policy (policy evaluation), then step two is performed once (policy improvement), and the two phases alternate until the policy no longer changes. (Policy iteration was invented by Howard to optimize Sears catalogue mailing, which he had been optimizing using value iteration.[7])
Instead of repeating step one until convergence, the policy-evaluation phase may be formulated and solved as a set of linear equations: treating the step-one update as an equality gives one linear equation in the unknowns V(s) for each state, and repeating step one to convergence can be interpreted as solving this linear system by relaxation.
This variant has the advantage that there is a definite stopping condition: when the array \pi does not change in the course of applying the policy-update step to all states, the algorithm is completed.
Policy iteration is usually slower than value iteration for a large number of possible states.
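A sketch of policy iteration for the same hypothetical finite MDP, with the policy-evaluation phase solved as a linear system via numpy (the S, A, P, R structures are the ones assumed in the earlier sketches):

```python
import numpy as np

def policy_iteration(S, A, P, R, gamma=0.9):
    idx = {s: k for k, s in enumerate(S)}
    pi = {s: A[0] for s in S}                     # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = r_pi as linear equations.
        P_pi = np.array([[P[pi[s]][s][s2] for s2 in S] for s in S])
        r_pi = np.array([
            sum(P[pi[s]][s][s2] * R[pi[s]][s][s2] for s2 in S) for s in S
        ])
        V = np.linalg.solve(np.eye(len(S)) - gamma * P_pi, r_pi)
        # Policy improvement: greedy policy with respect to V.
        pi_new = {
            s: max(
                A,
                key=lambda a: sum(
                    P[a][s][s2] * (R[a][s][s2] + gamma * V[idx[s2]]) for s2 in S
                ),
            )
            for s in S
        }
        if pi_new == pi:                          # stopping condition: policy unchanged
            return {s: V[idx[s]] for s in S}, pi
        pi = pi_new
```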
In modified policy iteration, step one is repeated only a fixed number of times rather than to convergence, and then step two is performed once; the two phases then alternate as before.[8] [9]
In this variant, the steps are preferentially applied to states which are in some way important, whether based on the algorithm (there were recently large changes in V or \pi around those states) or based on use (those states are near the starting state, or otherwise of interest to the person or program using the algorithm).
Algorithms for finding optimal policies with time complexity polynomial in the size of the problem representation exist for finite MDPs. Thus, decision problems based on MDPs are in computational complexity class P.[10] However, due to the curse of dimensionality, the size of the problem representation is often exponential in the number of state and action variables, limiting exact solution techniques to problems that have a compact representation. In practice, online planning techniques such as Monte Carlo tree search can find useful solutions in larger problems, and, in theory, it is possible to construct online planning algorithms that can find an arbitrarily near-optimal policy with no computational complexity dependence on the size of the state space.[11]
A Markov decision process is a stochastic game with only one player.
See main article: Partially observable Markov decision process. The solution above assumes that the state s is known when an action is to be taken; otherwise \pi(s) cannot be calculated. When this assumption does not hold, the problem is called a partially observable Markov decision process (POMDP).
Constrained Markov decision processes (CMDPs) are extensions of Markov decision processes (MDPs). There are three fundamental differences between MDPs and CMDPs.[12]
The method of Lagrange multipliers applies to CMDPs, and many Lagrangian-based algorithms have been developed.
There are a number of applications for CMDPs; they have recently been used in motion planning scenarios in robotics.[14]
In discrete-time Markov decision processes, decisions are made at discrete time intervals. In continuous-time Markov decision processes, however, decisions can be made at any time the decision maker chooses. Compared with discrete-time Markov decision processes, continuous-time Markov decision processes can better model the decision-making process for a system that has continuous dynamics, i.e., system dynamics defined by ordinary differential equations (ODEs). These kinds of applications arise in queueing systems, epidemic processes, and population processes.
Like discrete-time Markov decision processes, continuous-time Markov decision processes ask the agent to find the optimal policy that maximizes the expected cumulative reward. The only difference from the standard case is that, due to the continuous nature of the time variable, the sum is replaced by an integral:
\max \operatorname{E}_\pi \left[ \left. \int_0^\infty \gamma^t \, r(s(t), \pi(s(t)))\, dt \;\right| s_0 \right], \qquad 0 \le \gamma < 1.
If the state space and action space are finite, we could use linear programming to find the optimal policy, which was one of the earliest approaches applied. Here we only consider the ergodic model, which means our continuous-time MDP becomes an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, although the decision maker can make a decision at any time in the current state, there is no benefit in taking multiple actions: it is better to take an action only at the moment when the system transitions from the current state to another state. Under some conditions (for details, see Corollary 3.14 of Continuous-Time Markov Decision Processes), if our optimal value function V^* is independent of the state i, we will have the following inequality:

g \ge R(i,a) + \sum_{j \in S} q(j \mid i, a)\, h(j) \qquad \forall i \in S \text{ and } a \in A(i)
If there exists a function h satisfying this, then \bar{V}^* will be the smallest g satisfying the above inequality. In order to find \bar{V}^*, we could use the following linear programming model:
Primal linear program (P-LP):

\begin{align} \text{Minimize} \quad & g \\ \text{s.t.} \quad & g - \sum_{j \in S} q(j \mid i, a)\, h(j) \ge R(i,a) \qquad \forall i \in S,\ a \in A(i) \end{align}
Dual linear program (D-LP):

\begin{align} \text{Maximize} \quad & \sum_{i \in S} \sum_{a \in A(i)} R(i,a)\, y(i,a) \\ \text{s.t.} \quad & \sum_{i \in S} \sum_{a \in A(i)} q(j \mid i, a)\, y(i,a) = 0 \qquad \forall j \in S, \\ & \sum_{i \in S} \sum_{a \in A(i)} y(i,a) = 1, \\ & y(i,a) \ge 0 \qquad \forall a \in A(i) \text{ and } \forall i \in S \end{align}
y(i,a) is a feasible solution to the D-LP if y(i,a) is nonnegative and satisfies the constraints in the D-LP problem. A feasible solution y^*(i,a) to the D-LP is said to be optimal if

\sum_{i \in S} \sum_{a \in A(i)} R(i,a)\, y^*(i,a) \ge \sum_{i \in S} \sum_{a \in A(i)} R(i,a)\, y(i,a)

for every feasible solution y(i,a) to the D-LP. Once we have found the optimal solution y^*(i,a), we can use it to establish the optimal policies.
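As an illustration of the D-LP, the sketch below sets it up with scipy's linear-programming routine for a hypothetical finite model; the rate matrix q and reward table R used here are invented placeholders, and since linprog minimizes, the objective is negated.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 2-state, 2-action continuous-time model (all numbers are placeholders).
states, actions = [0, 1], [0, 1]
pairs = [(i, a) for i in states for a in actions]          # decision variables y(i, a)

# q[i][a][j]: transition rate from state i to j under action a (each row sums to 0).
q = np.array([[[-1.0, 1.0], [-2.0, 2.0]],
              [[3.0, -3.0], [0.5, -0.5]]])
# R[i][a]: reward rate in state i under action a.
R = np.array([[5.0, 1.0],
              [0.0, 2.0]])

c = np.array([-R[i, a] for (i, a) in pairs])               # maximize => minimize the negation
A_eq = np.vstack([
    [[q[i, a, j] for (i, a) in pairs] for j in states],    # sum_{i,a} q(j|i,a) y(i,a) = 0
    [np.ones(len(pairs))],                                  # sum_{i,a} y(i,a) = 1
])
b_eq = np.append(np.zeros(len(states)), 1.0)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(pairs))
y_star = dict(zip(pairs, res.x))                            # occupation measure y*(i, a)
```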
In continuous-time MDPs, if the state space and action space are continuous, the optimal criterion can be found by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation. In order to discuss the HJB equation, we need to reformulate our problem:
\begin{align} V(s(0), 0) = {} & \max_{a(t) = \pi(s(t))} \int_0^T r(s(t), a(t))\, dt + D[s(T)] \\ \text{s.t.} \quad & \frac{ds(t)}{dt} = f[t, s(t), a(t)] \end{align}
where D(\cdot) is the terminal reward function, s(t) is the system state vector, a(t) is the system control vector we try to find, and f(\cdot) describes how the state vector changes over time. The Hamilton–Jacobi–Bellman equation is as follows:
0 = \max_a \left( \frac{\partial V(t,s)}{\partial t} + r(t,s,a) + \frac{\partial V(t,s)}{\partial s} f(t,s,a) \right)
We could solve the equation to find the optimal control a(t), which gives us the optimal value function V^*.
See main article: Reinforcement learning.
Reinforcement learning uses MDPs where the probabilities and rewards are unknown.[15]
For the purpose of this section, it is useful to define a further function, which corresponds to taking the action a and then continuing optimally (or according to whatever policy one currently has):

Q(s,a) = \sum_{s'} P_a(s,s') \left( R_a(s,s') + \gamma V(s') \right).
While this function is also unknown, experience during learning is based on (s, a) pairs together with the outcome s'; that is, "I was in state s, I tried doing a, and s' happened". Thus, one has an array Q and uses experience to update it directly. This is known as Q-learning.
Reinforcement learning can solve Markov decision processes without explicit specification of the transition probabilities, which are needed in value and policy iteration. In reinforcement learning, instead of explicit specification of the transition probabilities, the transition probabilities are accessed through a simulator that is typically restarted many times from a uniformly random initial state. Reinforcement learning can also be combined with function approximation to address problems with a very large number of states.
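A minimal tabular Q-learning sketch, using the hypothetical generative simulator G, state list S, and action list A from the earlier sketches; the learning rate, discount factor, exploration rate, and episode lengths are arbitrary choices.

```python
import random

def q_learning(S, A, G, episodes=1000, steps=50, alpha=0.1, gamma=0.9, eps=0.1):
    Q = {(s, a): 0.0 for s in S for a in A}             # tabular Q-values
    for _ in range(episodes):
        s = random.choice(S)                            # restart from a random state
        for _ in range(steps):
            # epsilon-greedy action selection
            a = random.choice(A) if random.random() < eps else max(A, key=lambda b: Q[(s, b)])
            s_next, r = G(s, a)                         # sample one transition from the simulator
            # Q-learning update toward r + gamma * max_b Q(s', b)
            target = r + gamma * max(Q[(s_next, b)] for b in A)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q
```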
See main article: Learning automata. Another application of MDPs in machine learning theory is called learning automata. This is also a type of reinforcement learning if the environment is stochastic. Learning automata were surveyed in detail by Narendra and Thathachar (1974), who described them explicitly as finite-state automata.[16] Similar to reinforcement learning, a learning automata algorithm also has the advantage of solving the problem when the probabilities or rewards are unknown. The difference between learning automata and Q-learning is that the former technique omits the memory of Q-values and updates the action probabilities directly to find the learning result. Learning automata is a learning scheme with a rigorous proof of convergence.[17]
In learning automata theory, a stochastic automaton consists of a set of possible inputs, a set of internal states together with a state-probability vector P(t), an updating scheme A for that vector, and a set of outputs (actions) determined by the current state.
The states of such an automaton correspond to the states of a "discrete-state discrete-parameter Markov process". At each time step t = 0,1,2,3,..., the automaton reads an input from its environment, updates P(t) to P(t + 1) by A, randomly chooses a successor state according to the probabilities P(t + 1) and outputs the corresponding action. The automaton's environment, in turn, reads the action and sends the next input to the automaton.
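A sketch of this interaction loop follows, with a linear reward-inaction update standing in for the updating scheme A (one common choice, not something prescribed by the article); for simplicity each internal state is identified with one action, and the environment is a hypothetical callable returning favorable/unfavorable feedback.

```python
import random

# One learning-automaton interaction loop with a linear reward-inaction update.
def run_automaton(actions, environment, steps=1000, lam=0.05):
    p = [1.0 / len(actions)] * len(actions)            # P(0): uniform action probabilities
    for _ in range(steps):
        k = random.choices(range(len(actions)), weights=p, k=1)[0]
        reward = environment(actions[k])               # environment reads the action, returns feedback
        if reward:                                     # favorable response: reinforce action k
            p = [(1 - lam) * pj for pj in p]
            p[k] += lam
        # unfavorable response: leave the probabilities unchanged (reward-inaction)
    return p
```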
Other than the rewards, a Markov decision process (S, A, P) can be understood in terms of category theory. Namely, let \mathcal{A} denote the free monoid with generating set A, and let \mathbf{Dist} denote the Kleisli category of the Giry monad. Then a functor \mathcal{A} \to \mathbf{Dist} encodes both the set S of states and the probability function P.

In this way, Markov decision processes could be generalized from monoids (categories with one object) to arbitrary categories. One can call the result (\mathcal{C}, F: \mathcal{C} \to \mathbf{Dist}) a context-dependent Markov decision process, because moving from one object to another in \mathcal{C} changes the set of available actions and the set of possible states.
The terminology and notation for MDPs are not entirely settled. There are two main streams: one focuses on maximization problems from contexts like economics, using the terms action, reward, and value, and calling the discount factor \beta or \gamma, while the other focuses on minimization problems from engineering and navigation, using the terms control, cost, and cost-to-go, and calling the discount factor \alpha. In addition, the notation for the transition probability varies.
in this article | alternative | comment
---|---|---
action a | control u |
reward R | cost g | g is the negative of R
value V | cost-to-go J | J is the negative of V
policy \pi | policy \mu |
discounting factor \gamma | discounting factor \alpha |
transition probability P_a(s,s') | transition probability p_{ss'}(a) |
In addition, the transition probability is sometimes written \Pr(s, a, s'), \Pr(s' \mid s, a) or, rarely, p_{s's}(a).