Controllability is an important property of a control system and plays a crucial role in many control problems, such as stabilization of unstable systems by feedback, or optimal control.
Controllability and observability are dual aspects of the same problem.
Roughly, the concept of controllability denotes the ability to move a system around in its entire configuration space using only certain admissible manipulations. The exact definition varies slightly within the framework or the type of models applied.
The following are examples of variations of controllability notions which have been introduced in the systems and control literature:
The state of a deterministic system, which is the set of values of all the system's state variables (those variables characterized by dynamic equations), completely describes the system at any given time. In particular, no information on the past of a system is needed to help in predicting the future, if the states at the present time are known and all current and future values of the control variables (those whose values can be chosen) are known.
Complete state controllability (or simply controllability if no other context is given) describes the ability of an external input (the vector of control variables) to move the internal state of a system from any initial state to any final state in a finite time interval.[1]
That is, we can informally define controllability as follows: if for any initial state x_0 and any final state x_f there exists an input sequence that transfers the state from x_0 to x_f in a finite time interval, then the system modeled by the state-space representation

\dot{x}(t) = Ax(t) + Bu(t)

is controllable, where x is the state vector, A is the state matrix and B is the input matrix.
Controllability does not mean that a reached state can be maintained, merely that any state can be reached.
Controllability does not mean that arbitrary paths can be made through state space, only that there exists a path within the prescribed finite time interval.
Consider the continuous linear system [2]

\dot{x}(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t).

There exists a control u taking the system from state x_0 at time t_0 to state x_1 at time t_1 > t_0 if and only if x_1 - \phi(t_0, t_1)x_0 lies in the column space of

W(t_0, t_1) = \int_{t_0}^{t_1} \phi(t_0, t)B(t)B(t)^T \phi(t_0, t)^T \, dt,

where \phi is the state-transition matrix and W(t_0, t_1) is the Controllability Gramian.

In fact, if \eta_0 is a solution to W(t_0, t_1)\eta = x_1 - \phi(t_0, t_1)x_0, then the control u(t) = -B(t)^T \phi(t_0, t)^T \eta_0 achieves the desired transfer.

Note that the matrix W defined as above has the following properties:
- W(t_0, t_1) is symmetric;
- W(t_0, t_1) is positive semidefinite for t_1 \geq t_0;
- W(t_0, t_1) satisfies the linear matrix differential equation

\frac{d}{dt} W(t, t_1) = A(t)W(t, t_1) + W(t, t_1)A(t)^T - B(t)B(t)^T, \qquad W(t_1, t_1) = 0;

- W(t_0, t_1) satisfies the equation

W(t_0, t_1) = W(t_0, t) + \phi(t_0, t)W(t, t_1)\phi(t_0, t)^T.
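As a sketch of how the Gramian test can be applied numerically, the snippet below approximates W(t_0, t_1) for a double-integrator system (a hypothetical example chosen for illustration; for this nilpotent A the state-transition matrix is exactly I + A(t_0 - t), so no matrix-exponential routine is needed) and checks that it is positive definite:

```python
import numpy as np

# Hypothetical example: double integrator. A @ A = 0, so
# phi(t0, t) = I + A*(t0 - t) is the exact state-transition matrix.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

def phi(t0, t):
    return np.eye(2) + A * (t0 - t)  # exact for this nilpotent A

# W(t0, t1) = integral over [t0, t1] of phi(t0,t) B B^T phi(t0,t)^T dt,
# approximated here with the composite trapezoid rule
t0, t1 = 0.0, 1.0
ts = np.linspace(t0, t1, 2001)
vals = np.array([phi(t0, t) @ B @ B.T @ phi(t0, t).T for t in ts])
dt = ts[1] - ts[0]
W = (vals[:-1] + vals[1:]).sum(axis=0) * dt / 2

# Positive definiteness of W on [t0, t1] implies controllability there
print(np.min(np.linalg.eigvalsh(W)) > 0)  # True
```

For this example the Gramian works out to approximately [[1/3, -1/2], [-1/2, 1]], whose eigenvalues are both positive.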
The Controllability Gramian involves integration of the state-transition matrix of a system. A simpler condition for controllability is a rank condition analogous to the Kalman rank condition for time-invariant systems.
Consider a continuous-time linear system \Sigma varying smoothly in an interval [t_0, t] of \mathbb{R}:

\dot{x}(t) = A(t)x(t) + B(t)u(t)
y(t) = C(t)x(t) + D(t)u(t).

The state-transition matrix \phi is also smooth. Introduce the matrix-valued function M_0(t) = \phi(t_0, t)B(t) and define its derivatives

M_k(t) = \frac{d^k M_0}{dt^k}(t), \qquad k \geqslant 1.

Consider the matrix obtained by listing the columns of all the M_i, i = 0, 1, \ldots, k:

M^{(k)}(t) := \left[M_0(t), \ldots, M_k(t)\right].

If there exist a \bar{t} \in [t_0, t] and a nonnegative integer k such that \operatorname{rank} M^{(k)}(\bar{t}) = n, then \Sigma is controllable.

If \Sigma is also analytically varying in the interval [t_0, t], then \Sigma is controllable on every nontrivial subinterval of [t_0, t] if and only if there exist a \bar{t} \in [t_0, t] and a nonnegative integer k such that \operatorname{rank} M^{(k)}(\bar{t}) = n.
The above methods can still be complex to check, since they involve the computation of the state-transition matrix \phi. Another equivalent condition is defined as follows. Let B_0(t) = B(t), and for each i \geq 0 define

B_{i+1}(t) = A(t)B_i(t) - \frac{d}{dt} B_i(t).

In this case, each B_i is obtained directly from the data (A(t), B(t)). The system is controllable if there exist a \bar{t} \in [t_0, t] and a nonnegative integer k such that

\operatorname{rank}\left(\left[B_0(\bar{t}), B_1(\bar{t}), \ldots, B_k(\bar{t})\right]\right) = n.

Example: Consider a system varying analytically in (-\infty, \infty) with matrices

A(t) = \begin{bmatrix} t & 1 & 0 \\ 0 & t^3 & 0 \\ 0 & 0 & t^2 \end{bmatrix}, \qquad B(t) = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.

Then

[B_0(0), B_1(0), B_2(0), B_3(0)] = \begin{bmatrix} 0 & 1 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 2 \end{bmatrix}

has rank 3, so the system is controllable on every nontrivial interval of \mathbb{R}.
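The B_i recursion in the example above can be reproduced exactly with polynomial arithmetic; the sketch below, using NumPy's Polynomial class, rebuilds [B_0(0), \ldots, B_3(0)] for the example matrices and confirms the rank:

```python
import numpy as np
from numpy.polynomial import Polynomial as P

t = P([0, 1])  # the indeterminate t

# A(t) and B(t) from the example; entries are exact polynomials in t
A = [[t,      P([1]), P([0])],
     [P([0]), t**3,   P([0])],
     [P([0]), P([0]), t**2]]
B0 = [P([0]), P([1]), P([1])]

def next_B(Bi):
    """B_{i+1}(t) = A(t) B_i(t) - d/dt B_i(t), computed exactly."""
    return [sum((A[r][c] * Bi[c] for c in range(3)), P([0])) - Bi[r].deriv()
            for r in range(3)]

cols = [B0]
for _ in range(3):
    cols.append(next_B(cols[-1]))

# Stack B_0(0), ..., B_3(0) as the columns of a 3x4 matrix and check its rank
M = np.array([[float(col[r](0.0)) for col in cols] for r in range(3)])
print(np.linalg.matrix_rank(M))  # 3, so the system is controllable
```

Because the entries of A(t) are polynomials, the derivatives in the recursion are computed symbolically rather than numerically, so the resulting matrix matches the hand computation exactly.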
Consider the continuous linear time-invariant system

\dot{x}(t) = Ax(t) + Bu(t)
y(t) = Cx(t) + Du(t)

where
x is the n \times 1 state vector,
y is the m \times 1 output vector,
u is the r \times 1 input vector,
A is the n \times n state matrix,
B is the n \times r input matrix,
C is the m \times n output matrix,
D is the m \times r feedthrough matrix.

The n \times nr controllability matrix is given by

R = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}.

The system is controllable if the controllability matrix has full row rank (i.e. \operatorname{rank}(R) = n).
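The Kalman rank test is straightforward to implement. The sketch below builds the controllability matrix and checks its rank; the double-integrator A and B are hypothetical example values chosen for illustration:

```python
import numpy as np

def controllability_matrix(A, B):
    """Build R = [B, AB, A^2 B, ..., A^(n-1) B]."""
    blocks = [B]
    for _ in range(A.shape[0] - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Hypothetical example: a double integrator, xdot = [[0,1],[0,0]] x + [[0],[1]] u
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

R = controllability_matrix(A, B)
print(np.linalg.matrix_rank(R) == A.shape[0])  # True => controllable
```

In floating point, rank should be judged with a tolerance; `np.linalg.matrix_rank` applies a sensible default based on the matrix norm and machine precision.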
For a discrete-time linear state-space system (i.e. time variable k \in \mathbb{Z}) the state equation is

x(k+1) = Ax(k) + Bu(k),

where A is an n \times n matrix and B is an n \times r matrix (i.e. u is a vector of r inputs collected in an r \times 1 vector). The test for controllability is that the n \times nr matrix

\mathcal{C} = \begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix}

has full row rank (i.e., \operatorname{rank}(\mathcal{C}) = n). That is, if the system is controllable, \mathcal{C} will have n columns that are linearly independent; if n columns of \mathcal{C} are linearly independent, each of the n states is reachable by giving the system proper inputs through the variable u(k).

Given the state x(0) at an initial time, arbitrarily denoted as k = 0, the state equation gives x(1) = Ax(0) + Bu(0), then x(2) = Ax(1) + Bu(1) = A^2x(0) + ABu(0) + Bu(1), and so on with repeated back-substitution of the state variable, eventually yielding

x(n) = Bu(n-1) + ABu(n-2) + \cdots + A^{n-1}Bu(0) + A^n x(0)

or equivalently

x(n) - A^n x(0) = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \begin{bmatrix} u^T(n-1) & u^T(n-2) & \cdots & u^T(0) \end{bmatrix}^T.

Imposing any desired value of the state vector x(n) on the left side, this can always be solved for the stacked vector of control vectors if and only if the matrix of matrices on the right side has full row rank.
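To make the derivation concrete, this sketch solves the stacked-input equation for n = 2 and verifies the result by simulating forward; the matrices A and B and the target state are hypothetical example values:

```python
import numpy as np

# Hypothetical 2-state, single-input discrete system
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
n = 2

C = np.hstack([B, A @ B])          # controllability matrix [B, AB]
x0 = np.array([0.0, 0.0])
x_target = np.array([3.0, -1.0])   # arbitrary desired x(n)

# Solve [B, AB] [u(n-1); ...; u(0)] = x(n) - A^n x(0) for the stacked inputs
stacked = np.linalg.solve(C, x_target - np.linalg.matrix_power(A, n) @ x0)
u1, u0 = stacked                   # note the ordering: u(n-1) comes first

# Simulate forward to confirm the target is reached at step k = n
x = x0
for u in (u0, u1):
    x = A @ x + B @ np.array([u])
print(np.allclose(x, x_target))    # True
```

Since [B, AB] here is square and invertible, the input sequence is unique; with more inputs than states (r > 1) the same equation would be solved in a least-norm sense.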
For example, consider the case when n = 2 and r = 1 (i.e. only one control input). Thus, B and AB are 2 \times 1 vectors. If \begin{bmatrix} B & AB \end{bmatrix} has rank 2, then B and AB are linearly independent and span the entire plane; if the rank is 1, then B and AB are collinear and do not span the plane.

Assume that the initial state is zero.

At time k = 0: x(1) = Ax(0) + Bu(0) = Bu(0)
At time k = 1: x(2) = Ax(1) + Bu(1) = ABu(0) + Bu(1)

At time k = 0 all of the reachable states lie on the line formed by the vector B. At time k = 1 the reachable states are the linear combinations of AB and B. If the system is controllable, these two vectors span the entire plane, so every state can be reached by time k = 2. The assumption that the initial state is zero is merely for convenience: if all states can be reached from the origin, then any state can be reached from any other state by a shift in coordinates.

This example holds for all positive n, but the case of n = 2 is easier to visualize.
Consider an analogy to the previous example system. You are sitting in your car on an infinite, flat plane, facing north. The goal is to reach any point in the plane by driving a distance in a straight line, coming to a full stop, turning, and driving another distance, again in a straight line. If your car has no steering then you can only drive straight, which means you can only drive on a line (in this case the north-south line, since you started facing north). The lack of steering is analogous to the case when the rank of \mathcal{C} is 1: the reachable states all lie on a single line. Now, if your car did have steering then you could easily drive to any point in the plane; this is the analogous case to when the rank of \mathcal{C} is 2. If you change this example to n = 3, the analogous question is whether you can reach any point in three-dimensional space. Although the 3-dimensional case is harder to visualize, the concept of controllability is still analogous.
Nonlinear systems in the control-affine form

\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x) u_i

are locally accessible about x_0 if the accessibility distribution R spans the n-dimensional space, where n is the dimension of x and R is given by

R = \begin{bmatrix} g_1 & \cdots & g_m & [\mathrm{ad}^k_{g_i} g_j] & \cdots & [\mathrm{ad}^k_f g_i] \end{bmatrix}.

Here, [\mathrm{ad}^k_f g] is the iterated Lie bracket operation defined by

[\mathrm{ad}^k_f g] = [f, [f, [\cdots, [f, g]]]] \quad (k \text{ brackets}).

The controllability matrix for linear systems in the previous section can in fact be derived from this equation.
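As an illustration of the accessibility test, the sketch below checks a unicycle-style kinematic model (a hypothetical example with no drift, f = 0, and two input vector fields), computing the Lie bracket numerically with finite-difference Jacobians:

```python
import numpy as np

def jac(f, x, h=1e-6):
    """Numerical Jacobian of f at x via central differences."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

def lie_bracket(f, g, x):
    """[f, g](x) = Dg(x) f(x) - Df(x) g(x)."""
    return jac(g, x) @ f(x) - jac(f, x) @ g(x)

# Hypothetical example: unicycle kinematics, state (px, py, theta), no drift
g1 = lambda x: np.array([np.cos(x[2]), np.sin(x[2]), 0.0])  # drive forward
g2 = lambda x: np.array([0.0, 0.0, 1.0])                    # turn in place

x0 = np.zeros(3)

# Accessibility check: g1, g2 and the bracket [g1, g2] should span R^3 at x0
R = np.column_stack([g1(x0), g2(x0), lie_bracket(g1, g2, x0)])
print(np.linalg.matrix_rank(R))  # 3 => locally accessible at x0
```

The bracket [g_1, g_2] evaluates to approximately (0, -1, 0) at the origin: a sideways direction that neither input produces directly, which is why the two-input car can still reach any planar pose.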
If a discrete control system is null-controllable, it means that there exists a controllable u(k) such that x(k_0) = 0 for some initial state x(0) = x_0. In other words, this is equivalent to the condition that there exists a matrix F such that A + BF is nilpotent. This can be easily shown by controllable-uncontrollable decomposition.
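A minimal sketch of the nilpotency condition, using a hypothetical 2-state example: with a feedback gain F that places both closed-loop eigenvalues at zero (a deadbeat gain), A + BF is nilpotent and every initial state reaches the origin in at most n steps.

```python
import numpy as np

# Hypothetical example system and a deadbeat gain F (both eigenvalues of
# A + B F placed at 0, so the closed-loop matrix is nilpotent)
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
F = np.array([[-1.0, -2.0]])

M = A + B @ F
print(np.allclose(np.linalg.matrix_power(M, 2), 0))  # True => nilpotent

x = np.array([5.0, -3.0])  # arbitrary initial state
for _ in range(2):
    x = M @ x              # closed loop: x(k+1) = (A + B F) x(k)
print(np.allclose(x, 0))   # True: the state is null in n = 2 steps
```

Here F was found by hand from the characteristic polynomial of A + BF; in practice a pole-placement routine would compute it.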
Output controllability is the related notion for the output of the system (denoted y in the previous equations); the output controllability describes the ability of an external input to move the output from any initial condition to any final condition in a finite time interval. It is not necessary that there is any relationship between state controllability and output controllability. In particular:
For a linear continuous-time system, like the example above, described by matrices A, B, C, and D, the system is output controllable if and only if the m \times (n+1)r output controllability matrix

\begin{bmatrix} CB & CAB & CA^2B & \cdots & CA^{n-1}B & D \end{bmatrix}

has full row rank (i.e. rank m).
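The output controllability test mirrors the state test. The sketch below builds the m \times (n+1)r matrix for a hypothetical single-output example:

```python
import numpy as np

def output_controllability_matrix(A, B, C, D):
    """[CB, CAB, ..., C A^(n-1) B, D]; full row rank m <=> output controllable."""
    n = A.shape[0]
    blocks = [C @ np.linalg.matrix_power(A, i) @ B for i in range(n)]
    blocks.append(D)
    return np.hstack(blocks)

# Hypothetical example: 2 states, 1 input, 1 output
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

Mo = output_controllability_matrix(A, B, C, D)
print(np.linalg.matrix_rank(Mo) == C.shape[0])  # True => output controllable
```

Note that the required rank is m (the number of outputs), not n: with fewer outputs than states, output controllability is a weaker requirement than state controllability.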
In systems with limited control authority, it is often no longer possible to move any initial state to any final state inside the controllable subspace. This phenomenon is caused by constraints on the input that could be inherent to the system (e.g. due to saturating actuator) or imposed on the system for other reasons (e.g. due to safety-related concerns). The controllability of systems with input and state constraints is studied in the context of reachability[6] and viability theory.[7]
In the so-called behavioral system theoretic approach due to Willems (see people in systems and control), the models considered do not directly define an input/output structure. In this framework systems are described by admissible trajectories of a collection of variables, some of which might be interpreted as inputs or outputs.
A system is then defined to be controllable in this setting, if any past part of a behavior (trajectory of the external variables) can be concatenated with any future trajectory of the behavior in such a way that the concatenation is contained in the behavior, i.e. is part of the admissible system behavior.[8]
A slightly weaker notion than controllability is that of stabilizability. A system is said to be stabilizable when all uncontrollable state variables can be made to have stable dynamics. Thus, even though some of the state variables cannot be controlled (as determined by the controllability test above) all the state variables will still remain bounded during the system's behavior.[9]
Let T \in \mathcal{T} and x \in X (where X is the set of all possible states and \mathcal{T} is an interval of time). The reachable set from x in time T is defined as

R^T(x) = \left\{ z \in X : x \overset{T}{\rightarrow} z \right\},

where x \overset{T}{\rightarrow} z denotes that there exists a state transition from x to z in time T.

For autonomous systems the reachable set is given by

\operatorname{Im}(R) = \operatorname{Im}(B) + \operatorname{Im}(AB) + \cdots + \operatorname{Im}(A^{n-1}B),

where R is the controllability matrix.

In terms of the reachable set, the system is controllable if and only if \operatorname{Im}(R) = \mathbb{R}^n.
Proof. We have the following equalities:

R = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}
\operatorname{Im}(R) = \operatorname{Im}\left(\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}\right)
\dim(\operatorname{Im}(R)) = \operatorname{rank}(R).

The system is controllable if and only if \dim(\operatorname{Im}(R)) = n, i.e. \operatorname{rank}(R) = n, i.e. \operatorname{Im}(R) = \mathbb{R}^n. \blacksquare
A related set to the reachable set is the controllable set, defined by

C^T(x) = \left\{ z \in X : z \overset{T}{\rightarrow} x \right\}.

The relation between reachability and controllability can be stated as follows:

(a) An n-dimensional discrete linear system is controllable if and only if R(0) = R^k(0) = X.

(b) An n-dimensional continuous linear system is controllable if and only if R(0) = R^e(0) = X for all e > 0, if and only if C(0) = C^e(0) = X for all e > 0.
Example. Let the system be an n-dimensional discrete-time-invariant system, whose state at time n starting from the zero state under the input sequence w is given by

\Phi(n, 0, 0, w) = \sum_{i=1}^{n} A^{i-1}Bw(n-i).

Then the reachable set R^k(0) coincides with the column space of the controllability matrix:

\operatorname{Im}(R) = \mathcal{R}(A, B) \triangleq \operatorname{Im}\left(\begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}\right),

where u \in K^m and X = K^n. The system is therefore controllable if and only if the columns of B, AB, \ldots, A^{n-1}B span K^n, i.e. \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} has rank n, in which case R(0) = R^k(0) = X = \mathbb{R}^n.