In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.[1][2]
Radial basis function (RBF) networks typically have three layers: an input layer, a hidden layer with a non-linear RBF activation function and a linear output layer. The input can be modeled as a vector of real numbers
\mathbf{x} \in \mathbb{R}^n. The output of the network is then a scalar function of the input vector, \varphi : \mathbb{R}^n \to \mathbb{R}, given by

\varphi(\mathbf{x}) = \sum_{i=1}^{N} a_i \, \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)
where N is the number of neurons in the hidden layer, \mathbf{c}_i is the center vector for neuron i, and a_i is the weight of neuron i in the linear output neuron. The norm is typically taken to be the Euclidean distance, and the radial basis function \rho is commonly taken to be Gaussian:

\rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) = \exp\left[ -\beta_i \left\| \mathbf{x} - \mathbf{c}_i \right\|^2 \right]
The Gaussian basis functions are local to the center vector in the sense that
\lim_{\left\| \mathbf{x} \right\| \to \infty} \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) = 0,
i.e. changing parameters of one neuron has only a small effect for input values that are far away from the center of that neuron.
Given certain mild conditions on the shape of the activation function, RBF networks are universal approximators on a compact subset of
\mathbb{R}^n.
The parameters a_i, \mathbf{c}_i, and \beta_i are determined in a manner that optimizes the fit between \varphi and the data.
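To make the unnormalized architecture concrete, the following NumPy sketch evaluates \varphi(\mathbf{x}) for Gaussian basis functions; the centers, widths, and weights are arbitrary placeholder values rather than fitted parameters.

```python
import numpy as np

def rbf_forward(x, centers, betas, weights):
    """Unnormalized Gaussian RBF network:
    phi(x) = sum_i a_i * exp(-beta_i * ||x - c_i||^2)."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distances ||x - c_i||^2
    rho = np.exp(-betas * d2)                 # Gaussian basis activations
    return weights @ rho                      # linear output layer

# placeholder parameters (not fitted): N = 3 hidden neurons, inputs in R^2
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
betas = np.array([1.0, 2.0, 0.5])
weights = np.array([0.7, -0.2, 1.1])
print(rbf_forward(np.array([0.5, 0.5]), centers, betas, weights))
```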
In addition to the above unnormalized architecture, RBF networks can be normalized. In this case the mapping is
\varphi(\mathbf{x}) \stackrel{\mathrm{def}}{=} \frac{\sum_{i=1}^{N} a_i \, \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)}{\sum_{i=1}^{N} \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)} = \sum_{i=1}^{N} a_i \, u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)

where

u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) \stackrel{\mathrm{def}}{=} \frac{\rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)}{\sum_{j=1}^{N} \rho\left( \left\| \mathbf{x} - \mathbf{c}_j \right\| \right)}
is known as a normalized radial basis function.
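In code, normalization only adds a division by the sum of the activations; a minimal sketch (again with placeholder parameters) is:

```python
import numpy as np

def normalized_rbf_forward(x, centers, betas, weights):
    """Normalized RBF network: phi(x) = sum_i a_i * u_i(x),
    where u_i(x) = rho_i(x) / sum_j rho_j(x)."""
    rho = np.exp(-betas * np.sum((centers - x) ** 2, axis=1))
    u = rho / rho.sum()        # normalized radial basis functions, sum to 1
    return weights @ u
```

Because the u_i sum to one, the output is a weighted average of the a_i rather than decaying to zero far from all centers.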
There is theoretical justification for this architecture in the case of stochastic data flow. Assume a stochastic kernel approximation for the joint probability density
P\left( \mathbf{x} \land y \right) = \frac{1}{N} \sum_{i=1}^{N} \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) \sigma\left( \left| y - e_i \right| \right)
where the weights \mathbf{c}_i and e_i are exemplars from the data, and we require the kernels to be normalized:

\int \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) d^n\mathbf{x} = 1

and

\int \sigma\left( \left| y - e_i \right| \right) dy = 1.
The probability densities in the input and output spaces are
P\left( \mathbf{x} \right) = \int P\left( \mathbf{x} \land y \right) dy = \frac{1}{N} \sum_{i=1}^{N} \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)

and

P\left( y \right) = \int P\left( \mathbf{x} \land y \right) d^n\mathbf{x} = \frac{1}{N} \sum_{i=1}^{N} \sigma\left( \left| y - e_i \right| \right).
The expectation of y given an input \mathbf{x} is

\varphi\left( \mathbf{x} \right) \stackrel{\mathrm{def}}{=} E\left( y \mid \mathbf{x} \right) = \int y \, P\left( y \mid \mathbf{x} \right) dy

where P\left( y \mid \mathbf{x} \right) is the conditional probability of y given \mathbf{x}. The conditional probability is related to the joint probability through Bayes' theorem:

P\left( y \mid \mathbf{x} \right) = \frac{P\left( \mathbf{x} \land y \right)}{P\left( \mathbf{x} \right)}
which yields
\varphi\left( \mathbf{x} \right) = \int y \, \frac{P\left( \mathbf{x} \land y \right)}{P\left( \mathbf{x} \right)} \, dy
This becomes
\varphi\left( \mathbf{x} \right) = \frac{\sum_{i=1}^{N} e_i \, \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)}{\sum_{i=1}^{N} \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)} = \sum_{i=1}^{N} e_i \, u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)
when the integrations are performed.
It is sometimes convenient to expand the architecture to include local linear models. In that case the architectures become, to first order,
\varphi\left( \mathbf{x} \right) = \sum_{i=1}^{N} \left( a_i + \mathbf{b}_i \cdot \left( \mathbf{x} - \mathbf{c}_i \right) \right) \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)
and
\varphi\left( \mathbf{x} \right) = \sum_{i=1}^{N} \left( a_i + \mathbf{b}_i \cdot \left( \mathbf{x} - \mathbf{c}_i \right) \right) u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)
in the unnormalized and normalized cases, respectively. Here the \mathbf{b}_i are weights to be determined.
This result can be written
\varphi\left( \mathbf{x} \right) = \sum_{i=1}^{2N} \sum_{j=1}^{n} e_{ij} \, v_{ij}\left( \mathbf{x} - \mathbf{c}_i \right)
where
e_{ij} = \begin{cases} a_i, & \text{if } i \in [1, N] \\ b_{ij}, & \text{if } i \in [N+1, 2N] \end{cases}
and
v_{ij}\left( \mathbf{x} - \mathbf{c}_i \right) \stackrel{\mathrm{def}}{=} \begin{cases} \delta_{ij} \, \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right), & \text{if } i \in [1, N] \\ \left( x_j - c_{ij} \right) \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right), & \text{if } i \in [N+1, 2N] \end{cases}
in the unnormalized case and
v_{ij}\left( \mathbf{x} - \mathbf{c}_i \right) \stackrel{\mathrm{def}}{=} \begin{cases} \delta_{ij} \, u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right), & \text{if } i \in [1, N] \\ \left( x_j - c_{ij} \right) u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right), & \text{if } i \in [N+1, 2N] \end{cases}
in the normalized case.
Here \delta_{ij} is the Kronecker delta, defined as

\delta_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \ne j \end{cases}
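As an illustration of the expanded architecture, the sketch below evaluates the unnormalized local linear form given above; the shapes are assumptions for the example (a has length N and B is an N-by-n matrix whose rows are the \mathbf{b}_i).

```python
import numpy as np

def local_linear_rbf(x, centers, betas, a, B):
    """Unnormalized RBF network with first-order local linear models:
    phi(x) = sum_i (a_i + b_i . (x - c_i)) * rho(||x - c_i||)."""
    diff = x - centers                          # (N, n) offsets from each center
    rho = np.exp(-betas * np.sum(diff ** 2, axis=1))
    local = a + np.sum(B * diff, axis=1)        # a_i + b_i . (x - c_i)
    return local @ rho
```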
RBF networks are typically trained from pairs of input and target values \left( \mathbf{x}(t), y(t) \right), t = 1, \ldots, T, by a two-step algorithm.
In the first step, the center vectors \mathbf{c}_i of the RBF functions in the hidden layer are chosen. This step can be performed in several ways: the centers can be randomly sampled from some set of examples, or they can be determined by clustering the data. Note that this step is unsupervised.
The second step simply fits a linear model with coefficients w_i to the hidden layer's outputs with respect to some objective function. A common objective function, at least for regression or function estimation, is the least squares function

K(\mathbf{w}) \stackrel{\mathrm{def}}{=} \sum_{t=1}^{T} K_t(\mathbf{w}),

where

K_t(\mathbf{w}) \stackrel{\mathrm{def}}{=} \left[ y(t) - \varphi\left( \mathbf{x}(t), \mathbf{w} \right) \right]^2,

and we have written the dependence on the weights explicitly.
There are occasions in which multiple objectives, such as smoothness as well as accuracy, must be optimized. In that case it is useful to optimize a regularized objective function such as
H(\mathbf{w}) \stackrel{\mathrm{def}}{=} K(\mathbf{w}) + \lambda S(\mathbf{w}) \stackrel{\mathrm{def}}{=} \sum_{t=1}^{T} H_t(\mathbf{w})

where

S(\mathbf{w}) \stackrel{\mathrm{def}}{=} \sum_{t=1}^{T} S_t(\mathbf{w})

and

H_t(\mathbf{w}) \stackrel{\mathrm{def}}{=} K_t(\mathbf{w}) + \lambda S_t(\mathbf{w}),

where optimization of S maximizes smoothness and \lambda is known as a regularization parameter.
A third optional backpropagation step can be performed to fine-tune all of the RBF net's parameters.[6]
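A minimal sketch of the two-step procedure is given below, assuming Gaussian basis functions with a shared width: step one picks the centers by random sampling of the inputs, and step two fits the linear weights by regularized least squares, with a simple ridge penalty standing in for the smoothness term \lambda S(\mathbf{w}).

```python
import numpy as np

def design_matrix(X, centers, beta):
    """G[t, i] = rho(||x(t) - c_i||) for Gaussian basis functions."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-beta * d2)

def fit_rbf_two_step(X, y, n_centers=10, beta=50.0, lam=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    # step 1 (unsupervised): choose centers, here by random sampling of the inputs
    centers = X[rng.choice(len(X), size=n_centers, replace=False)]
    # step 2: linear weights minimizing ||G w - y||^2 + lam ||w||^2
    G = design_matrix(X, centers, beta)
    w = np.linalg.solve(G.T @ G + lam * np.eye(n_centers), G.T @ y)
    return centers, w

# usage on a 1-D toy regression problem
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = np.sin(2.0 * np.pi * X[:, 0])
centers, w = fit_rbf_two_step(X, y)
```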
RBF networks can be used to interpolate a function y : \mathbb{R}^n \to \mathbb{R} when the values of that function are known on a finite number of points: y(\mathbf{x}_i) = b_i, \; i = 1, \ldots, N. Taking the known points \mathbf{x}_i to be the centers of the radial basis functions and evaluating the values of the basis functions at the same points, g_{ij} = \rho\left( \left\| \mathbf{x}_j - \mathbf{x}_i \right\| \right), the weights can be solved from the equation

\left[ \begin{matrix} g_{11} & g_{12} & \cdots & g_{1N} \\ g_{21} & g_{22} & \cdots & g_{2N} \\ \vdots & & \ddots & \vdots \\ g_{N1} & g_{N2} & \cdots & g_{NN} \end{matrix} \right] \left[ \begin{matrix} w_1 \\ w_2 \\ \vdots \\ w_N \end{matrix} \right] = \left[ \begin{matrix} b_1 \\ b_2 \\ \vdots \\ b_N \end{matrix} \right]
It can be shown that the interpolation matrix in the above equation is non-singular if the points \mathbf{x}_i are distinct, and thus the weights \mathbf{w} can be solved by simple linear algebra:

\mathbf{w} = \mathbf{G}^{-1} \mathbf{b},

where \mathbf{G} = (g_{ij}).
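A short sketch of this interpolation scheme, assuming Gaussian basis functions with a single width \beta, is:

```python
import numpy as np

def rbf_interpolator(points, values, beta):
    """Exact RBF interpolation: the data points serve as the centers and the
    weights solve G w = b with g_ij = rho(||x_j - x_i||)."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=2)
    G = np.exp(-beta * d2)                 # N x N interpolation matrix
    w = np.linalg.solve(G, values)         # non-singular for distinct points
    def phi(x):
        r2 = np.sum((points - x) ** 2, axis=1)
        return w @ np.exp(-beta * r2)
    return phi

# usage: interpolate four samples of a scalar function on R^2
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([0.0, 1.0, 1.0, 2.0])
phi = rbf_interpolator(pts, vals, beta=2.0)
print(phi(np.array([1.0, 0.0])))           # reproduces the training value 1.0
```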
If the purpose is not to perform strict interpolation but instead more general function approximation or classification, the optimization is somewhat more complex because there is no obvious choice for the centers. The training is typically done in two phases, first fixing the widths and centers and then the weights. This can be justified by considering the different nature of the non-linear hidden neurons versus the linear output neuron.
Basis function centers can be randomly sampled among the input instances, obtained by the orthogonal least squares learning algorithm, or found by clustering the samples and choosing the cluster means as the centers.
The RBF widths are usually all fixed to the same value, which is proportional to the maximum distance between the chosen centers.
After the centers \mathbf{c}_i have been fixed, the weights that minimize the error at the output can be computed with a linear pseudoinverse solution:

\mathbf{w} = \mathbf{G}^{+} \mathbf{b},

where the entries of \mathbf{G} are the values of the radial basis functions evaluated at the points \mathbf{x}_i, that is, g_{ji} = \rho\left( \left\| \mathbf{x}_j - \mathbf{c}_i \right\| \right).
The existence of this linear solution means that unlike multi-layer perceptron (MLP) networks, RBF networks have an explicit minimizer (when the centers are fixed).
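A sketch of the pseudoinverse solution, assuming Gaussian basis functions with already-chosen centers and a shared width:

```python
import numpy as np

def fit_weights_pinv(X, y, centers, beta):
    """With centers fixed, the optimal linear weights are w = G^+ b,
    where g_ji = rho(||x_j - c_i||)."""
    d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    G = np.exp(-beta * d2)                # T x N design matrix
    return np.linalg.pinv(G) @ y          # least-squares optimal weights
```

Equivalently, np.linalg.lstsq(G, y, rcond=None) yields the same least-squares solution without forming the pseudoinverse explicitly.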
Another possible training algorithm is gradient descent. In gradient descent training, the weights are adjusted at each time step by moving them in a direction opposite from the gradient of the objective function (thus allowing the minimum of the objective function to be found),
\mathbf{w}(t+1) = \mathbf{w}(t) - \nu \frac{d}{d\mathbf{w}} H_t(\mathbf{w})
where \nu is the learning rate.
For the case of training the linear weights a_i, the algorithm becomes

a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\left( \mathbf{x}(t), \mathbf{w} \right) \right] \rho\left( \left\| \mathbf{x}(t) - \mathbf{c}_i \right\| \right)
in the unnormalized case and
a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\left( \mathbf{x}(t), \mathbf{w} \right) \right] u\left( \left\| \mathbf{x}(t) - \mathbf{c}_i \right\| \right)
in the normalized case.
For local linear architectures, gradient descent training is

e_{ij}(t+1) = e_{ij}(t) + \nu \left[ y(t) - \varphi\left( \mathbf{x}(t), \mathbf{w} \right) \right] v_{ij}\left( \mathbf{x}(t) - \mathbf{c}_i \right)
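A single gradient-descent update of the linear weights for the unnormalized architecture can be sketched as follows (Gaussian basis functions, one training pair per step):

```python
import numpy as np

def sgd_step(a, x, y_target, centers, betas, nu):
    """a_i <- a_i + nu * [y(t) - phi(x(t), w)] * rho(||x(t) - c_i||)."""
    rho = np.exp(-betas * np.sum((centers - x) ** 2, axis=1))
    err = y_target - a @ rho        # instantaneous error y(t) - phi(x(t), w)
    return a + nu * err * rho
```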
An alternative is projection operator training of the linear weights a_i and e_{ij}, in which the algorithm becomes

a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\left( \mathbf{x}(t), \mathbf{w} \right) \right] \frac{\rho\left( \left\| \mathbf{x}(t) - \mathbf{c}_i \right\| \right)}{\sum_{i=1}^{N} \rho^2\left( \left\| \mathbf{x}(t) - \mathbf{c}_i \right\| \right)}
in the unnormalized case and
a_i(t+1) = a_i(t) + \nu \left[ y(t) - \varphi\left( \mathbf{x}(t), \mathbf{w} \right) \right] \frac{u\left( \left\| \mathbf{x}(t) - \mathbf{c}_i \right\| \right)}{\sum_{i=1}^{N} u^2\left( \left\| \mathbf{x}(t) - \mathbf{c}_i \right\| \right)}
in the normalized case and
e_{ij}(t+1) = e_{ij}(t) + \nu \left[ y(t) - \varphi\left( \mathbf{x}(t), \mathbf{w} \right) \right] \frac{v_{ij}\left( \mathbf{x}(t) - \mathbf{c}_i \right)}{\sum_{i=1}^{N} \sum_{j=1}^{n} v_{ij}^2\left( \mathbf{x}(t) - \mathbf{c}_i \right)}
in the local-linear case.
For one basis function, projection operator training reduces to Newton's method.
The basic properties of radial basis functions can be illustrated with a simple mathematical map, the logistic map, which maps the unit interval onto itself. It can be used to generate a convenient prototype data stream. The logistic map can be used to explore function approximation, time series prediction, and control theory. The map originated from the field of population dynamics and became the prototype for chaotic time series. The map, in the fully chaotic regime, is given by
x(t+1) \stackrel{\mathrm{def}}{=} f\left[ x(t) \right] = 4 x(t) \left[ 1 - x(t) \right]
where t is a time index. The value of x at time t+1 is a parabolic function of x at time t. This equation represents the underlying geometry of the chaotic time series generated by the logistic map.
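The chaotic series itself is easy to generate; a short sketch (with an arbitrary initial condition of 0.3 inside the unit interval) is:

```python
import numpy as np

def logistic_series(x0=0.3, T=100):
    """Generate T + 1 samples of the fully chaotic logistic map x(t+1) = 4 x(t) (1 - x(t))."""
    x = np.empty(T + 1)
    x[0] = x0
    for t in range(T):
        x[t + 1] = 4.0 * x[t] * (1.0 - x[t])
    return x
```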
Generation of the time series from this equation is the forward problem. The examples here illustrate the inverse problem: identification of the underlying dynamics, or fundamental equation, of the logistic map from exemplars of the time series. The goal is to find an estimate
x(t+1) = f\left[ x(t) \right] \approx \varphi(t) = \varphi\left[ x(t) \right]
for f.
The architecture is
\varphi(\mathbf{x}) \stackrel{\mathrm{def}}{=} \sum_{i=1}^{N} a_i \, \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)
where
\rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) = \exp\left[ -\beta_i \left\| \mathbf{x} - \mathbf{c}_i \right\|^2 \right] = \exp\left[ -\beta_i \left( x(t) - c_i \right)^2 \right]
Since the input is a scalar rather than a vector, the input dimension is one. We choose the number of basis functions as N = 5 and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight \beta is taken to be a constant, the centers c_i are five exemplars from the time series, and the weights a_i are trained with projection operator training:

a_i(t+1) = a_i(t) + \nu \left[ x(t+1) - \varphi\left( x(t), \mathbf{w} \right) \right] \frac{\rho\left( \left\| x(t) - c_i \right\| \right)}{\sum_{i=1}^{N} \rho^2\left( \left\| x(t) - c_i \right\| \right)}

where \nu is the learning rate.
The normalized RBF architecture is
\varphi(\mathbf{x}) \stackrel{\mathrm{def}}{=} \frac{\sum_{i=1}^{N} a_i \, \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)}{\sum_{i=1}^{N} \rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)} = \sum_{i=1}^{N} a_i \, u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)

where

u\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) \stackrel{\mathrm{def}}{=} \frac{\rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right)}{\sum_{j=1}^{N} \rho\left( \left\| \mathbf{x} - \mathbf{c}_j \right\| \right)}
Again:
\rho\left( \left\| \mathbf{x} - \mathbf{c}_i \right\| \right) = \exp\left[ -\beta \left\| \mathbf{x} - \mathbf{c}_i \right\|^2 \right] = \exp\left[ -\beta \left( x(t) - c_i \right)^2 \right]
Again, we choose the number of basis functions as five and the size of the training set to be 100 exemplars generated by the chaotic time series. The weight \beta is taken to be a constant, the centers c_i are five exemplars from the time series, and the weights a_i are trained with projection operator training:

a_i(t+1) = a_i(t) + \nu \left[ x(t+1) - \varphi\left( x(t), \mathbf{w} \right) \right] \frac{u\left( \left\| x(t) - c_i \right\| \right)}{\sum_{i=1}^{N} u^2\left( \left\| x(t) - c_i \right\| \right)}

where \nu is again the learning rate.
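A self-contained sketch of this example follows; the width \beta, the learning rate \nu, and the particular exemplars used as centers are illustrative assumptions, not the values used to produce the article's figures.

```python
import numpy as np

# generate 100 training exemplars from the chaotic logistic map
x = np.empty(101)
x[0] = 0.3
for t in range(100):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

centers = x[[5, 25, 45, 65, 85]]   # five exemplars from the series (N = 5)
beta, nu = 5.0, 0.3                # assumed width and learning rate
a = np.zeros(len(centers))         # linear weights a_i

# one pass of projection operator training of the normalized RBF net
for t in range(100):
    rho = np.exp(-beta * (x[t] - centers) ** 2)
    u = rho / rho.sum()                       # normalized basis functions
    err = x[t + 1] - a @ u                    # x(t+1) - phi(x(t), w)
    a += nu * err * u / np.sum(u ** 2)        # projection operator update
```

The same loop with rho in place of u (and the sum of squared rho values in the denominator) implements the unnormalized example.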
Once the underlying geometry of the time series is estimated as in the previous examples, a prediction for the time series can be made by iteration:
\varphi(0) = x(1)

x(t) \approx \varphi(t-1)

x(t+1) \approx \varphi(t) = \varphi\left[ \varphi(t-1) \right]
A comparison of the actual and estimated time series is displayed in the figure. The estimated time series starts out at time zero with exact knowledge of x(0). It then uses the estimate of the dynamics to update the time series estimate for several time steps.
Note that the estimate is accurate for only a few time steps. This is a general characteristic of chaotic time series and reflects their sensitive dependence on initial conditions: a small initial error is amplified with time. A measure of the divergence of time series with nearly identical initial conditions is known as the Lyapunov exponent.
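The iteration itself is generic: given any one-step estimate of the dynamics (such as the trained RBF net \varphi above, here passed in as a function), the forecast is obtained by repeatedly feeding each prediction back in as the next input.

```python
def iterate_forecast(phi, x1, steps):
    """Multi-step prediction by iteration: start from x(1) and apply the
    estimated dynamics repeatedly, x(t+1) ~ phi[phi(t-1)]."""
    preds = [x1]
    for _ in range(steps):
        preds.append(phi(preds[-1]))
    return preds
```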
We assume the output of the logistic map can be manipulated through a control parameter c\left[ x(t), t \right] such that

x(t+1) = 4 x(t) \left[ 1 - x(t) \right] + c\left[ x(t), t \right].
The goal is to choose the control parameter in such a way as to drive the time series to a desired output d(t). This can be done if we choose the control parameter to be

c\left[ x(t), t \right] \stackrel{\mathrm{def}}{=} -y\left[ x(t) \right] + d(t+1),
where
y\left[ x(t) \right] \approx f\left[ x(t) \right] = x(t+1) - c\left[ x(t), t \right]
is an approximation to the underlying natural dynamics of the system.
The learning algorithm is given by
a_i(t+1) = a_i(t) + \nu \, \varepsilon \, \frac{u\left( \left\| x(t) - c_i \right\| \right)}{\sum_{i=1}^{N} u^2\left( \left\| x(t) - c_i \right\| \right)}
where
\varepsilon \stackrel{\mathrm{def}}{=} f\left[ x(t) \right] - y\left[ x(t) \right] = x(t+1) - c\left[ x(t), t \right] - y\left[ x(t) \right] = x(t+1) - d(t+1).
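A sketch of one step of this control loop is given below; y_hat stands for the RBF approximation y[x(t)] of the natural dynamics (for instance the normalized net trained above) and is an assumed, externally supplied function.

```python
def controlled_step(x_t, d_next, y_hat):
    """Apply the control c[x(t), t] = -y_hat(x(t)) + d(t+1) to the logistic map.
    If y_hat matched the true dynamics exactly, x(t+1) would equal d(t+1)."""
    c = -y_hat(x_t) + d_next                # control input
    x_next = 4.0 * x_t * (1.0 - x_t) + c    # manipulated logistic map
    return x_next, c
```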