ADALINE (Adaptive Linear Neuron or later Adaptive Linear Element) is an early single-layer artificial neural network and the name of the physical device that implemented this network.[1][2][3][4][5] It was developed by professor Bernard Widrow and his doctoral student Ted Hoff at Stanford University in 1960. It is based on the perceptron. It consists of a weight, a bias and a summation function. The weights and biases were implemented by rheostats (as seen in the "knobby ADALINE"), and later, memistors.
The difference between Adaline and the standard (McCulloch–Pitts) perceptron lies in how they learn. In Adaline, the weights are adjusted so that the weighted sum matches the teacher signal before the Heaviside function is applied, whereas in the standard perceptron the weights are adjusted to match the correct output after the Heaviside function is applied.
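To make the distinction concrete, here is a minimal sketch (an illustrative example, not code from the original sources) that applies one weight update to the same training example under both rules:

```python
import numpy as np

# One training example: augmented input x (x[0] = 1 is the bias input) and teacher signal t.
x = np.array([1.0, 0.5, -1.5])
t = 1.0
eta = 0.1  # learning rate

w_adaline = np.array([0.0, 0.2, -0.3])
w_perceptron = w_adaline.copy()

# ADALINE: the error is taken on the linear sum, before the Heaviside/sign threshold.
net = w_adaline @ x
w_adaline = w_adaline + eta * (t - net) * x

# Perceptron: the error is taken on the thresholded output, after the threshold.
out = 1.0 if w_perceptron @ x >= 0 else -1.0
w_perceptron = w_perceptron + eta * (t - out) * x

print("ADALINE update:   ", w_adaline)
print("Perceptron update:", w_perceptron)
```

With these particular values the perceptron's thresholded output already matches the teacher signal, so its weights do not change, while the ADALINE weights are still nudged so that the linear sum moves closer to the teacher value.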
A multilayer network of ADALINE units is a MADALINE.
Adaline is a single-layer neural network with multiple nodes, where each node accepts multiple inputs and generates one output. Given the following variables:

x is the input vector
w is the weight vector
n is the number of inputs
\theta is some constant
y is the output of the model

then the output is

y = \sum_{j=1}^{n} x_j w_j + \theta

If we further assume that x_0 = 1 and w_0 = \theta, then the output reduces to

y = \sum_{j=0}^{n} x_j w_j
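For illustration, the reduced formula is just a dot product of the augmented input and weight vectors; a minimal NumPy sketch (the name adaline_output is hypothetical) might look like this:

```python
import numpy as np

def adaline_output(x, w, theta):
    """Compute y = sum_{j=1}^{n} x_j w_j + theta by folding theta into the weights."""
    x_aug = np.concatenate(([1.0], x))    # x_0 = 1
    w_aug = np.concatenate(([theta], w))  # w_0 = theta
    return float(x_aug @ w_aug)           # y = sum_{j=0}^{n} x_j w_j

# Example with two inputs: y = 0.1 + 0.5*1.0 + 0.25*(-2.0) = 0.1
print(adaline_output(np.array([1.0, -2.0]), np.array([0.5, 0.25]), theta=0.1))
```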
The learning rule used by ADALINE is the LMS ("least mean squares") algorithm, a special case of gradient descent.
Define the following notations:

\eta is the learning rate (some positive constant)
y is the output of the model
o is the target (desired) output
E = (o - y)^2 is the square of the error

The LMS algorithm updates the weights by

w \leftarrow w + \eta (o - y) x

This update rule minimizes E, the square of the error.
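A minimal training loop using this update might look like the following sketch (assuming NumPy and a small synthetic dataset; the function name train_adaline is hypothetical):

```python
import numpy as np

def train_adaline(X, targets, eta=0.01, epochs=50):
    """Train a single ADALINE unit with the LMS rule.

    X: (m, n) matrix of inputs; targets: length-m vector of desired outputs o.
    Returns the learned weights, with w[0] playing the role of theta.
    """
    X_aug = np.hstack([np.ones((X.shape[0], 1)), X])  # prepend x_0 = 1
    w = np.zeros(X_aug.shape[1])
    for _ in range(epochs):
        for x, o in zip(X_aug, targets):
            y = w @ x                   # linear output, before any threshold
            w = w + eta * (o - y) * x   # LMS step, minimizing E = (o - y)^2
    return w

# Example: recover a noiseless linear teacher o = 0.5 + 2*x1 - x2
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
o = 0.5 + 2 * X[:, 0] - X[:, 1]
print(train_adaline(X, o))  # approaches [0.5, 2.0, -1.0]
```

Because the error is measured on the linear sum rather than the thresholded output, the training itself is ordinary least-mean-squares regression; the sign threshold is only applied when the unit is used as a classifier.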
MADALINE (Many ADALINE[8]) is a three-layer (input, hidden, output), fully connected, feed-forward artificial neural network architecture for classification that uses ADALINE units in its hidden and output layers; i.e., its activation function is the sign function.[9] The three-layer network uses memistors. Because the sign function is not differentiable, MADALINE networks cannot be trained with backpropagation; three different training algorithms, called Rule I, Rule II and Rule III, have been suggested instead.
Despite many attempts, Widrow's group never succeeded in training more than a single layer of weights in a MADALINE; this remained the case until Widrow saw the backpropagation algorithm at a 1985 Snowbird conference.[10]
MADALINE Rule 1 (MRI) - The first of these dates back to 1962.[11] It consists of two layers: the first layer is made of ADALINE units; let the output of the i-th ADALINE unit be o_i. The second layer takes a majority vote over the o_i: if more than half of them are positive, the network outputs +1, otherwise it outputs -1.
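As a sketch of how such a network computes its output (assuming, as described above, that the second layer is a fixed majority vote over the ADALINE outputs; the function name madaline_forward is hypothetical):

```python
import numpy as np

def madaline_forward(x, W):
    """MRI-style MADALINE forward pass: ADALINE units followed by a majority vote.

    x: augmented input vector (leading 1 for the bias); W: (k, n) hidden-unit weights.
    """
    nets = W @ x                       # linear sums of the k ADALINE units
    o = np.where(nets >= 0, 1, -1)     # sign activations o_i
    return 1 if o.sum() > 0 else -1    # majority vote over the o_i

# Example with three ADALINE units and two inputs plus bias
W = np.array([[ 0.1,  1.0, -1.0],
              [-0.2,  0.5,  0.5],
              [ 0.3, -1.0,  1.0]])
print(madaline_forward(np.array([1.0, 0.4, -0.6]), W))  # prints -1
```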
The largest MADALINE machine built had 1000 weights, each implemented by a memistor. It was built in 1963 and used MRI for learning.[12]
Some MADALINE machines were demonstrated on tasks such as inverted pendulum balancing, weather prediction, and speech recognition.
MADALINE Rule 2 (MRII) - The second training algorithm, described in 1988, improved on Rule I. It is based on a principle called "minimal disturbance": the algorithm loops over the training examples and, for each example whose output is wrong, it finds the ADALINE unit whose linear sum is closest to zero (the least confident unit), tentatively flips the sign of that unit's output, and accepts the change only if it reduces the error, repeating until the error for the example is zero. When flipping single units' signs does not drive the error to zero for a particular example, the training algorithm starts flipping pairs of units' signs, then triples of units, etc.[8]
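The following sketch illustrates the minimal-disturbance idea for a single training example (an assumed implementation, not Widrow's original code: it only tries single-unit flips and uses a minimum-norm weight change to realize an accepted flip):

```python
import numpy as np

def mrii_step(x, target, W, margin=0.1):
    """One minimal-disturbance pass of an MRII-style update for a single example.

    Hidden ADALINE units are tried in order of increasing |net| (least confident first);
    a unit's sign is flipped only if that corrects the majority-vote output. The full
    rule goes on to flip pairs, triples, etc. when single flips are not enough.
    """
    nets = W @ x
    o = np.where(nets >= 0, 1, -1)
    output = 1 if o.sum() > 0 else -1
    if output == target:
        return W                                # nothing to disturb
    for i in np.argsort(np.abs(nets)):          # least-confident unit first
        trial = o.copy()
        trial[i] = -trial[i]
        if (1 if trial.sum() > 0 else -1) == target:
            # Minimum-norm weight change that pushes this unit's linear sum just past
            # zero in the desired direction, so its sign actually flips.
            W[i] = W[i] + (margin * trial[i] - nets[i]) * x / (x @ x)
            return W
    return W

# Example usage with three hidden units and a bias input
W = np.array([[0.1, 1.0, -1.0], [-0.2, 0.5, 0.5], [0.3, -1.0, 1.0]])
print(mrii_step(np.array([1.0, 0.4, -0.6]), target=1, W=W))
```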
MADALINE Rule 3 (MRIII) - The third "Rule" applied to a modified network with sigmoid activations instead of signum; it was later found to be equivalent to backpropagation.[13]