Feedforward neural network

A feedforward neural network (FNN) is one of the two broad types of artificial neural network, characterized by the direction of the flow of information between its layers.[1] Its flow is uni-directional, meaning that the information in the model flows in only one direction - forward - from the input nodes, through the hidden nodes (if any), and to the output nodes, without any cycles or loops, in contrast to recurrent neural networks,[2] which have a bi-directional flow. Modern feedforward networks are trained using the backpropagation method[3] [4] [5] [6] [7] and are colloquially referred to as "vanilla" neural networks.[8]

Mathematical foundations

Activation function

The two historically common activation functions are both sigmoids, and are described by

$$y(v_i) = \tanh(v_i) \quad \textrm{and} \quad y(v_i) = (1 + e^{-v_i})^{-1}.$$

The first is a hyperbolic tangent that ranges from -1 to 1, while the other is the logistic function, which is similar in shape but ranges from 0 to 1. Here $y_i$ is the output of the $i$th node (neuron) and $v_i$ is the weighted sum of the input connections. Alternative activation functions have been proposed, including the rectifier and softplus functions. More specialized activation functions include radial basis functions (used in radial basis networks, another class of supervised neural network models).

In recent developments of deep learning, the rectified linear unit (ReLU) is more frequently used as one of the possible ways to overcome the numerical problems related to the sigmoids.
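
For concreteness, the three activations mentioned above can be written in a few lines of NumPy. This is a minimal illustrative sketch, not code from any particular library; the function names are chosen here for clarity.

```python
import numpy as np

def tanh(v):
    """Hyperbolic tangent activation: output ranges from -1 to 1."""
    return np.tanh(v)

def logistic(v):
    """Logistic (sigmoid) activation: output ranges from 0 to 1."""
    return 1.0 / (1.0 + np.exp(-v))

def relu(v):
    """Rectified linear unit: 0 for negative inputs, identity otherwise."""
    return np.maximum(0.0, v)

v = np.linspace(-3, 3, 7)
print(tanh(v))      # values in (-1, 1)
print(logistic(v))  # values in (0, 1)
print(relu(v))      # values in [0, inf)
```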

Learning

Learning occurs by changing connection weights after each piece of data is processed, based on the amount of error in the output compared to the expected result. This is an example of supervised learning, and is carried out through backpropagation.

We can represent the degree of error in an output node $j$ in the $n$th data point (training example) by $e_j(n) = d_j(n) - y_j(n)$, where $d_j(n)$ is the desired target value for the $n$th data point at node $j$, and $y_j(n)$ is the value produced at node $j$ when the $n$th data point is given as an input.

The node weights can then be adjusted based on corrections that minimize the error in the entire output for the $n$th data point, given by

$$\mathcal{E}(n) = \frac{1}{2} \sum_{\text{output node } j} e_j^2(n).$$
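
As a small numerical illustration of these two definitions, the per-node errors and the total error energy for one training example can be computed directly; the target and output values below (`d`, `y`) are made-up numbers, not data from the text.

```python
import numpy as np

# Desired targets d_j(n) and produced outputs y_j(n) for one data point n
d = np.array([1.0, 0.0, 0.0])   # hypothetical targets
y = np.array([0.8, 0.1, 0.3])   # hypothetical network outputs

e = d - y                        # e_j(n) = d_j(n) - y_j(n)
E = 0.5 * np.sum(e ** 2)         # E(n) = 1/2 * sum_j e_j(n)^2
print(e, E)                      # [ 0.2 -0.1 -0.3] 0.07
```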

Using gradient descent, the change in each weight $w_{ji}$ is

$$\Delta w_{ji}(n) = -\eta \frac{\partial \mathcal{E}(n)}{\partial v_j(n)} y_i(n)$$

where $y_i(n)$ is the output of the previous neuron $i$, and $\eta$ is the learning rate, which is selected to ensure that the weights quickly converge to a response, without oscillations. In the previous expression, $\frac{\partial \mathcal{E}(n)}{\partial v_j(n)}$ denotes the partial derivative of the error $\mathcal{E}(n)$ with respect to the weighted sum $v_j(n)$ of the input connections of neuron $j$.
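
The weight update above can be sketched numerically for a single output neuron with a logistic activation. The concrete values and variable names are illustrative assumptions, and the expression for $-\partial\mathcal{E}/\partial v_j$ anticipates the output-node form derived in the next paragraph.

```python
import numpy as np

eta = 0.5                            # learning rate η
y_i = np.array([0.4, 0.9])           # outputs y_i(n) of the previous neurons i
w_ji = np.array([0.1, -0.2])         # weights w_ji(n) into output neuron j
d_j = 1.0                            # desired target d_j(n)

v_j = np.dot(w_ji, y_i)              # induced local field v_j(n)
y_j = 1.0 / (1.0 + np.exp(-v_j))     # logistic output of neuron j
e_j = d_j - y_j                      # error e_j(n)

# -dE/dv_j = e_j * phi'(v_j); for the logistic function phi'(v_j) = y_j * (1 - y_j)
minus_dE_dv = e_j * y_j * (1.0 - y_j)

delta_w = eta * minus_dE_dv * y_i    # Δw_ji(n) = -η ∂E/∂v_j · y_i(n)
w_ji = w_ji + delta_w
print(w_ji)
```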

The derivative to be calculated depends on the induced local field $v_j$, which itself varies. It is easy to prove that for an output node this derivative can be simplified to

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = e_j(n) \phi'(v_j(n))$$

where $\phi'$ is the derivative of the activation function described above, which itself does not vary. The analysis is more difficult for the change in weights to a hidden node, but it can be shown that the relevant derivative is

$$-\frac{\partial \mathcal{E}(n)}{\partial v_j(n)} = \phi'(v_j(n)) \sum_k -\frac{\partial \mathcal{E}(n)}{\partial v_k(n)} w_{kj}(n).$$

This depends on the change in weights of the $k$th nodes, which represent the output layer. So to change the hidden layer weights, the output layer weights change according to the derivative of the activation function, and so this algorithm represents a backpropagation of the activation function.[9]
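
The two derivative formulas can be combined into a minimal backpropagation pass for a network with one hidden layer. This is a sketch under the assumptions of logistic activations and the squared-error measure defined above; the shapes, names, and initial values are illustrative, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(v):                       # logistic activation
    return 1.0 / (1.0 + np.exp(-v))

def phi_prime(v):                 # its derivative, phi'(v) = phi(v) * (1 - phi(v))
    s = phi(v)
    return s * (1.0 - s)

eta = 0.5
x = np.array([0.05, 0.10])        # one input data point
d = np.array([1.0, 0.0])          # desired outputs d_k(n)

W1 = rng.normal(scale=0.5, size=(3, 2))   # hidden-layer weights w_ji
W2 = rng.normal(scale=0.5, size=(2, 3))   # output-layer weights w_kj

# Forward pass
v_hidden = W1 @ x                 # induced local fields of hidden neurons j
y_hidden = phi(v_hidden)
v_out = W2 @ y_hidden             # induced local fields of output neurons k
y_out = phi(v_out)

# Backward pass
e = d - y_out                                            # e_k(n) = d_k(n) - y_k(n)
delta_out = e * phi_prime(v_out)                         # -∂E/∂v_k = e_k phi'(v_k)
delta_hidden = phi_prime(v_hidden) * (W2.T @ delta_out)  # -∂E/∂v_j = phi'(v_j) Σ_k (-∂E/∂v_k) w_kj

# Weight updates Δw = -η ∂E/∂v · y (outer products give one Δ per connection)
W2 += eta * np.outer(delta_out, y_hidden)
W1 += eta * np.outer(delta_hidden, x)
```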

History

Timeline

Linear regression

The simplest feedforward network consists of a single weight layer without activation functions. It is just a linear map, and training it amounts to linear regression. Linear regression by the method of least squares was used by Legendre (1805) and Gauss (1795) for the prediction of planetary movement.[21] [22] [23] [24]
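
As a sketch of this equivalence, fitting a single weight layer without an activation function by least squares is ordinary linear regression; the data below is made up for illustration.

```python
import numpy as np

# Made-up inputs (with a bias column) and targets
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
t = np.array([1.1, 2.9, 5.2, 6.8])

# Training the single linear layer by least squares = linear regression
w, *_ = np.linalg.lstsq(X, t, rcond=None)
print(w)                 # [intercept, slope]
print(X @ w)             # predictions of the "network" (a pure linear map)
```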

Perceptron

See main article: Perceptron. If a threshold activation is applied to the weighted sum of the inputs, the resulting linear threshold unit is called a perceptron. (Often the term is used to denote just one of these units.) Multiple parallel non-linear units are able to approximate any continuous function from a compact interval of the real numbers into the interval [−1,1], despite the limited computational power of a single unit with a linear threshold function.[25]

Perceptrons can be trained by a simple learning algorithm that is usually called the delta rule. It calculates the error between the calculated output and the sample output data, and uses this to adjust the weights, thus implementing a form of gradient descent.
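
A minimal sketch of such a training loop for a single linear threshold unit, using a made-up linearly separable dataset (the logical OR of two inputs); the variable names and constants are illustrative assumptions.

```python
import numpy as np

# Made-up linearly separable data: learn the logical OR of two inputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
d = np.array([0, 1, 1, 1], dtype=float)

eta = 0.1            # learning rate
w = np.zeros(2)      # weights
b = 0.0              # bias

for _ in range(20):                                     # a few passes over the data
    for x, target in zip(X, d):
        y = 1.0 if np.dot(w, x) + b > 0 else 0.0        # threshold unit output
        error = target - y                              # error between target and output
        w += eta * error * x                            # adjust weights to reduce the error
        b += eta * error

print([1.0 if np.dot(w, x) + b > 0 else 0.0 for x in X])   # [0.0, 1.0, 1.0, 1.0]
```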

Multilayer perceptron

A multilayer perceptron (MLP) is a misnomer for a modern feedforward artificial neural network consisting of fully connected neurons (hence the sometimes-used synonym fully connected network (FCN)), often with a nonlinear activation function, organized in at least three layers, and notable for being able to distinguish data that is not linearly separable.[26]

Other feedforward networks

Examples of other feedforward networks include convolutional neural networks and radial basis function networks, which use a different activation function.

Notes and References

  1. Book: Zell, Andreas . 1994 . Simulation Neuronaler Netze . Simulation of Neural Networks . German . 1st . Addison-Wesley . 73 . 3-89319-554-8.
  2. 2015-01-01. Deep learning in neural networks: An overview. Neural Networks. en. 61. 85–117. 10.1016/j.neunet.2014.09.003. 0893-6080. 1404.7828. Schmidhuber. Jürgen. 25462637. 11715509.
  3. Seppo . Linnainmaa . Seppo Linnainmaa . 1970 . Masters . The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors . fi . University of Helsinki . 6–7.
  4. Kelley . Henry J. . Henry J. Kelley . 1960 . Gradient theory of optimal flight paths . ARS Journal . 30 . 10 . 947–954 . 10.2514/8.5282.
  5. Rosenblatt, Frank. Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Spartan Books, Washington DC, 1961.
  6. Book: Werbos, Paul . Paul Werbos . System modeling and optimization . Springer . 1982 . 762–770 . Applications of advances in nonlinear sensitivity analysis . 2 July 2017 . http://werbos.com/Neural/SensitivityIFIPSeptember1981.pdf . https://web.archive.org/web/20160414055503/http://werbos.com/Neural/SensitivityIFIPSeptember1981.pdf . 14 April 2016 . live.
  7. Rumelhart, David E., Geoffrey E. Hinton, and R. J. Williams. "Learning Internal Representations by Error Propagation". David E. Rumelhart, James L. McClelland, and the PDP research group. (editors), Parallel distributed processing: Explorations in the microstructure of cognition, Volume 1: Foundation. MIT Press, 1986.
  8. Hastie, Trevor. Tibshirani, Robert. Friedman, Jerome. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York, NY, 2009.
  9. Book: Haykin, Simon . Simon Haykin . Neural Networks: A Comprehensive Foundation . 2 . 1998 . Prentice Hall . 0-13-273350-1.
  10. McCulloch . Warren S. . Pitts . Walter . 1943-12-01 . A logical calculus of the ideas immanent in nervous activity . The Bulletin of Mathematical Biophysics . en . 5 . 4 . 115–133 . 10.1007/BF02478259 . 1522-9602.
  11. Rosenblatt . Frank . Frank Rosenblatt . 1958 . The Perceptron: A Probabilistic Model For Information Storage And Organization in the Brain . Psychological Review . 65 . 6 . 386–408 . 10.1.1.588.3775 . 10.1037/h0042519 . 13602029 . 12781225.
  12. Book: Rosenblatt, Frank . Frank Rosenblatt . Principles of Neurodynamics . Spartan, New York . 1962.
  13. Book: Ivakhnenko, A. G. . Alexey Grigorevich Ivakhnenko . Cybernetic Predicting Devices . CCM Information Corporation . 1973.
  14. Book: Ivakhnenko . A. G. . Alexey Grigorevich Ivakhnenko . Cybernetics and forecasting techniques . Grigorʹevich Lapa . Valentin . American Elsevier Pub. Co. . 1967.
  15. 2212.11279 . cs.NE . Juergen . Schmidhuber . Juergen Schmidhuber . Annotated History of Modern AI and Deep Learning . 2022.
  16. Amari . Shun'ichi . Shun'ichi Amari . 1967 . A theory of adaptive pattern classifier . IEEE Transactions . EC . 16 . 279-307.
  17. Linnainmaa . Seppo . Seppo Linnainmaa . 1976 . Taylor expansion of the accumulated rounding error . BIT Numerical Mathematics . 16 . 2 . 146–160 . 10.1007/bf01931367 . 122357351.
  18. Book: Talking Nets: An Oral History of Neural Networks . 2000 . The MIT Press . 978-0-262-26715-1 . Anderson . James A. . en . 10.7551/mitpress/6626.003.0016 . Rosenfeld . Edward.
  19. Rumelhart . David E. . Hinton . Geoffrey E. . Williams . Ronald J. . October 1986 . Learning representations by back-propagating errors . Nature . en . 323 . 6088 . 533–536 . 1986Natur.323..533R . 10.1038/323533a0 . 1476-4687.
  20. Bengio . Yoshua . Ducharme . Réjean . Vincent . Pascal . Janvin . Christian . March 2003 . A neural probabilistic language model . The Journal of Machine Learning Research . 3 . 1137–1155.
  21. Merriman, Mansfield. A List of Writings Relating to the Method of Least Squares: With Historical and Critical Notes. Vol. 4. Academy, 1877.
  22. Stephen M. . Stigler . 1981 . Gauss and the Invention of Least Squares . Ann. Stat. . 9 . 3 . 465–474 . 10.1214/aos/1176345451 . free .
  23. Book: Bretscher, Otto . Linear Algebra With Applications . 3rd . Prentice Hall . 1995 . Upper Saddle River, NJ.
  24. Book: Stigler , Stephen M. . Stephen Stigler . 1986 . The History of Statistics: The Measurement of Uncertainty before 1900 . Cambridge . Harvard . 0-674-40340-1 . registration .
  25. Peter . Auer . Harald Burgsteiner . Wolfgang Maass . A learning rule for very simple universal approximators consisting of a single layer of perceptrons . Neural Networks . 21 . 5 . 786–795 . 2008 . 10.1016/j.neunet.2007.12.036 . 18249524 . 2009-09-08 . https://web.archive.org/web/20110706095227/http://www.igi.tugraz.at/harry/psfiles/biopdelta-07.pdf . 2011-07-06 . dead.
  26. Cybenko, G. 1989. Approximation by superpositions of a sigmoidal function Mathematics of Control, Signals, and Systems, 2(4), 303–314.