Vanishing gradient problem explained

In machine learning, the vanishing gradient problem is encountered when training neural networks with gradient-based learning methods and backpropagation. In such methods, during each training iteration, each of the neural network's weights receives an update proportional to the partial derivative of the error function with respect to the current weight.[1] The problem is that as the sequence length increases, the gradient magnitude typically decreases (or grows uncontrollably), slowing the training process.[1] In the worst case, this may completely stop the neural network from further training.[1] As one example of the problem's cause, traditional activation functions such as the hyperbolic tangent function have gradients in the range (0, 1], and backpropagation computes gradients by the chain rule. This has the effect of multiplying n of these small numbers to compute gradients of the early layers in an n-layer network, meaning that the gradient (error signal) decreases exponentially with n, so the early layers train very slowly.
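The multiplicative decay described above can be sketched numerically. The following toy calculation (illustrative, not part of the original analysis; the pre-activation range is a hypothetical choice) multiplies together the tanh derivatives that a backpropagated error signal picks up at each layer:

```python
import numpy as np

# Toy sketch: each tanh layer scales the backpropagated error by
# tanh'(z) < 1, so the gradient reaching the earliest layer shrinks
# roughly exponentially with the number of layers n.
rng = np.random.default_rng(0)

def early_layer_gradient(n_layers, rng):
    """Chain-rule product of tanh derivatives across n_layers layers."""
    grad = 1.0
    for _ in range(n_layers):
        z = rng.uniform(0.5, 1.5)       # hypothetical pre-activation
        grad *= 1.0 - np.tanh(z) ** 2   # tanh'(z), strictly below 1 here
    return grad

for n in (2, 10, 30):
    print(n, early_layer_gradient(n, rng))
```

With each factor strictly below 1, the printed gradients shrink rapidly as the depth grows.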

Back-propagation allowed researchers to train supervised deep artificial neural networks from scratch, initially with little success. Hochreiter's diplom thesis of 1991 formally identified the reason for this failure in the "vanishing gradient problem",[2] [3] which not only affects many-layered feedforward networks,[4] but also recurrent networks.[5] The latter are trained by unfolding them into very deep feedforward networks, where a new layer is created for each time step of an input sequence processed by the network. (The combination of unfolding and backpropagation is termed backpropagation through time.)

When activation functions are used whose derivatives can take on larger values, one risks encountering the related exploding gradient problem.

Prototypical models

This section is based on the paper On the difficulty of training Recurrent Neural Networks by Pascanu, Mikolov, and Bengio.

Recurrent network model

A generic recurrent network has hidden states h_1, h_2, ..., inputs u_1, u_2, ..., and outputs x_1, x_2, .... Let it be parametrized by \theta, so that the system evolves as

(h_t, x_t) = F(h_{t-1}, u_t, \theta)

Often, the output x_t is a function of h_t, as some x_t = G(h_t). The vanishing gradient problem already presents itself clearly when x_t = h_t, so we simplify our notation to the special case with

x_t = F(x_{t-1}, u_t, \theta)

Now, take its differential:

\begin{aligned}
dx_t &= \nabla_\theta F(x_{t-1}, u_t, \theta)\,d\theta + \nabla_x F(x_{t-1}, u_t, \theta)\,dx_{t-1} \\
&= \nabla_\theta F(x_{t-1}, u_t, \theta)\,d\theta + \nabla_x F(x_{t-1}, u_t, \theta)\left(\nabla_\theta F(x_{t-2}, u_{t-1}, \theta)\,d\theta + \nabla_x F(x_{t-2}, u_{t-1}, \theta)\,dx_{t-2}\right) \\
&= \cdots \\
&= \left(\nabla_\theta F(x_{t-1}, u_t, \theta) + \nabla_x F(x_{t-1}, u_t, \theta)\,\nabla_\theta F(x_{t-2}, u_{t-1}, \theta) + \cdots\right)d\theta
\end{aligned}

Training the network requires us to define a loss function to be minimized. Let it be L(x_T, u_1, ..., u_T); then minimizing it by gradient descent gives

\Delta\theta = -\eta \cdot \left[\nabla_x L(x_T)\left(\nabla_\theta F(x_{T-1}, u_T, \theta) + \nabla_x F(x_{T-1}, u_T, \theta)\,\nabla_\theta F(x_{T-2}, u_{T-1}, \theta) + \cdots\right)\right]^T

where \eta is the learning rate.

The vanishing/exploding gradient problem appears because there are repeated multiplications of the form

\nabla_x F(x_{T-1}, u_T, \theta)\,\nabla_x F(x_{T-2}, u_{T-1}, \theta)\,\nabla_x F(x_{T-3}, u_{T-2}, \theta)\cdots
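A minimal numerical sketch of this effect (illustrative, not from the paper): build a long product of Jacobian-like matrices whose operator norm is exactly \gamma, and the product's norm goes as \gamma^k.

```python
import numpy as np

# Illustrative sketch: each factor is gamma times a random orthogonal
# matrix, so its operator norm is exactly gamma and the k-fold product
# has norm gamma**k: vanishing for gamma < 1, exploding for gamma > 1.
rng = np.random.default_rng(0)

def product_norm(gamma, k=50, n=10):
    """Operator norm of a product of k matrices, each of norm gamma."""
    prod = np.eye(n)
    for _ in range(k):
        q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal
        prod = (gamma * q) @ prod
    return np.linalg.norm(prod, 2)

print(product_norm(0.9))  # 0.9**50 ~ 5.2e-3: vanishing
print(product_norm(1.1))  # 1.1**50 ~ 117:    exploding
```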

Example: recurrent network with sigmoid activation

For a concrete example, consider a typical recurrent network defined by

x_t = F(x_{t-1}, u_t, \theta) = W_{rec}\,\sigma(x_{t-1}) + W_{in}\,u_t + b

where \theta = (W_{rec}, W_{in}) is the network parameter, \sigma is the sigmoid activation function, applied to each vector coordinate separately, and b is the bias vector.

Then \nabla_x F(x_{t-1}, u_t, \theta) = W_{rec}\,\operatorname{diag}(\sigma'(x_{t-1})), and so

\begin{aligned}
&\nabla_x F(x_{T-1}, u_T, \theta)\,\nabla_x F(x_{T-2}, u_{T-1}, \theta)\cdots\nabla_x F(x_{T-k}, u_{T-k+1}, \theta) \\
&= W_{rec}\,\operatorname{diag}(\sigma'(x_{T-1}))\;W_{rec}\,\operatorname{diag}(\sigma'(x_{T-2}))\cdots W_{rec}\,\operatorname{diag}(\sigma'(x_{T-k}))
\end{aligned}

Since |\sigma'| \leq 1, the operator norm of the above multiplication is bounded above by \|W_{rec}\|^k. So if the spectral radius of W_{rec} is \gamma < 1, then at large k, the above multiplication has operator norm bounded above by \gamma^k \to 0. This is the prototypical vanishing gradient problem.
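This behavior can be checked numerically. The sketch below (with hypothetical sizes, weights, and initial state) runs the recurrence x_t = W_rec σ(x_{t−1}) + b and accumulates the Jacobians W_rec diag(σ′(x_{t−1})); with the spectral radius of W_rec set to 0.9, the product's norm collapses:

```python
import numpy as np

# Illustrative check of the sigmoid-RNN example: accumulate the product
# of Jacobians W_rec @ diag(sigmoid'(x)) along a trajectory. W_rec, b,
# and the initial state are hypothetical random choices.
rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n, k = 8, 40
W_rec = rng.standard_normal((n, n))
W_rec *= 0.9 / np.abs(np.linalg.eigvals(W_rec)).max()  # spectral radius 0.9
b = rng.standard_normal(n)

x = rng.standard_normal(n)
jac_prod = np.eye(n)
for _ in range(k):
    s = sigmoid(x)  # sigmoid'(x) = s * (1 - s)
    jac_prod = (W_rec @ np.diag(s * (1.0 - s))) @ jac_prod
    x = W_rec @ sigmoid(x) + b

print(np.linalg.norm(jac_prod, 2))  # shrinks toward 0 as k grows
```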

The effect of a vanishing gradient is that the network cannot learn long-range effects. Recall the gradient expression derived above:

\nabla_\theta L = \nabla_x L(x_T, u_1, ..., u_T)\left(\nabla_\theta F(x_{T-1}, u_T, \theta) + \nabla_x F(x_{T-1}, u_T, \theta)\,\nabla_\theta F(x_{T-2}, u_{T-1}, \theta) + \cdots\right)

The components of \nabla_\theta F(x, u, \theta) are just components of \sigma(x) and u, so if u_T, u_{T-1}, ... are bounded, then \|\nabla_\theta F(x_{T-k-1}, u_{T-k}, \theta)\| is also bounded by some M > 0, and so the terms in \nabla_\theta L decay as M\gamma^k. This means that, effectively, \nabla_\theta L is affected only by the first O(\gamma^{-1}) terms in the sum.

If \gamma \geq 1, the above analysis does not quite work. For the prototypical exploding gradient problem, the next model is clearer.

Dynamical systems model

Following (Doya, 1993),[6] consider this one-neuron recurrent network with sigmoid activation:

x_{t+1} = (1-\epsilon)\,x_t + \epsilon\,\sigma(w x_t + b) + \epsilon\,w'\,u_t

In the small-\epsilon limit, the dynamics of the network becomes

\frac{dx(t)}{dt} = -x(t) + \sigma(w x(t) + b) + w'\,u(t)

Consider first the autonomous case, with u = 0. Set w = 5.0, and vary b in [-3, -2]. As b decreases, the system has 1 stable point, then has 2 stable points and 1 unstable point, and finally has 1 stable point again. Explicitly, the stable points are given by

(x, b) = \left(x,\; \ln\left(\frac{x}{1-x}\right) - 5x\right)

Now consider \frac{\Delta x(T)}{\Delta x(0)} and \frac{\Delta x(T)}{\Delta b}, where T is large enough that the system has settled into one of the stable points.

If (x(0), b) puts the system very close to an unstable point, then a tiny variation in x(0) or b would make x(T) move from one stable point to the other. This makes \frac{\Delta x(T)}{\Delta x(0)} and \frac{\Delta x(T)}{\Delta b} both very large, a case of the exploding gradient.

If (x(0), b) puts the system far from an unstable point, then a small variation in x(0) would have no effect on x(T), making \frac{\Delta x(T)}{\Delta x(0)} = 0, a case of the vanishing gradient.

Note that in this case, \frac{\Delta x(T)}{\Delta b} \approx \frac{\partial x(T)}{\partial b} = \left(\frac{1}{x(T)(1-x(T))} - 5\right)^{-1} neither decays to zero nor blows up to infinity. Indeed, it is the only well-behaved gradient, which explains why early research focused on learning or designing recurrent network systems that could perform long-range computations (such as outputting the first input seen at the very end of an episode) by shaping their stable attractors.[7]
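This sensitivity picture can be reproduced with a few lines of Euler integration (an illustrative sketch using the parameters above, w = 5 and u = 0; the step size, horizon, and the particular values of x(0) are hypothetical choices):

```python
import numpy as np

# Euler-integrate dx/dt = -x + sigma(5x + b) with u = 0.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def x_T(x0, b, T=50.0, dt=0.01):
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-x + sigmoid(5.0 * x + b))
    return x

b = -2.5  # bistable regime: unstable fixed point at x = 0.5

# Just either side of the unstable point: x(T) lands on different
# attractors, so Delta x(T) / Delta x(0) is huge (exploding gradient).
print(x_T(0.49, b), x_T(0.51, b))

# Far from the unstable point: both runs reach the same attractor,
# so Delta x(T) / Delta x(0) is ~0 (vanishing gradient).
print(x_T(0.10, b), x_T(0.12, b))
```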

For the general case, the intuition still holds (see Figures 3, 4, and 5 of Pascanu et al.[5]).

Geometric model

Continue using the above one-neuron network, fixing w = 5, x(0) = 0.5, u(t) = 0, and consider a loss function defined by L(x(T)) = (0.855 - x(T))^2. This produces a rather pathological loss landscape: as b approaches -2.5 from above, the loss approaches zero, but as soon as b crosses -2.5, the attractor basin changes and the loss jumps to 0.50.

Consequently, attempting to train b by gradient descent would "hit a wall in the loss landscape" and cause an exploding gradient. A slightly more complex situation is plotted in Figure 6 of Pascanu et al.[5]
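A short script (illustrative; the Euler integration, step size, and horizon are hypothetical choices) makes the wall visible: the loss is tiny just above b = -2.5 and jumps to about 0.50 just below it.

```python
import numpy as np

# Loss landscape of the one-neuron network: w = 5, x(0) = 0.5, u = 0,
# L = (0.855 - x(T))**2, with x(T) obtained by Euler integration.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def x_T(x0, b, T=50.0, dt=0.01):
    x = x0
    for _ in range(int(T / dt)):
        x += dt * (-x + sigmoid(5.0 * x + b))
    return x

def loss(b):
    return (0.855 - x_T(0.5, b)) ** 2

print(loss(-2.49))  # near zero: x(T) settles in the upper basin
print(loss(-2.51))  # ~0.50: the attractor basin has changed
```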

Solutions

To overcome this problem, several methods have been proposed.

Batch normalization

Batch normalization is a standard method for solving both the exploding and the vanishing gradient problems.[8] [9]
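As a rough sketch of the mechanism (a minimal NumPy version of the normalization step only; the full method also learns a per-feature scale and shift and tracks running statistics for inference), normalizing each feature over the mini-batch keeps pre-activations near zero, where saturating activations still have usable derivatives:

```python
import numpy as np

# Normalize each feature to zero mean and unit variance over the batch.
def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

rng = np.random.default_rng(0)
x = 100.0 * rng.standard_normal((32, 4)) + 50.0  # badly scaled mini-batch
y = batch_norm(x)
print(y.mean(axis=0))  # ~0 per feature
print(y.std(axis=0))   # ~1 per feature
```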

Multi-level hierarchy

One is Jürgen Schmidhuber's multi-level hierarchy of networks (1992), pre-trained one level at a time through unsupervised learning and fine-tuned through backpropagation.[10] Here each level learns a compressed representation of the observations that is fed to the next level.

Related approach

Similar ideas have been used in feed-forward neural networks for unsupervised pre-training to structure a neural network, making it first learn generally useful feature detectors. Then the network is trained further by supervised backpropagation to classify labeled data. The deep belief network model by Hinton et al. (2006) involves learning the distribution of a high-level representation using successive layers of binary or real-valued latent variables. It uses a restricted Boltzmann machine to model each new layer of higher-level features. Each new layer guarantees an increase in the lower bound of the log likelihood of the data, thus improving the model, if trained properly. Once sufficiently many layers have been learned, the deep architecture may be used as a generative model by reproducing the data when sampling down the model (an "ancestral pass") from the top-level feature activations.[11] Hinton reports that his models are effective feature extractors over high-dimensional, structured data.[12]

Long short-term memory

See main article: Long short-term memory.

Another technique particularly used for recurrent neural networks is the long short-term memory (LSTM) network of 1997 by Hochreiter & Schmidhuber.[13] In 2009, deep multidimensional LSTM networks demonstrated the power of deep learning with many nonlinear layers, by winning three ICDAR 2009 competitions in connected handwriting recognition, without any prior knowledge about the three different languages to be learned.[14] [15]

Faster hardware

Hardware advances have meant that from 1991 to 2015, computer power (especially as delivered by GPUs) increased around a million-fold, making standard backpropagation feasible for networks several layers deeper than when the vanishing gradient problem was recognized. Schmidhuber notes that this "is basically what is winning many of the image recognition competitions now", but that it "does not really overcome the problem in a fundamental way"[16] since the original models tackling the vanishing gradient problem by Hinton and others were trained on Xeon processors, not GPUs.

Residual networks

One of the newest and most effective ways to resolve the vanishing gradient problem is with residual neural networks,[17] or ResNets (not to be confused with recurrent neural networks). ResNets refer to neural networks where skip connections or residual connections are part of the network architecture. These skip connections allow gradient information to pass through the layers, by creating "highways" of information, where the output of a previous layer/activation is added to the output of a deeper layer. This allows information from the earlier parts of the network to be passed to the deeper parts of the network, helping maintain signal propagation even in deeper networks. Skip connections are a critical component of what allowed successful training of deeper neural networks.
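The effect of the added identity path can be sketched with linearized layers (an illustrative simplification, not an actual ResNet: each layer is replaced by its Jacobian). A plain layer contributes a Jacobian W to the chain-rule product, while a residual layer y = x + F(x) contributes I + W:

```python
import numpy as np

# Compare chained Jacobians of a plain stack (W each) against a
# residual stack (I + W each); the weights are hypothetical small
# random matrices standing in for trained layers.
rng = np.random.default_rng(0)
n, depth = 16, 30
Ws = [0.05 * rng.standard_normal((n, n)) for _ in range(depth)]

plain = np.eye(n)
resid = np.eye(n)
for W in Ws:
    plain = W @ plain                 # plain stack: gradients shrink
    resid = (np.eye(n) + W) @ resid   # residual: identity "highway"

print(np.linalg.norm(plain, 2))   # collapses toward 0
print(np.linalg.norm(resid, 2))   # stays on the order of 1
```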

ResNets yielded lower training error (and test error) than their shallower counterparts simply by reintroducing outputs from shallower layers in the network to compensate for the vanishing gradient. Note that ResNets are an ensemble of relatively shallow nets and do not resolve the vanishing gradient problem by preserving gradient flow throughout the entire depth of the network; rather, they avoid the problem by constructing ensembles of many short networks together ("ensemble by construction").[18]

Other activation functions

Rectifiers such as ReLU suffer less from the vanishing gradient problem, because they only saturate in one direction.[19]
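The contrast is easy to quantify (an illustrative comparison; the single pre-activation value and depth are hypothetical): the sigmoid derivative is at most 0.25, so a chain of k sigmoid layers scales the gradient by at most 0.25**k, while the ReLU derivative is exactly 1 wherever the unit is active.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

z = 0.0   # pre-activation at which the derivatives are evaluated
k = 20    # number of chained layers

sig_grad = (sigmoid(z) * (1 - sigmoid(z))) ** k  # 0.25**20 ~ 9.1e-13
relu_grad = (1.0 if z >= 0 else 0.0) ** k        # 1.0 for an active unit
print(sig_grad, relu_grad)
```

The one-sided caveat remains: a ReLU unit with a negative pre-activation has derivative 0 and passes no gradient at all (the "dying ReLU" regime).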

Weight initialization

Weight initialization is another approach that has been proposed to reduce the vanishing gradient problem in deep networks.

Kumar suggested that the distribution of initial weights should vary according to the activation function used, and proposed initializing the weights of networks with the logistic activation function using a Gaussian distribution with zero mean and a standard deviation of 3.6/sqrt(N), where N is the number of neurons in a layer.[20]
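A minimal sketch of this initialization (the layer sizes and the helper name `kumar_init` are hypothetical illustrations):

```python
import numpy as np

# Draw weights from N(0, (3.6/sqrt(N))**2), where N is the fan-in,
# per Kumar's proposal for logistic-activation networks.
def kumar_init(rng, n_in, n_out):
    return rng.normal(0.0, 3.6 / np.sqrt(n_in), size=(n_in, n_out))

rng = np.random.default_rng(0)
W = kumar_init(rng, n_in=256, n_out=128)
print(W.std())  # close to 3.6 / sqrt(256) = 0.225
```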

Recently, Yilmaz and Poli[21] performed a theoretical analysis of how gradients are affected by the mean of the initial weights in deep neural networks using the logistic activation function, and found that gradients do not vanish if the mean of the initial weights is set according to the formula max(−1, −8/N). This simple strategy allows networks with 10 or 15 hidden layers to be trained very efficiently and effectively using standard backpropagation.

Other

Behnke relied only on the sign of the gradient (Rprop) when training his Neural Abstraction Pyramid[22] to solve problems like image reconstruction and face localization.

Neural networks can also be optimized by applying a universal search algorithm to the space of the network's weights, e.g., random guessing or, more systematically, a genetic algorithm. This approach is not based on gradients and avoids the vanishing gradient problem.[23]


Notes and References

  1. Basodi. Sunitha. Ji. Chunyan. Zhang. Haiping. Pan. Yi. September 2020. Gradient amplification: An efficient way to train deep neural networks. Big Data Mining and Analytics. 3. 3. 198. 10.26599/BDMA.2020.9020004. 219792172 . 2096-0654. free. 2006.10560.
  2. S. . Hochreiter . Untersuchungen zu dynamischen neuronalen Netzen . Diplom thesis . Institut f. Informatik, Technische Univ. Munich . 1991 .
  3. Book: S. . Hochreiter . Y. . Bengio . P. . Frasconi . J. . Schmidhuber . Gradient flow in recurrent nets: the difficulty of learning long-term dependencies . S. C. . Kremer . J. F. . Kolen . A Field Guide to Dynamical Recurrent Neural Networks . IEEE Press . 2001 . 0-7803-5369-2 .
  4. Goh. Garrett B.. Hodas. Nathan O.. Vishnu. Abhinav. 2017-06-15. Deep learning for computational chemistry. Journal of Computational Chemistry. en. 38. 16. 1291–1307. 10.1002/jcc.24764. 28272810. 1701.04503. 2017arXiv170104503G. 6831636.
  5. Pascanu. Razvan. Mikolov. Tomas. Bengio. Yoshua. 2012-11-21. On the difficulty of training Recurrent Neural Networks. 1211.5063. cs.LG.
  6. Book: Doya, K. . [Proceedings] 1992 IEEE International Symposium on Circuits and Systems . Bifurcations in the learning of recurrent neural networks . http://dx.doi.org/10.1109/iscas.1992.230622 . 1992 . 6 . 2777–2780 . IEEE . 10.1109/iscas.1992.230622. 0-7803-0593-0 . 15069221 .
  7. Bengio . Y. . Simard . P. . Frasconi . P. . March 1994 . Learning long-term dependencies with gradient descent is difficult . IEEE Transactions on Neural Networks . 5 . 2 . 157–166 . 10.1109/72.279181 . 18267787 . 206457500 . 1941-0093.
  8. Ioffe . Sergey . Szegedy . Christian . 2015-06-01 . Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift . International Conference on Machine Learning . en . PMLR . 448–456. 1502.03167 .
  9. Santurkar . Shibani . Tsipras . Dimitris . Ilyas . Andrew . Madry . Aleksander . 2018 . How Does Batch Normalization Help Optimization? . Advances in Neural Information Processing Systems . Curran Associates, Inc. . 31.
  10. J. Schmidhuber., "Learning complex, extended sequences using the principle of history compression," Neural Computation, 4, pp. 234–242, 1992.
  11. Osindero. S.. Teh. Y.. 2006. A fast learning algorithm for deep belief nets. Neural Computation. 18. 7. 1527–1554. 10.1162/neco.2006.18.7.1527. 16764513. Hinton. G. E.. Geoffrey Hinton. 10.1.1.76.1541. 2309950.
  12. 2009. Deep belief networks. Scholarpedia. 4. 5. 5947. 10.4249/scholarpedia.5947. Hinton. G.. 2009SchpJ...4.5947H. free.
  13. Hochreiter . Sepp . Sepp Hochreiter . Jürgen Schmidhuber . Schmidhuber . Jürgen . 1997 . Long Short-Term Memory . Neural Computation . 9 . 8. 1735–1780 . 10.1162/neco.1997.9.8.1735 . 9377276. 1915014 .
  14. Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552
  15. Graves . A. . Liwicki . M. . Fernandez . S. . Bertolami . R. . Bunke . H. . Schmidhuber . J. . A Novel Connectionist System for Improved Unconstrained Handwriting Recognition . IEEE Transactions on Pattern Analysis and Machine Intelligence . 31 . 5. 2009 . 855–868 . 10.1109/tpami.2008.137 . 19299860 . 10.1.1.139.4502 . 14635907 .
  16. Schmidhuber . Jürgen . Deep learning in neural networks: An overview . Neural Networks . 61 . 2015 . 85–117 . 1404.7828 . 10.1016/j.neunet.2014.09.003. 25462637 . 11715509 .
  17. He. Kaiming. Zhang. Xiangyu. Ren. Shaoqing. Sun. Jian. 2016. Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA. IEEE. 770–778. 1512.03385. 10.1109/CVPR.2016.90. 978-1-4673-8851-1.
  18. Veit. Andreas. Wilber. Michael. Belongie. Serge. 2016-05-20. Residual Networks Behave Like Ensembles of Relatively Shallow Networks. 1605.06431. cs.CV.
  19. Glorot. Xavier. Bordes. Antoine. Bengio. Yoshua. 2011-06-14. Deep Sparse Rectifier Neural Networks. PMLR. 315–323. en.
  20. Kumar, Siddharth Krishna. "On weight initialization in deep neural networks." arXiv preprint arXiv:1704.08863 (2017).
  21. Yilmaz . Ahmet . Poli . Riccardo . 2022-09-01 . Successfully and efficiently training deep multi-layer perceptrons with logistic activation function simply requires initializing the weights with an appropriate negative mean . Neural Networks . en . 153 . 87–103 . 10.1016/j.neunet.2022.05.030 . 35714424 . 249487697 . 0893-6080.
  22. Book: Hierarchical Neural Networks for Image Interpretation.. Springer. 2003. Lecture Notes in Computer Science. 2766. Sven Behnke.
  23. Web site: Sepp Hochreiter's Fundamental Deep Learning Problem (1991). people.idsia.ch. 2017-01-07.