In artificial neural networks, a hidden layer is a layer of artificial neurons that processes the inputs received from the input layer before passing the result on toward the output layer. An example of a neural network that uses hidden layers is the feedforward neural network.[1]
The hidden layers transform the inputs from the input layer into a representation the output layer can use. This is accomplished by applying weights to the inputs and passing the weighted sums through an activation function, which computes each neuron's output from its inputs and weights. The non-linearity of the activation function is what allows the network to learn non-linear relationships between the input and output data.
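A minimal sketch of this forward computation is shown below. The layer sizes, the use of NumPy, and the choice of tanh as the activation function are illustrative assumptions, not details specified in the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_hidden = 3, 4
W = rng.standard_normal((n_hidden, n_inputs))  # weights, one row per hidden neuron
b = np.zeros(n_hidden)                         # biases

def hidden_layer(x):
    z = W @ x + b          # weighted sum of the inputs
    return np.tanh(z)      # non-linear activation function

x = np.array([0.5, -1.0, 2.0])   # example input vector
h = hidden_layer(x)              # hidden-layer activations passed toward the output layer
print(h)
```

Each hidden neuron computes one weighted sum of the inputs; the activation function then squashes that sum non-linearly, which is what lets stacked layers represent non-linear input-output relationships.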
The weights are typically assigned at random at first. They are then fine-tuned and calibrated through an algorithm called backpropagation.[2]
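The following sketch illustrates both steps: random initialization followed by a single backpropagation update. The one-hidden-layer architecture, squared-error loss, and learning rate are illustrative assumptions rather than details from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

W1 = rng.standard_normal((4, 3)) * 0.1   # input -> hidden weights, randomly assigned
W2 = rng.standard_normal((1, 4)) * 0.1   # hidden -> output weights
lr = 0.1                                 # learning rate

x = np.array([0.5, -1.0, 2.0])           # input
y = np.array([1.0])                      # target output

# Forward pass
h = np.tanh(W1 @ x)                      # hidden activations
y_hat = W2 @ h                           # linear output

# Backward pass: propagate the error gradient back through the layers
err = y_hat - y                              # gradient of squared error w.r.t. y_hat
grad_W2 = np.outer(err, h)                   # gradient for the output weights
grad_h = W2.T @ err                          # error signal reaching the hidden layer
grad_W1 = np.outer(grad_h * (1 - h**2), x)   # tanh'(z) = 1 - tanh(z)^2

# Gradient-descent update fine-tunes the randomly initialized weights
W2 -= lr * grad_W2
W1 -= lr * grad_W1
```

Repeating this update over many examples is what gradually calibrates the initially random weights.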
Using more hidden layers than the complexity of the problem requires can cause overfitting, where the network matches the training data so closely that its ability to generalize is limited. In the opposite situation, too few hidden layers for the complexity at hand can cause underfitting, where the network struggles to model the problem it is given.[3]
Uzair, Muhammad; Jamil, Noreen. IEEE 23rd Multitopic Conference.