Concept

Layers of a Feed Forward Neural Network

In the chain structure example $f(x) = f^{(3)}(f^{(2)}(f^{(1)}(x)))$, $f^{(1)}$ is called the first layer of the network, $f^{(2)}$ is called the second layer, and so on. The final layer of a feedforward network is called the output layer.
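The chain structure can be sketched as function composition. A minimal illustration (the layer sizes, ReLU activation, and random weights are assumptions for the sketch, not from the source):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_layer(n_in, n_out):
    """Build one layer f^(i): an affine map followed by a ReLU nonlinearity."""
    W = rng.standard_normal((n_out, n_in)) * 0.1
    b = np.zeros(n_out)
    return lambda x: np.maximum(0.0, W @ x + b)

f1 = make_layer(4, 8)   # first layer
f2 = make_layer(8, 8)   # second layer
f3 = make_layer(8, 1)   # output layer

def f(x):
    # f(x) = f^(3)(f^(2)(f^(1)(x)))
    return f3(f2(f1(x)))

y = f(rng.standard_normal(4))
print(y.shape)  # one output unit -> shape (1,)
```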

During neural network training, we drive $f(x)$ to match $f^*(x)$. The training data provide us with noisy, approximate examples of $f^*(x)$ evaluated at different training points. Each example $x$ is accompanied by a label $y \approx f^*(x)$.

The training examples specify directly that the output layer must produce a value close to $y$ at each point $x$. The behavior of the other layers, however, is not directly specified by the training data; the learning algorithm must decide how to use those layers to best implement an approximation of $f^*$. Because the training data do not show the desired output for each of these layers, they are called hidden layers.
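This can be made concrete with a small sketch: a one-hidden-layer network fit by gradient descent to noisy labels $y \approx f^*(x)$. The target $f^*(x) = \sin(x)$, the tanh activation, the layer width, and the learning rate are all illustrative assumptions; note that only the output layer's values are constrained by the data, while the hidden layer's role emerges from training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy training examples of an assumed target f*(x) = sin(x)
X = rng.uniform(-3, 3, size=(256, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)  # labels y ~ f*(x)

# One hidden layer (behavior not specified by the data) and an output layer
W1 = rng.standard_normal((1, 32)) * 0.5; b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.5; b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)      # hidden layer activations
    pred = H @ W2 + b2            # output layer: must approach y
    err = pred - y
    # Gradients of mean squared error via backpropagation
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)
```

The learning algorithm alone decides what the 32 hidden units compute; the data only penalize the final output's distance from $y$.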


Updated 2021-06-15

Tags

Data Science