When do we need Deep Learning?

The Universal Approximation Theorem (UAT) states that a feedforward network with only one hidden layer, containing a finite number of neurons, is enough to approximate any continuous function to arbitrary precision. This is an impressive statement for two reasons: on the one hand, the theorem proves the immense representational capacity of neural networks. But, on the other hand… does it mean that we never need Deep Learning? No, breathe deeply, it doesn’t mean that…
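As a toy illustration of this representational capacity (a sketch, assuming ReLU activations rather than the sigmoids of the original theorem), a single hidden layer with just two neurons can represent f(x) = |x| exactly: relu(x) + relu(-x) = |x|.

```python
def relu(x):
    """Rectified linear unit activation."""
    return max(0.0, x)

def shallow_net(x):
    # One hidden layer with two ReLU neurons:
    #   h1 = relu(+1 * x),  h2 = relu(-1 * x)
    # Output layer simply sums them: h1 + h2 = |x|
    return relu(x) + relu(-x)

print(shallow_net(3.0))   # 3.0
print(shallow_net(-2.5))  # 2.5
```

For less convenient target functions, of course, the hidden layer needed to reach a given precision can grow enormously, which is exactly the catch discussed next.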

The UAT doesn’t specify how many neurons that hidden layer must contain. Although a single hidden layer may be enough to represent a specific function, it could be far more efficient to learn it with a network of multiple hidden layers. Furthermore, when training a network, we are looking for the function that best generalizes the relationship in the data. Even if a single-hidden-layer network is able to represent the function that best fits the training examples, that doesn’t mean it will generalize well to data outside the training set.
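The efficiency argument can be made concrete by counting parameters. The sketch below (with hypothetical layer sizes chosen purely for illustration) compares a wide single-hidden-layer network against a narrower three-hidden-layer one:

```python
def param_count(layer_sizes):
    """Total weights + biases of a fully connected feedforward network.

    Each consecutive pair of layers (a, b) contributes a*b weights
    and b biases.
    """
    return sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

wide_shallow = param_count([1, 1024, 1])       # one hidden layer of 1024 units
narrow_deep = param_count([1, 32, 32, 32, 1])  # three hidden layers of 32 units

print(wide_shallow)  # 3073
print(narrow_deep)   # 2209
```

The deeper network here uses fewer parameters overall; whether it actually represents a given function better depends on the function, but this is the kind of trade-off the next quote summarizes.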

This is very well explained in the book Deep Learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville:

In summary, a feedforward network with a single layer is sufficient to represent any function, but the layer may be infeasibly large and may fail to learn and generalize correctly. In many circumstances, using deeper models can reduce the number of units required to represent the desired function and can reduce the amount of generalization error.

Updated 2021-02-20

Tags

Data Science
