Universal Approximation Theorem
The universal approximation theorem (Hornik et al., 1989; Cybenko, 1989) states that a feedforward network with a linear output layer and at least one hidden layer with any "squashing" activation function (such as the logistic sigmoid) can approximate any Borel measurable function from one finite-dimensional space to another with any desired nonzero amount of error, provided that the network is given enough hidden units. The derivatives of the feedforward network can also approximate the derivatives of the target function arbitrarily well.
This means that regardless of what function we are trying to learn, a sufficiently large feedforward network will be able to represent it; the theorem guarantees that the representational capacity exists, though it does not say how many hidden units are needed or whether training will find the right parameters.
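As a concrete illustration, here is a minimal sketch (an illustrative example, not from the original text) that trains a single-hidden-layer network with sigmoid units and a linear output to approximate sin(x). The target function, hidden-unit count, learning rate, and step count are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target function sampled on a grid: f(x) = sin(x) on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of "squashing" (sigmoid) units, linear output.
# The theorem says enough units exist for any desired accuracy;
# 50 is an arbitrary illustrative choice.
n_hidden = 50
W1 = rng.normal(0.0, 1.0, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.1
for step in range(30000):
    # Forward pass: sigmoid hidden layer, then linear output layer.
    h = sigmoid(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass for mean-squared error.
    grad_y = 2.0 * err / len(x)
    grad_W2 = h.T @ grad_y
    grad_b2 = grad_y.sum(axis=0)
    grad_h = grad_y @ W2.T
    grad_z = grad_h * h * (1.0 - h)  # derivative of the sigmoid
    grad_W1 = x.T @ grad_z
    grad_b1 = grad_z.sum(axis=0)

    # Plain gradient-descent updates.
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2

# The error can be driven toward zero by adding hidden units and
# training longer; the theorem guarantees the capacity exists.
print(f"final MSE: {np.mean(err ** 2):.6f}")
```

Increasing n_hidden (and the number of training steps) tightens the fit, which is exactly the "enough hidden units" clause of the theorem in action.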
Tags
Data Science
Related
Depth and Width for Neural Networks
Layers of a Feedforward Neural Network
Formulation of a Feedforward Neural Network
Universal Approximation Theorem
Set relationship of Artificial Intelligence
Limitation of Deep Learning
Drawbacks or disadvantages of Deep Learning
Applications of Deep Learning
Automated Feature Engineering