Deep Feedforward Networks as Time Passed
- The chain rule that underlies the back-propagation algorithm (one of the most common algorithms used to train neural networks) was invented in the seventeenth century.
- While algebra and calculus had long been applied to optimization problems, gradient descent was not introduced as a technique for iteratively approximating their solutions until the nineteenth century.
- Non-linear functions could only be modeled effectively once the multilayer perceptron and a means of computing its gradient were developed. One of the biggest advances came with efficient applications of the chain rule based on dynamic programming, implemented between the 1960s and 1970s.
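The idea behind those dynamic-programming applications of the chain rule can be sketched on a toy two-parameter network: compute the output once, then reuse each layer's cached upstream gradient instead of re-deriving it from scratch. This is a minimal illustrative sketch, not any particular historical implementation; all values and function names here are invented for the example.

```python
import math

def forward(x, w1, w2):
    # Tiny "network": one tanh hidden unit, one linear output.
    h = math.tanh(w1 * x)   # hidden activation
    y = w2 * h              # output
    return h, y

def backward(x, w1, w2, h, dy):
    # dy is dL/dy. Apply the chain rule once per layer, caching the
    # intermediate gradient (dh) and reusing it -- the dynamic-programming
    # trick at the heart of back-propagation.
    dw2 = dy * h                 # dL/dw2
    dh = dy * w2                 # upstream gradient, computed once
    dw1 = dh * (1 - h * h) * x   # d tanh(u)/du = 1 - tanh(u)^2
    return dw1, dw2

x, w1, w2 = 0.5, 0.3, -1.2
h, y = forward(x, w1, w2)
dw1, dw2 = backward(x, w1, w2, h, dy=1.0)  # gradient of y itself
```

Even in this two-layer case, the cached `dh` is what keeps the cost of the backward pass proportional to the forward pass rather than growing with the number of paths through the network.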