Learn Before
Update Weight Iteratively Until Convergence
Weights are moved toward their optimal values using the gradients computed by the backpropagation algorithm.
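In symbols (a standard formulation, not stated explicitly above), each gradient-descent step applies $w \leftarrow w - \eta \, \frac{\partial L}{\partial w}$, where $\eta$ is the learning rate and $\frac{\partial L}{\partial w}$ is the gradient of the loss $L$ computed by backpropagation.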
Since the weights are updated by a small step at a time, many iterations are required for the network to learn. With each iteration, gradient descent moves the weights in the direction that reduces the loss function. The number of iterations needed to converge depends on the learning rate, the network's hyperparameters, and the optimization method used. A minimal sketch of this update loop is shown below.
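The following is a minimal sketch of iterating weight updates until convergence, assuming plain gradient descent on a toy quadratic loss; the function and parameter names (`gradient_descent`, `grad_fn`, `tol`) are illustrative, not from the original:

```python
import numpy as np

def gradient_descent(grad_fn, w, learning_rate=0.1, tol=1e-6, max_iters=10_000):
    """Repeatedly apply w <- w - learning_rate * grad(w) until the
    update step is smaller than `tol`, i.e. the weights have converged,
    or the iteration budget runs out."""
    for _ in range(max_iters):
        step = learning_rate * grad_fn(w)  # gradient would come from backprop in a network
        w = w - step
        if np.linalg.norm(step) < tol:  # weights have effectively stopped changing
            break
    return w

# Toy example: minimize L(w) = w1^2 + w2^2, whose gradient is 2w.
w_final = gradient_descent(lambda w: 2 * w, w=np.array([3.0, -4.0]))
print(w_final)  # approaches [0, 0], the minimum of the loss
```

A larger learning rate takes bigger steps and can converge in fewer iterations, but may overshoot the minimum; a smaller one is more stable but needs more iterations, which is the trade-off the paragraph above describes.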
Tags
Data Science
Related
Forward Propagation
Deep Learning Weight Initialization
What is the "cache" used for in our implementation of forward propagation and backward propagation?
Consider the following 1 hidden layer neural network:
Which of the following are true regarding activation outputs and vectors? (Check all that apply.)
Backpropagation
Objective Function