Learn Before
Default Random Initialization
When building a neural network, if the user does not explicitly specify a parameter initialization method, the deep learning framework applies a default random initialization, such as drawing weight values from a normal distribution. This default approach is often sufficient and works well in practice for problems of moderate size.
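As a minimal sketch of what such a default might do, the snippet below draws a layer's weights from a normal distribution and starts biases at zero. The function name `default_init` and the specific standard deviation of 0.01 are illustrative assumptions, not any particular framework's actual default.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def default_init(fan_in, fan_out, std=0.01):
    """Hypothetical sketch of a framework's default random initialization:
    weights drawn from N(0, std^2), biases set to zero."""
    W = rng.normal(loc=0.0, scale=std, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b

# Initialize a fully connected layer mapping 784 inputs to 256 outputs.
W, b = default_init(784, 256)
print(W.shape, b.shape)  # (784, 256) (256,)
```

Because every weight is drawn independently at random, no two hidden units start with identical parameters, which is what breaks symmetry during training.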
Tags
D2L
Dive into Deep Learning @ D2L
Related
Example of Weight Initialization
Vanishing/exploding gradient
Symmetry Breaking in Deep Learning
How to Initialize Weights to Prevent Vanishing/Exploding Gradients
Transfer Learning in Deep Learning
Multi-task Learning in Deep Learning
Variance of Layer Output in Forward Propagation
Xavier Initialization