Parameter Sharing
A parameter norm penalty is one way to regularize parameters to be close to one another; a more popular approach is to use constraints that force sets of parameters to be equal. This form of regularization is often called parameter sharing, because we interpret the various models or model components as sharing a single, unique set of parameters. A significant advantage of parameter sharing over a norm penalty that merely pulls parameters toward one another is that only the unique subset of parameters needs to be stored in memory. In certain models, such as the convolutional neural network, this can lead to a significant reduction in the model's memory footprint.
Parameter sharing: forcing sets of parameters to be equal
Main application: convolutional neural networks (CNNs)
Advantage: a significant reduction in the model's memory footprint (see the sketch below)
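To make this concrete, here is a minimal sketch, assuming PyTorch (the text names no framework). It forces two branches of a model to share one weight tensor, then compares the parameter count of a small convolution against an equivalent fully connected layer to illustrate the memory savings in the convolutional case.

```python
import torch
import torch.nn as nn

# One layer instance whose parameters will be shared by two branches.
shared = nn.Linear(16, 8)

class SharedBranches(nn.Module):
    def __init__(self):
        super().__init__()
        self.branch_a = shared  # both attributes reference the
        self.branch_b = shared  # same nn.Linear instance

    def forward(self, x_a, x_b):
        return self.branch_a(x_a), self.branch_b(x_b)

model = SharedBranches()
x = torch.randn(4, 16)
out_a, out_b = model(x, x)

# Only one set of parameters is stored, despite two uses of the layer:
# 16 * 8 weights + 8 biases = 136, not 272.
print(sum(p.numel() for p in model.parameters()))  # 136

# Memory-footprint comparison on a 32x32 single-channel input: a 3x3
# convolution shares its 9 weights across all spatial positions, while a
# fully connected layer stores a separate weight for every input-output pair.
conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
dense = nn.Linear(32 * 32, 32 * 32, bias=False)
print(sum(p.numel() for p in conv.parameters()))   # 9
print(sum(p.numel() for p in dense.parameters()))  # 1048576
```

Because both branches hold the same tensor, the gradients from each branch accumulate into that one tensor during backpropagation, which is precisely how the equality constraint is enforced throughout training.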