Learn Before
Distributed Representations
Representations composed of many elements that can be set separately from each other. They are powerful tools for representation learning, since n features with k values each can describe k^n different concepts. Because many deep learning algorithms are motivated by the assumption that hidden units can learn to represent the underlying causal factors that explain the data, distributed representations are a natural fit: each direction in representation space can correspond to the value of a different underlying configuration variable.
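A minimal sketch of the capacity argument (names and numbers here are illustrative assumptions, not from the source): a distributed code with n features of k values each distinguishes k^n concepts, while a nondistributed (one-hot) code must dedicate one element per concept.

```python
from itertools import product

# Assumed toy setting: 3 binary features (n = 3, k = 2).
n_features, k_values = 3, 2

# Distributed: every combination of feature values is a distinct concept,
# so 3 separately settable elements yield k**n = 8 concepts.
distributed_codes = list(product(range(k_values), repeat=n_features))
print(len(distributed_codes))  # 8

# Nondistributed (one-hot): one element must be reserved per concept,
# so representing the same 8 concepts takes 8 elements instead of 3.
one_hot_size = len(distributed_codes)
print(one_hot_size)  # 8
```

This is why the number of concepts a distributed representation can separate grows exponentially in the number of elements, while a one-hot representation grows only linearly.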
Tags
Data Science
Related
Rules-Based Systems vs. Classic Machine Learning vs. Representation Learning vs. Deep Learning
Methods of Feature Learning
Distributed Representations
Nondistributed Representations
How does Unsupervised Pretraining act as a regularizer?
Disadvantage of Pretraining
When to use greedy unsupervised pretraining
Greedy Layer-Wise Unsupervised Pretraining
Data Augmentation
Structured Prediction