Learn Before
Undercomplete Autoencoders
Undercomplete autoencoders address the problem of the encoding and decoding functions being learned so well that the network simply copies its input, circumventing the true goal of autoencoders: learning which features of the data are important.
"Undercomplete" means that for an autoencoder with basic structure input->code->output, with encoding function g and decoding function h, we constrain h to have a lower dimension than the input. Because of this reduced granularity, it is hoped that h will capture only the most "salient features" of the input data during training.
The learning process simply involves minimizing the cost function L(x, g(f(x))), where L is a loss function that delivers a penalty for dissimilarity between x and its reconstruction g(f(x)) (for example, mean squared error).
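As a minimal sketch of this setup (assuming PyTorch; the 784-dimensional input, the 32-dimensional bottleneck, and all training settings are illustrative choices, not prescribed above):

import torch
import torch.nn as nn

class UndercompleteAutoencoder(nn.Module):
    def __init__(self, input_dim=784, code_dim=32):
        super().__init__()
        # Encoder f: compresses the input into a lower-dimensional code h
        self.encoder = nn.Sequential(nn.Linear(input_dim, code_dim), nn.ReLU())
        # Decoder g: reconstructs the input from the code
        self.decoder = nn.Linear(code_dim, input_dim)

    def forward(self, x):
        h = self.encoder(x)     # h = f(x), with dim(h) < dim(x)
        return self.decoder(h)  # g(f(x))

model = UndercompleteAutoencoder()
loss_fn = nn.MSELoss()          # L(x, g(f(x))): mean squared error
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(64, 784)         # a dummy batch standing in for real data
for _ in range(10):             # minimize L(x, g(f(x))) by gradient descent
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)
    loss.backward()
    optimizer.step()

The bottleneck (code_dim < input_dim) is what makes this autoencoder undercomplete: the network cannot copy the input exactly, so minimizing the reconstruction loss forces the code to keep only the most salient features.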
Tags
Data Science
Related
What does the Autoencoder try to do?
Putting it all together - AutoEncoder Code
Problems With Autoencoders
Reference to the AutoEncoder Code
Autoencoder Decoder Code
Autoencoder Encoder Sample Code
Denoising Autoencoders
Sparse Autoencoders
Undercomplete Autoencoders
Overcomplete Autoencoders
Regularizing Autoencoder
Autoencoder Depth
Learning Manifolds Using Autoencoder
Drawing Samples From Autoencoders