Learn Before
History of MLPs for Denoising
The idea of using MLPs for denoising dates back to 1987. A denoising autoencoder (DAE) is a model trained to remove noise from its input while learning a good internal representation of the data. The motivation for DAEs was to allow a very high-capacity encoder to be learned while preventing the encoder and decoder from collapsing into a useless identity function.
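A minimal sketch of this objective, using PyTorch (the original does not specify a framework, and the dimensions below are placeholder assumptions): the network only sees a corrupted input, but its reconstruction is scored against the clean original, so even a high-capacity encoder cannot succeed by simply copying its input.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; an overcomplete code (hidden_dim > input_dim) is used
# deliberately, since the denoising target is what rules out the identity mapping.
input_dim, hidden_dim = 784, 1024

encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
decoder = nn.Sequential(nn.Linear(hidden_dim, input_dim), nn.Sigmoid())

x = torch.rand(32, input_dim)               # batch of clean inputs
x_tilde = x + 0.3 * torch.randn_like(x)     # corrupted copy (Gaussian noise here)

reconstruction = decoder(encoder(x_tilde))  # the model only ever sees x_tilde...
loss = nn.functional.mse_loss(reconstruction, x)  # ...but is penalized against clean x
```

Because the target differs from the input, learning the identity function no longer minimizes the loss, which is exactly the point made above.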
Tags
Data Science
Related
Introduction of Denoising Autoencoders
Vector Field of Denoising Autoencoders
History of MLPs for Denoising
Training Encoder-Decoder Models with a Denoising Autoencoding Objective
An engineer trains two autoencoder models on a large dataset of clean, high-resolution images. Model A is a standard autoencoder, trained to reconstruct the original images perfectly. Model B is a denoising autoencoder, trained to reconstruct the original clean images from input images that have been intentionally corrupted with random noise (e.g., salt-and-pepper noise). After training, both models are evaluated on their ability to reconstruct a new set of images that have a different, unseen type of corruption (e.g., a slight blur). Based on their training objectives, which model is expected to perform better on this new task, and why?
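For concreteness, the salt-and-pepper corruption mentioned in the question might be implemented as in the following sketch (the noise level and helper name are assumptions, not taken from the original):

```python
import numpy as np

def salt_and_pepper(image: np.ndarray, amount: float = 0.1) -> np.ndarray:
    """Set a random fraction of pixels to the minimum or maximum intensity."""
    corrupted = image.copy()
    mask = np.random.rand(*image.shape)
    corrupted[mask < amount / 2] = 0.0        # "pepper": black pixels
    corrupted[mask > 1.0 - amount / 2] = 1.0  # "salt": white pixels
    return corrupted

# Model B is trained to map salt_and_pepper(x) back to the clean x,
# while Model A is trained to map x back to x itself.
clean = np.random.rand(64, 64)
noisy = salt_and_pepper(clean, amount=0.1)
```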
A key modification to the standard autoencoder training process is the introduction of a 'corruption' step to create a more robust model. Arrange the following steps to accurately describe a single training iteration for this modified approach, which aims to reconstruct an original data point from a noisy version of it.
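One plausible ordering of such an iteration is sketched below under assumed PyTorch conventions (the model, optimizer, and noise level are placeholders, not from the original; the answer choices themselves are not reproduced here):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(x_clean: torch.Tensor) -> float:
    x_corrupted = x_clean + 0.3 * torch.randn_like(x_clean)  # 1. corrupt the clean input
    reconstruction = model(x_corrupted)                       # 2. encode/decode the corrupted input
    loss = nn.functional.mse_loss(reconstruction, x_clean)    # 3. compare against the *clean* original
    optimizer.zero_grad()
    loss.backward()                                           # 4. backpropagate the reconstruction error
    optimizer.step()                                          # 5. update the parameters
    return loss.item()
```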
An autoencoder model is trained on a large dataset of facial images. During each training step, a clean image (x) is taken, a random rectangular section of it is completely blacked out to create a corrupted version (~x), and the model is tasked with reconstructing the original, clean image (x) from the corrupted input (~x). Which of the following best explains what the model must learn about the data distribution to succeed at this specific task?
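The rectangular-blackout corruption described in this question might look like the following sketch (the region size and helper name are assumptions for illustration):

```python
import numpy as np

def blackout_random_rectangle(image: np.ndarray, max_frac: float = 0.3) -> np.ndarray:
    """Zero out a randomly placed rectangular region of a 2-D image."""
    h, w = image.shape
    rect_h = np.random.randint(1, int(h * max_frac) + 1)
    rect_w = np.random.randint(1, int(w * max_frac) + 1)
    top = np.random.randint(0, h - rect_h + 1)
    left = np.random.randint(0, w - rect_w + 1)
    corrupted = image.copy()
    corrupted[top:top + rect_h, left:left + rect_w] = 0.0
    return corrupted

# To fill in the missing region plausibly, the model must learn which facial
# structures are likely given the surrounding pixels, i.e. properties of the
# underlying data distribution, rather than a pixel-wise copy of its input.
```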