Concept

Denoising Autoencoders

  • Traditionally, autoencoders minimize some function $L(x, g(f(x)))$, where $L$ is a loss function penalizing $g(f(x))$ for being dissimilar from $x$, such as the $L^2$ norm of their difference.
  • A denoising autoencoder (DAE) instead minimizes $L(x, g(f(\widetilde{x})))$, where $\widetilde{x}$ is a copy of $x$ that has been corrupted by some form of noise. Denoising autoencoders must therefore learn to undo this corruption rather than simply copying their input.
  • Two assumptions are inherent to this approach: 1) higher-level representations are relatively stable and robust to corruption of the input; 2) to perform denoising well, the model needs to extract features that capture useful structure in the input distribution.
  • In other words, denoising is advocated as a training criterion for learning to extract useful features that will constitute better higher-level representations of the input.
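The training criterion above can be sketched in a few lines of NumPy. This is a minimal, illustrative example, not a production implementation: a one-hidden-layer autoencoder with a sigmoid encoder $f$ and a linear decoder $g$, where the layer sizes, noise scale, and learning rate are all arbitrary choices for the demo. The key DAE detail is that the forward pass runs on the corrupted input $\widetilde{x}$, while the loss is computed against the clean $x$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with low-rank structure the DAE can exploit (assumed setup).
basis = rng.normal(size=(3, 8))
X = rng.normal(size=(200, 3)) @ basis

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Encoder f: x -> h, decoder g: h -> x_hat (sizes are illustrative).
n_in, n_hid = 8, 4
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

lr = 0.1
losses = []
for epoch in range(200):
    # Corrupt the input: x_tilde = x + Gaussian noise.
    X_tilde = X + rng.normal(scale=0.5, size=X.shape)

    # Forward pass on the CORRUPTED input: h = f(x_tilde), x_hat = g(h).
    H = sigmoid(X_tilde @ W1 + b1)
    X_hat = H @ W2 + b2

    # L(x, g(f(x_tilde))): squared error against the CLEAN input x.
    diff = X_hat - X
    loss = np.mean(diff ** 2)
    losses.append(loss)

    # Manual backpropagation through the two layers.
    dXhat = 2.0 * diff / diff.size
    dW2 = H.T @ dXhat
    db2 = dXhat.sum(axis=0)
    dH = dXhat @ W2.T
    dZ1 = dH * H * (1.0 - H)          # sigmoid derivative
    dW1 = X_tilde.T @ dZ1
    db1 = dZ1.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"reconstruction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because the targets are the uncorrupted samples, driving the loss down forces the hidden layer to capture the data's underlying structure rather than the noise, which is exactly the feature-learning argument made above.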
Updated 2025-10-06

Tags

Data Science

Foundations of Large Language Models Course

Computing Sciences
