Learn Before
Denoising Autoencoding
Denoising autoencoding is a pre-training approach, often used for encoder-decoder models, that trains a model to reconstruct the original, clean sequence from a corrupted input. The model learns robust representations by identifying and removing noise that was artificially introduced into the data, such as masked, deleted, or shuffled tokens.
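As a minimal sketch of the data side of this objective, the Python snippet below builds (corrupted input, clean target) training pairs by randomly masking and deleting tokens. The mask symbol, probabilities, and function names here are illustrative assumptions rather than any specific model's recipe; an actual setup (e.g., BART- or T5-style pre-training) would feed the corrupted sequence to the encoder and train the decoder to reproduce the clean sequence with a cross-entropy loss.

```python
import random

MASK = "<mask>"  # illustrative mask symbol; real tokenizers define their own


def corrupt(tokens, mask_prob=0.15, delete_prob=0.05, seed=None):
    """Corrupt a clean token sequence by random masking and deletion.

    The resulting (corrupted, clean) pair is one denoising training
    example: the encoder reads the corrupted sequence, and the decoder
    is trained to reconstruct the clean one. Probabilities are
    illustrative, not values from any particular paper.
    """
    rng = random.Random(seed)
    corrupted = []
    for tok in tokens:
        r = rng.random()
        if r < delete_prob:
            continue                 # token deletion: drop the token entirely
        elif r < delete_prob + mask_prob:
            corrupted.append(MASK)   # token masking: hide the token's identity
        else:
            corrupted.append(tok)    # keep the token unchanged
    return corrupted


if __name__ == "__main__":
    clean = "the quick brown fox jumps over the lazy dog".split()
    noisy = corrupt(clean, seed=0)
    print("encoder input :", " ".join(noisy))
    print("decoder target:", " ".join(clean))
```

Note that the reconstruction target is the full clean sequence, not just the corrupted positions; this is what distinguishes the sequence-to-sequence denoising objective from masked language modeling, which predicts only the hidden tokens.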
Tags
Data Science
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Contrastive Learning (CTL)
Extensions of PTMs
Applying and Adapting Pre-trained Models to Downstream Tasks
Unsupervised Pre-training
Supervised Pre-training
Self-Supervised Learning
Comparison of Pre-training Paradigms
Rationale for Categorizing Pre-training Tasks by Objective
Denoising Autoencoding
Comparability of Pre-training Tasks
Generality of Pre-training Tasks and Performance
Applying Pre-trained Models to Downstream Tasks
Identifying a Pre-training Strategy
Breadth of Pre-training Tasks
A research team is developing a new language model and is considering different pre-training approaches. Match each pre-training scenario below with the correct category of learning it represents.
A language model is being trained on a large corpus of text from the internet. The training process involves randomly hiding 15% of the words in each sentence and then tasking the model with predicting the original identity of these hidden words based on the surrounding context. Which category of pre-training task does this scenario best exemplify, and why?
Comparing Pre-training Task Categories
Comparison of Pre-training Tasks
Learn After
Training Encoder-Decoder Models with a Denoising Autoencoding Objective
A research team is pre-training a language model with the specific goal of making it highly proficient at understanding long-range contextual relationships and the logical flow of arguments within a paragraph. They use a method where the model learns to restore an original, clean text from a deliberately corrupted version. Which of the following corruption strategies applied to the input text would be most effective for achieving the team's specific goal?
Designing a Robust Text Correction Model
Analyzing the Impact of Input Corruption
Example of Span Masking in Denoising Autoencoding
Example of Sentinel Masking in Denoising Autoencoding