Learn Before
Unsupervised Pre-training
Unsupervised pre-training, a key focus in the early resurgence of deep learning, optimizes a neural network's parameters using a task-agnostic criterion. Instead of relying on task-specific labels, it uses objectives such as minimizing the cross-entropy between the input and its reconstruction. It typically serves as a preparatory phase before the model undergoes supervised training.
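To make the reconstruction objective concrete, here is a minimal sketch of such a setup: a small autoencoder pre-trained by minimizing the cross-entropy between a binary input and its reconstruction. It assumes PyTorch; the architecture, dimensions, and hyperparameters are illustrative choices, not prescribed by the text.

```python
# Minimal sketch: unsupervised pre-training with a reconstruction
# cross-entropy objective. All names and sizes are illustrative.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.decoder = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # Returns logits for the reconstructed input.
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()  # cross-entropy between input and reconstruction

# Task-agnostic training loop: no task-specific labels are used;
# the input itself serves as the reconstruction target.
for step in range(100):
    x = torch.rand(32, 784).round()  # stand-in for a batch of binary inputs
    logits = model(x)
    loss = loss_fn(logits, x)        # minimize reconstruction cross-entropy
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After pre-training, model.encoder could initialize a supervised model,
# matching the "preparatory phase" described above.
```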
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Contrastive Learning (CTL)
Extensions of PTMs
Applying and Adapting Pre-trained Models to Downstream Tasks
Unsupervised Pre-training
Supervised Pre-training
Self-Supervised Learning
Comparison of Pre-training Paradigms
Rationale for Categorizing Pre-training Tasks by Objective
Denoising Autoencoding
Comparability of Pre-training Tasks
Generality of Pre-training Tasks and Performance
Applying Pre-trained Models to Downstream Tasks
Identifying a Pre-training Strategy
Breadth of Pre-training Tasks
A research team is developing a new language model and is considering different pre-training approaches. Match each pre-training scenario below with the correct category of learning it represents.
A language model is being trained on a large corpus of text from the internet. The training process involves randomly hiding 15% of the words in each sentence and then tasking the model with predicting the original identity of these hidden words based on the surrounding context. Which category of pre-training task does this scenario best exemplify, and why? (A sketch of the masking step appears after this list.)
Comparing Pre-training Task Categories
Comparison of Pre-training Tasks
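The masked-word scenario in the question above can be illustrated with a short sketch of the corruption step: randomly hide 15% of the tokens and keep prediction targets only at the hidden positions. This assumes PyTorch; the mask id, vocabulary size, and batch shape are hypothetical and not tied to any particular tokenizer.

```python
# Illustrative sketch of 15% token masking for masked-word prediction.
import torch

MASK_ID = 0          # hypothetical id reserved for the mask token
VOCAB_SIZE = 30000   # hypothetical vocabulary size

def mask_tokens(token_ids: torch.Tensor, mask_prob: float = 0.15):
    """Randomly replace mask_prob of the tokens with MASK_ID.

    Returns the corrupted input and labels that are -100 (ignored by
    PyTorch's cross-entropy loss) everywhere except at masked positions.
    """
    labels = token_ids.clone()
    mask = torch.rand(token_ids.shape) < mask_prob
    labels[~mask] = -100                 # only masked positions contribute to loss
    corrupted = token_ids.clone()
    corrupted[mask] = MASK_ID
    return corrupted, labels

tokens = torch.randint(1, VOCAB_SIZE, (4, 16))  # stand-in batch of sentences
inputs, labels = mask_tokens(tokens)
# A model would then predict the original ids at the masked positions from
# the surrounding context, e.g. via cross_entropy over the vocabulary.
```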
Learn After
Benefits of Unsupervised Pre-training
Initial Language Model Training Strategy
A research team is developing a large neural network for various language tasks. In the initial training phase, they use a vast dataset of unlabeled text from the internet. The model's objective is not tied to any specific end-user application (like translation or sentiment classification), but rather to learn the underlying structure and statistical patterns of the language itself. What is the fundamental purpose of this initial training approach?
A research team is training a large neural network on a massive dataset of unlabeled text from the web. The training objective is to predict a masked word within a sentence based on its surrounding context. No task-specific labels, such as sentiment scores or document categories, are provided during this stage. What is the primary goal of this training methodology?
Adaptation Effort in Unsupervised Pre-training