Learn Before
Comparing Pre-training Task Categories
A machine learning engineer is tasked with developing a language model but has access to different types of datasets. Compare and contrast the three main categories of pre-training tasks (supervised, unsupervised, and self-supervised) in terms of their data requirements and the nature of their supervision signals. For each category, provide a hypothetical example of a task and dataset that would be appropriate.
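As a minimal illustration of where the supervision signal comes from in each category, the Python sketch below builds hypothetical training data for all three. The sentences, labels, and the next_word_pairs helper are illustrative examples, not from the course material.

```python
# Supervised: human-annotated (input, label) pairs, e.g. sentiment-labeled reviews.
supervised_data = [
    ("the movie was wonderful", "positive"),
    ("the plot made no sense", "negative"),
]

# Unsupervised: raw, unlabeled text; the objective (e.g. clustering or
# density estimation) uses only the inputs themselves.
unsupervised_data = [
    "the movie was wonderful",
    "the plot made no sense",
]

# Self-supervised: labels are derived automatically from the raw text itself.
# Next-word prediction turns each sentence into (context, next-word) pairs.
def next_word_pairs(sentence):
    tokens = sentence.split()
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

self_supervised_data = [
    pair for s in unsupervised_data for pair in next_word_pairs(s)
]
print(self_supervised_data[0])  # (['the'], 'movie')
```

Note the contrast: the supervised pairs required human annotation, while the self-supervised pairs were generated mechanically from the same unlabeled corpus.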
Tags
Data Science
Foundations of Large Language Models Course
Computing Sciences
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Contrastive Learning (CTL)
Extensions of PTMs
Applying and Adapting Pre-trained Models to Downstream Tasks
Unsupervised Pre-training
Supervised Pre-training
Self-Supervised Learning
Comparison of Pre-training Paradigms
Rationale for Categorizing Pre-training Tasks by Objective
Denoising Autoencoding
Comparability of Pre-training Tasks
Generality of Pre-training Tasks and Performance
Applying Pre-trained Models to Downstream Tasks
Identifying a Pre-training Strategy
Breadth of Pre-training Tasks
A research team is developing a new language model and is considering different pre-training approaches. Match each pre-training scenario below with the correct category of learning it represents.
A language model is being trained on a large corpus of text from the internet. The training process involves randomly hiding 15% of the words in each sentence and then tasking the model with predicting the original identity of these hidden words based on the surrounding context. Which category of pre-training task does this scenario best exemplify, and why? (A sketch of this masking procedure follows this list.)
Comparing Pre-training Task Categories
Comparison of Pre-training Tasks
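The masking scenario above describes masked language modeling, a self-supervised task: the prediction targets are recovered from the input text itself rather than from human annotation. Below is a minimal sketch of the masking step, assuming whitespace tokenization; the 15% ratio comes from the scenario, while mask_tokens and the [MASK] placeholder are illustrative names rather than a specific library's API.

```python
import random

def mask_tokens(tokens, ratio=0.15, mask_token="[MASK]", seed=0):
    """Randomly hide ~15% of tokens; the hidden originals become the targets."""
    rng = random.Random(seed)
    n_masked = max(1, round(len(tokens) * ratio))
    positions = rng.sample(range(len(tokens)), n_masked)
    inputs = list(tokens)
    targets = {}
    for pos in positions:
        targets[pos] = inputs[pos]   # the label is the original word
        inputs[pos] = mask_token     # the model sees only the placeholder
    return inputs, targets

tokens = "language models learn from surrounding context".split()
inputs, targets = mask_tokens(tokens)
print(inputs)   # e.g. ['language', 'models', '[MASK]', 'from', 'surrounding', 'context']
print(targets)  # e.g. {2: 'learn'}; the model is trained to recover these
```

Because the targets are produced by the data pipeline itself, no labeled dataset is needed; this is exactly what distinguishes self-supervised pre-training from the supervised case.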