Learn Before
Generality of Pre-training Tasks and Performance
The effectiveness of pre-trained models stems from the general nature of their training tasks. By learning from broad objectives rather than task-specific ones, these models build versatile representations. This generalist foundation enables strong performance across a wide variety of NLP problems, often outperforming systems that were previously built with specialized, supervised training for individual tasks.
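As a concrete illustration of this generalist-to-specialist workflow, the sketch below adapts a generally pre-trained model to a single downstream task. It assumes the Hugging Face transformers library and PyTorch; the checkpoint name, label count, and example text are illustrative choices, not details from the source.

```python
# Minimal sketch: adapting a generally pre-trained model to one downstream
# task (binary sentiment classification). Assumes the Hugging Face
# `transformers` library; checkpoint and labels are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a model pre-trained with a broad, task-agnostic objective
# (masked language modeling) and attach a fresh classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# One fine-tuning step on a single hypothetical labeled example.
inputs = tokenizer("A wonderful, quietly moving film.", return_tensors="pt")
labels = torch.tensor([1])  # 1 = positive sentiment (illustrative label)
outputs = model(**inputs, labels=labels)
outputs.loss.backward()  # gradients also flow into the pre-trained weights
```

The point of the sketch is that the task-specific part (the classification head and labels) is small; most of the capability comes from the general pre-trained representations being reused.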
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Contrastive Learning (CTL)
Extensions of PTMs
Applying and Adapting Pre-trained Models to Downstream Tasks
Unsupervised Pre-training
Supervised Pre-training
Self-Supervised Learning
Comparison of Pre-training Paradigms
Rationale for Categorizing Pre-training Tasks by Objective
Denoising Autoencoding
Comparability of Pre-training Tasks
Applying Pre-trained Models to Downstream Tasks
Identifying a Pre-training Strategy
Breadth of Pre-training Tasks
A research team is developing a new language model and is considering different pre-training approaches. Match each pre-training scenario below with the correct category of learning it represents.
A language model is being trained on a large corpus of text from the internet. The training process involves randomly hiding 15% of the words in each sentence and then tasking the model with predicting the original identity of these hidden words based on the surrounding context. Which category of pre-training task does this scenario best exemplify, and why? (A short code sketch of this masking step appears after this list.)
Comparing Pre-training Task Categories
Comparison of Pre-training Tasks
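The masking procedure described in the scenario above can be made concrete with a short sketch. This is a simplified, illustrative version in plain Python, assuming whitespace tokenization and a fixed 15% masking rate; real models operate on subword tokens and add refinements (such as sometimes keeping or randomizing the selected token) not shown here.

```python
# Illustrative sketch of the masking step from the scenario above:
# hide ~15% of tokens and train the model to recover the originals.
# Tokenization and the model itself are deliberately omitted.
import random

def mask_tokens(tokens, mask_rate=0.15, mask_token="[MASK]"):
    """Randomly replace ~mask_rate of tokens; return masked input and targets."""
    masked, targets = [], []
    for tok in tokens:
        if random.random() < mask_rate:
            masked.append(mask_token)
            targets.append(tok)   # the model must predict this original token
        else:
            masked.append(tok)
            targets.append(None)  # no prediction loss at unmasked positions
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split())
# e.g. masked  -> ['the', 'cat', '[MASK]', 'on', 'the', 'mat']
#      targets -> [None, None, 'sat', None, None, None]
```

Because the targets come from the text itself rather than human labels, this is the kind of self-supervised objective the surrounding cards categorize.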
Learn After
A team is developing a system to classify rare historical manuscripts. They have a small, highly specialized dataset. One engineer argues against using a large, generally pre-trained model, stating, 'Our task is too unique. A model trained on the entire internet will have learned irrelevant patterns. We should build a smaller model from scratch using only our specific manuscript data.' Which of the following statements best evaluates this engineer's argument?
Model Development Strategy for NLP Products
The Generalist Advantage in Model Pre-training