Applying and Adapting Pre-trained Models to Downstream Tasks
A fundamental issue following the pre-training phase is how to apply the generalized pre-trained model to specific downstream tasks. To adapt the model to such a task, we either slightly adjust its parameters using labeled data from the task (fine-tuning), or prompt the model with a description of the task.
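The sketch below illustrates the first adaptation route mentioned above: slightly adjusting the pre-trained parameters on labeled downstream data. It is a minimal sketch assuming the Hugging Face transformers library and PyTorch; the model name, labels, and learning rate are illustrative assumptions, not values prescribed by the course.

```python
# Minimal fine-tuning sketch: start from pre-trained parameters and adjust them
# slightly on a small labeled dataset. Model name, labels, and hyperparameters
# are illustrative only.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # new task head; encoder weights are pre-trained
)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # small LR = "slight" adjustment

texts = ["the plot was gripping", "a dull, forgettable film"]  # labeled downstream data
labels = torch.tensor([1, 0])                                  # 1 = positive, 0 = negative

batch = tokenizer(texts, padding=True, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # task loss on the labeled examples
loss.backward()
optimizer.step()       # parameters move away from the pre-trained values
optimizer.zero_grad()
```

The alternative route, prompting, keeps the pre-trained parameters frozen and instead states the task in the model's input text.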
Tags
Data Science
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Related
Contrastive Learning (CTL)
Extensions of PTMs
Applying and Adapting Pre-trained Models to Downstream Tasks
Unsupervised Pre-training
Supervised Pre-training
Self-Supervised Learning
Comparison of Pre-training Paradigms
Rationale for Categorizing Pre-training Tasks by Objective
Denoising Autoencoding
Comparability of Pre-training Tasks
Generality of Pre-training Tasks and Performance
Applying Pre-trained Models to Downstream Tasks
Identifying a Pre-training Strategy
Breadth of Pre-training Tasks
A research team is developing a new language model and is considering different pre-training approaches. Match each pre-training scenario below with the correct category of learning it represents.
A language model is being trained on a large corpus of text from the internet. The training process involves randomly hiding 15% of the words in each sentence and then tasking the model with predicting the original identity of these hidden words based on the surrounding context. Which category of pre-training task does this scenario best exemplify, and why? (A minimal masking sketch follows this list.)
Comparing Pre-training Task Categories
Comparison of Pre-training Tasks
Optimizing for Generalizability in Pre-training
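The masked-word question above describes BERT-style masked language modeling. Below is a minimal, illustrative sketch of the masking step in PyTorch; the toy vocabulary, sentence, and the 15% rate mirror the question rather than any particular implementation.

```python
# Illustrative masked-word objective: hide ~15% of tokens and train the model
# to recover them from context. Toy vocabulary and sentence are assumptions.
import torch

vocab = {"[MASK]": 0, "the": 1, "cat": 2, "sat": 3, "on": 4, "mat": 5}
token_ids = torch.tensor([1, 2, 3, 4, 1, 5])          # "the cat sat on the mat"

mask_prob = 0.15
is_masked = torch.rand(token_ids.shape) < mask_prob   # pick ~15% of positions at random
inputs = token_ids.clone()
inputs[is_masked] = vocab["[MASK]"]                   # hide the selected words

labels = token_ids.clone()
labels[~is_masked] = -100                             # only masked positions contribute to the loss

# A masked language model is then trained with cross-entropy to predict `labels`
# from `inputs`; because the targets come from the raw text itself, no human
# annotation is needed (self-supervised learning).
```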
Learn After
Transfer knowledge of a PTM to downstream NLP tasks
Fine-Tuning Strategies
Applications of PTMs
Fine-tuning for Sequence Encoding Models
Fine-Tuning Pre-trained Models for Downstream Tasks
Freezing Encoder Parameters During Fine-Tuning
Discarding the Pre-training Head for Downstream Adaptation
Textual Instructions for Task Adaptation
Influence of Downstream Task on Model Architecture
Broad Applications of Fine-Tuning in LLM Development
Scope of Introductory Fine-Tuning Discussion
LLM Alignment
Pre-train and Fine-tune Paradigm for Encoder Models
Necessity of Fine-Tuning for Downstream Task Adaptation
Fine-Tuning as a Standard Adaptation Method for LLMs
Prompting in Language Models
Fine-Tuning as a Mechanism for Activating Pre-Trained Knowledge
A startup wants to adapt a large, pre-trained language model to classify customer sentiment (positive, negative, neutral). They have a very small labeled dataset (fewer than 500 examples) and extremely limited access to high-performance computing, making extensive retraining financially unfeasible. Which adaptation approach is most suitable for their situation? (A prompting sketch follows this list.)
Efficiency of LLM Adaptation via Prompting
A developer intends to specialize a general-purpose, pre-trained language model for a new text classification task by updating its internal parameters. Arrange the following steps in the correct chronological order to accomplish this adaptation.
Selecting an Adaptation Strategy for a Pre-trained Model
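For the startup scenario above (a tiny labeled set and no budget for retraining), the prompting route can be sketched as a few-shot prompt: a handful of labeled examples are placed directly in the input of a frozen, general-purpose model, with no parameter updates at all. The wording, labels, and example reviews below are illustrative assumptions.

```python
# Minimal few-shot prompting sketch: the pre-trained model's parameters are
# never updated, so no expensive training run is needed. All strings are
# illustrative assumptions.

few_shot_examples = [
    ("The delivery was fast and the support team was helpful.", "positive"),
    ("The app crashes every time I open it.", "negative"),
    ("The product arrived on the expected date.", "neutral"),
]

def build_prompt(review: str) -> str:
    """Embed a handful of labeled examples directly in the input text."""
    lines = ["Classify the sentiment of each customer review as positive, negative, or neutral.", ""]
    for text, label in few_shot_examples:
        lines.append(f"Review: {text}\nSentiment: {label}\n")
    lines.append(f"Review: {review}\nSentiment:")
    return "\n".join(lines)

prompt = build_prompt("I waited two weeks and nobody answered my emails.")
print(prompt)  # this string is sent to a frozen, general-purpose LLM for completion
```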