SFT as a Post-Training Phase
Supervised fine-tuning (SFT) is a distinct training phase that follows a model's initial pre-training. Its purpose is to adapt the model to specific tasks and add new capabilities while preserving the valuable general knowledge acquired during pre-training.
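The SFT objective referenced in the related notes, maximum likelihood estimation, can be illustrated with a toy sketch. The probabilities, mask values, and function name below are hypothetical, chosen only to show how the loss is averaged over supervised (response) tokens while prompt tokens are masked out:

```python
import math

def sft_loss(token_probs, loss_mask):
    """Negative log-likelihood (MLE) averaged over supervised tokens.

    token_probs: probability the model assigns to each target token
    loss_mask:   1 for response tokens (trained on), 0 for prompt tokens
    """
    nll = sum(-math.log(p) * m for p, m in zip(token_probs, loss_mask))
    return nll / sum(loss_mask)

# Toy example: 2 prompt tokens (masked out) and 3 response tokens.
probs = [0.9, 0.8, 0.5, 0.25, 0.125]
mask  = [0,   0,   1,   1,    1]
loss = sft_loss(probs, mask)
print(loss)  # average of -ln(0.5), -ln(0.25), -ln(0.125) = ln(4) ≈ 1.386
```

In practice this is a per-token cross-entropy loss over instruction-response pairs; the masking of prompt tokens is a common convention, though some implementations train on the full sequence.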
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Instruction Fine-Tuning
Potential for Undesirable Content Generation After SFT
Example of SFT: Question-Answering Task
Applicability of Supervised Fine-Tuning
Practical Implementation Challenges of SFT
Maximum Likelihood Estimation (MLE) as the Objective for Supervised Fine-Tuning
Instruction Fine-Tuning as a Technique of SFT
Size and Specialization of SFT Datasets
Generalization as an Outcome of SFT
Characteristics of SFT Datasets
Generalization from Supervised Fine-Tuning
Definition of SFT Datasets
A development team starts with a base language model that has been pre-trained on a massive, general-purpose dataset from the web. To make the model a specialized customer service chatbot, the team initiates a second phase of training. How would the dataset used in this second phase most likely differ from the original pre-training dataset?
Comparison of SFT and Pre-training Datasets
SFT as a Post-Training Phase
Adapting a Model for a New Task
A law firm wants to develop a language model that can take a lengthy legal contract as input and produce a concise, one-paragraph summary highlighting key clauses like the term, liability limits, and governing law. They have a team of paralegals available to create a high-quality dataset of several thousand contract-summary pairs. Which of the following approaches is the most effective and direct way to train the model for this specific task?
Learn After
A development team starts with a large language model that has been pre-trained on a vast corpus of text from the internet, giving it a broad base of general knowledge. To make it a better customer service assistant, they then fine-tune it on a specific dataset of support chat logs. After this fine-tuning, they observe that while the model excels at customer service conversations, its performance on general trivia questions has noticeably degraded. What does this outcome most directly illustrate?
Chatbot Development Strategy
Balancing General and Specific Knowledge in Model Training