Example of SFT: Question-Answering Task
An illustrative example of Supervised Fine-Tuning (SFT) is training a Large Language Model on a dataset of question-answer pairs. Training on such pairs teaches the model the general task of question answering, so it can generate relevant responses even to questions it did not see during training.
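The idea can be sketched in code. Below is a minimal, self-contained illustration (not a real training loop): QA pairs are formatted into a prompt and a target answer, and the maximum-likelihood loss is computed on answer tokens only, with the question acting as context rather than as a training target. The whitespace tokenizer and the uniform "model" are hypothetical stand-ins for a real subword tokenizer and a real language model.

```python
import math

# Toy SFT sketch (illustrative only): format QA pairs and compute the
# maximum-likelihood loss on answer tokens, so the model learns to
# *answer* questions rather than reproduce them.

qa_pairs = [
    ("What is the capital of France?", "Paris"),
    ("Who wrote Hamlet?", "William Shakespeare"),
]

def format_example(question, answer):
    """Turn a QA pair into (prompt_tokens, answer_tokens) using a
    whitespace tokenizer -- a stand-in for a real subword tokenizer."""
    prompt = f"Question: {question} Answer:".split()
    target = answer.split()
    return prompt, target

def toy_next_token_prob(context, token, vocab_size=50_000):
    """Stand-in for a language model: assigns a uniform probability to
    every token. A real model would return p(token | context)."""
    return 1.0 / vocab_size

def sft_loss(pairs):
    """Average negative log-likelihood over *answer* tokens only (MLE).
    Prompt tokens provide context but contribute no loss terms."""
    total_nll, n_tokens = 0.0, 0
    for question, answer in pairs:
        prompt, target = format_example(question, answer)
        context = list(prompt)
        for token in target:
            p = toy_next_token_prob(context, token)
            total_nll += -math.log(p)   # loss only on answer tokens
            context.append(token)       # teacher forcing: feed gold token
            n_tokens += 1
    return total_nll / n_tokens

loss = sft_loss(qa_pairs)
print(round(loss, 4))  # uniform model -> average loss == ln(50_000) ~ 10.8198
```

In a real SFT setup the same masking idea appears as setting prompt-token labels to an ignore index so they are skipped by the cross-entropy loss; only the answer tokens drive the gradient updates.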
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Foundations of Large Language Models Course
Related
Instruction Fine-Tuning
Potential for Undesirable Content Generation After SFT
Applicability of Supervised Fine-Tuning
Practical Implementation Challenges of SFT
Maximum Likelihood Estimation (MLE) as the Objective for Supervised Fine-Tuning
Instruction Fine-Tuning as a Technique of SFT
Size and Specialization of SFT Datasets
Generalization as an Outcome of SFT
Characteristics of SFT Datasets
Generalization from Supervised Fine-Tuning
Definition of SFT Datasets
A development team starts with a base language model that has been pre-trained on a massive, general-purpose dataset from the web. To make the model a specialized customer service chatbot, the team initiates a second phase of training. How would the dataset used in this second phase most likely differ from the original pre-training dataset?
Comparison of SFT and Pre-training Datasets
SFT as a Post-Training Phase
Adapting a Model for a New Task
A law firm wants to develop a language model that can take a lengthy legal contract as input and produce a concise, one-paragraph summary highlighting key clauses like the term, liability limits, and governing law. They have a team of paralegals available to create a high-quality dataset of several thousand contract-summary pairs. Which of the following approaches is the most effective and direct way to train the model for this specific task?
Learn After
A development team fine-tunes a general-purpose, pre-trained language model using a dataset of 1,000 specific question-and-answer pairs related to their new software product. The goal is to create a helpful product support chatbot. Which statement best predicts the model's capability after this fine-tuning process?
Choosing a Fine-Tuning Dataset for a Medical Chatbot
A large language model is fine-tuned exclusively on a dataset containing 50,000 question-answer pairs about world history. After this training, the model will only be able to provide correct answers to those specific 50,000 questions and will fail on any new, unseen history questions.