Potential for Undesirable Content Generation After SFT
Even after pre-training and supervised fine-tuning (SFT), a large language model may still produce outputs that are factually incorrect, biased, or harmful when responding to user prompts. This limitation of SFT motivates further alignment steps, such as learning from human feedback, to ensure the model's behavior is consistently safe and helpful.
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Instruction Fine-Tuning
Example of SFT: Question-Answering Task
Applicability of Supervised Fine-Tuning
Practical Implementation Challenges of SFT
Maximum Likelihood Estimation (MLE) as the Objective for Supervised Fine-Tuning
Instruction Fine-Tuning as a Technique of SFT
Size and Specialization of SFT Datasets
Generalization as an Outcome of SFT
Characteristics of SFT Datasets
Generalization from Supervised Fine-Tuning
Definition of SFT Datasets
A development team starts with a base language model that has been pre-trained on a massive, general-purpose dataset from the web. To make the model a specialized customer service chatbot, the team initiates a second phase of training. How would the dataset used in this second phase most likely differ from the original pre-training dataset?
Comparison of SFT and Pre-training Datasets
SFT as a Post-Training Phase
Adapting a Model for a New Task
A law firm wants to develop a language model that can take a lengthy legal contract as input and produce a concise, one-paragraph summary highlighting key clauses like the term, liability limits, and governing law. They have a team of paralegals available to create a high-quality dataset of several thousand contract-summary pairs. Which of the following approaches is the most effective and direct way to train the model for this specific task?
Learn After
Learning from Human Feedback
A development team trains a large language model on a vast dataset of high-quality, curated instruction-and-response pairs to create a helpful chatbot. After this training, they observe that while the model answers most questions correctly, it occasionally generates subtly biased responses or confidently presents outdated, incorrect information when faced with novel or ambiguous user queries. Which of the following statements best analyzes the fundamental limitation demonstrated by the model's behavior?
Evaluating a Chatbot's Training Limitations
Analyzing Model Behavior After Instruction-Based Training