A machine learning team has successfully developed a large language model by training it on a massive, general-purpose text corpus. They now want to make the model better at following specific user commands. To do this, they have created a new, high-quality dataset that is much smaller than the original corpus and consists of example commands paired with ideal responses. Based on the standard procedures for adapting such models, which statement best describes the relationship between the initial training phase and this new adaptation phase?
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Parameter Initialization and Moderate Adjustment in Fine-Tuning
Probabilistic Objective of Supervised Fine-Tuning
Comparison of Training Objectives: Instruction Fine-Tuning vs. Pre-training
Training Strategy for a Specialized Chatbot
The training methodology for instruction fine-tuning is essentially the same as that used for pre-training: both phases maximize the log-likelihood of target tokens under the same next-token prediction objective. The adaptation phase simply initializes the model with the pre-trained parameters and continues training on the much smaller instruction–response dataset, typically computing the loss only on the response tokens; it is the data, not the objective, that differs.
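The shared objective can be sketched numerically. This is a minimal illustration, not a real training loop: the per-token probabilities and the prompt length below are made-up assumptions standing in for a model's outputs. It shows that pre-training and supervised fine-tuning compute the same negative log-likelihood, with fine-tuning merely masking the prompt positions.

```python
import math

# Hypothetical probabilities the model assigns to the correct next token
# at each position of one training sequence (assumed values for illustration).
token_probs = [0.5, 0.25, 0.8, 0.9, 0.6]
prompt_len = 2  # assume the first two tokens are the instruction/prompt

# Pre-training objective: average negative log-likelihood over ALL tokens.
pretrain_loss = -sum(math.log(p) for p in token_probs) / len(token_probs)

# Instruction fine-tuning uses the SAME log-likelihood objective; the usual
# difference is that prompt positions are masked, so the loss is averaged
# over response tokens only.
response_probs = token_probs[prompt_len:]
sft_loss = -sum(math.log(p) for p in response_probs) / len(response_probs)

print(f"pre-training loss: {pretrain_loss:.4f}")
print(f"fine-tuning loss:  {sft_loss:.4f}")
```

In frameworks such as PyTorch this masking is commonly done by setting the prompt positions' labels to the loss function's ignore index, so the gradient comes only from the response tokens while the optimization objective itself is unchanged.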
Objective of Instruction Fine-Tuning