A development team is fine-tuning a language model to handle a wide range of customer support inquiries. To streamline the process, they convert a large dataset of complex, real-world user questions into a single, simplified format, such as 'Problem: [issue], Desired Outcome: [resolution]'. The model is then trained exclusively on this standardized dataset. What is the most probable consequence of this training strategy when the model is deployed?
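The dataset-standardization step described in the question can be sketched as follows; this is a minimal illustration, and the field names (`issue`, `resolution`) and example inquiries are assumptions, not from the original:

```python
# Sketch of the standardization step: raw, varied customer inquiries are
# flattened into one fixed template, discarding the stylistic diversity
# of real user phrasing.
def standardize(issue: str, resolution: str) -> str:
    # Every training example ends up in the same rigid format.
    return f"Problem: {issue}, Desired Outcome: {resolution}"

raw_examples = [
    {"issue": "app crashes on login", "resolution": "restore access"},
    {"issue": "double billing in March", "resolution": "refund duplicate charge"},
]

# The model is trained exclusively on this uniform format, so at deployment
# it sees free-form questions unlike anything in its training distribution.
training_set = [standardize(e["issue"], e["resolution"]) for e in raw_examples]
```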
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Cost and Data Limitations of Diverse Instruction Fine-Tuning
Analysis of a Fine-Tuning Strategy
Analyzing LLM Performance Discrepancy