Improving a Customer Service Chatbot's Robustness
Based on the principles of building generalizable models, analyze the most likely reason for the chatbot's failure on varied user queries and describe a specific strategy for modifying the training data to address this issue.
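One concrete data-modification strategy the question points toward is instruction diversification: instead of pairing every training example with a single fixed instruction template, duplicate each task under several paraphrased instructions so the model learns the underlying task rather than one surface phrasing. A minimal sketch of this idea (the seed example, the paraphrase list, and the `diversify` helper are all hypothetical illustrations, not part of any particular library):

```python
import json
import random

# Hypothetical seed data from a narrow fine-tuning set: every record
# uses the same fixed instruction template.
seed_examples = [
    {"instruction": "Summarize the following document.",
     "input": "Our Q3 network outage was caused by a misconfigured router...",
     "output": "The outage stemmed from a router misconfiguration."},
]

# Varied phrasings of the *same* task, to increase instruction diversity.
paraphrases = [
    "Summarize the following document.",
    "Give me a short summary of this text.",
    "In two sentences, what is this passage about?",
    "TL;DR:",
    "Condense the text below for a busy manager.",
]

def diversify(examples, phrasings, k=3, seed=0):
    """Duplicate each example under k randomly chosen instruction
    phrasings, keeping input and output unchanged."""
    rng = random.Random(seed)
    out = []
    for ex in examples:
        for phrasing in rng.sample(phrasings, k):
            out.append({**ex, "instruction": phrasing})
    return out

augmented = diversify(seed_examples, paraphrases)
print(json.dumps(augmented, indent=2))
```

The same pattern extends to the chatbot scenario: collecting or generating many phrasings of each customer intent before fine-tuning makes the model robust to query wording it has not seen verbatim.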
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.4 Alignment - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Multi-Task Capability through Diverse Fine-Tuning Datasets
Modern Focus of Instruction Fine-Tuning Datasets
Using Diverse Data to Steer LLM Specialization
Examples of Instruction-Following Tasks in SFT Datasets
A development team has fine-tuned a large language model to be a helpful assistant. They observe that the model excels at summarizing technical documents and answering direct factual questions, which were the primary tasks in its fine-tuning dataset. However, when users ask it to perform more creative tasks like writing a short poem or brainstorming marketing slogans, the model's performance is poor and generic. Which of the following strategies would be the most effective next step to improve the model's ability to handle this wider range of user requests?
Using Varied Instructions for a Single Task to Enhance Data Diversity
Characteristics and Limitations of Early Instruction Fine-Tuning Datasets
Evaluating a Fine-Tuning Strategy for LLMs
Example of a Recipe Generation Task for LLMs
Example of a Creative Writing Task for LLMs
Example of a Math Word Problem Task for LLMs