Evaluating Fine-Tuning Strategies for a Chatbot
A team is fine-tuning a language model for a customer service chatbot. One developer suggests training the model on a small, rigid set of command-like instructions (e.g., FETCH_ORDER_STATUS, INITIATE_RETURN). Another developer advocates for training it on a large dataset of diverse, real-world customer questions. Evaluate these two strategies. Which approach is more likely to result in a successful chatbot, and why? Justify your answer based on the principles of how these models process instructions.
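To make the two strategies concrete, here is a minimal sketch of what each training dataset might look like. All record fields, intents, and example utterances are hypothetical, chosen only to illustrate the contrast: the rigid dataset has exactly one surface form per intent, while the diverse dataset exposes the model to many paraphrases of the same intent.

```python
# Hypothetical instruction-tuning records; field names and examples
# are illustrative, not from any real system.

# Strategy 1: rigid, command-like instructions (one surface form per intent)
rigid_dataset = [
    {"instruction": "FETCH_ORDER_STATUS",
     "response": "Your order is in transit."},
    {"instruction": "INITIATE_RETURN",
     "response": "A return label has been emailed to you."},
]

# Strategy 2: diverse, real-world phrasings (many surface forms per intent)
diverse_dataset = [
    {"instruction": "Hey, any idea where my package is?",
     "response": "Your order is in transit."},
    {"instruction": "Can you check on my delivery for me?",
     "response": "Your order is in transit."},
    {"instruction": "I ordered the wrong size, how do I send it back?",
     "response": "A return label has been emailed to you."},
]

def distinct_phrasings(dataset):
    """Rough proxy for linguistic coverage: count unique instruction strings."""
    return len({record["instruction"] for record in dataset})
```

A model fine-tuned only on the rigid dataset sees two fixed strings and has little basis for generalizing to the unbounded ways real customers phrase the same request; the diverse dataset gives the model many instruction-to-intent mappings to generalize from, which is the property the question asks you to weigh.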
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Rephrasing Instructions for Simplicity
Adapting LLMs to Follow Diverse Instructions via Fine-Tuning
Evaluating Prompt Flexibility
A software development team is building a feature that allows users to ask a language model to summarize text. One developer argues for a strict input format, such as COMMAND: SUMMARIZE, while another argues for allowing flexible, natural language inputs like 'can you give me the main points?' or 'what's the short version of this?'. Which of the following statements most accurately analyzes the technical feasibility of the flexible approach?
Example of a Standard Instruction for English-to-Chinese Translation