Strategy for Handling Diverse User Instructions
A team is developing an application powered by a large language model. They anticipate that end-users will provide instructions in many different ways, from complete sentences to single keywords. The team is considering two main strategies to ensure the model responds correctly: 1) creating a complex system of prompt templates that tries to anticipate and reformat user inputs, or 2) fine-tuning the base model on a dataset of diverse user instructions and desired responses. Evaluate the long-term advantages and disadvantages of the fine-tuning approach compared to the prompt-templating approach for this specific challenge.
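The trade-off becomes concrete when you look at what each strategy actually produces. Below is a minimal, hypothetical Python sketch (all function names, templates, and examples are invented for illustration, not taken from any particular system): the templating strategy accumulates hand-written detection rules and wrappers that run at inference time, while the fine-tuning strategy reduces to collecting raw instruction-response pairs once and training on them, leaving inference-time input untouched.

```python
import json
import re

# --- Strategy 1: prompt templating ------------------------------------------
# Heuristically detect the input style and wrap it in a template so the base
# model always sees a well-formed instruction. Every new input style the team
# encounters requires another rule, which is why this approach tends to grow
# brittle over time.

FULL_SENTENCE_TEMPLATE = "Answer the following request: {query}"
KEYWORD_TEMPLATE = (
    "The user entered the search keywords '{query}'. "
    "Interpret them as a request and respond accordingly."
)

def normalize_input(query: str) -> str:
    """Reformat a raw user query into a prompt the base model handles well."""
    # Crude heuristic: short inputs with no question mark or politeness
    # markers are treated as keyword-style queries.
    if len(query.split()) <= 4 and not re.search(r"\?|\bcan\b|\bplease\b", query, re.I):
        return KEYWORD_TEMPLATE.format(query=query)
    return FULL_SENTENCE_TEMPLATE.format(query=query)

# --- Strategy 2: fine-tuning -------------------------------------------------
# Instead of rewriting inputs at inference time, collect diverse raw
# instructions paired with desired responses and train on them once. The raw
# query is passed to the model unmodified; the mapping lives in the weights.

fine_tuning_examples = [
    {"instruction": "Q3 sales report",
     "response": "Here is the Q3 sales report for ..."},
    {"instruction": "Can you find the quarterly report for the sales "
                    "department from last year?",
     "response": "Here is last year's quarterly sales report: ..."},
]

if __name__ == "__main__":
    print(normalize_input("Q3 sales report"))
    # Emit the training data in a common JSONL format for instruction tuning.
    for example in fine_tuning_examples:
        print(json.dumps(example))
```

The long-term contrast is visible in the code itself: the templating branch concentrates ongoing maintenance in `normalize_input`, which must be extended for every new input style, whereas the fine-tuning branch concentrates one-time cost in dataset collection and training, after which no per-query preprocessing is needed.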
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Adapting a Chatbot for Informal User Instructions
A company deploys a powerful language model for an internal search tool. They find that employees get good results when they type full questions such as 'Can you find the quarterly report for the sales department from last year?' However, the model performs poorly when employees use short, keyword-style queries like 'Q3 sales report'. What is the most effective and scalable strategy for training the model to correctly interpret and act upon these simplified, unconventional instructions? (One dataset-construction approach is sketched after this list.)
Embedding Prompting Knowledge into LLM Parameters via Fine-Tuning
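A common way to attack the keyword-query problem raised in the related question above is to derive keyword-style training pairs from full questions that already work. The sketch below is hypothetical (the log format, stopword list, and output file name are assumptions made for illustration): it pairs a crude keyword reduction of each logged full question with the response that question earned, so fine-tuning teaches the model that both phrasings deserve the same behavior.

```python
import json

# Assumed input: logged full questions that already work well, each with the
# response the model produced for it.
logged_pairs = [
    ("Can you find the quarterly report for the sales department from last year?",
     "Here is last year's quarterly sales report: ..."),
]

# A toy stopword list; a real pipeline would use a proper one.
STOPWORDS = {"can", "you", "find", "the", "for", "from", "a", "an", "of"}

def keywordize(question: str) -> str:
    """Crudely reduce a full question to a keyword-style query."""
    words = [w.strip("?.,").lower() for w in question.split()]
    return " ".join(w for w in words if w not in STOPWORDS)

# Pair each phrasing (full question and keyword variant) with the same
# response, so the model learns to treat them as equivalent instructions.
with open("keyword_finetune.jsonl", "w") as f:
    for question, response in logged_pairs:
        for variant in (question, keywordize(question)):
            f.write(json.dumps({"instruction": variant, "response": response}) + "\n")
```

Running this on the single logged example yields two training records: the original full question and its reduction 'quarterly report sales department last year', both mapped to the same response.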