Diagnosing a Flawed Prompt for Instruction Generation
A prompt engineer provides a language model with several pairs of long text passages (inputs) and their corresponding one-sentence summaries (outputs). The goal is to have the model generate a general instruction describing this summarization task. However, when the prompt is submitted, the model behaves as if it is awaiting a new input to summarize instead of generating the instruction. Based on this outcome, what crucial component is most likely missing from the engineer's prompt, and why does its absence cause this specific behavior?
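For concreteness, here is a minimal Python sketch contrasting the two prompt structures the question turns on. The passage texts, summaries, and the exact wording of the meta-instruction are illustrative assumptions, not quotes from any source:

```python
# Illustrative few-shot pairs; the passages and summaries are invented.
examples = [
    ("A long passage about pandemic-era shipping delays ...",
     "Pandemic-era disruptions exposed the fragility of global supply chains."),
    ("A long passage describing a new sodium-ion battery design ...",
     "A sodium-ion battery design promises cheaper grid-scale storage."),
]

def render_pairs(pairs):
    """Format demonstrations as Input/Output blocks."""
    return "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in pairs)

# Flawed prompt: a bare few-shot demonstration block. The most probable
# continuation of this text is another Input/Output pair, so the model
# keeps playing summarizer and waits for the next input.
flawed_prompt = render_pairs(examples)

# Fixed prompt: an explicit meta-instruction reframes the examples as
# evidence of a pattern, so the natural completion is the instruction itself.
fixed_prompt = (
    "Here are several input-output examples of a task:\n\n"
    + render_pairs(examples)
    + "\n\nIn one sentence, state the general instruction that maps each "
      "input to its output."
)

print(fixed_prompt)
```

The design point: without the closing meta-instruction, the examples alone define a summarization pattern, and pattern continuation is exactly the behavior described in the question.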
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A developer wants a language model to generate a clear, general instruction for the task of converting informal date expressions into a standard YYYY-MM-DD format. They have several examples of inputs and their corresponding outputs. Which of the following prompts is best structured to have the model generate the desired task instruction, rather than just performing the task on a new input?
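As a rough sketch of the structural difference such answer options would hinge on (the example dates and prompt wording here are assumptions):

```python
# Hypothetical demonstrations for the date-normalization task.
pairs = [
    ("the 4th of July, 1999", "1999-07-04"),
    ("Jan 2 2021", "2021-01-02"),
]
demos = "\n".join(f"Input: {i}\nOutput: {o}" for i, o in pairs)

# Ending the prompt with a fresh input invites the model to perform the task:
task_prompt = f"{demos}\nInput: the first of March, 2025\nOutput:"

# Ending it with a request for the rule invites it to state the instruction:
induction_prompt = (
    f"{demos}\n\nWrite one clear, general instruction that converts "
    "each input above into its output."
)

print(induction_prompt)
```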