Diagnosing a Flawed Few-Shot Prompt
A developer is trying to use a large language model to break down complex cooking recipes into a list of simple, actionable steps. They provide the model with the following prompt, but the model's output is inconsistent and often just summarizes the recipe instead of decomposing it. Analyze the provided prompt. What crucial instructional component is missing from the very beginning of the prompt that is likely causing the model's inconsistent performance? Explain why the absence of this component would confuse the model, even with the examples provided.
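The fix the question points toward can be illustrated concretely. Below is a minimal sketch (all recipe text and wording are hypothetical, not taken from the flawed prompt itself) showing a repaired prompt in which an explicit task instruction is placed before the few-shot examples, so the model knows what the examples demonstrate rather than having to infer the task from them:

```python
# Hypothetical example: a few-shot prompt that opens with an explicit
# task instruction, followed by demonstrations, then the new input.

TASK_INSTRUCTION = (
    "Break the following recipe into a numbered list of simple, "
    "actionable steps. Do not summarize; output one action per line."
)

# Two illustrative few-shot examples (invented content).
EXAMPLES = [
    (
        "Recipe: Boil pasta in salted water, then toss with olive oil.",
        "1. Fill a pot with water and add salt.\n"
        "2. Bring the water to a boil.\n"
        "3. Add the pasta and cook until tender.\n"
        "4. Drain the pasta.\n"
        "5. Toss the pasta with olive oil.",
    ),
    (
        "Recipe: Toast bread and spread butter on it.",
        "1. Place the bread in a toaster.\n"
        "2. Toast until golden brown.\n"
        "3. Spread butter on the toasted bread.",
    ),
]

def build_prompt(recipe: str) -> str:
    """Assemble instruction + examples + new input into one prompt string."""
    parts = [TASK_INSTRUCTION, ""]
    for source, steps in EXAMPLES:
        parts.append(source)
        parts.append("Steps:")
        parts.append(steps)
        parts.append("")  # blank line between demonstrations
    parts.append(f"Recipe: {recipe}")
    parts.append("Steps:")  # cue the model to continue the pattern
    return "\n".join(parts)

prompt = build_prompt("Scramble two eggs with a pinch of salt.")
```

Without the opening `TASK_INSTRUCTION`, the model must guess the task purely from the example pairs, and plausible alternative readings (e.g. "summarize the recipe") compete with the intended one.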
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Foundations of Large Language Models Course
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A developer is constructing a prompt to guide a large language model. The goal is for the model to break down complex customer support queries into a series of simpler, sequential steps. The developer plans to include two examples of this process within the prompt to show the model the desired output format. Which of the following introductory statements would be the most effective to place at the very beginning of this prompt?
Evaluating Instructional Statements for Task Decomposition