Case Study

Diagnosing a Flawed Few-Shot Prompt

A developer is trying to use a large language model to break down complex cooking recipes into a list of simple, actionable steps. They provide the model with the following prompt, but the model's output is inconsistent and often just summarizes the recipe instead of decomposing it. Analyze the provided prompt. What crucial instructional component is missing from the very beginning of the prompt that is likely causing the model's inconsistent performance? Explain why the absence of this component would confuse the model, even with the examples provided.
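
The developer's original prompt is not reproduced with this case study. As a stand-in, below is a minimal sketch of a few-shot prompt with the structure the scenario describes; the example recipe, its step decomposition, and the build_prompt helper are all hypothetical, invented only to illustrate the shape of such a prompt.

    # Hypothetical reconstruction of the kind of prompt described above.
    # Note the overall shape: the prompt opens directly with an
    # input/output example and ends with a new recipe to complete;
    # at no point does any text state what task is being demonstrated.

    EXAMPLES = [
        (
            "Recipe: Whisk flour, eggs, and milk into a smooth batter, "
            "then ladle onto a hot, greased griddle and flip each "
            "pancake once bubbles form on its surface.",
            "1. Whisk flour, eggs, and milk into a smooth batter.\n"
            "2. Grease the griddle and heat it.\n"
            "3. Ladle batter onto the griddle.\n"
            "4. Flip each pancake once bubbles form on its surface.",
        ),
    ]

    def build_prompt(new_recipe: str) -> str:
        """Assemble the few-shot prompt: examples first, then the new
        recipe, with no opening instruction anywhere."""
        blocks = [f"{recipe}\nSteps:\n{steps}" for recipe, steps in EXAMPLES]
        blocks.append(f"Recipe: {new_recipe}\nSteps:")
        return "\n\n".join(blocks)

    print(build_prompt("Saute onions in butter, stir in rice, then add "
                       "stock gradually, stirring until absorbed."))

When answering, consider what the model sees in a prompt assembled this way: only a pattern to imitate, with no explicit statement of the task the pattern is meant to demonstrate.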

Updated 2025-10-04

Tags: Ch.3 Prompting - Foundations of Large Language Models, Foundations of Large Language Models, Computing Sciences, Foundations of Large Language Models Course, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science