Short Answer

Diagnosing a Flawed Prompt for Instruction Generation

A prompt engineer provides a language model with several pairs of long text passages (inputs) and their corresponding one-sentence summaries (outputs). The goal is to have the model generate a general instruction for this summarization task. However, when the prompt is submitted, the model simply waits for a new input to summarize instead of generating the instruction. Based on this outcome, what crucial component is most likely missing from the engineer's prompt, and why is its absence causing this specific behavior?
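To make the scenario concrete, here is a minimal sketch of the kind of prompt the question describes, built from demonstration pairs alone (the passages and summaries below are invented placeholders, not from the question itself):

```python
# A minimal sketch of the prompt layout the question describes.
# The passages and summaries are hypothetical placeholders.

demonstrations = [
    ("Long passage about renewable energy policy ...",
     "Government subsidies sharply accelerated solar adoption."),
    ("Long passage about deep-sea ecosystems ...",
     "Hydrothermal vents sustain life without sunlight."),
]

# The prompt consists only of input/output pairs; nothing in the text
# states what the model is supposed to do with them.
prompt = "\n\n".join(
    f"Input: {passage}\nOutput: {summary}"
    for passage, summary in demonstrations
)

print(prompt)
```

Read as a plain completion, the likeliest continuation of this text is another `Input:` line, which matches the behavior the question reports: the model treats the pairs as few-shot demonstrations and waits for a new passage to summarize.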


Tags: Ch.3 Prompting - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science