A user wants to generate a one-paragraph summary of a long article. They provide the entire article text to a language model as the prompt, but the model's output is a continuation of the article's topic rather than a summary. Based on the principles of effective prompt structure, what is the most likely reason for this poor result?
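The most likely cause, per common prompt-structure principles, is that the prompt contains only the article text with no explicit instruction, so the model defaults to continuing the text. A minimal sketch of the fix, using placeholder article text (the variable names and wording are illustrative assumptions, not from the source):

```python
# Placeholder standing in for the user's long article.
article = "Large language models are trained on vast text corpora. ..."

# Flawed prompt: the bare article gives the model no task, so the
# model does what it is trained to do by default - continue the text.
flawed_prompt = article

# Better prompt: an explicit instruction frames the article as the
# input to a summarization task rather than text to be continued.
better_prompt = (
    "Summarize the following article in one paragraph:\n\n"
    f"{article}\n\n"
    "Summary:"
)

print(better_prompt)
```

Placing the instruction before the input (and cueing the output with "Summary:") makes the intended task unambiguous regardless of how long the article is.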
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.4 Alignment - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Tuple Representation of a Simple Prompt
Analyzing a Flawed Prompt
Deconstructing a Prompt