Learn Before
A user wants a language model to summarize a block of text and then translate the summary into French. They try two different prompts:
Prompt 1: "Summarize the text below and translate it into French. [TEXT BLOCK]" Result 1: The model provides a summary in English but does not provide a translation.
Prompt 2: "Follow these two steps:
1. Summarize the text below.
2. Translate the summary from step 1 into French. [TEXT BLOCK]" Result 2: The model provides an English summary followed by a correct French translation.
What does the difference between these two outcomes most clearly demonstrate?
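The structural difference between the two prompts can be sketched as a small helper that rewrites a compound instruction into explicitly numbered steps. This is a minimal illustration only; `build_stepwise_prompt` is a hypothetical name, not a function from any prompting library:

```python
def build_stepwise_prompt(steps, text):
    """Compose a prompt that lists each instruction as an explicit,
    numbered step, mirroring Prompt 2 above.
    (Hypothetical helper, shown for illustration only.)"""
    lines = ["Follow these steps:"]
    for i, step in enumerate(steps, start=1):
        lines.append(f"{i}. {step}")
    lines.append(text)
    return "\n".join(lines)

# Compound, single-sentence version (Prompt 1):
compound = "Summarize the text below and translate it into French. [TEXT BLOCK]"

# Decomposed, step-by-step version (Prompt 2):
stepwise = build_stepwise_prompt(
    ["Summarize the text below.",
     "Translate the summary from step 1 into French."],
    "[TEXT BLOCK]",
)
print(stepwise)
```

Both prompts carry the same information; only the second makes each required action a separate, numbered instruction, which is the property the example attributes to the successful outcome.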
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Iterative Refinement of Prompts
Diagnosing Inconsistent LLM Outputs
A developer wants a language model to extract specific information about a product's battery life, screen quality, and price from a customer review. Arrange the following prompts in order from least effective to most effective for consistently achieving this goal.