Guiding LLM Summarization
A user provides a Large Language Model with a long article and the instruction 'Summarize this.' The model's pre-training has equipped it with multiple ways to summarize text, such as creating a single-sentence summary, a multi-point bulleted list, or a detailed paragraph. Explain how providing a single example of a desired summary format within the prompt helps guide the model's prediction process, even without changing the model's underlying knowledge.
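The mechanism in question can be sketched in code. The snippet below is a minimal, hypothetical illustration (the article text, the bulleted summary, and the prompt template are all invented for the example): a single worked example is prepended to the new input, so the model's next-token prediction is conditioned on the demonstrated format without any change to its weights.

```python
def build_one_shot_prompt(example_article: str,
                          example_summary: str,
                          new_article: str) -> str:
    """Prepend one worked example so the model infers the desired
    output format purely from the prompt context."""
    return (
        "Summarize this.\n\n"
        f"Article: {example_article}\n"
        f"Summary:\n{example_summary}\n\n"
        f"Article: {new_article}\n"
        "Summary:"
    )

# Hypothetical article and summary: the bulleted example signals that a
# bulleted summary is wanted, steering the model's prediction at inference
# time rather than through any retraining.
prompt = build_one_shot_prompt(
    example_article="Regulators approved the merger after a six-month review.",
    example_summary="- Merger approved\n- Review lasted six months",
    new_article="The city council voted to expand the bike-lane network downtown.",
)
print(prompt)
```

Because the example ends in a bulleted list and the prompt ends at `Summary:`, the most probable continuation under the model's learned distribution is another bulleted list, which is exactly the guidance the question describes.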
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Related
Analyzing Task Ambiguity Resolution
A user wants a Large Language Model to perform a specific task: extract only the primary company name from a news headline. The model's broad pre-training means it could mistakenly extract names of people, products, or other organizations.
The final headline to be processed is: 'Tech giant InnovateCorp announces a new partnership with Global Logistics.'
Analyze the two sets of in-context examples below. Which set provides a better guiding mechanism for the model to correctly identify 'InnovateCorp' as the desired output, and what is the most accurate reason?
Set A:
- Headline: 'QuantumLeap Inc. reveals breakthrough in computing.' -> QuantumLeap Inc.
- Headline: 'Shares of AutoDrive Solutions soar after earnings report.' -> AutoDrive Solutions
Set B:
- Headline: 'CEO John Smith discusses future of AI.' -> John Smith
- Headline: 'New smartphone 'Photon' to be released next month.' -> Photon
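To make the comparison concrete, here is a hedged sketch of how either set of examples would be assembled into a prompt. The instruction line and the `->` separator mirror the formatting shown above; the headlines and answers come straight from the question, and nothing here presumes which set is the better choice.

```python
# In-context examples as (headline, expected extraction) pairs,
# copied from Set A and Set B in the question.
SET_A = [
    ("QuantumLeap Inc. reveals breakthrough in computing.", "QuantumLeap Inc."),
    ("Shares of AutoDrive Solutions soar after earnings report.", "AutoDrive Solutions"),
]
SET_B = [
    ("CEO John Smith discusses future of AI.", "John Smith"),
    ("New smartphone 'Photon' to be released next month.", "Photon"),
]

def build_extraction_prompt(examples, headline):
    """Assemble in-context examples plus the target headline into one prompt."""
    lines = ["Extract only the primary company name from the headline."]
    for h, answer in examples:
        lines.append(f"Headline: '{h}' -> {answer}")
    lines.append(f"Headline: '{headline}' ->")
    return "\n".join(lines)

target = "Tech giant InnovateCorp announces a new partnership with Global Logistics."
prompt_a = build_extraction_prompt(SET_A, target)
prompt_b = build_extraction_prompt(SET_B, target)
print(prompt_a)
```

Laying the two prompts side by side makes the analysis tractable: the model sees identical instructions and an identical target headline, so any difference in its prediction must come from the pattern the examples demonstrate.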