Learn Before
Refining a CoT Prompt for Programmatic Extraction
A developer is building a system to automatically categorize customer support tickets using a language model. They are using few-shot Chain-of-Thought prompting. The model's reasoning is accurate, but the system struggles to reliably extract the final category from the model's free-form text response. Below is one of the demonstrations used in the prompt. Revise the 'Answer' portion of this demonstration to solve the developer's problem, ensuring the final category can be easily and consistently identified by a script.
Tags
Ch.3 Prompting - Foundations of Large Language Models
Application in Bloom's Taxonomy
Related
A developer is creating few-shot demonstrations to teach a language model to solve word problems. They notice the model's outputs are often verbose and fail to clearly state the final numerical answer, even when the reasoning steps are correct. Review the following demonstration from their prompt:
Q: A grocery store had 50 cans of soup. They sold 15 on Monday and received a new shipment of 25. How many cans do they have now?
A: The store started with 50 cans. They sold 15, so 50 - 15 = 35. Then they received 25 more, so 35 + 25 = 60. The store now has 60 cans.
Which of the following critiques best identifies the primary weakness in this demonstration that is likely causing the model's inconsistent output format?
Improving Model Output Consistency