Learn Before
Improving Automated Answer Extraction
Based on the provided scenario, what specific, simple modification should the developer make to the examples within the prompt to ensure the final answer is consistently and reliably identifiable for automated extraction? Explain why this modification works.
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
An engineer is creating a prompt that includes several examples of a math word problem followed by a step-by-step solution. The goal is for the model to learn this reasoning pattern. However, the model's final answers are often buried within its explanatory text, making them hard to extract automatically. The engineer modifies each example by placing the token `####` immediately before the final numerical answer. Why is this modification an effective strategy?
Improving Prompt Structure with a Formatting Token
Improving Automated Answer Extraction
The primary purpose of using a special formatting token, such as `####`, to demarcate the final answer in a few-shot prompt is to make the final answer consistently and reliably identifiable for automated extraction. Because the model imitates the format of the examples, it reproduces the token before its own final answer, so a simple parser can locate the answer deterministically rather than searching through free-form explanatory text.
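To make the extraction step concrete, here is a minimal sketch of how a parser might pull the final numerical answer out of a model's output once the examples have taught it to emit the `####` token. The function name and regex are illustrative, not from the original scenario.

```python
import re

def extract_answer(model_output: str):
    """Return the number following the '####' marker, or None if absent."""
    # The model, imitating the few-shot examples, ends with e.g. "#### 180".
    match = re.search(r"####\s*(-?[\d,]*\.?\d+)", model_output)
    if match:
        # Strip thousands separators so the result parses as a number.
        return match.group(1).replace(",", "")
    return None

# Without the token, the answer would be buried in the explanation;
# with it, extraction is a single regex search.
output = "The trains close at 60 mph, so they meet after 3 hours.\n#### 180"
print(extract_answer(output))
```

This is why the modification works: the token turns a fuzzy natural-language search problem into a deterministic string-matching problem.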