Learn Before
Improving Prompt Structure with a Formatting Token
You are given a one-shot prompt designed to teach a model how to solve simple algebra problems. The model performs the reasoning correctly but does not consistently isolate the final answer, making the output difficult to parse. Modify the example answer below to include a special formatting token that clearly demarcates the final answer. Use #### as the special token.
Original Example:
Q: If 2x + 3 = 11, what is the value of x?
A: To solve for x, we first subtract 3 from both sides of the equation: 2x = 11 - 3, which simplifies to 2x = 8. Then, we divide both sides by 2 to isolate x: x = 8 / 2. The final answer is 4.
Tags
Ch.3 Prompting - Foundations of Large Language Models
Related
An engineer is creating a prompt that includes several examples of a math word problem followed by a step-by-step solution. The goal is for the model to learn this reasoning pattern. However, the model's final answers are often buried within its explanatory text, making them hard to extract automatically. The engineer modifies each example by placing the token #### immediately before the final numerical answer. Why is this modification an effective strategy?
Improving Automated Answer Extraction
The primary purpose of using a special formatting token, such as ####, to demarcate the final answer in a few-shot prompt is to make the answer easy to locate and extract programmatically, not to enhance the model's internal step-by-step reasoning.