Learn Before
Using a Second Prompt to Extract Answers from Incomplete CoT Reasoning
When a Zero-Shot CoT prompt generates a chain of reasoning but no final answer, a second prompt can be employed to extract the conclusion. This follow-up prompt typically combines the original input question, the reasoning steps the model has already produced, and an answer-trigger phrase such as "Therefore, the answer is", so that the model's next completion states the final answer directly.
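As a minimal sketch of this construction (the function name and the exact trigger wording are illustrative choices, not part of any specific library), the second-round prompt can be assembled by concatenating the three pieces:

```python
def build_extraction_prompt(question: str, reasoning: str) -> str:
    """Combine the original question, the model's already-generated
    chain-of-thought, and an answer-trigger phrase into a second prompt."""
    return (
        f"Q: {question}\n"
        f"A: {reasoning}\n"
        "Therefore, the answer is"
    )
```

The returned string is then sent to the model as a fresh prompt; whatever the model generates after the trigger phrase is taken as the extracted final answer.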
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
First Step in Multi-Round Interaction: Direct Problem Solving
Answer Extraction via Second-Round Prompting
Using a Second Prompt to Extract Answers from Incomplete CoT Reasoning
An AI engineer provides a large language model with a complex, multi-step financial forecasting problem. The model's initial response is well-written and confident, but contains a critical calculation error in an early step, which leads to an incorrect final forecast. Which of the following strategies represents the most effective and structured next step to guide the model toward a correct solution?
Improving LLM Output for Complex Tasks
A user attempts to solve a complex multi-step data analysis task by giving a single, detailed prompt to a large language model. The model's output is logically flawed and incorrect. To better guide the model, the user decides to switch to a multi-round conversational approach. Arrange the following steps into the most effective sequence for this new approach.
Learn After
Extracting a Final Answer from a Language Model
A language model was prompted with a question. It produced a correct line of reasoning but did not state the final answer, as shown below.
Original Question: "A grocery store has 15 apples. They sell 7 and then buy 12 more. How many apples do they have now?"
Model's Reasoning: "Okay, let's break this down. The store starts with 15 apples. They sell 7, so we subtract 7 from 15, which is 15 - 7 = 8. Then, they buy 12 more, so we add 12 to the current number, which is 8 + 12 = 20."
Which of the following follow-up prompts is best designed to extract the final answer from the model's existing reasoning?
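Applied to the scenario above, the technique looks like this sketch: the follow-up prompt stitches the question and the existing reasoning together with an answer-trigger phrase, and the model's second-round completion is parsed for the final number (the completion string here is a hypothetical illustration, not real model output):

```python
import re

question = ("A grocery store has 15 apples. They sell 7 and then buy 12 more. "
            "How many apples do they have now?")
reasoning = ("Okay, let's break this down. The store starts with 15 apples. "
             "They sell 7, so we subtract 7 from 15, which is 15 - 7 = 8. "
             "Then, they buy 12 more, so we add 12 to the current number, "
             "which is 8 + 12 = 20.")

# Second-round prompt: question + prior reasoning + answer trigger.
followup = f"Q: {question}\nA: {reasoning}\nTherefore, the answer is"

# Hypothetical completion the model might return for the follow-up prompt.
completion = " 20."

# Pull the first integer out of the completion as the extracted answer.
match = re.search(r"-?\d+", completion)
answer = int(match.group()) if match else None
```

Because the reasoning already contains the computation, the follow-up prompt asks the model only to restate its conclusion, which is why a short trigger phrase suffices.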
Constructing a Follow-Up Prompt