Using a Second Prompt to Extract Answers from Incomplete CoT Reasoning
When a Zero-Shot CoT prompt generates a chain of reasoning but no final answer, a second prompt can be employed to extract the conclusion. This follow-up prompt typically concatenates the original input question, the reasoning steps the model has already produced, and an answer-extraction trigger such as "Therefore, the answer is".
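The two rounds can be sketched as follows. This is a minimal illustration of the prompt construction only; `query_llm` is a hypothetical stand-in for whatever completion API is in use, and the canned reply simply mimics a round-1 reasoning chain.

```python
def query_llm(prompt: str) -> str:
    # Hypothetical LLM call; returns a canned reasoning chain for illustration.
    return ("There are 16 balls in total. Half of them are golf balls, "
            "so there are 8 golf balls. Half of those are blue, so there "
            "are 4 blue golf balls.")

def build_first_prompt(question: str) -> str:
    # Round 1: the Zero-Shot CoT trigger elicits step-by-step reasoning.
    return f"Q: {question}\nA: Let's think step by step."

def build_extraction_prompt(question: str, reasoning: str) -> str:
    # Round 2: re-feed the question plus the generated reasoning,
    # followed by an answer-extraction trigger.
    return (
        f"Q: {question}\n"
        f"A: Let's think step by step. {reasoning}\n"
        "Therefore, the answer is"
    )

question = ("A juggler has 16 balls. Half of the balls are golf balls, "
            "and half of the golf balls are blue. How many blue golf balls are there?")
reasoning = query_llm(build_first_prompt(question))
second_prompt = build_extraction_prompt(question, reasoning)
print(second_prompt)
```

The second round gives the model everything it has already committed to, so completing the trigger phrase forces it to state the conclusion explicitly rather than restart its reasoning.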