Answer Extraction via Second-Round Prompting
When a Chain-of-Thought prompt generates reasoning steps but no explicit final answer, a second prompt can be used to extract the conclusion. This follow-up prompt is constructed by concatenating the original input with the reasoning the model generated in the first round, typically followed by an answer trigger such as "Therefore, the answer is". Given this combined input, the LLM continues its line of thought and states the final answer.
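As a minimal sketch of this construction (the function name and the default trigger string are illustrative choices, not part of any particular library), the second-round prompt can be assembled by simple concatenation:

```python
def build_extraction_prompt(original_input: str,
                            first_round_reasoning: str,
                            answer_trigger: str = "Therefore, the answer is") -> str:
    """Assemble the second-round prompt: the original question, the model's
    first-round reasoning, and an answer trigger that nudges the model to
    state its conclusion."""
    return f"{original_input}\n{first_round_reasoning}\n{answer_trigger}"
```

The resulting string is then sent to the model as an ordinary prompt; because it ends mid-sentence at the trigger, the model's most natural continuation is the missing final answer.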
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Related
First Step in Multi-Round Interaction: Direct Problem Solving
Answer Extraction via Second-Round Prompting
Using a Second Prompt to Extract Answers from Incomplete CoT Reasoning
An AI engineer provides a large language model with a complex, multi-step financial forecasting problem. The model's initial response is well-written and confident, but contains a critical calculation error in an early step, which leads to an incorrect final forecast. Which of the following strategies represents the most effective and structured next step to guide the model toward a correct solution?
Improving LLM Output for Complex Tasks
A user attempts to solve a complex multi-step data analysis task by giving a single, detailed prompt to a large language model. The model's output is logically flawed and incorrect. To better guide the model, the user decides to switch to a multi-round conversational approach. Arrange the following steps into the most effective sequence for this new approach.
Answer Extraction via Second-Round Prompting
A user gives a large language model the following prompt:
"Prompt: At a conference, there are 3 presentation rooms. Each room has 8 rows of chairs, and each row has 10 chairs. If all chairs are filled, how many people are attending the presentations? Let's think step by step."
The model provides the following output:
"Output: Okay, let's break this down. First, we need to find the total number of chairs in one room. There are 8 rows with 10 chairs each, so 8 rows * 10 chairs/row = 80 chairs per room. Then, we need to find the total number of chairs across all rooms. There are 3 rooms, so 3 rooms * 80 chairs/room = 240 chairs."
Analyze the model's output. What is the primary issue with its response?
Evaluating LLM Response Completeness
Predicting Prompting Pitfalls
Learn After
A user provides the following query to a large language model: "A grocery store has 5 apples. They buy 3 more bags of apples, with 4 apples in each bag. They then sell 7 apples. How many apples do they have left?"
The model returns the following text, which includes reasoning steps but does not state the final answer: "Okay, let's break this down. First, we calculate the total number of new apples: 3 bags * 4 apples/bag = 12 apples. The store started with 5 apples, so the new total is 5 + 12 = 17 apples. Then, 7 apples are sold."
To guide the model to provide the final answer in a subsequent turn, which of the following inputs best describes the content that should be sent as the next prompt?
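Applying the second-round construction to this grocery-store example, the next prompt would combine the original question, the model's incomplete reasoning, and an answer trigger (the trigger wording here is a common choice, not mandated by the source):

```python
question = ("A grocery store has 5 apples. They buy 3 more bags of apples, "
            "with 4 apples in each bag. They then sell 7 apples. "
            "How many apples do they have left?")

# The model's first-round output, verbatim: reasoning steps but no final answer.
reasoning = ("Okay, let's break this down. First, we calculate the total number "
             "of new apples: 3 bags * 4 apples/bag = 12 apples. The store started "
             "with 5 apples, so the new total is 5 + 12 = 17 apples. "
             "Then, 7 apples are sold.")

# Second-round prompt: question + reasoning + answer trigger.
second_round_prompt = f"{question}\n{reasoning}\nTherefore, the answer is"
```

Sent as the next turn, this prompt leads the model to finish the interrupted computation (17 - 7 = 10) and state the final answer explicitly.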
Constructing a Follow-Up Prompt for Incomplete Reasoning
Resolving an Incomplete LLM Response