Resolving an Incomplete LLM Response
A developer is using a large language model to build a question-answering system. They observe that for some queries, the model provides a detailed reasoning process but fails to state the final answer. Based on the interaction log below, describe the specific content of the follow-up prompt the developer should send to elicit the final answer, and briefly explain the principle behind this approach.
A user provides the following query to a large language model: "A grocery store has 5 apples. They buy 3 more bags of apples, with 4 apples in each bag. They then sell 7 apples. How many apples do they have left?"
The model returns the following text, which includes reasoning steps but does not state the final answer: "Okay, let's break this down. First, we calculate the total number of new apples: 3 bags * 4 apples/bag = 12 apples. The store started with 5 apples, so the new total is 5 + 12 = 17 apples. Then, 7 apples are sold."
To guide the model to provide the final answer in a subsequent turn, what content should be sent as the next prompt, and why does this work? In general, the follow-up should not restate the problem; it should ask the model to continue from its own partial reasoning (for example, "So what is the final answer?"), because the model conditions on the full conversation history and will pick up where its prior output left off.
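The multi-turn follow-up described above can be sketched as a chat-style message list. This is a minimal illustration, not tied to any specific provider's client library: the role/content dictionary format mimics the common chat-message structure, and the actual API call is omitted. The follow-up prompt text is one reasonable phrasing, not the only correct one.

```python
# Conversation history in the common chat-message format (assumed structure,
# not a specific vendor API). The model's incomplete response is kept in the
# history so the follow-up turn conditions on it.
conversation = [
    {
        "role": "user",
        "content": (
            "A grocery store has 5 apples. They buy 3 more bags of apples, "
            "with 4 apples in each bag. They then sell 7 apples. "
            "How many apples do they have left?"
        ),
    },
    {
        "role": "assistant",
        "content": (
            "Okay, let's break this down. First, we calculate the total "
            "number of new apples: 3 bags * 4 apples/bag = 12 apples. The "
            "store started with 5 apples, so the new total is 5 + 12 = 17 "
            "apples. Then, 7 apples are sold."
        ),
    },
    # The follow-up prompt: it does not restate the problem; it asks the
    # model to continue from its own partial reasoning.
    {
        "role": "user",
        "content": "So what is the final answer? State it as a single number.",
    },
]

# For reference, the arithmetic the model is expected to complete:
expected_answer = 5 + 3 * 4 - 7
print(expected_answer)  # 10
```

Because the full history (including the model's own partial reasoning) is sent back with the new prompt, the model can complete the last step (17 - 7 = 10) without redoing the earlier calculation.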