Learn Before
Modifying a Prompt for Step-by-Step Reasoning
A user wants a language model to solve the following logic puzzle but finds it often gives an incorrect final answer without any justification. Modify the user's prompt to encourage the model to break down its reasoning process before providing the solution.
Original Prompt: "There are three boxes labeled 'Apples', 'Oranges', and 'Apples and Oranges'. You know that every label is incorrect. You are allowed to pick one fruit from one box to determine the correct labels for all three boxes. Which box should you pick from?"
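One common way to make this modification is to append a zero-shot chain-of-thought trigger (such as "Let's think step by step," as in the related example below) to the end of the prompt. A minimal sketch in Python; the helper name and default trigger phrase are illustrative choices, not a fixed API:

```python
# Minimal sketch: appending a zero-shot chain-of-thought trigger to a prompt.
# `add_step_by_step` and its default trigger are illustrative assumptions.

ORIGINAL_PROMPT = (
    "There are three boxes labeled 'Apples', 'Oranges', and 'Apples and Oranges'. "
    "You know that every label is incorrect. You are allowed to pick one fruit from "
    "one box to determine the correct labels for all three boxes. "
    "Which box should you pick from?"
)

def add_step_by_step(prompt: str, trigger: str = "Let's think step by step.") -> str:
    """Append a reasoning trigger so the model explains its steps before answering."""
    return f"{prompt}\n\n{trigger}"

modified_prompt = add_step_by_step(ORIGINAL_PROMPT)
print(modified_prompt)
```

The modified prompt keeps the puzzle unchanged and only adds the trailing instruction, which is typically enough to elicit an explicit reasoning trace before the final answer.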
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of a Model Initiating Chain-of-Thought Reasoning
A user is trying to solve a word problem using a large language model. They use two different prompts and receive two different responses.
Prompt 1: "A farmer has 5 pens, and each pen holds 8 chickens. The farmer buys 10 more chickens. How many chickens does the farmer have in total?" Response 1: "The farmer has 50 chickens in total."
Prompt 2: "A farmer has 5 pens, and each pen holds 8 chickens. The farmer buys 10 more chickens. How many chickens does the farmer have in total? Let's think step by step." Response 2: "First, we find the initial number of chickens. The farmer has 5 pens with 8 chickens each, so 5 * 8 = 40 chickens. Then, the farmer buys 10 more chickens. So, we add those to the initial amount: 40 + 10 = 50 chickens. The farmer has 50 chickens in total."
Based on the information provided, what is the most likely reason for the difference in the structure and detail of the two responses?
Modifying a Prompt for Step-by-Step Reasoning
Improving AI Reasoning for a Multi-Step Problem