Zero-Shot CoT Example with Jack's Apples
An example of zero-shot Chain-of-Thought (CoT) prompting involves appending the instructional trigger "Let's think step by step." to the end of a prompt, such as the 'Jack's apples' word problem. This prompts the language model to independently generate intermediate reasoning steps before reaching the final answer. For instance, the model might output: "1. Initial Quantity: Jack starts with 7 apples. 2. After Dinner: He eats 2 apples, so 5 apples remain. 3. His Mom Gives More: His mom gives him 5 more apples, so he now has 10 apples. 4. Giving to John: The next day, Jack gives 3 apples to his friend John, so 7 apples are left. In the end, Jack has 7 apples left."
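The prompt construction described above can be sketched in a few lines; this is a minimal illustration of building a zero-shot CoT prompt by appending the reasoning trigger, assuming no particular model API (the variable names are hypothetical):

```python
# The original word problem, taken verbatim from the example.
problem = (
    "Jack has 7 apples. He ate 2 of them for dinner, but then his mom "
    "gave him 5 more apples. The next day, Jack gave 3 apples to his "
    "friend John. How many apples does Jack have left in the end?"
)

# Zero-shot CoT: append the instructional trigger instead of supplying
# a complete, solved example of a similar problem (as few-shot CoT would).
zero_shot_cot_prompt = problem + "\nLet's think step by step."

print(zero_shot_cot_prompt)
```

The key contrast with few-shot CoT is that nothing but the trigger phrase is added: the model is expected to generate the intermediate steps on its own.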
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Provided Answer (12) to the Example Arithmetic Reasoning Word Problem
Initial State for the Apple Problem
In-Context Learning (ICL)
A language model is presented with the following problem: "Jack has 7 apples. He ate 2 of them for dinner, but then his mom gave him 5 more apples. The next day, Jack gave 3 apples to his friend John. How many apples does Jack have left in the end?" The model processes the problem and performs the calculation (7 + 5) - 2 - 3, arriving at the correct final answer of 7. Which of the following statements best analyzes the flaw in the model's problem-solving approach?
A language model is tasked with solving the following word problem: 'Jack has 7 apples. He ate 2 of them for dinner, but then his mom gave him 5 more apples. The next day, Jack gave 3 apples to his friend John. How many apples does Jack have left in the end?' Arrange the following computational steps into the correct logical sequence that the model should follow to arrive at the final answer.
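The contrast between the logically ordered steps and the model's reordered computation can be checked directly; this is a small sketch based only on the numbers in the problem statement (both paths happen to reach 7, which is why the reordering is a flaw in reasoning rather than in the final answer):

```python
# Correct logical sequence, following the narrative order of events.
start = 7
after_dinner = start - 2        # 7 - 2 = 5 apples after dinner
after_mom = after_dinner + 5    # 5 + 5 = 10 apples after mom's gift
final = after_mom - 3           # 10 - 3 = 7 apples after giving to John

# The model's flawed, reordered computation from the question.
model_answer = (7 + 5) - 2 - 3  # 12 - 2 - 3 = 7

print(final, model_answer)      # both evaluate to 7
```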
Analyzing a Flawed Arithmetic Reasoning Process
Incorrect Model Output () for the Jack's Apples Word Problem
Example of One-Shot Chain-of-Thought (CoT) Prompting
Example of a Zero-Shot CoT Prompt
Comparison of Few-Shot and Zero-Shot CoT Prompting
Alternative Phrases for Triggering Chain-of-Thought Reasoning
A user wants a large language model to solve a multi-step word problem. The model's initial attempts provide only a final, incorrect answer. The user's goal is to modify the prompt to encourage the model to generate a detailed, step-by-step thought process first, which should lead to a more accurate final answer. Crucially, the user does not want to include a complete, solved example of another problem in the prompt. Which of the following prompt modifications best achieves this specific goal?
To successfully prompt a language model to generate a step-by-step thought process for a new problem, one must always include a complete, solved example of a similar problem within the prompt.
Structure of a Zero-Shot CoT Prompt for an Arithmetic Task
Identifying a Zero-Shot Reasoning Prompt
Your team is rolling out an internal LLM assistant...
You’re building an internal LLM workflow to produc...
You’re building an internal LLM assistant to help ...
You’re leading an internal enablement team buildin...
Choosing and Justifying a Prompting Strategy Under Context and Quality Constraints
Designing a Prompting Workflow for a High-Stakes, Multi-Step Task
Diagnosing and Redesigning a Prompting Approach for a Decomposed Workflow
Stabilizing an LLM Workflow for Multi-Step Policy Compliance Decisions
Debugging a Multi-Step LLM Workflow for Contract Clause Risk Triage
Designing a Robust Prompting Workflow for Multi-Step Root-Cause Analysis with Limited Examples
Learn After
A large language model is given the following word problem: 'Jack has 7 apples. He ate 2 of them for dinner, but then his mom gave him 5 more apples. The next day, Jack gave 3 apples to his friend John. How many apples does Jack have left in the end?' Which of the following initial sentences for the model's response is most effective at signaling that it will provide a detailed, step-by-step breakdown of its reasoning process?
Improving Model Response Transparency
Analyzing a Model's Reasoning Trigger