Example of One-Shot Chain-of-Thought (CoT) Prompting
One-shot Chain-of-Thought (CoT) prompting provides a language model with a single demonstration of step-by-step reasoning before posing a new question. For instance, a prompt might first present and solve a word problem about Tom's marbles, explicitly spelling out each arithmetic step. This single demonstration then guides the model to apply the same explicit reasoning process to a subsequent problem, such as calculating how many apples Jack has left.
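The structure described above can be sketched as a simple prompt template. This is a minimal illustration, not a specific library API; the wording of the demonstration and the function name are invented for this sketch, while the Tom's-marbles and Jack's-apples examples mirror the ones mentioned in the text.

```python
# One-shot CoT prompt: a single worked demonstration (Tom's marbles)
# precedes the new question (Jack's apples), so the model imitates the
# explicit step-by-step format. All wording here is illustrative.

DEMONSTRATION = (
    "Q: Tom has 12 marbles. He gives 3 to Anna and then buys 5 more. "
    "How many marbles does Tom have now?\n"
    "A: Let's think step by step.\n"
    "1. Tom starts with 12 marbles.\n"
    "2. After giving 3 to Anna, he has 12 - 3 = 9 marbles.\n"
    "3. After buying 5 more, he has 9 + 5 = 14 marbles.\n"
    "The final answer is 14.\n"
)

def build_one_shot_cot_prompt(question: str) -> str:
    """Prepend the single worked demonstration to a new question."""
    return f"{DEMONSTRATION}\nQ: {question}\nA:"

prompt = build_one_shot_cot_prompt(
    "Jack has 7 apples. He ate 2, was given 5 more, and then gave 3 away. "
    "How many apples does Jack have left?"
)
print(prompt)
```

The trailing `A:` leaves the model to complete the answer, and the demonstration's numbered steps are what nudge it to produce a similar reasoning trace.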
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Application of CoT Prompting on GSM8K Benchmark
Structuring Logical Reasoning Steps for Demonstrations
Zero-Shot Chain-of-Thought (CoT) Prompting
Application of CoT to Algebraic Calculation Problems
Benefits of Chain-of-Thought (CoT) Prompting
Incomplete Answers from Zero-Shot CoT Prompts
Chain-of-Thought as a Search Process
Supervising Intermediate Reasoning Steps for LLM Alignment
Limitations of Simple Chain-of-Thought Prompting
Creating a CoT Prompt by Incorporating Reasoning Steps
Alternative Trigger Phrases for Zero-Shot CoT Prompting
Incomplete Answers as a Potential Issue in Zero-Shot CoT Prompting
A developer is trying to improve a language model's ability to solve multi-step word problems. They compare two prompting strategies.
Strategy 1: Provide the model with a new word problem and ask for the final answer directly.
Strategy 2: Provide the model with a new word problem, but first show it an example of a similar problem where the solution is explicitly broken down into logical, sequential steps before reaching the final conclusion.
Why is Strategy 2 generally more effective for improving the model's reasoning on complex tasks?
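The contrast between the two strategies can be made concrete as prompt templates. This is a hedged sketch: the bakery question and the farm demonstration are invented for illustration and do not come from the original scenario.

```python
# Illustrative contrast between direct prompting (Strategy 1) and
# one-shot CoT prompting (Strategy 2). All strings are invented.
question = (
    "A bakery makes 40 rolls. It sells 15 in the morning, bakes 20 more, "
    "and sells 25 in the afternoon. How many rolls are left?"
)

# Strategy 1: ask for the final answer directly.
direct_prompt = f"Q: {question}\nA:"

# Strategy 2: prepend one worked example whose solution is broken into
# explicit, sequential steps before the final conclusion.
demo = (
    "Q: A farm has 10 cows. It buys 4 more and then sells 6. "
    "How many cows does it have?\n"
    "A: Let's think step by step.\n"
    "1. The farm starts with 10 cows.\n"
    "2. Buying 4 more gives 10 + 4 = 14 cows.\n"
    "3. Selling 6 leaves 14 - 6 = 8 cows.\n"
    "The final answer is 8.\n"
)
cot_prompt = f"{demo}\nQ: {question}\nA:"
```

The only difference between the two prompts is the demonstration, which is exactly what gives the model a reasoning pattern to imitate.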
Improving a Prompt for a Multi-Step Problem
Few-Shot Chain-of-Thought (CoT) Prompting
Practical Limitations of Chain-of-Thought Prompting
Evaluate the following claim: the primary benefit of a prompting technique that demonstrates a step-by-step reasoning process is that it permanently modifies the language model's internal weights, making it inherently better at solving similar problems in the future, even without the detailed prompt. (The claim is false: prompting leaves the model's weights unchanged, and the benefit lasts only while the demonstration remains in the model's context.)
Designing a Prompting Workflow for a High-Stakes, Multi-Step Task
Choosing and Justifying a Prompting Strategy Under Context and Quality Constraints
Diagnosing and Redesigning a Prompting Approach for a Decomposed Workflow
Stabilizing an LLM Workflow for Multi-Step Policy Compliance Decisions
Debugging a Multi-Step LLM Workflow for Contract Clause Risk Triage
Designing a Robust Prompting Workflow for Multi-Step Root-Cause Analysis with Limited Examples
Problem-Solving Scenarios for Chain-of-Thought Prompting
Self-Consistency Method
Provided Answer (12) to the Example Arithmetic Reasoning Word Problem
Initial State for the Apple Problem
In-Context Learning (ICL)
A language model is presented with the following problem: "Jack has 7 apples. He ate 2 of them for dinner, but then his mom gave him 5 more apples. The next day, Jack gave 3 apples to his friend John. How many apples does Jack have left in the end?" The model processes the problem and performs the calculation (7 + 5) - 2 - 3, arriving at the correct final answer of 7. Which of the following statements best analyzes the flaw in the model's problem-solving approach?

A language model is tasked with solving the following word problem: "Jack has 7 apples. He ate 2 of them for dinner, but then his mom gave him 5 more apples. The next day, Jack gave 3 apples to his friend John. How many apples does Jack have left in the end?" Arrange the following computational steps into the correct logical sequence that the model should follow to arrive at the final answer.
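The correct event-order sequence for this problem can be written out as straight-line arithmetic. A minimal sketch; the comments paraphrase the problem statement:

```python
# Event-order computation for the Jack's apples problem.
apples = 7     # Jack starts with 7 apples
apples -= 2    # eats 2 at dinner: 7 - 2 = 5
apples += 5    # mom gives 5 more: 5 + 5 = 10
apples -= 3    # gives 3 to John: 10 - 3 = 7
print(apples)  # 7 apples left
```

Because every step is an addition or subtraction, the model's reordered computation (7 + 5) - 2 - 3 necessarily reaches the same total; the flaw is that the trace does not follow the story's event order, so the reasoning is unfaithful even though the answer happens to be right.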
Analyzing a Flawed Arithmetic Reasoning Process
Incorrect Model Output () for the Jack's Apples Word Problem
Zero-Shot CoT Example with Jack's Apples
Few-Shot Chain-of-Thought (CoT) Prompting
A user wants a language model to solve multi-step logic puzzles. They provide the following prompt. Analyze the structure of the demonstration provided within this prompt.
[START OF PROMPT] Q: A farmer has 15 sheep. All but 8 died. How many are left? A: Let's think step by step. 1. The phrase "All but 8 died" is a bit tricky. It means that 8 sheep are the ones that did *not* die. 2. Therefore, the number of sheep left is 8. The final answer is 8. Q: In a race, a runner overtakes the person in 2nd place. What position is the runner in now? [END OF PROMPT]

What is the key element in the provided demonstration that guides the model to reason through the new puzzle?
Constructing a One-Shot Chain-of-Thought Prompt
Improving Language Model Performance on a Multi-Step Task
Learn After
A user wants a language model to solve the following multi-step arithmetic problem: 'A bookstore starts the day with 50 copies of a new novel. They sell 18 copies in the morning and receive a new shipment of 25 copies in the afternoon. If they then sell 12 more copies before closing, how many copies are left?' Which of the following prompts best demonstrates the one-shot chain-of-thought technique to guide the model towards the correct answer?
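Whichever candidate prompt is chosen, the answer the demonstration should lead the model to can be verified directly. This check is a sketch added here for reference, not part of the original question:

```python
# Step-by-step arithmetic for the bookstore problem, in narrative order.
copies = 50    # opening stock of the new novel
copies -= 18   # morning sales: 50 - 18 = 32
copies += 25   # afternoon shipment: 32 + 25 = 57
copies -= 12   # sales before closing: 57 - 12 = 45
print(copies)  # 45 copies left
```

A good one-shot CoT prompt for this problem would pair a similarly structured worked example with the bookstore question, ending with an open `A:` for the model to complete.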
Constructing a One-Shot CoT Prompt for a Logic Puzzle
A researcher is using a language model to solve a simple logic puzzle. Their goal is to have the model determine the final position of an object after a series of movements.
Puzzle: A red ball is in a box. The box is moved to the left, and then the ball is taken out and placed to the right of the box. Where is the ball now relative to its starting position?
Researcher's Initial Prompt:
Q: A red ball is in a box. The box is moved to the left, and then the ball is taken out and placed to the right of the box. Where is the ball now relative to its starting position? A:

Model's Incorrect Answer:
The ball is to the left of its starting position.

The researcher realizes they need to provide a single, complete example to guide the model's reasoning. Which of the following revised prompts best applies this technique to help the model solve the puzzle correctly?
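One way to structure such a revised prompt is sketched below. The worked demonstration (the book-and-shelf example) is invented for illustration; only the final question comes from the puzzle above:

```python
# Sketch of a revised one-shot CoT prompt for the spatial puzzle.
# The demonstration tracks displacements step by step so the model
# imitates that pattern on the new question.
demonstration = (
    "Q: A book is on a shelf. The shelf is lowered one level, and then the "
    "book is moved up two levels. Where is the book now relative to its "
    "starting position?\n"
    "A: Let's think step by step.\n"
    "1. Call the book's starting level 0.\n"
    "2. Lowering the shelf one level carries the book to level -1.\n"
    "3. Moving the book up two levels puts it at -1 + 2 = 1.\n"
    "The final answer is: one level above its starting position.\n"
)
puzzle = (
    "A red ball is in a box. The box is moved to the left, and then the ball "
    "is taken out and placed to the right of the box. Where is the ball now "
    "relative to its starting position?"
)
revised_prompt = f"{demonstration}\nQ: {puzzle}\nA:"
print(revised_prompt)
```

The key property is that the demonstration is complete: it states the question, walks through each movement explicitly, and reaches a definite conclusion before the new puzzle is posed.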