Learn Before
  • Chain-of-Thought (CoT) Prompting

  • Example of a Multi-Step Arithmetic Word Problem (Jack's Apples)

One-Shot CoT Prompting

One-shot Chain-of-Thought (CoT) prompting is a technique in which a large language model is given a single worked example, including its intermediate reasoning steps, to guide it in solving a similar problem. The detailed solution to the 'Jack's Apples' problem illustrates the kind of content used in such a prompt.
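A minimal sketch of how such a prompt can be assembled, using the 'Jack's Apples' problem as the single worked example and the bookstore problem from the exercises below as the new question. The function name and exact prompt wording are illustrative assumptions, not a fixed API:

```python
# Sketch: building a one-shot CoT prompt as a plain string.
# The single worked example (Jack's Apples) shows explicit
# intermediate reasoning steps before the final answer.

def build_one_shot_cot_prompt(new_question: str) -> str:
    example = (
        "Q: Jack has 7 apples. He ate 2 of them for dinner, but then his "
        "mom gave him 5 more apples. The next day, Jack gave 3 apples to "
        "his friend John. How many apples does Jack have left in the end?\n"
        "A: Jack starts with 7 apples. After eating 2, he has 7 - 2 = 5. "
        "His mom gives him 5 more, so he has 5 + 5 = 10. He then gives 3 "
        "to John, leaving 10 - 3 = 7. The answer is 7.\n"
    )
    # The new problem follows the example; the trailing "A:" invites the
    # model to continue in the same step-by-step style.
    return example + "\nQ: " + new_question + "\nA:"

prompt = build_one_shot_cot_prompt(
    "A bookstore starts the day with 50 copies of a new novel. They sell "
    "18 copies in the morning and receive a new shipment of 25 copies in "
    "the afternoon. If they then sell 12 more copies before closing, how "
    "many copies are left?"
)
print(prompt)
```

The key design choice is that the example's answer interleaves the arithmetic with the narrative state ("he has 5 + 5 = 10"), so the model is steered toward emitting its own reasoning steps rather than jumping to a final number.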


Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Related
  • Application of CoT Prompting on GSM8K Benchmark

  • Structuring Logical Reasoning Steps for Demonstrations

  • Zero-Shot Chain-of-Thought (CoT) Prompting

  • Application of CoT to Algebraic Calculation Problems

  • Benefits of Chain-of-Thought (CoT) Prompting

  • Incomplete Answers from Zero-Shot CoT Prompts

  • Chain-of-Thought as a Search Process

  • Supervising Intermediate Reasoning Steps for LLM Alignment

  • Limitations of Simple Chain-of-Thought Prompting

  • Creating a CoT Prompt by Incorporating Reasoning Steps

  • Alternative Trigger Phrases for Zero-Shot CoT Prompting

  • Incomplete Answers as a Potential Issue in Zero-Shot CoT Prompting

  • A developer is trying to improve a language model's ability to solve multi-step word problems. They compare two prompting strategies.

    Strategy 1: Provide the model with a new word problem and ask for the final answer directly.

    Strategy 2: Provide the model with a new word problem, but first show it an example of a similar problem where the solution is explicitly broken down into logical, sequential steps before reaching the final conclusion.

    Why is Strategy 2 generally more effective for improving the model's reasoning on complex tasks?

  • One-Shot CoT Prompting

  • Improving a Prompt for a Multi-Step Problem

  • Few-Shot Chain-of-Thought (CoT) Prompting

  • Practical Limitations of Chain-of-Thought Prompting

  • The primary benefit of a prompting technique that demonstrates a step-by-step reasoning process is that it permanently modifies the language model's internal weights, making it inherently better at solving similar problems in the future, even without the detailed prompt.

  • Designing a Prompting Workflow for a High-Stakes, Multi-Step Task

  • Choosing and Justifying a Prompting Strategy Under Context and Quality Constraints

  • Diagnosing and Redesigning a Prompting Approach for a Decomposed Workflow

  • Stabilizing an LLM Workflow for Multi-Step Policy Compliance Decisions

  • Debugging a Multi-Step LLM Workflow for Contract Clause Risk Triage

  • Designing a Robust Prompting Workflow for Multi-Step Root-Cause Analysis with Limited Examples

  • You’re building an internal LLM assistant to help ...

  • Your team is rolling out an internal LLM assistant...

  • You’re leading an internal enablement team buildin...

  • You’re building an internal LLM workflow to produc...

  • Provided Answer (12) to the Example Arithmetic Reasoning Word Problem

  • Provided Answer (10) to the Example Arithmetic Reasoning Word Problem

  • Initial State for the Apple Problem

  • In-Context Learning (ICL)

  • A language model is presented with the following problem: "Jack has 7 apples. He ate 2 of them for dinner, but then his mom gave him 5 more apples. The next day, Jack gave 3 apples to his friend John. How many apples does Jack have left in the end?" The model processes the problem and performs the calculation (7 + 5) - 2 - 3, arriving at the correct final answer of 7. Which of the following statements best analyzes the flaw in the model's problem-solving approach?

  • Initiating a CoT Response to the Jack's Apples Problem

  • A language model is tasked with solving the following word problem: 'Jack has 7 apples. He ate 2 of them for dinner, but then his mom gave him 5 more apples. The next day, Jack gave 3 apples to his friend John. How many apples does Jack have left in the end?' Arrange the following computational steps into the correct logical sequence that the model should follow to arrive at the final answer.

  • Analyzing a Flawed Arithmetic Reasoning Process

Learn After
  • A user wants a language model to solve the following multi-step arithmetic problem: 'A bookstore starts the day with 50 copies of a new novel. They sell 18 copies in the morning and receive a new shipment of 25 copies in the afternoon. If they then sell 12 more copies before closing, how many copies are left?' Which of the following prompts best demonstrates the one-shot chain-of-thought technique to guide the model towards the correct answer?

  • Constructing a One-Shot CoT Prompt for a Logic Puzzle

  • A researcher is using a language model to solve a simple logic puzzle. Their goal is to have the model determine the final position of an object after a series of movements.

    Puzzle: A red ball is in a box. The box is moved to the left, and then the ball is taken out and placed to the right of the box. Where is the ball now relative to its starting position?

    Researcher's Initial Prompt: Q: A red ball is in a box. The box is moved to the left, and then the ball is taken out and placed to the right of the box. Where is the ball now relative to its starting position? A:

    Model's Incorrect Answer: The ball is to the left of its starting position.

    The researcher realizes they need to provide a single, complete example to guide the model's reasoning. Which of the following revised prompts best applies this technique to help the model solve the puzzle correctly?