Chain-of-Thought (CoT) Prompting
Chain-of-Thought (CoT) prompting is a method that encourages large language models to produce explicit intermediate reasoning steps before generating a final answer. Built on top of techniques such as few-shot prompting, it tackles complex reasoning problems by prompting the model to decompose a task into simpler sub-steps. This explicit decomposition makes outputs both more accurate and more interpretable.
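The idea can be made concrete with a minimal sketch. Below, a few-shot CoT prompt prepends one worked, step-by-step demonstration before the new question, while a zero-shot CoT variant simply appends the trigger phrase "Let's think step by step." The worked example and question texts are illustrative; the actual LLM call is omitted and only the prompt strings are built.

```python
# Minimal sketch of chain-of-thought (CoT) prompt construction.
# The demonstration problem below is illustrative.

COT_DEMONSTRATION = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each.
How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 balls each is 6 balls.
5 + 6 = 11. The answer is 11.
"""


def build_direct_prompt(question: str) -> str:
    """Baseline: ask for the answer with no reasoning demonstration."""
    return f"Q: {question}\nA:"


def build_few_shot_cot_prompt(question: str) -> str:
    """Prepend a worked demonstration so the model imitates the
    step-by-step reasoning format before answering `question`."""
    return f"{COT_DEMONSTRATION}\nQ: {question}\nA:"


def build_zero_shot_cot_prompt(question: str) -> str:
    """Zero-shot CoT: no demonstration, only a reasoning trigger phrase."""
    return f"Q: {question}\nA: Let's think step by step."


question = (
    "A bakery had 20 muffins. They sold 12 and baked 3 dozen more. "
    "How many muffins are there now?"
)
print(build_few_shot_cot_prompt(question))
```

Sent to a model, the few-shot variant typically elicits a written-out derivation (e.g. 20 - 12 = 8, then 8 + 36 = 44) rather than a bare, and often wrong, final number.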
References
Reference of Foundations of Large Language Models Course
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.3 Prompting - Foundations of Large Language Models
Ch.5 Inference - Foundations of Large Language Models
Related
Effect of 'Thinking' Prompts on LLM Performance
Chain-of-Thought (COT) Prompting
Multi-Round Interaction to Guide LLM Reasoning
Example of a Prompt for a Direct Mathematical Calculation
Example of a Prompt for Calculating the Average of 1, 3, 5, and 7
Example of a Prompt for Calculating the Mean Square
Improving LLM Reasoning with Step-by-Step Demonstrations
In-Context Learning (ICL)
A user is trying to get a Large Language Model (LLM) to solve a complex word problem that involves multiple calculations. Their initial prompt, 'What is the answer to this problem? [Problem text]', results in a quick but incorrect numerical answer. The user then revises the prompt to: 'First, break down the problem into the necessary steps. Then, solve each step, showing your work. Finally, state the final answer. [Problem text]'. This revised prompt leads to a correct solution. Which principle of interacting with LLMs does this scenario best illustrate?
Evaluating Prompt Strategies for a Logic Puzzle
Prompting for a Reasoning Process to Mitigate Errors in Complex Tasks
Example of a Prompt for Calculating the Average of 2, 4, and 9
Improving LLM Problem-Solving by Demonstrating Reasoning Steps
The Mechanism of Reasoning Prompts
Example of a Prompt with Detailed Reasoning Steps
A user provides a language model with the following prompt: 'A bakery had 80 cookies at the start of the day. They sold 35 cookies in the morning and baked a fresh batch of 50 more. In the afternoon, they sold another 40 cookies. How many cookies are left?' Which of the following model responses best demonstrates an approach where a complete, multi-step reasoning path and the final conclusion are generated together in a single, continuous output?
Chatbot Reasoning Strategy Analysis
Evaluating a Reasoning Strategy for a Customer Support Chatbot
Few-Shot Learning in Prompting
Strategic Information Management in Context Scaling
A developer is using a large language model to classify customer feedback. The model is struggling with ambiguous statements. For the input 'The setup process was a bit of a journey,' the model inconsistently provides different classifications. Which of the following revised inputs best demonstrates the principle of improving performance by extending the model's context with helpful prior information?
Optimizing a Creative Writing Assistant
The Role of Input Context in Model Prediction Quality
Context Scaling via Dynamic External Knowledge
Explicitly Prompting for a Reasoning Process to Prevent Errors
A user wants a language model to solve a multi-step math word problem. The user's prompt includes an example of a different, but structurally similar, word problem along with its final numerical answer. Despite this example, the model fails to solve the new problem correctly. Which statement best analyzes the most probable cause of the model's failure?
Analyzing a Failed Prompt for a Logic Puzzle
Diagnosing LLM Prompting Failures
Example of a Probability-Based Word Problem for LLMs
Example of a Multi-Step Arithmetic Word Problem (Swimming Pool)
Example of a Mathematical Reasoning Word Problem (Jessica's Apps)
Example of a Multi-Step Arithmetic Word Problem (Tom's Marbles)
A large language model was given the following word problem: 'A bakery had 20 muffins. They sold 12 muffins and then baked 3 dozen more. How many muffins does the bakery have now?' The model produced this response: 'First, we start with 20 muffins. They sold 12, so 20 - 12 = 8. Then they baked 3 more, so 8 + 3 = 11. The final answer is 11.' Which statement best analyzes the primary reasoning failure in the model's response?
Example of a Multi-Step Arithmetic Word Problem (Jack's Apples)
Evaluating LLM Arithmetic Inference
A language model is tasked with solving arithmetic word problems. Below are common types of errors it might make when translating language into a sequence of mathematical operations. Match each error type with the scenario that best exemplifies it.
Improving Narrative Coherence in AI-Generated Stories
A developer observes that a language model is generating summaries of long articles that lack detail and miss key points. To address this, they modify the inference process to provide the model with the full, unabridged article text instead of a shorter, pre-processed version. Which statement best analyzes why this modification is likely to improve the quality of the generated summary?
Evaluating Context Expansion for a Chatbot
Retrieval-Augmented Generation (RAG)
Learn After
Application of COT Prompting on GSM8K Benchmark
Structuring Logical Reasoning Steps for Demonstrations
Zero-Shot Chain-of-Thought (COT) Prompting
Application of CoT to Algebraic Calculation Problems
Benefits of Chain-of-Thought (CoT) Prompting
Incomplete Answers from Zero-Shot CoT Prompts
Chain-of-Thought as a Search Process
Supervising Intermediate Reasoning Steps for LLM Alignment
Limitations of Simple Chain-of-Thought Prompting
Creating a CoT Prompt by Incorporating Reasoning Steps
Alternative Trigger Phrases for Zero-Shot CoT Prompting
Incomplete Answers as a Potential Issue in Zero-Shot CoT Prompting
A developer is trying to improve a language model's ability to solve multi-step word problems. They compare two prompting strategies.
Strategy 1: Provide the model with a new word problem and ask for the final answer directly.
Strategy 2: Provide the model with a new word problem, but first show it an example of a similar problem where the solution is explicitly broken down into logical, sequential steps before reaching the final conclusion.
Why is Strategy 2 generally more effective for improving the model's reasoning on complex tasks?
Improving a Prompt for a Multi-Step Problem
Few-Shot Chain-of-Thought (CoT) Prompting
Practical Limitations of Chain-of-Thought Prompting
The primary benefit of a prompting technique that demonstrates a step-by-step reasoning process is that it permanently modifies the language model's internal weights, making it inherently better at solving similar problems in the future, even without the detailed prompt.
Designing a Prompting Workflow for a High-Stakes, Multi-Step Task
Choosing and Justifying a Prompting Strategy Under Context and Quality Constraints
Diagnosing and Redesigning a Prompting Approach for a Decomposed Workflow
Stabilizing an LLM Workflow for Multi-Step Policy Compliance Decisions
Debugging a Multi-Step LLM Workflow for Contract Clause Risk Triage
Designing a Robust Prompting Workflow for Multi-Step Root-Cause Analysis with Limited Examples
You’re building an internal LLM assistant to help ...
Your team is rolling out an internal LLM assistant...
You’re leading an internal enablement team buildin...
You’re building an internal LLM workflow to produc...
Example of One-Shot Chain-of-Thought (COT) Prompting
Problem-Solving Scenarios for Chain-of-Thought Prompting
Self-Consistency Method