Learn Before
Example of a Zero-Shot CoT Prompt
A zero-shot Chain-of-Thought prompt elicits reasoning without providing solved examples. It typically consists of a question followed by an instruction to trigger a step-by-step thought process. For instance:
Q: Please calculate the average of the numbers 2, 4, and 9.
A: Let’s think step by step.
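The structure above can be sketched in code. The helper below is a minimal illustration (the function name is hypothetical, not from the course): it appends the generic trigger phrase to any question, producing a zero-shot CoT prompt with no solved examples.

```python
def make_zero_shot_cot_prompt(question: str) -> str:
    """Build a zero-shot CoT prompt: the question plus a generic
    reasoning trigger, with no solved example included."""
    return f"Q: {question}\nA: Let's think step by step."

prompt = make_zero_shot_cot_prompt(
    "Please calculate the average of the numbers 2, 4, and 9."
)
print(prompt)
# The model is then expected to reason: (2 + 4 + 9) / 3 = 15 / 3 = 5.
```

The key point is that the trigger phrase is task-agnostic: the same suffix works for arithmetic, logic puzzles, or word problems, which is what distinguishes zero-shot CoT from few-shot CoT.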
References
Reference of Foundations of Large Language Models Course
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.3 Prompting - Foundations of Large Language Models
Related
Example of a Zero-Shot CoT Prompt
Comparison of Few-Shot and Zero-Shot CoT Prompting
Alternative Phrases for Triggering Chain-of-Thought Reasoning
A user wants a large language model to solve a multi-step word problem. The model's initial attempts provide only a final, incorrect answer. The user's goal is to modify the prompt to encourage the model to generate a detailed, step-by-step thought process first, which should lead to a more accurate final answer. Crucially, the user does not want to include a complete, solved example of another problem in the prompt. Which of the following prompt modifications best achieves this specific goal?
To successfully prompt a language model to generate a step-by-step thought process for a new problem, one must always include a complete, solved example of a similar problem within the prompt.
Structure of a Zero-Shot CoT Prompt for an Arithmetic Task
Identifying a Zero-Shot Reasoning Prompt
Your team is rolling out an internal LLM assistant...
You’re building an internal LLM workflow to produc...
You’re building an internal LLM assistant to help ...
You’re leading an internal enablement team buildin...
Choosing and Justifying a Prompting Strategy Under Context and Quality Constraints
Designing a Prompting Workflow for a High-Stakes, Multi-Step Task
Diagnosing and Redesigning a Prompting Approach for a Decomposed Workflow
Stabilizing an LLM Workflow for Multi-Step Policy Compliance Decisions
Debugging a Multi-Step LLM Workflow for Contract Clause Risk Triage
Designing a Robust Prompting Workflow for Multi-Step Root-Cause Analysis with Limited Examples
Zero-Shot CoT Example with Jack's Apples
Learn After
A user wants a language model to solve a simple logic puzzle and see the model's reasoning process. Crucially, the user does not want to provide any examples of how to solve similar puzzles. The user's initial query is: 'If a red house is made of red bricks and a blue house is made of blue bricks, what is a green house made of?' Which of the following modified prompts best applies the simple, well-known technique for eliciting a step-by-step thought process without providing any examples?
Critiquing a Prompt for Eliciting Reasoning
Constructing a Reasoning-Eliciting Prompt