Learn Before
  • Text Completion (Continual Writing)

  • Example of a Prompt for Sequence Completion

Chain-of-Thought Prompting

Chain-of-Thought (CoT) prompting is a technique used to guide a large language model to detail its reasoning process before providing a final answer. This is often achieved by including a simple instruction like 'Let’s think step by step' in the prompt. By verbalizing its thought process, the model can break down complex problems into smaller, manageable parts, which often improves the accuracy and reliability of its conclusions.
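The idea can be sketched in a few lines of code. This is a minimal illustration, not a specific library's API: the helper below simply appends the step-by-step cue to a question before it would be sent to a model (the model call itself is omitted), using the farmer word problem discussed later on this page.

```python
def make_cot_prompt(question: str) -> str:
    """Append the cue that elicits chain-of-thought reasoning from the model."""
    return f"{question}\nLet's think step by step."

question = (
    "A farmer has 5 pens, and each pen holds 8 chickens. "
    "The farmer buys 10 more chickens. "
    "How many chickens does the farmer have in total?"
)

# The resulting prompt is what you would pass to your LLM of choice.
prompt = make_cot_prompt(question)
print(prompt)
```

Sending `prompt` instead of the bare `question` is what typically turns a one-line answer into the multi-step reasoning shown in the example responses below.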


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Related
  • Example of Text Completion: Story Continuation

  • Text Completion Based on User Requirements

  • Chain-of-Thought Prompting

  • A language model is provided with the following text: 'The old bookstore was a labyrinth of towering shelves and narrow aisles. The air smelled of aging paper and leather bindings. In the quietest corner, a single book lay open on a dusty table.' Which of the following model outputs best demonstrates the task of extending the given text in a coherent manner?

  • AI Writing Assistant Feature Analysis

  • Distinguishing Text Generation Tasks

  • Example of a Prompt for Nested Bracket Completion

  • Chain-of-Thought Prompting

  • A developer is using a language model to generate a simple, syntactically correct data structure. The developer provides the model with the starting text {"name": "Alex", and wants the model to finish it, for example, as "id": 123}. Which of the following prompts is most effectively structured to guide the model toward this specific kind of completion?

  • Debugging a Sequence Completion Prompt

  • Crafting a Sequence Completion Prompt

Learn After
  • Example of a Model Initiating Chain-of-Thought Reasoning

  • A user is trying to solve a word problem using a large language model. They use two different prompts and receive two different responses.

    Prompt 1: "A farmer has 5 pens, and each pen holds 8 chickens. The farmer buys 10 more chickens. How many chickens does the farmer have in total?" Response 1: "The farmer has 50 chickens in total."

    Prompt 2: "A farmer has 5 pens, and each pen holds 8 chickens. The farmer buys 10 more chickens. How many chickens does the farmer have in total? Let's think step by step." Response 2: "First, we find the initial number of chickens. The farmer has 5 pens with 8 chickens each, so 5 * 8 = 40 chickens. Then, the farmer buys 10 more chickens. So, we add those to the initial amount: 40 + 10 = 50 chickens. The farmer has 50 chickens in total."

    Based on the information provided, what is the most likely reason for the difference in the structure and detail of the two responses?

  • Modifying a Prompt for Step-by-Step Reasoning

  • Improving AI Reasoning for a Multi-Step Problem