Learn Before
Concept

Learning Output Formatting from Demonstrations

In few-shot prompting, Large Language Models learn from the provided demonstrations of problem-solution pairs. These demonstrations teach the model not only the underlying problem-solving logic but also the specific way to format its generated output. For instance, a model learns to use special tokens for calculation annotations (like `<<...>>`) and to demarcate the final answer (like `####`) by observing these patterns in the few-shot examples.
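A minimal sketch of this idea, assuming GSM8K-style formatting conventions: the demonstrations below embed `<<...>>` calculation annotations and a `####` final-answer marker, and a small parser recovers the answer from a completion that imitates that format. The prompt text and the `extract_final_answer` helper are illustrative, not from the original.

```python
# Few-shot prompt whose demonstrations teach output formatting:
# <<...>> marks calculations, #### marks the final answer (GSM8K style).
import re

FEW_SHOT_PROMPT = """\
Q: A pen costs $2. How much do 3 pens cost?
A: 3 pens cost 3 * 2 = <<3*2=6>>6 dollars.
#### 6

Q: Tom has 5 apples and eats 2. How many are left?
A: Tom has 5 - 2 = <<5-2=3>>3 apples left.
#### 3

Q: A book costs $4. How much do 2 books cost?
A:"""

def extract_final_answer(completion: str) -> str:
    """Return the text after the #### marker demonstrated in the examples."""
    match = re.search(r"####\s*(.+)", completion)
    return match.group(1).strip() if match else ""

# A model conditioned on FEW_SHOT_PROMPT tends to imitate the demonstrated
# format, e.g. producing a completion like the one below.
sample_completion = "2 books cost 2 * 4 = <<2*4=8>>8 dollars.\n#### 8"
print(extract_final_answer(sample_completion))  # -> 8
```

Because the demarcation pattern is consistent across all demonstrations, the generated answer can be extracted reliably with a simple regular expression rather than free-form parsing.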

Updated 2026-04-30

Tags

Foundations of Large Language Models

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences