Learning Output Formatting from Demonstrations
In few-shot prompting, large language models learn from the demonstrations of problem-solution pairs provided in the prompt. These demonstrations teach the model not only the underlying problem-solving logic but also the specific format of the generated output. For instance, a model learns to wrap calculation annotations in special tokens (like <<48/2=24>>) and to demarcate the final answer (like ####) by observing these patterns in the few-shot examples.
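A minimal sketch of how such a prompt is assembled and parsed, assuming the GSM8K-style conventions mentioned above (calculator annotations in <<...>> tokens and a #### delimiter before the final answer); the helper names here are illustrative, not a specific library's API:

```python
# Illustrative few-shot prompt construction with GSM8K-style formatting.
# The demonstration below is a hypothetical example in that style.
DEMONSTRATIONS = [
    (
        "Natalia sold 48 clips in April, and half as many in May. "
        "How many clips did she sell altogether?",
        "In May she sold 48 / 2 = <<48/2=24>>24 clips. "
        "Altogether she sold 48 + 24 = <<48+24=72>>72 clips. #### 72",
    ),
]

def build_prompt(demonstrations, question):
    """Concatenate problem-solution demonstrations, then the new question."""
    parts = [f"Q: {q}\nA: {a}" for q, a in demonstrations]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)

def extract_final_answer(completion):
    """Recover the answer the model demarcated with ####."""
    return completion.split("####")[-1].strip()
```

Because every demonstration ends with "#### <answer>", a model that imitates the pattern produces completions that `extract_final_answer` can parse reliably.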
Tags
Foundations of Large Language Models
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
A developer is trying to get a language model to extract product codes from customer emails. They provide the following examples in the prompt before asking the model to process a new email:
Example 1: Input: 'Hi, my SuperWidget model SW-1000 is broken.' Output: 'SW-1000'
Example 2: Input: 'I need a replacement part for my SuperWidget Pro, model number SW-2500.' Output: 'SW-2500'
New Email: Input: 'My GigaGadget GG-500 won't turn on.'
The model incorrectly outputs 'SW-500'. Based on an analysis of the provided examples, what is the most likely reason for this error?
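One way to see the skew in these demonstrations is to inspect the outputs programmatically; a brief sketch (the variable names are illustrative):

```python
# The developer's two demonstrations, as (input, expected output) pairs.
examples = [
    ("Hi, my SuperWidget model SW-1000 is broken.", "SW-1000"),
    ("I need a replacement part for my SuperWidget Pro, model number SW-2500.",
     "SW-2500"),
]

# Every demonstration output shares one prefix family ("SW-"), so the
# model may learn the surface pattern "answers start with SW-" rather
# than "copy the code verbatim from the input".
prefixes = {out.split("-")[0] for _, out in examples}
```

With only "SW-" codes demonstrated, the prompt never shows the model how to handle a different prefix such as "GG-".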
Evaluating Prompt Demonstrations
Evaluating and Improving Prompt Demonstrations
Learning Output Formatting from Demonstrations