Example
Example of Left-to-Right Token Generation
To illustrate the left-to-right generation process, consider generating three tokens $y_1$, $y_2$, and $y_3$ given an initial prefix $x$. At each step, the language model picks the token from the vocabulary $V$ that maximizes the conditional probability of the next token given the current context, and appends it to the end of the context sequence.
- Step 1: Given the context $x$, the model predicts $y_1 = \arg\max_{y \in V} \Pr(y \mid x)$. The overall sequence probability becomes $\Pr(y_1 \mid x)$.
- Step 2: With the new context $x, y_1$, the model predicts $y_2 = \arg\max_{y \in V} \Pr(y \mid x, y_1)$, updating the sequence probability by multiplying it by $\Pr(y_2 \mid x, y_1)$.
- Step 3: Based on the expanded context $x, y_1, y_2$, the model predicts $y_3 = \arg\max_{y \in V} \Pr(y \mid x, y_1, y_2)$, again multiplying the total sequence probability by $\Pr(y_3 \mid x, y_1, y_2)$, so that $\Pr(y_1, y_2, y_3 \mid x) = \Pr(y_1 \mid x)\,\Pr(y_2 \mid x, y_1)\,\Pr(y_3 \mid x, y_1, y_2)$.
This demonstrates how each predicted token is iteratively added to the context to inform the next prediction.
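Below is a minimal Python sketch of this greedy, left-to-right loop. The vocabulary, the toy probability table, and the `next_token_probs` function are all hypothetical stand-ins for a real language model, which would return a full distribution over its vocabulary at each step.

```python
# Minimal sketch of greedy left-to-right generation (illustrative only).
# VOCAB, the toy probability table, and next_token_probs are hypothetical
# stand-ins for a real language model's output distribution.

VOCAB = ["the", "cat", "sat", "down", "<eos>"]

def next_token_probs(context):
    """Return a toy distribution Pr(y | context) over VOCAB.
    A real LM would compute this with a neural network."""
    toy_table = {
        ("the",): {"cat": 0.6, "sat": 0.1, "down": 0.1, "the": 0.1, "<eos>": 0.1},
        ("the", "cat"): {"sat": 0.7, "down": 0.1, "the": 0.1, "cat": 0.05, "<eos>": 0.05},
        ("the", "cat", "sat"): {"down": 0.8, "<eos>": 0.1, "the": 0.05, "cat": 0.03, "sat": 0.02},
    }
    # Fall back to a uniform distribution for unseen contexts.
    return toy_table.get(tuple(context), {tok: 1.0 / len(VOCAB) for tok in VOCAB})

def generate(prefix, num_tokens):
    """Greedy decoding: at each step pick arg max Pr(y | context) and
    multiply it into the running sequence probability."""
    context = list(prefix)
    seq_prob = 1.0
    for _ in range(num_tokens):
        probs = next_token_probs(context)
        best_token = max(probs, key=probs.get)   # arg max over the vocabulary
        seq_prob *= probs[best_token]            # Pr(y_1 ... y_i | prefix)
        context.append(best_token)               # extend the context
    return context, seq_prob

tokens, prob = generate(["the"], 3)
print(tokens, prob)   # ['the', 'cat', 'sat', 'down'] 0.336
```

The running product `seq_prob` is exactly the factorized sequence probability built up in Steps 1 through 3 above: each greedy choice contributes one conditional factor.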