Left-to-Right Token Generation Process

To generate a continuous piece of text, a language model performs next-token prediction repeatedly. At each step, the model predicts the most likely token $\hat{x}_i$ and appends it to the existing context. The expanded context is then used to predict the subsequent token $\hat{x}_{i+1}$. Repeating this step over and over yields a left-to-right generation process.
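
The loop below is a minimal sketch of this process, assuming greedy decoding (always taking the highest-scoring token). The function `next_token_logits` and the toy vocabulary are hypothetical stand-ins for a trained model's forward pass, not any particular library's API; the point is only the iterative append-and-repredict structure.

```python
import random

VOCAB = ["<eos>", "the", "model", "predicts", "next", "token"]

def next_token_logits(context: list[str]) -> list[float]:
    # Hypothetical stand-in for a trained model: scores every vocabulary
    # item given the current context. A real model would run a forward
    # pass here; we derive pseudo-scores from the context length so the
    # sketch is runnable on its own.
    random.seed(len(context))
    return [random.uniform(-1.0, 1.0) for _ in VOCAB]

def generate(prompt: list[str], max_new_tokens: int = 5) -> list[str]:
    context = list(prompt)
    for _ in range(max_new_tokens):
        logits = next_token_logits(context)
        # Greedy decoding: pick the single highest-scoring token x_hat_i.
        best = max(range(len(VOCAB)), key=lambda i: logits[i])
        token = VOCAB[best]
        if token == "<eos>":       # the model signals the end of the sequence
            break
        context.append(token)      # the expanded context drives the next step
    return context

print(generate(["the", "model"]))
```

In practice the greedy `max` is often replaced by sampling from the predicted distribution, but the left-to-right structure (predict, append, repredict) is the same.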

