
Search Tree Structure in Token Generation

In NLP, the search space $\mathcal{Y}$ of output sequences is commonly structured as a search tree. Each node in this tree represents a prefix subsequence, and its child nodes are formed by appending one token from the vocabulary, with each edge labeled by the probability of predicting that token. The tree is organized into levels, where nodes at the same level correspond to sequences of identical length. As generation progresses, new tokens are incrementally appended, so the tree grows deeper and wider, representing an ever-larger number of possible sequence extensions.
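The structure described above can be sketched in a few lines of Python. This is a minimal illustration, not an implementation from the course material: the vocabulary, the `Node` class, and the uniform `next_token_probs` stand-in for a language model's next-token distribution are all assumptions made for the example. It shows how level $k$ of the tree holds all length-$k$ prefixes, and how the tree widens by a factor of the vocabulary size at each level.

```python
import math

# Toy vocabulary (assumption: a hand-made stand-in for a real tokenizer's
# vocabulary; "</s>" plays the role of an end-of-sequence token).
VOCAB = ["a", "b", "</s>"]

def next_token_probs(prefix):
    # Uniform toy distribution; a real language model would condition
    # its softmax output on the prefix.
    p = 1.0 / len(VOCAB)
    return {tok: p for tok in VOCAB}

class Node:
    """One node of the search tree: a prefix plus its cumulative score."""
    def __init__(self, prefix, logprob=0.0):
        self.prefix = prefix      # the partial sequence this node represents
        self.logprob = logprob    # cumulative log-probability of the prefix
        self.children = []

    def expand(self):
        # One child per vocabulary token; the edge carries that
        # token's predicted probability (stored here as a log-prob).
        for tok, p in next_token_probs(self.prefix).items():
            self.children.append(
                Node(self.prefix + [tok], self.logprob + math.log(p))
            )
        return self.children

# Build the first two levels: level k contains all length-k prefixes.
root = Node([])
level1 = root.expand()
level2 = [child for node in level1 for child in node.expand()]

print(len(level1))  # 3 nodes: one per vocabulary token
print(len(level2))  # 9 nodes: the tree widens as |V|^depth
```

Because the number of nodes grows exponentially with depth, practical decoders never materialize the full tree; search strategies such as greedy or beam search explore only a small, high-probability portion of it.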

Updated 2026-05-03

Tags

Ch.5 Inference - Foundations of Large Language Models


Computing Sciences