Learn Before
Notation for Preceding Output Subsequence
In autoregressive language models, the notation $y_{<i}$ is commonly used to represent the subsequence of output tokens that precede position $i$. This sequence, consisting of $y_1, y_2, \ldots, y_{i-1}$, provides the context for predicting the token $y_i$. An alternative notation for this same subsequence is $y_{1:i-1}$.
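As a concrete illustration, here is a minimal Python sketch of how the prefix $y_{<i}$ arises during generation. The function `next_token_distribution` is a hypothetical stand-in for a real model's prediction step, and the toy vocabulary is assumed purely for this example.

```python
# Minimal sketch of how y_{<i} serves as context in autoregressive
# generation. next_token_distribution is a hypothetical stand-in for a
# real model's prediction step P(y_i | y_{<i}).

def next_token_distribution(context):
    """Placeholder: a real model would score the whole vocabulary here."""
    # Fixed dummy distribution, independent of context, for illustration.
    return {"our": 0.6, "the": 0.3, ".": 0.1}

# Tokens generated so far: y_1, y_2, y_3 (1-based indexing).
y = ["AI", "is", "transforming"]
i = 4  # position of the next token to predict

# y_{<i} is simply the slice of all tokens before position i,
# i.e. y_1 ... y_{i-1}.
y_prefix = y[: i - 1]
print(y_prefix)  # ['AI', 'is', 'transforming']

# The model conditions on this prefix to predict y_i.
dist = next_token_distribution(y_prefix)
y_next = max(dist, key=dist.get)
print(y_next)  # 'our'
```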
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Formal Definition of LLM Inference
Notation for Preceding Output Subsequence
Deconstructing a Model's Generated Text
Representing Model Output as a Token Sequence
A Large Language Model generates the sentence: 'AI is transforming our world.' How is this output fundamentally structured by the model before being presented to the user? (See the sketch after this list.)
Separating Input and Output Variables in LLM Formulation
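To make the token-sequence question above concrete, here is a small sketch, assuming a toy whitespace-level vocabulary and hand-picked token IDs; real LLMs use learned subword tokenizers, so this illustrates only the structure of the output.

```python
# Simplified illustration: a model's output is a sequence of tokens
# (internally, token IDs), which is detokenized into text only at the end.
# Real LLMs use subword tokenizers; whitespace splitting is a stand-in.

# A toy vocabulary mapping tokens to integer IDs (purely illustrative).
vocab = {"AI": 0, "is": 1, "transforming": 2, "our": 3, "world": 4, ".": 5}
id_to_token = {v: k for k, v in vocab.items()}

# The model generates IDs one at a time: y_1, y_2, ..., y_n.
generated_ids = [0, 1, 2, 3, 4, 5]

# Only when presenting the output to the user are IDs mapped back to text.
tokens = [id_to_token[t] for t in generated_ids]
text = " ".join(tokens[:-1]) + tokens[-1]  # crude detokenization of "."
print(text)  # AI is transforming our world.
```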
Learn After
Construction of the Optimal Sequence in Greedy Search (illustrated in the sketch after this list)
An autoregressive model generates the token sequence $y_1, y_2, \ldots, y_n$, where $y_1$ denotes the first generated token, $y_2$ the second, and so on. What does the notation $y_{<i}$ represent in this sequence?
True or False: For an autoregressive model generating the output sequence $y_1, y_2, \ldots, y_n$, the notation $y_{<i}$ represents the complete subsequence $y_1, y_2, \ldots, y_{i-1}$.
Formula for Constructing Top-K Candidate Sequences
An autoregressive language model is generating a sequence of tokens, one at a time. To predict the fifth token in the sequence, denoted as $y_5$, the model uses all the previously generated tokens as context. The standard notation for this preceding subsequence of tokens is ____.
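Since the cards above turn to greedy search, the following sketch shows how greedy decoding repeatedly conditions on the prefix $y_{<i}$ and picks the single highest-probability token at each step. The `score` function is a hypothetical stand-in for a model's conditional distribution $P(y_i \mid y_{<i})$, not a real LLM.

```python
# Minimal greedy-search sketch. score() is a hypothetical stand-in for a
# model's conditional distribution P(y_i | y_{<i}); a real LLM would
# compute this with a neural network over the whole vocabulary.

VOCAB = ["AI", "is", "transforming", "our", "world", ".", "<eos>"]

def score(prefix, token):
    """Toy scorer: prefer tokens that continue the example sentence."""
    target = ["AI", "is", "transforming", "our", "world", ".", "<eos>"]
    i = len(prefix)
    return 1.0 if i < len(target) and target[i] == token else 0.01

def greedy_decode(max_len=10):
    y = []  # the growing output sequence y_1 y_2 ...
    for _ in range(max_len):
        # y (so far) is exactly the prefix y_{<i} for the next position i.
        probs = {tok: score(y, tok) for tok in VOCAB}
        best = max(probs, key=probs.get)  # greedy: argmax at every step
        if best == "<eos>":
            break
        y.append(best)
    return y

print(greedy_decode())  # ['AI', 'is', 'transforming', 'our', 'world', '.']
```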