Learn Before
Text Generation from an Initial Context
The process of generating text with a language model involves two main stages. First, an initial sequence of tokens, referred to as the context or prefix, is established. The model then generates a subsequent sequence of tokens, known as the continuation, conditioned on this initial input.
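The two stages can be sketched with a minimal, self-contained example. The toy transition table below stands in for a real language model and is purely hypothetical; the point is the separation between the fixed context and the autoregressively generated continuation.

```python
# Toy stand-in for a language model: maps the last token to a "most likely"
# next token. A real model would score next tokens over the whole context.
TOY_MODEL = {
    "Ancient": "Rome",
    "Rome": "was",
    "was": "a",
    "a": "civilization",
    "civilization": "<eos>",
}

def generate(context_tokens, max_new_tokens=10):
    # Stage 1: the context/prefix is fixed and provided by the user.
    tokens = list(context_tokens)
    # Stage 2: the continuation is produced one token at a time,
    # each new token conditioned on the sequence so far.
    for _ in range(max_new_tokens):
        next_tok = TOY_MODEL.get(tokens[-1], "<eos>")
        if next_tok == "<eos>":
            break
        tokens.append(next_tok)
    # Return only the continuation, not the original context.
    return tokens[len(context_tokens):]

print(generate(["Ancient", "Rome"]))  # → ['was', 'a', 'civilization']
```

The same loop structure underlies real decoding methods; they differ only in how the next token is chosen (greedily here, or by sampling in stochastic methods).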
Tags
Data Science
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Learn After
Examples of text generation
Decoding Methods to Generate Continuations in TGM
Stochastic decoding methods in TGM
Simultaneous Processing of Input Context Tokens
Building the Encoded Representation of Input
A user gives a language model the input: "Ancient Rome was a civilization known for its". The model then produces the following output: "engineering marvels, such as aqueducts and roads." Based on the two-stage process of text generation, which statement best analyzes this interaction?
Arrange the following stages into the correct sequence that describes how a language model generates text based on an initial input.
Analyzing a Code Generation Scenario