Learn Before
Building the Encoded Representation of Input
A foundational step in sequence modeling is to process the input sequence to create an encoded representation, often referred to as the context. This representation synthesizes the information contained within the input tokens and serves as the basis for subsequent processing, such as generating an output.
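To make this concrete, the sketch below shows one way such an encoded representation could be built: token embeddings combined through a single self-attention step, so that each token's vector comes to reflect the rest of the input. The vocabulary, dimensionality, and random weights here are illustrative assumptions, not the workings of any particular model.

```python
# A minimal, illustrative sketch (not any specific model's implementation) of
# turning an input token sequence into an encoded representation ("context").
# The vocabulary, dimensions, and random weights are toy assumptions.
import numpy as np

rng = np.random.default_rng(0)

vocab = {"Ancient": 0, "Rome": 1, "was": 2, "a": 3, "civilization": 4}
d_model = 8                               # embedding / hidden size (assumed)
embedding = rng.normal(size=(len(vocab), d_model))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def encode(token_ids):
    """Return one context vector per input token via a single
    self-attention step over the token embeddings."""
    x = embedding[token_ids]              # (seq_len, d_model)
    scores = x @ x.T / np.sqrt(d_model)   # token-to-token similarity
    weights = softmax(scores, axis=-1)    # each token attends to every token
    return weights @ x                    # encoded representation (context)

ids = [vocab[w] for w in ["Ancient", "Rome", "was", "a", "civilization"]]
context = encode(ids)
print(context.shape)                      # (5, 8): one vector per input token
```

In this toy version the context keeps one vector per token; an output stage could then condition on these vectors to generate a continuation.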
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Examples of text generation
Decoding Methods to Generate Continuations in TGM
Stochastic decoding methods in TGM
Simultaneous Processing of Input Context Tokens
A user gives a language model the input: "Ancient Rome was a civilization known for its". The model then produces the following output: "engineering marvels, such as aqueducts and roads." Based on the two-stage process of text generation, which statement best analyzes this interaction?
Arrange the following stages into the correct sequence that describes how a language model generates text based on an initial input.
Analyzing a Code Generation Scenario
Learn After
A common approach in processing textual information is to convert an input sequence of words, which can vary in length from a few words to a long paragraph, into a single numerical summary of a fixed size. This summary is intended to capture the entire meaning of the input for further tasks. Which of the following describes the most significant potential limitation of this approach? (A minimal sketch of this approach appears after this list.)
Diagnosing Information Loss in Sequence Processing
The Role of Encoded Input Representations
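The question above describes compressing a variable-length input into a single fixed-size summary. The sketch below is a hypothetical illustration of that idea using mean-pooled stand-in embeddings; the dimensions and random values are assumptions made purely for demonstration.

```python
# Hypothetical sketch of the fixed-size summary approach described above:
# a variable-length input is collapsed into one vector of constant size.
# Dimensions and random "embeddings" are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_summary = 16                              # fixed summary size (assumed)

def summarize(tokens):
    """Embed each token, then mean-pool into one fixed-size vector."""
    embeddings = rng.normal(size=(len(tokens), d_summary))  # stand-in embeddings
    return embeddings.mean(axis=0)          # always shape (d_summary,)

short_input = "Ancient Rome".split()
long_input = ("Ancient Rome was a civilization known for its engineering "
              "marvels such as aqueducts and roads").split()

# Both summaries have the same size regardless of input length: the longer
# the input, the more information must be squeezed into the same few numbers.
print(summarize(short_input).shape, summarize(long_input).shape)
```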