Analyzing Performance Degradation with Long Sequences
A large language model was trained on text segments with a maximum length of 4096 tokens. When this model is later used to process a document of 8192 tokens, its performance drops significantly. From the perspective of how the model understands token order, explain the likely reason for this failure and describe the core objective of the position interpolation technique used to fix it.
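Position interpolation addresses this failure by rescaling position indices so that a longer sequence reuses the position range the model saw during training, rather than extrapolating past it. Below is a minimal numeric sketch, assuming the 4096/8192 setup from the question and RoPE-style rotary angles; the function name `rope_angles` and the `base` value are illustrative, not taken from any particular library.

```python
# A minimal sketch of position interpolation, assuming the model uses
# Rotary Position Embeddings (RoPE). Names here are illustrative.
import numpy as np

def rope_angles(positions, dim=64, base=10000.0):
    """Angles m * theta_i that RoPE uses to rotate query/key pairs,
    with frequencies theta_i = base**(-2i/dim)."""
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)  # shape: (dim//2,)
    return np.outer(positions, inv_freq)              # shape: (len(positions), dim//2)

train_len, target_len = 4096, 8192

# Naive extrapolation: positions 4096..8191 produce angles the model
# never encountered during training, so attention behavior degrades.
extrapolated = rope_angles(np.arange(target_len))

# Position interpolation: shrink every index by train_len / target_len
# (0.5 here), squeezing all 8192 positions into the trained range [0, 4096).
scale = train_len / target_len
interpolated = rope_angles(np.arange(target_len) * scale)

assert (np.arange(target_len) * scale).max() < train_len  # stays in-range
```

The rescaling itself requires no architectural change, but neighboring tokens now sit half a position apart, so positional resolution is coarser; in practice the technique is often paired with a brief fine-tuning pass to restore full quality.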
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Position Interpolation Mapping for Longer Sequences
Period Adjustment in Position Interpolation
Position Interpolation by Scaling the RoPE Base
A large language model was trained exclusively on documents with a maximum length of 2048 tokens. An engineer now needs to use this pre-trained model to process a new document that is 4096 tokens long without altering the model's architecture or retraining it. If the engineer applies a position interpolation technique, what is the fundamental objective of this action?
Evaluating a Strategy for Extending Context Length
Example of Interpolation by Scaling Positions