Learn Before
Period Adjustment in Position Interpolation
The core mechanism of position interpolation involves modifying the period of the positional encoding functions. By scaling the period up, the model can map positions from sequences longer than its training data into the range of positions it learned during training.
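A minimal sketch of this idea, assuming sinusoidal encodings with base 10000 and hypothetical lengths of 2048 (training) and 4096 (target); the function name sinusoidal_encoding and the period_scale parameter are illustrative, not from the original material:

```python
import numpy as np

def sinusoidal_encoding(position, d_model, period_scale=1.0):
    """Sinusoidal positional encoding with an adjustable period.

    period_scale > 1 stretches the period of every sine/cosine pair,
    which is the adjustment position interpolation relies on.
    """
    i = np.arange(d_model // 2)
    # Angular frequency of each dimension pair; stretching the period
    # divides the frequency by the same factor.
    freqs = 1.0 / (10000 ** (2 * i / d_model)) / period_scale
    angles = position * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)])

# Hypothetical lengths: trained on 2048 tokens, new input is 4096 tokens.
train_len, new_len = 2048, 4096
scale = new_len / train_len  # period scaled up by a factor of 2

# With the stretched period, position 4095 produces the same encoding the
# model would have produced for position 2047.5, so every position of the
# longer sequence falls inside the learned range [0, 2047].
stretched = sinusoidal_encoding(4095, d_model=8, period_scale=scale)
original = sinusoidal_encoding(4095 / scale, d_model=8)
assert np.allclose(stretched, original)
```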
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Position Interpolation Mapping for Longer Sequences
Period Adjustment in Position Interpolation
Position Interpolation by Scaling the RoPE Base
A large language model was trained exclusively on documents with a maximum length of 2048 tokens. An engineer now needs to use this pre-trained model to process a new document that is 4096 tokens long without altering the model's architecture or retraining it. If the engineer applies a position interpolation technique, what is the fundamental objective of this action?
Analyzing Performance Degradation with Long Sequences
Evaluating a Strategy for Extending Context Length
Example of Interpolation by Scaling Positions
Learn After
Formula for Scaling the Period in Position Interpolation
A language model was originally trained to handle text up to a maximum of 4096 tokens. To enable it to process a document with 8192 tokens without retraining, a modification is made to its positional encoding functions. Based on the principles of position interpolation, which statement best describes the nature and effect of this modification?
Analyzing a Positional Encoding Modification
Mechanism of Position Interpolation
To enable a language model to process sequences longer than its original training limit, the period of its positional encoding functions must be increased. This adjustment ensures that the new, more distant positions are mapped within the range the model has already learned.
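A minimal sketch of the equivalent view, in which the position indices are scaled down rather than the period being stretched explicitly; the lengths 4096 and 8192 echo the question above, and the helper interpolate_positions is illustrative, not from the original material:

```python
def interpolate_positions(positions, trained_max_len, new_max_len):
    """Map positions of a longer sequence back into the trained range.

    Increasing the period by a factor of new_max_len / trained_max_len is
    equivalent to shrinking every position index by that same factor.
    """
    scale = trained_max_len / new_max_len
    return [p * scale for p in positions]

# Hypothetical lengths matching the question above: 4096 trained, 8192 needed.
print(interpolate_positions([0, 4096, 8191],
                            trained_max_len=4096, new_max_len=8192))
# -> [0.0, 2048.0, 4095.5], all within the learned range [0, 4095]
```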