Learn Before
Length Extrapolation in LLMs
Length extrapolation is the capacity of a large language model to effectively process, at inference time, sequences longer than those encountered during its training. Models may exhibit some degree of this ability after pre-training alone.
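As a minimal sketch of how this ability could be probed (not part of the original card): the snippet below feeds a causal language model progressively longer inputs and reports perplexity, so one can see where quality starts to degrade beyond the training context. The checkpoint name, the document file, the 4,096-token training length, and the length grid are placeholders, and it assumes a model whose positional scheme (e.g., rotary or relative positions) accepts inputs longer than its training context.

```python
# Hedged sketch: probe length extrapolation by measuring perplexity at
# increasing sequence lengths. Model name and file path are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "some-causal-lm"  # placeholder checkpoint trained on ~4,096-token contexts
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Any text noticeably longer than the model's training context.
long_text = open("long_document.txt").read()
ids = tokenizer(long_text, return_tensors="pt").input_ids[0]

for length in (2048, 4096, 6144, 8192):  # lengths below and above the training context
    chunk = ids[:length].unsqueeze(0)
    with torch.no_grad():
        loss = model(chunk, labels=chunk).loss  # average next-token cross-entropy
    print(f"{length:>5} tokens  perplexity = {torch.exp(loss).item():.2f}")
```

If the model extrapolates well, perplexity stays roughly flat past the training length; a sharp rise at longer inputs indicates the degradation described in the scenarios below.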
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.2 Generative Models - Foundations of Large Language Models
Related
Popular Methods for Adapting Pre-trained LLMs to Long Sequences
Strengths and Limitations of Long-Sequence Models
Pre-training and Fine-tuning Strategy for Long-Context Adaptation
Length Extrapolation in LLMs
Fine-Tuning for Architectural Adaptation in LLMs
A startup with limited computational resources and a tight deadline needs to build a system that can summarize lengthy legal documents. They have access to a powerful, general-purpose language model that was pre-trained on a massive dataset but primarily on shorter texts. Given their constraints, which of the following strategies is the most logical and efficient for them to pursue?
The primary reason for adapting existing pre-trained language models for long sequences, rather than training new models from scratch, is that pre-trained models inherently possess superior architectural designs for handling extended contexts.
Evaluating Model Development Strategies for Long-Text Analysis
Scaling Up via Long Sequence Adaptation
Fine-Tuning Pre-trained LLMs with Advanced Positional Embeddings
Learn After
Fine-tuning on Longer Sequences for Enhanced Length Extrapolation
Analyzing Model Performance on Long Documents
An AI development team trains a language model exclusively on documents with a maximum length of 4,096 tokens. After deployment, they are surprised to find that the model can coherently summarize documents up to 5,000 tokens long, but its performance degrades significantly on documents longer than 6,000 tokens. Which statement best analyzes this observation?
Explaining Unexpected Model Performance