Learn Before
Fine-Tuning Pre-trained LLMs with Advanced Positional Embeddings
Large language models that use relative or rotary positional embeddings can be pre-trained on extensive datasets. Although such pre-trained models may show some capacity to extrapolate to sequence lengths unseen during training, fine-tuning them on longer sequences is generally a more effective way to adapt them to extended contexts.
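A minimal sketch (assuming PyTorch; not tied to any particular model's implementation) of why rotary positional embeddings make this adaptation natural: the rotation angles depend only on the position index and a fixed frequency formula, so no new parameters are needed when fine-tuning on longer sequences, only larger position indices.

```python
# Sketch of rotary positional embeddings (RoPE) applied to an input of
# arbitrary sequence length. Function and variable names are illustrative.
import torch

def rotary_embedding(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary position embedding to x of shape (seq_len, dim).

    The rotation angle for each position is computed on the fly, so the same
    formula covers any seq_len: longer fine-tuning sequences simply use larger
    position indices, with no additional learned parameters.
    """
    seq_len, dim = x.shape
    half = dim // 2
    # Per-dimension rotation frequencies, base^(-2i/dim) as in the RoPE formulation.
    inv_freq = 1.0 / (base ** (torch.arange(0, half, dtype=torch.float32) / half))
    positions = torch.arange(seq_len, dtype=torch.float32)
    angles = torch.outer(positions, inv_freq)   # (seq_len, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

# The same function handles a pre-training-length sequence and a longer
# fine-tuning sequence without modification.
short_ctx = rotary_embedding(torch.randn(512, 64))
long_ctx = rotary_embedding(torch.randn(4096, 64))
```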
Tags
Foundations of Large Language Models
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Popular Methods for Adapting Pre-trained LLMs to Long Sequences
Strengths and Limitations of Long-Sequence Models
Pre-training and Fine-tuning Strategy for Long-Context Adaptation
Length Extrapolation in LLMs
Fine-Tuning for Architectural Adaptation in LLMs
A startup with limited computational resources and a tight deadline needs to build a system that can summarize lengthy legal documents. They have access to a powerful, general-purpose language model that was pre-trained on a massive dataset but primarily on shorter texts. Given their constraints, which of the following strategies is the most logical and efficient for them to pursue?
The primary reason for adapting existing pre-trained language models for long sequences, rather than training new models from scratch, is that pre-trained models inherently possess superior architectural designs for handling extended contexts.
Evaluating Model Development Strategies for Long-Text Analysis
Scaling Up via Long Sequence Adaptation