Activity (Process)

Pre-training and Fine-tuning Strategy for Long-Context Adaptation

A widely used two-stage strategy for enabling Large Language Models to handle long contexts is an initial pre-training phase on general, large-scale datasets with a relatively short context window, followed by a focused fine-tuning phase on longer text sequences that adapts the model to extended contexts.
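The two-stage schedule can be sketched as a simple configuration, shown below. All names here (`TrainingStage`, `build_schedule`, the token budgets, and the dataset identifiers) are illustrative assumptions, not part of any specific framework; the point is only the shape of the schedule: a large token budget at a short sequence length first, then a much smaller budget at a long sequence length.

```python
from dataclasses import dataclass

@dataclass
class TrainingStage:
    """One phase of the schedule (hypothetical structure for illustration)."""
    name: str
    max_seq_len: int   # context window used during this stage
    num_tokens: int    # training budget for this stage, in tokens
    dataset: str       # corpus identifier (placeholder name)

def build_schedule() -> list[TrainingStage]:
    return [
        # Stage 1: large-scale pre-training on general data with a short
        # context window (cheaper attention, higher token throughput).
        TrainingStage("pretrain", max_seq_len=4096,
                      num_tokens=1_000_000_000_000, dataset="general_web"),
        # Stage 2: continued training on long documents with an extended
        # window, adapting the model to long-range dependencies.
        TrainingStage("long_context_finetune", max_seq_len=131072,
                      num_tokens=10_000_000_000, dataset="long_documents"),
    ]

schedule = build_schedule()
for stage in schedule:
    print(stage.name, stage.max_seq_len, stage.num_tokens)
```

Note the asymmetry this sketch encodes: the long-context stage uses a far smaller token budget than pre-training, which is what makes the two-stage approach economical compared with pre-training at the long sequence length from the start.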


Updated 2025-10-10

