Problem

Primary Source of Out-of-Distribution Generalization: Pre-training vs. Fine-tuning

A key unresolved question in LLM development is whether out-of-distribution (OOD) generalization arises primarily from the extensive pre-training phase or from the subsequent fine-tuning stage. Understanding the relative contribution of each phase is crucial for deciding where to invest training effort and how to adapt models to new domains.
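One hedged way to make the question concrete: evaluate both a base (pre-trained only) checkpoint and its fine-tuned counterpart on in-distribution (ID) and OOD test sets, then treat the base model's OOD accuracy as the pre-training contribution and the fine-tuned model's OOD gain over the base as the fine-tuning contribution. The sketch below is illustrative only; the function name `ood_attribution` and all accuracy numbers are placeholders, not measured results, and real studies would control for many confounds this ignores.

```python
# Hypothetical experimental sketch (not an established protocol):
# attribute OOD accuracy to the pre-training vs. fine-tuning phase
# by comparing a base checkpoint with its fine-tuned counterpart.

def ood_attribution(base, finetuned):
    """Split OOD accuracy into pre-training and fine-tuning shares.

    `base` and `finetuned` map a split name ('id' / 'ood') to an
    accuracy in [0, 1] for that checkpoint.
    """
    pretrain_share = base["ood"]                      # OOD skill already present before fine-tuning
    finetune_share = finetuned["ood"] - base["ood"]   # OOD skill added (or lost) by fine-tuning
    return {"pretraining": pretrain_share, "finetuning": finetune_share}

# Placeholder accuracies for illustration only.
base = {"id": 0.55, "ood": 0.48}
finetuned = {"id": 0.82, "ood": 0.60}

contrib = ood_attribution(base, finetuned)
```

With these made-up numbers, most OOD accuracy is already present in the base checkpoint, and fine-tuning adds a smaller increment; a real study would repeat this across tasks, checkpoints, and OOD splits before drawing any conclusion.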


Updated 2026-05-01

Tags: Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences