Concept

Challenge of Opaque Pre-Training Data in Fine-Tuning

Because the specific data used during a model's pre-training phase is rarely disclosed, fine-tuning faces a significant challenge: without knowing which instruction-response patterns the model has already been exposed to, it is difficult to determine which mappings must be explicitly taught during the fine-tuning stage.


Updated 2026-05-02


Tags

Ch.4 Alignment - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences