Comparison of Training Objectives: Instruction Fine-Tuning vs. Pre-training

The training objective of instruction fine-tuning differs from that of standard language model pre-training. Instead of maximizing the probability of an entire sequence, instruction fine-tuning aims to maximize the conditional probability of generating the correct output (the remainder of the sequence) given a specific input prefix or instruction.
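This difference can be sketched as a loss-masking choice: the same per-token log-probabilities are used, but instruction fine-tuning excludes the prompt tokens from the loss. The function below is an illustrative sketch, not any library's API; the token counts and probabilities are hypothetical.

```python
import math

def sequence_nll(token_logprobs, prompt_len=0):
    """Negative log-likelihood of a token sequence.

    token_logprobs: per-token log-probabilities log p(x_t | x_<t)
      for the full sequence (prompt + response), hypothetical values here.
    prompt_len: number of leading prompt/instruction tokens to exclude
      from the loss; prompt_len=0 reproduces the pre-training objective,
      which scores every token in the sequence.
    """
    return -sum(token_logprobs[prompt_len:])

# Toy sequence: 2 prompt tokens followed by a 3-token response.
logprobs = [math.log(0.5), math.log(0.25),          # prompt
            math.log(0.5), math.log(0.5), math.log(0.25)]  # response

pretrain_loss = sequence_nll(logprobs)                 # all 5 tokens
finetune_loss = sequence_nll(logprobs, prompt_len=2)   # response tokens only
```

Because the prompt tokens are masked out, the fine-tuning loss only pushes the model to produce the correct output given the instruction, rather than to also model the instruction text itself.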

Updated 2026-04-30

Ch.4 Alignment - Foundations of Large Language Models
