Learn Before
Fine-Tuning Objective as Log-Likelihood Maximization
A popular method for fine-tuning a model is to find the optimal parameters $\hat{\theta}$ by maximizing the total conditional log-likelihood over a dataset $D$ of prompt-response pairs $(x, y)$. This approach, equivalent to minimizing the negative log-likelihood loss, seeks parameters that make the observed outputs $y$ most probable given the inputs $x$. In some cases, the prompt is decomposed into an instruction $c$ and a user input $x'$, such that $x = [c, x']$. The formal expression is:

$$\hat{\theta} = \underset{\theta}{\arg\max} \sum_{(x, y) \in D} \log \Pr{}_{\theta}(y \mid x)$$

where $\Pr_{\theta}(y \mid x)$ is the probability predicted by an LLM with the parameters $\theta$.
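Below is a minimal PyTorch sketch of this objective, not a production recipe: the toy model, vocabulary size, token IDs, and learning rate are all hypothetical placeholders. It minimizes the summed negative log-likelihood of each response given its prompt, scoring only the response tokens so that prompt tokens do not contribute to the loss.

```python
# Sketch of the fine-tuning objective: maximize sum of log P_theta(y | x)
# over (prompt, response) pairs by minimizing the negative log-likelihood.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB = 50  # hypothetical vocabulary size

# A toy "language model" (embedding + linear head) standing in for an LLM.
model = torch.nn.Sequential(
    torch.nn.Embedding(VOCAB, 32),
    torch.nn.Linear(32, VOCAB),
)

def conditional_nll(prompt_ids, response_ids):
    """Negative log-likelihood of the response given the prompt.

    Concatenates [x; y], runs the model once, and scores only the
    response positions, i.e. -sum_t log P_theta(y_t | x, y_<t).
    """
    seq = torch.cat([prompt_ids, response_ids])        # [x; y]
    logits = model(seq[:-1].unsqueeze(0)).squeeze(0)   # next-token logits
    targets = seq[1:]                                  # shifted targets
    log_probs = F.log_softmax(logits, dim=-1)
    token_ll = log_probs[torch.arange(len(targets)), targets]
    # Mask out prompt positions: only response tokens enter the loss.
    response_start = len(prompt_ids) - 1
    return -token_ll[response_start:].sum()

# Toy dataset D of (prompt, response) token-id pairs.
dataset = [
    (torch.tensor([3, 7, 1]), torch.tensor([9, 4])),
    (torch.tensor([5, 2]), torch.tensor([8, 8, 6])),
]

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(5):  # a few gradient steps
    loss = sum(conditional_nll(x, y) for x, y in dataset)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"total NLL: {loss.item():.3f}")  # should decrease
```

Note that minimizing this summed NLL is exactly the maximization in the formula above with the sign flipped, which is why frameworks report it as a loss to be driven down.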

Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Training Objective as Joint Log-Likelihood Maximization of Concatenated Sequences
A machine learning engineer is fine-tuning a pre-trained language model on a specialized dataset of question-answer pairs. The chosen training objective is to adjust the model's parameters to maximize the sum of the log-probabilities of the ground-truth answers, conditioned on their corresponding questions. Which statement best analyzes the direct effect of this training objective on the model's behavior?
Interpreting Fine-Tuning Loss
Analyzing Fine-Tuning Behavior
When fine-tuning a language model, the objective of maximizing the sum of the log-likelihoods of the true responses given the prompts is mathematically equivalent to minimizing the mean squared error loss over the dataset.
Learn After
Auto-regressive Decomposition of Conditional Log-Likelihood
A machine learning engineer is fine-tuning two language models, Model A and Model B, on the same dataset of 100 prompt-response pairs. The goal is to select the model whose parameters are best optimized to make the observed responses most probable given the prompts. After one epoch of training, the engineer calculates, for each model, the sum of the conditional log-probabilities over the entire dataset:
- Model A: -150
- Model B: -200
Which model is performing better according to this objective, and why?
Evaluating the Log-Likelihood Maximization Objective
If a language model's parameters are perfectly optimized by maximizing the total conditional log-likelihood on a given training dataset, it means the model will assign a probability of 1.0 to every correct response sequence in that dataset.