A machine learning engineer is fine-tuning two language models, Model A and Model B, on the same dataset of 100 prompt-response pairs. The goal is to select the model whose parameters best make the observed responses probable given their prompts. After one epoch of training, the engineer computes, for each model, the sum of the conditional log-probabilities of the responses over the entire dataset:
- Model A: -150
- Model B: -200
Which model is performing better according to this objective, and why?
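The comparison can be made concrete in a few lines of Python. This is a minimal sketch, not part of the original card: the dictionary holds the two hypothetical totals from the scenario, and the names are placeholders.

```python
import math

# Hypothetical totals from the scenario: the sum over all 100 pairs of
# log p(response | prompt) under each model's parameters.
total_log_prob = {"Model A": -150.0, "Model B": -200.0}

# The fine-tuning objective MAXIMIZES this quantity, so the model with
# the higher (less negative) total makes the observed data more probable.
best = max(total_log_prob, key=total_log_prob.get)
print(f"Better under the log-likelihood objective: {best}")

# Equivalent view: the geometric-mean probability assigned per pair.
for name, total in total_log_prob.items():
    avg_log_prob = total / 100  # mean log-probability per pair
    print(f"{name}: per-pair probability = {math.exp(avg_log_prob):.3f}")
```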
Tags
Ch.3 Prompting - Foundations of Large Language Models
Application in Bloom's Taxonomy
Related
Auto-regressive Decomposition of Conditional Log-Likelihood
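For reference, the identity this title points to is the standard autoregressive factorization (the symbols below are introduced here, not taken from the note): for a prompt $x$ and a response $y = (y_1, \dots, y_T)$,

$$
\log p_\theta(y \mid x) = \sum_{t=1}^{T} \log p_\theta\!\left(y_t \mid x,\, y_{<t}\right)
$$

The dataset-level objective is then the sum of these quantities over all 100 pairs, $\sum_{i=1}^{100} \log p_\theta\big(y^{(i)} \mid x^{(i)}\big)$, which is exactly the number reported for each model above.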
Evaluating the Log-Likelihood Maximization Objective
Maximizing the total conditional log-likelihood on a training dataset pushes the probability the model assigns to each observed response as high as the model family allows; it does not mean that a perfectly optimized model assigns a probability of 1.0 to every correct response sequence. A total log-probability of 0 (probability 1.0 for every pair) is attainable only if the model has the capacity to memorize the dataset and no prompt appears with two different responses; otherwise the optimum must split probability mass between the conflicting responses.
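A tiny numerical check makes this concrete. In this hypothetical setup (not from the source), one prompt appears twice in the data with two different responses, so the model can only choose how to split probability between them; a grid search shows the likelihood-maximizing split is 0.5/0.5, not 1.0.

```python
import math

# One prompt occurs twice with two different responses, "yes" and "no".
# Total log-likelihood as a function of p = P("yes" | prompt):
#     log(p) + log(1 - p)
best_p, best_ll = None, float("-inf")
for i in range(1, 1000):  # grid search over p in (0, 1)
    p = i / 1000
    ll = math.log(p) + math.log(1 - p)
    if ll > best_ll:
        best_p, best_ll = p, ll

print(f"optimal p = {best_p:.3f}")                   # -> 0.500, not 1.0
print(f"max total log-likelihood = {best_ll:.3f}")   # -> log(0.25) = -1.386
```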