Learn Before
A machine learning engineer is fine-tuning a pre-trained language model on a specialized dataset of question-answer pairs. The chosen training objective is to adjust the model's parameters to maximize the sum of the log-probabilities of the ground-truth answers, conditioned on their corresponding questions. Which statement best analyzes the direct effect of this training objective on the model's behavior?
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Fine-Tuning Objective as Log-Likelihood Maximization
Training Objective as Joint Log-Likelihood Maximization of Concatenated Sequences
Interpreting Fine-Tuning Loss
Analyzing Fine-Tuning Behavior
When fine-tuning a language model, the objective of maximizing the sum of the log-likelihoods of the true responses given the prompts is mathematically equivalent to minimizing the negative log-likelihood (cross-entropy) loss over the dataset, not the mean squared error loss.
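The equivalence can be checked numerically. The sketch below uses hypothetical, hand-picked conditional probabilities p(answer | question) standing in for a real model's outputs; it shows that maximizing the summed log-likelihood is the same as minimizing its negation, the negative log-likelihood (cross-entropy) loss.

```python
import math

# Hypothetical model probabilities p(answer | question) for three
# question-answer pairs -- illustrative values, not from a real model.
probs = [0.8, 0.5, 0.9]

# Fine-tuning objective: maximize the sum of log-probabilities of the
# ground-truth answers conditioned on their questions.
log_likelihood = sum(math.log(p) for p in probs)

# The equivalent loss to minimize is the negative log-likelihood
# (cross-entropy over the answer tokens), not mean squared error.
nll_loss = -log_likelihood

# Any parameter update that raises log_likelihood lowers nll_loss by
# exactly the same amount, so the two formulations pick the same optimum.
```

Note that assigning higher probability to a ground-truth answer (e.g. raising 0.5 toward 1.0) strictly decreases the loss, which is the direct behavioral effect the question asks about.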