Evaluating Model Performance in MLM Training
A language model is being trained to predict masked words in sentences. The training objective for a given sentence is to maximize the sum of the log-probabilities of the correct words at the masked positions. A higher (less negative, i.e. closer to zero) sum indicates the model assigned higher probability to the correct words.
Given the two training steps detailed below, calculate the objective value for each step and determine in which step the model's overall prediction for the masked words was better. Explain your reasoning.
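The per-step data for the two training steps is not shown in this excerpt, so here is a minimal sketch of the comparison with hypothetical probabilities. It computes each step's objective as the sum of log-probabilities at the two masked positions and picks the step with the larger (closer-to-zero) value.

```python
import math

# Hypothetical probabilities the model assigned to the CORRECT word at
# each masked position (illustrative numbers only; the actual values
# from the two training steps are not given in this excerpt).
step1_probs = [0.7, 0.5]
step2_probs = [0.9, 0.8]

# Objective for a step: sum of log-probabilities at the masked positions.
obj1 = sum(math.log(p) for p in step1_probs)
obj2 = sum(math.log(p) for p in step2_probs)

# The better step is the one with the larger (less negative) objective.
better = "step 1" if obj1 > obj2 else "step 2"
print(obj1, obj2, better)  # with these numbers, step 2 is better
```

With these illustrative numbers, step 2's objective is closer to zero because the model assigned higher probability to both correct words.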
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Computing Sciences
Analysis in Bloom's Taxonomy
Related
A language model is being trained using a masked language modeling objective. The original input sentence is 'A quick brown fox jumps over the lazy dog'. During a training step, the tokens 'quick' (at position 2) and 'lazy' (at position 8) are masked. The model receives the corrupted input: '[CLS] A [MASK] brown fox jumps over the [MASK] dog'. Which of the following mathematical expressions correctly represents the training objective for this specific step, which the model aims to maximize?
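A sketch of the expression this question is after, assuming the common notation $\tilde{x}$ for the corrupted input (the symbol is not given in the excerpt): the objective sums the log-probabilities of the original tokens at the two masked positions, conditioned on the corrupted sequence.

```latex
\mathcal{L} = \log P\!\left(x_2 = \text{quick} \mid \tilde{x}\right)
            + \log P\!\left(x_8 = \text{lazy} \mid \tilde{x}\right)
```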
A language model is being trained on a sentence where two words have been replaced with a special [MASK] token. The training objective is to maximize the sum of the log-probabilities of the original words at these two masked positions. Why is the objective formulated as a sum of log-probabilities rather than, for example, a product of the probabilities?