Learn Before
Analyzing Language Model Training Loss
A researcher is pre-training a language model using two simultaneous objectives: one for predicting masked words and another for determining if two sentences are consecutive. They observe that the loss for the masked word task is decreasing steadily, but the total training loss remains high and is not improving. Based on how the total loss is calculated for this type of model, what is the most likely explanation for this observation?
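The scenario above can be sketched in a few lines of plain Python. This is a minimal illustration, assuming (as in BERT-style pre-training) that the total loss is the unweighted sum of the masked-word (MLM) loss and the sentence-pair (NSP) loss; all loss values are hypothetical.

```python
# Sketch of a dual-objective training loss, assuming the total is the
# unweighted sum of the two task losses (hypothetical values).

def total_pretraining_loss(mlm_loss, nsp_loss):
    """Combine masked-word (MLM) and sentence-relationship (NSP) losses."""
    return mlm_loss + nsp_loss

# If the MLM loss falls while the NSP loss rises or stays high,
# the total loss can remain flat even though one task is improving:
early = total_pretraining_loss(mlm_loss=2.5, nsp_loss=0.7)
later = total_pretraining_loss(mlm_loss=1.2, nsp_loss=2.0)
print(early, later)  # both totals are 3.2 despite the MLM improvement
```

Because the total is a sum, a steadily falling MLM loss can be masked by a second objective that is not learning, which is the behavior the question describes.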
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
BERT Training Process
An engineer is pre-training a language model that simultaneously learns to predict masked words in a sentence and to determine if two sentences are consecutive. In a single training step, the loss for the masked word prediction task is calculated as 1.8, and the loss for the sentence relationship task is 0.6. What is the total loss value that will be used to update the model's parameters for this step?
Analyzing Language Model Training Loss
Analyzing Dual-Task Model Training Performance
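The related question above ("BERT Training Process") can be worked through under the same assumption of an unweighted sum of the two task losses; the loss values come from that question.

```python
# Per-step losses taken from the related question above
mlm_loss = 1.8  # masked word prediction loss
nsp_loss = 0.6  # sentence relationship (NSP) loss

# Assuming the total loss is the simple, unweighted sum of the two objectives
total_loss = mlm_loss + nsp_loss
print(total_loss)  # 2.4
```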