Evaluating a Model's Training Objective
A researcher is training a model to reconstruct original sentences from corrupted versions where some words are replaced by a [MASK] token. The training process only calculates an error signal based on the model's predictions for these [MASK] positions. The researcher observes that the model becomes very good at filling in the blanks, but struggles to generate complete, fluent sentences from scratch. Explain why this specific training method might lead to this outcome.
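The question hinges on the loss being computed only at [MASK] positions. A minimal NumPy sketch of such a masked-denoising loss is below; the toy vocabulary, random logits, and mask token id are all hypothetical stand-ins for a real model's outputs. Unmasked positions are simply excluded from the loss, so the model receives no training signal for producing them.

```python
import numpy as np

# Toy vocabulary; index 5 plays the role of the [MASK] token (hypothetical).
vocab = ["the", "quick", "brown", "fox", "jumps", "[MASK]"]
target_ids = np.array([0, 1, 2, 3, 4])     # original: "the quick brown fox jumps"
corrupted_ids = np.array([0, 1, 5, 3, 5])  # [MASK] at positions 2 and 4
mask_positions = corrupted_ids == 5        # True only where input was corrupted

# Stand-in for model output: one logit vector per position (random for illustration).
rng = np.random.default_rng(0)
logits = rng.normal(size=(len(target_ids), len(vocab)))

# Softmax over the vocabulary at each position.
probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs /= probs.sum(axis=-1, keepdims=True)

# Cross-entropy per position, then average over *masked positions only*.
# Unmasked tokens contribute nothing, so the model is never optimized
# to generate a full fluent sequence left to right.
token_losses = -np.log(probs[np.arange(len(target_ids)), target_ids])
loss = token_losses[mask_positions].mean()
```

Because `loss` ignores every unmasked position, training rewards conditional fill-in-the-blank prediction given rich bidirectional context, not unconditional sequence generation.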
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A model is being trained to reconstruct an original text sequence from a corrupted version where some words have been replaced. If the model is given the corrupted input "The quick brown [MASK] jumps over the [MASK] dog.", which of the following would be the most appropriate target output for this training example?
Focus of the Denoising Loss Function