Case Study

Diagnosing a Model's Infilling Errors

An engineer is training a model to fill in masked text. They observe a recurring issue: when a multi-word phrase is replaced by a single [MASK] token, the model often generates a phrase that is plausible but shorter than the original. For example, given the input 'The artist carefully mixed the [MASK] on their palette,' where the original text was 'bright crimson paint,' the model frequently outputs just 'paint.' Based on this pattern of errors, what specific information, lost during the masking process, is the model failing to predict correctly?
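
To make the corruption step concrete, the sketch below is a minimal, assumed example (whitespace tokenization and a hypothetical mask_span helper, not the engineer's actual pipeline) that replaces a multi-token span with a single [MASK] token and shows what information the corrupted input no longer carries.

```python
# Minimal sketch of single-token span masking (assumed example, not the
# engineer's actual pipeline). Whitespace tokenization is used for clarity.

def mask_span(tokens, start, end, mask_token="[MASK]"):
    """Replace tokens[start:end] with a single mask token.

    The corrupted sequence keeps no record of how many tokens were removed,
    so the span's length is part of what the infilling model must predict.
    """
    corrupted = tokens[:start] + [mask_token] + tokens[end:]
    target = tokens[start:end]
    return corrupted, target


if __name__ == "__main__":
    sentence = "The artist carefully mixed the bright crimson paint on their palette".split()
    corrupted, target = mask_span(sentence, start=5, end=8)

    print("Corrupted input:", " ".join(corrupted))
    # The artist carefully mixed the [MASK] on their palette
    print("Original span  :", " ".join(target))   # bright crimson paint
    print("Span length    :", len(target))        # 3 -- not visible in the corrupted input

    # A model that fails to recover this lost information can satisfy the
    # local context with a one-token output such as "paint", which matches
    # the error pattern described above.
```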


Tags: Ch.1 Pre-training - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Application in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science