Multiple Choice

A language model is trained using an objective where every token in the input sentence is replaced by a [MASK] token, and the model is then required to reconstruct the entire original sentence. How does the primary skill developed by this training method differ from the skill developed by a method where only a small fraction (e.g., 15%) of the tokens are masked?
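For illustration, here is a minimal sketch of the two masking regimes (plain Python; mask_tokens is a hypothetical helper written for this example, not from any particular library). Partial masking leaves most tokens visible as context, while full masking removes all lexical context from the input.

    import random

    MASK = "[MASK]"

    def mask_tokens(tokens, mask_prob):
        # Independently replace each token with [MASK] with probability
        # mask_prob, recording the original token as the prediction target.
        corrupted, targets = [], []
        for tok in tokens:
            if random.random() < mask_prob:
                corrupted.append(MASK)
                targets.append(tok)   # loss is computed at this position
            else:
                corrupted.append(tok)
                targets.append(None)  # no loss at unmasked positions
        return corrupted, targets

    tokens = "the cat sat on the mat".split()

    # BERT-style objective: ~15% masked, so the remaining ~85% of tokens
    # supply bidirectional context for filling in the gaps.
    print(mask_tokens(tokens, mask_prob=0.15))

    # Fully masked objective: every position is [MASK], so the model must
    # reconstruct the whole sentence from positional information alone.
    print(mask_tokens(tokens, mask_prob=1.0))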

Updated 2025-09-28

Tags

Ch.1 Pre-training - Foundations of Large Language Models

Analysis in Bloom's Taxonomy
