Multiple Choice

A research team is pre-training two separate encoder-decoder models using different variations of a masked language modeling objective.

  • Model A is trained by masking 15% of the input tokens, with each mask covering only a single token. The model's objective is to predict the original token for each masked position.
  • Model B is trained by masking 50% of the input tokens, with masks covering contiguous spans of up to 10 tokens. The model's objective is to predict the entire original text span.

Which of the following statements most accurately analyzes the likely capabilities these two models will develop based on their pre-training objectives?

Updated 2025-09-26

Tags

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology
