Match each masked language modeling (MLM) pre-training strategy for an encoder-decoder model with the primary capability it is designed to develop.
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of Full Sequence Generation via 100% Masking
Evaluating Pre-training Objectives for a Multi-Task Model
A research team is pre-training two separate encoder-decoder models using different variations of a masked language modeling objective.
- Model A is trained by masking 15% of the input tokens, with each mask covering only a single token. The model's objective is to predict the original token at each masked position.
- Model B is trained by masking 50% of the input tokens, with masks covering contiguous spans of up to 10 tokens. The model's objective is to predict the entire original text of each masked span.
Which of the following statements most accurately analyzes the likely capabilities these two models will develop based on their pre-training objectives?
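The sketch below is a minimal Python illustration (not part of the original note) contrasting the two corruption strategies in the scenario: Model A's single-token masking versus Model B's contiguous-span masking. The function names single_token_mask and span_mask, the [MASK] placeholder, and the sampling logic are illustrative assumptions, not an implementation from any particular library.

```python
import random

MASK = "[MASK]"

def single_token_mask(tokens, mask_rate=0.15, rng=random):
    """Model A style: mask isolated tokens; the targets are the original
    tokens at each masked position (token-level denoising)."""
    n_to_mask = max(1, int(len(tokens) * mask_rate))
    positions = rng.sample(range(len(tokens)), n_to_mask)
    corrupted = list(tokens)
    targets = {}
    for pos in sorted(positions):
        targets[pos] = corrupted[pos]
        corrupted[pos] = MASK
    return corrupted, targets

def span_mask(tokens, mask_rate=0.50, max_span_len=10, rng=random):
    """Model B style: mask contiguous spans of up to max_span_len tokens;
    each target is the full original text of a masked span."""
    budget = int(len(tokens) * mask_rate)
    corrupted = list(tokens)
    targets = []
    masked = 0
    attempts = 0
    while masked < budget and attempts < 1000:
        attempts += 1
        span_len = min(rng.randint(1, max_span_len), budget - masked)
        start = rng.randrange(0, len(tokens) - span_len + 1)
        # Skip candidate spans that overlap an already-masked region.
        if any(corrupted[i] == MASK for i in range(start, start + span_len)):
            continue
        targets.append((start, tokens[start:start + span_len]))
        for i in range(start, start + span_len):
            corrupted[i] = MASK
        masked += span_len
    return corrupted, sorted(targets)

if __name__ == "__main__":
    text = ("the research team pre-trains two encoder decoder models with "
            "different masked language modeling objectives").split()
    print(single_token_mask(text))
    print(span_mask(text))
```

The design difference this highlights: Model A's targets are single tokens at known positions, while Model B's targets are variable-length sequences, so the second objective pushes the decoder toward generating longer stretches of coherent text rather than filling in isolated tokens.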