Multiple Choice

A large language model is pre-trained on a vast text corpus. Its training objective is to take a sentence, randomly mask 15% of the words, and then predict only the original masked words by looking at all the surrounding unmasked words (both to the left and right). Which statement best analyzes the primary goal of this specific pre-training approach?
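The objective described in the question is the masked-language-modeling (MLM) setup popularized by BERT. Below is a minimal sketch of the data-preparation step only, under illustrative assumptions (plain words instead of subword token IDs, a single [MASK] placeholder, and a fixed 15% mask rate); it is not a reference implementation, and real systems add details such as replacing some selected tokens with random or unchanged tokens.

```python
# Minimal sketch of the MLM masking step, assuming whole words and a
# literal "[MASK]" placeholder (real models use subword IDs).
import random

MASK_TOKEN = "[MASK]"
MASK_RATE = 0.15  # mask 15% of the words, as stated in the question


def make_mlm_example(words, mask_rate=MASK_RATE, seed=0):
    """Return (masked_words, targets).

    targets holds the original word only at masked positions (None elsewhere),
    so the training loss is computed only on the masked words, while the model
    still sees the unmasked context on both the left and the right.
    """
    rng = random.Random(seed)
    n_to_mask = max(1, int(round(len(words) * mask_rate)))
    masked_positions = set(rng.sample(range(len(words)), n_to_mask))
    masked_words = [MASK_TOKEN if i in masked_positions else w
                    for i, w in enumerate(words)]
    targets = [w if i in masked_positions else None
               for i, w in enumerate(words)]
    return masked_words, targets


sentence = "the quick brown fox jumps over the lazy dog".split()
masked, targets = make_mlm_example(sentence)
print(masked)   # e.g. ['the', 'quick', '[MASK]', 'fox', ...]
print(targets)  # original word only where a mask was inserted
```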

Updated 2025-09-28

Tags

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Foundations of Large Language Models

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science