Multiple Choice

A research team is considering two different training strategies to build a language model using a large corpus of unlabeled text. Strategy A involves first training a preliminary model on a small, human-labeled 'seed' dataset, then using that model's predictions to create labels for the unlabeled text, and finally retraining the model on this newly labeled data. Strategy B involves no initial seed dataset; instead, it creates training tasks directly from the unlabeled text itself (e.g., by masking words and training the model to predict them) to learn from the data's inherent structure. Which statement best analyzes the fundamental difference in how these two strategies initiate the learning process?
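To make the contrast concrete, here is a minimal Python sketch of each strategy's data flow. Everything in it is a hypothetical stand-in: the tiny corpus, the seed labels, and the train/predict helpers merely mimic what a real model and dataset would do.

    import random

    corpus = ["the cat sat on the mat",
              "dogs chase cats",
              "language models predict words"]

    # --- Strategy A: self-training (pseudo-labeling) ---
    # Supervision is bootstrapped from a small human-labeled seed set.
    seed_data = [("the cat sat on the mat", "animals"),
                 ("language models predict words", "tech")]

    def train(pairs):
        # Stand-in for real training: remember a label per word.
        model = {}
        for text, label in pairs:
            for word in text.split():
                model[word] = label
        return model

    def predict(model, text):
        # Stand-in for inference: majority vote over known words.
        votes = [model[w] for w in text.split() if w in model]
        return max(set(votes), key=votes.count) if votes else "unknown"

    seed_model = train(seed_data)                            # 1. train on seed labels
    pseudo = [(t, predict(seed_model, t)) for t in corpus]   # 2. label the unlabeled text
    final_model = train(seed_data + pseudo)                  # 3. retrain on pseudo-labels

    # --- Strategy B: self-supervised learning (masked word prediction) ---
    # No seed labels: the supervision signal is manufactured from the text itself.
    def make_masked_pair(text):
        words = text.split()
        i = random.randrange(len(words))
        masked = words[:i] + ["[MASK]"] + words[i + 1:]
        return " ".join(masked), words[i]   # (input with a hole, correct word)

    self_supervised_data = [make_masked_pair(t) for t in corpus]
    # A real model would now be trained to recover each masked word from context.

The essential difference shows up in where the first labels come from: Strategy A cannot start without human annotations (seed_data), while Strategy B derives its targets mechanically from the raw corpus.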


Updated 2025-09-26


Tags

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science