A research team is considering two different training strategies to build a language model using a large corpus of unlabeled text. Strategy A involves first training a preliminary model on a small, human-labeled 'seed' dataset, then using that model's predictions to create labels for the unlabeled text, and finally retraining the model on this newly labeled data. Strategy B involves no initial seed dataset; instead, it creates training tasks directly from the unlabeled text itself (e.g., by masking words and training the model to predict them) to learn from the data's inherent structure. Which statement best analyzes the fundamental difference in how these two strategies initiate the learning process?
Tags
Ch.1 Pre-training - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Choosing a Training Methodology for a Foundational Model
A key difference between self-training (Strategy A) and self-supervised pre-training (Strategy B) is that self-training requires an initial model trained on a small set of labeled data to bootstrap the learning process, whereas self-supervised pre-training can start from a randomly initialized model and only unlabeled data, because its training signal is constructed from the text itself.
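To make the contrast concrete, here is a minimal, purely illustrative Python sketch (not from the source): the `train`/`predict` helpers, the seed and unlabeled texts, and the masking routine are hypothetical stand-ins, not a real training API. Strategy A must begin from the small labeled seed set to produce pseudo-labels, while Strategy B manufactures its own (input, target) pairs directly from the unlabeled text by masking words.

```python
def train(examples):
    """Stand-in for fitting a model; just stores the examples it was given."""
    return {"data": examples}

def predict(model, text):
    """Stand-in for model inference; returns a dummy label (model is ignored)."""
    return len(text) % 2  # placeholder rule, purely illustrative

seed_texts = ["good movie", "bad movie"]          # small human-labeled seed set
seed_labels = [1, 0]
unlabeled_texts = ["great film", "terrible plot", "the cat sat on the mat"]

# --- Strategy A: self-training (requires the labeled seed set) ---
seed_model = train(list(zip(seed_texts, seed_labels)))               # 1. train on seed data
pseudo = [(t, predict(seed_model, t)) for t in unlabeled_texts]       # 2. pseudo-label unlabeled text
final_model_a = train(list(zip(seed_texts, seed_labels)) + pseudo)    # 3. retrain on combined data

# --- Strategy B: self-supervised pre-training (no labels needed) ---
def make_masked_examples(text, mask_token="[MASK]"):
    """Turn raw text into (input, target) pairs by hiding one word at a time."""
    words = text.split()
    examples = []
    for i, w in enumerate(words):
        masked = words[:i] + [mask_token] + words[i + 1:]
        examples.append((" ".join(masked), w))  # the target is the hidden word
    return examples

mlm_examples = [ex for t in unlabeled_texts for ex in make_masked_examples(t)]
final_model_b = train(mlm_examples)  # starts from scratch; supervision comes from the text itself
```

The sketch mirrors the answer above: Strategy A cannot produce its first set of labels without `seed_texts`/`seed_labels`, whereas Strategy B's objective is defined entirely by the structure of the unlabeled corpus.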