Short Answer

Contrasting Data Sourcing Methods in Model Training

A language model is being refined through a process in which, for each training instance, an input prompt is drawn from a collection and the model generates the corresponding output using its current parameters. This freshly generated input-output pair is then used immediately for that training step. Contrast this method of obtaining the output portion of a training sample with an approach that pairs each prompt with a fixed, pre-written ideal output. What is a primary advantage of having the model generate its own outputs for training?
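To make the contrast concrete, here is a minimal sketch of the two data-sourcing strategies the question describes. All names (`generate`, `sample_on_policy`, `sample_fixed`, the toy "model") are hypothetical stand-ins; a real setup would use an actual language model rather than a random phrase picker.

```python
import random

# Hypothetical toy "model": its current state is just a list of phrases it can emit.
def generate(model_params, prompt):
    """The model produces an output from its *current* parameters (on-policy)."""
    return prompt + " -> " + random.choice(model_params["phrases"])

# Strategy A: each training pair uses the model's own fresh output.
def sample_on_policy(model_params, prompts):
    prompt = random.choice(prompts)
    return prompt, generate(model_params, prompt)

# Strategy B: each training pair uses a static, pre-written ideal output.
FIXED_TARGETS = {"translate: hola": "hello"}  # assumed example data

def sample_fixed():
    prompt = random.choice(list(FIXED_TARGETS))
    return prompt, FIXED_TARGETS[prompt]

params = {"phrases": ["hello", "hi"]}
pair_a = sample_on_policy(params, ["translate: hola"])  # changes as the model changes
pair_b = sample_fixed()                                 # identical on every epoch
```

Note the key structural difference: in strategy A the output half of the training pair depends on `model_params` and therefore shifts as training updates the model, whereas in strategy B the targets never change.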

Updated 2025-10-06


Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.4 Alignment - Foundations of Large Language Models

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science