Multiple Choice

A development team is building a text-to-text model. They have just completed the first stage of training, where the model was exposed to a massive, unlabeled dataset from the internet. What is the most likely reason they would now proceed to a second stage of training using a smaller, curated dataset of specific input-output pairs?
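For concreteness, here is a minimal, hypothetical sketch in Python (assuming PyTorch, a toy vocabulary, and made-up data) of what the two stages might look like: self-supervised next-token prediction on an unlabeled stream, followed by supervised training on a small set of curated input-output pairs. It illustrates the mechanics only, not any particular team's pipeline.

import torch
import torch.nn as nn

vocab_size, embed_dim = 100, 32

# Toy "language model": an embedding layer plus a linear head over the vocabulary.
model = nn.Sequential(nn.Embedding(vocab_size, embed_dim),
                      nn.Linear(embed_dim, vocab_size))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Stage 1: self-supervised pre-training on a large unlabeled token stream.
# The targets are simply the next tokens of the same stream; no human labels needed.
unlabeled_stream = torch.randint(0, vocab_size, (1000,))  # stand-in for web-scale text
for step in range(200):
    idx = torch.randint(0, len(unlabeled_stream) - 1, (32,))
    inputs, targets = unlabeled_stream[idx], unlabeled_stream[idx + 1]
    loss = loss_fn(model(inputs), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: supervised fine-tuning on a small, curated set of input-output pairs,
# reusing the pre-trained weights but now steering them toward desired behavior.
curated_pairs = [(3, 7), (5, 2)]  # hypothetical (prompt token, desired output token) ids
for epoch in range(10):
    for x, y in curated_pairs:
        inputs = torch.tensor([x])
        targets = torch.tensor([y])
        loss = loss_fn(model(inputs), targets)
        opt.zero_grad()
        loss.backward()
        opt.step()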


Tags

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models Course
