Multiple Choice

A team is fine-tuning a large language model. They have access to a small, high-quality dataset with verified ground-truth labels, as well as a much larger dataset whose labels were generated by a weaker, smaller model. To maximize the large model's performance while using both data sources simultaneously, which training objective should they implement?
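One objective commonly discussed for this setting (a sketch for illustration, not necessarily the keyed answer) is a weighted sum of a supervised loss on the verified data and a discounted loss on the weakly labeled data. The function names and the mixing weight `lam` below are assumptions, not from the source:

```python
import math

def cross_entropy(probs, label):
    """Negative log-likelihood of the true label under predicted probs."""
    return -math.log(probs[label])

def combined_loss(gold_batch, weak_batch, lam=0.5):
    """L_total = L_gold + lam * L_weak.

    gold_batch / weak_batch: lists of (predicted_probs, label) pairs.
    lam < 1 discounts the noisier weak labels (hypothetical hyperparameter).
    """
    l_gold = sum(cross_entropy(p, y) for p, y in gold_batch) / len(gold_batch)
    l_weak = sum(cross_entropy(p, y) for p, y in weak_batch) / len(weak_batch)
    return l_gold + lam * l_weak
```

Both data sources contribute to every update, while the weight keeps the large, noisier set from drowning out the small verified one.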

Updated 2025-10-02

Tags: Ch.4 Alignment - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Analysis in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science