Multiple Choice

A team is developing a small, efficient text-generation model (the 'student') by training it to imitate a much larger, powerful model (the 'teacher'). Their training method requires the student to learn from the full probability distribution the teacher assigns over all possible next words. They discover this is computationally infeasible because their vocabulary contains hundreds of thousands of words, and calculating the training objective for a single example requires summing over this entire vocabulary. Which of the following strategies provides the most practical solution to this specific computational problem while still using the teacher's guidance?
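
For illustration, here is a minimal PyTorch sketch (not part of the original question) contrasting the full-vocabulary distillation loss the question describes with a common top-k truncation of the teacher's distribution; the function names and the values of `k` and `temperature` are illustrative assumptions, not details from the question.

```python
import torch
import torch.nn.functional as F

def full_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """Distillation loss over the full vocabulary: cross-entropy of the
    student against the teacher's softened distribution. The sum runs
    over all |V| tokens -- the bottleneck the question describes."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

def topk_kd_loss(student_logits, teacher_logits, k=64, temperature=2.0):
    """Truncated approximation: keep only the teacher's k most probable
    tokens, renormalize over that support, and match the student there.
    The loss (and its gradient) then involves k terms instead of |V|."""
    t = temperature
    top_vals, top_idx = torch.topk(teacher_logits, k, dim=-1)
    teacher_probs = F.softmax(top_vals / t, dim=-1)          # renormalized over top-k
    student_top = torch.gather(student_logits, -1, top_idx)  # same k tokens
    student_log_probs = F.log_softmax(student_top / t, dim=-1)
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

# Illustrative usage with a vocabulary of 200k tokens (assumed size):
V = 200_000
student_logits = torch.randn(8, V)  # batch of 8 next-token predictions
teacher_logits = torch.randn(8, V)
print(full_kd_loss(student_logits, teacher_logits))
print(topk_kd_loss(student_logits, teacher_logits, k=64))
```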

Updated 2025-09-28

Tags

Ch.3 Prompting - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science