Limitations of Probability Maximization in Text Generation
A language model is tasked with completing the sentence: "The inventor of the light bulb was...". The search process, designed to find the sequence with the highest possible conditional log-probability, generates the following output: "The inventor of the light bulb was was was was was...". Explain why a search algorithm strictly aiming to maximize the sequence's probability might produce such a repetitive and unhelpful result, even if it is mathematically "optimal" according to the model's calculations.
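For intuition, here is a minimal, hypothetical sketch of the failure mode. The toy distribution below is invented for illustration (it is not the model in the question), and greedy per-step argmax decoding stands in for the full probability-maximizing search: once the toy model ranks "was" highest after every "was", the decoder locks into the loop, yet the repetitive sequence still accumulates a high total log-probability.

```python
import math

def toy_next_token_probs(prefix):
    """Hypothetical conditional distribution Pr(next word | prefix) -- invented values."""
    if prefix and prefix[-1] == "was":
        # After emitting "was", the toy model makes "was" the single most probable continuation.
        return {"was": 0.50, "Thomas": 0.30, "Edison": 0.20}
    return {"was": 0.90, "invented": 0.10}

prefix = ["The", "inventor", "of", "the", "light", "bulb"]
log_prob = 0.0
for _ in range(6):  # generate six tokens, always taking the most probable next word
    probs = toy_next_token_probs(prefix)
    word, p = max(probs.items(), key=lambda kv: kv[1])  # per-step argmax
    prefix.append(word)
    log_prob += math.log(p)  # sequence log-probability is the sum of step log-probabilities

print(" ".join(prefix))                      # ... light bulb was was was was was was
print(f"total log-probability = {log_prob:.3f}")
```

In this toy setup, every alternative continuation has lower conditional probability at each step, so the repetitive sequence really is the highest-scoring one the search can find: being "optimal" under the model's probabilities is not the same as being useful to a reader.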
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A language model is generating text with the goal of producing the most probable output sequence. It has already generated the phrase 'The best way to learn is by...' and must now decide the next word. The model calculates the following probabilities for the next possible words:
Pr('doing' | 'The best way to learn is by...') = 0.60
Pr('reading' | 'The best way to learn is by...') = 0.30
Pr('sleeping' | 'The best way to learn is by...') = 0.09
Pr('car' | 'The best way to learn is by...') = 0.01
To continue constructing the sequence with the highest possible overall probability, which word should the search process select at this step?
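As a quick worked check (a sketch using the probabilities listed in the question), a search that keeps the partial sequence with the largest product of conditional probabilities, equivalently the largest sum of log-probabilities, simply takes the argmax at this step:

```python
import math

# Conditional probabilities for the next word, copied from the question above.
next_word_probs = {
    "doing": 0.60,
    "reading": 0.30,
    "sleeping": 0.09,
    "car": 0.01,
}

best_word = max(next_word_probs, key=next_word_probs.get)
print(best_word)  # -> 'doing'

# Extending the hypothesis multiplies its probability by Pr(word | prefix),
# i.e. adds log Pr(word | prefix) to the running sequence score.
for word, p in next_word_probs.items():
    print(f"{word:>9}: log Pr = {math.log(p):.3f}")
```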
Evaluating Candidate Sequences in LLM Inference