Impact of Vocabulary Set Size on Renormalized Probabilities
A language model is generating text. At one step, it considers a restricted set of three possible next tokens: 'the' (original probability 0.6), 'a' (original probability 0.2), and 'an' (original probability 0.1). The probabilities for this set are then rescaled to form a new distribution. Now, imagine a different scenario where a fourth token, 'one' (original probability 0.05), is also included in the restricted set along with the original three. How does the inclusion of the token 'one' affect the new, rescaled probability of the token 'the'? Explain your reasoning without performing the full calculation.
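The effect can be reasoned out directly: each rescaled probability is the original probability divided by the sum over the restricted set, so adding any token with positive probability grows the denominator and shrinks every existing rescaled value. A minimal illustrative sketch (the `renormalize` helper is hypothetical, not from the source):

```python
# Renormalizing a restricted candidate set: divide each probability by
# the set's total mass. Including an extra token increases that total,
# so every previously included token's rescaled probability decreases.

def renormalize(probs):
    """Rescale a dict of token probabilities so they sum to 1."""
    total = sum(probs.values())
    return {tok: p / total for tok, p in probs.items()}

three = renormalize({"the": 0.6, "a": 0.2, "an": 0.1})
four = renormalize({"the": 0.6, "a": 0.2, "an": 0.1, "one": 0.05})

print(round(three["the"], 4))  # 0.6 / 0.90 -> 0.6667
print(round(four["the"], 4))   # 0.6 / 0.95 -> 0.6316
```

So including 'one' lowers the rescaled probability of 'the' (and of every other token already in the set), even though the original probabilities are unchanged.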
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A language model predicts the probabilities for the next word in a sequence. The top four candidates are: 'happy' (0.4), 'sad' (0.2), 'angry' (0.1), and 'joyful' (0.05). A decoding method is applied that restricts the possible choices to only the top three candidates ('happy', 'sad', 'angry'). After the probabilities for this smaller set are rescaled to form a new, valid probability distribution, what is the new probability for the word 'sad'?
Debugging a Sampling Algorithm