Learn Before
Applying a Probabilistic Filtering Method
A language model is generating text and has produced the following five potential next words with their associated probabilities: 'run' (0.40), 'walk' (0.25), 'jump' (0.20), 'crawl' (0.10), and 'sprint' (0.05). If the model is configured to select from only the top 3 most likely options before making its final random choice, which specific words will be included in the final selection pool?
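The filtering step described in the question can be sketched in a few lines of Python. This is a minimal illustration, not any particular library's implementation; the word probabilities are taken directly from the question and k=3 is the configured cutoff.

```python
# Candidate next words and their probabilities, as given in the question.
probs = {"run": 0.40, "walk": 0.25, "jump": 0.20, "crawl": 0.10, "sprint": 0.05}
k = 3  # keep only the top 3 most likely options

# Rank words by probability (descending) and keep the top k as the selection pool.
ranked = sorted(probs.items(), key=lambda item: item[1], reverse=True)
selection_pool = dict(ranked[:k])

print(selection_pool)  # {'run': 0.4, 'walk': 0.25, 'jump': 0.2}
```

The final random choice is then drawn only from 'run', 'walk', and 'jump'; 'crawl' and 'sprint' are pruned before sampling.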
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Expansion Stage in Top-k Sampling
Ranking and Pruning Stage in Top-k Sampling
A language model is generating the next word in a sentence and has calculated the probabilities for five potential words: 'house' (0.4), 'car' (0.3), 'boat' (0.15), 'plane' (0.1), and 'train' (0.05). The model uses a sampling method where it first ranks these words by probability, keeps only a specific number of the top-ranked words, renormalizes their probabilities to sum to 1, and then samples from this smaller set. How would decreasing the number of top-ranked words kept (e.g., from 4 to 2) most likely affect the generated text over time?
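The rank-prune-renormalize-sample procedure this question describes can be sketched as follows. This is a hedged illustration using the question's probabilities, with k=2 as the smaller cutoff it mentions; it is not any specific model's implementation.

```python
import random

# Candidate words and probabilities from the question.
probs = {"house": 0.4, "car": 0.3, "boat": 0.15, "plane": 0.1, "train": 0.05}
k = 2  # the reduced number of top-ranked words kept

# Step 1: rank by probability. Step 2: prune to the top k.
ranked = sorted(probs.items(), key=lambda item: item[1], reverse=True)
kept = ranked[:k]

# Step 3: renormalize the kept probabilities so they sum to 1.
total = sum(p for _, p in kept)
renormalized = {word: p / total for word, p in kept}
# With k=2 only 'house' and 'car' survive: 0.4/0.7 ≈ 0.571, 0.3/0.7 ≈ 0.429

# Step 4: sample the next word from the pruned, renormalized set.
next_word = random.choices(list(renormalized), weights=renormalized.values())[0]
```

Shrinking k concentrates all sampling mass on the few highest-probability words, so over time the generated text tends to become more predictable and less varied.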
A language model is using a specific decoding method to generate the next token in a sequence. Arrange the following actions into the correct chronological order.
Ranking Stage in Top-k Sampling
Selection and Sampling Stage in Top-k Sampling
Output Stage in Top-k Sampling