Learn Before
Applying Probabilistic Text Generation
Based on the provided scenario, identify which tokens will be included in the final set from which the next word is sampled. Explain your reasoning by showing how the cumulative probability is calculated.
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Ranking Stage in Top-p Sampling
Selection and Sampling Stage in Top-p Sampling
Output Stage in Top-p Sampling
Expansion Stage in Top-p Sampling
A language model is generating text and has calculated the probabilities for the following potential next tokens:
mat (0.5), floor (0.3), rug (0.1), and table (0.05). The model is configured to use a sampling method where it first identifies the smallest set of the most probable tokens whose cumulative probability is at least 0.9. It then discards all other tokens and randomly selects the final output from this reduced set. Based on this process, what is the outcome?

A language model is using a probabilistic method to generate the next word in a sentence. Arrange the following descriptions of the steps involved in this method into the correct chronological order.
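The selection process in the scenario can be sketched in Python. This is a minimal illustration, not the platform's reference answer: the token probabilities and the 0.9 threshold come from the scenario above, while the function name `top_p_filter` is illustrative.

```python
import random

def top_p_filter(probs, p=0.9):
    """Return the smallest set of most-probable tokens whose
    cumulative probability is at least p (top-p / nucleus sampling)."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for token, prob in ranked:
        nucleus.append((token, prob))
        cumulative += prob
        if cumulative >= p:  # stop as soon as the threshold is reached
            break
    return nucleus

# Probabilities from the scenario
probs = {"mat": 0.5, "floor": 0.3, "rug": 0.1, "table": 0.05}
nucleus = top_p_filter(probs, p=0.9)

# Cumulative sums: 0.5 -> 0.8 -> 0.9, so "table" is discarded
print([token for token, _ in nucleus])  # ['mat', 'floor', 'rug']

# The next token is then sampled at random from the reduced set,
# weighted by the surviving probabilities
tokens, weights = zip(*nucleus)
next_token = random.choices(tokens, weights=weights)[0]
```

Because 0.5 + 0.3 + 0.1 = 0.9 already meets the threshold, the final set is {mat, floor, rug}; `table` is excluded before sampling.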