Learn Before
Ranking Stage in Top-p Sampling
The second stage in the Top-p sampling process involves ranking the candidate tokens generated during the expansion phase. These tokens are sorted in descending order according to their probabilities. For example, the token 'cute' with a probability of 0.34 is ranked highest, followed by 'on' with a probability of 0.32, 'sick' with 0.21, and so on.
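The ranking stage above can be sketched in a few lines of Python. This is a minimal illustration using only the three example tokens named in the text (the remaining "and so on" tokens are omitted); the variable names are illustrative, not part of any particular library.

```python
# Candidate tokens from the expansion phase, with their probabilities
# (the three examples given in the text).
candidates = {"cute": 0.34, "on": 0.32, "sick": 0.21}

# Ranking stage: sort tokens in descending order of probability.
ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

print(ranked)  # [('cute', 0.34), ('on', 0.32), ('sick', 0.21)]
```

The sorted list is what the next stage of Top-p sampling consumes when it accumulates probabilities up to the threshold p.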


Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Ranking Stage in Top-p Sampling
Selection and Sampling Stage in Top-p Sampling
Output Stage in Top-p Sampling
Expansion Stage in Top-p Sampling
A language model is generating text and has calculated the probabilities for the following potential next tokens: mat (0.5), floor (0.3), rug (0.1), and table (0.05). The model is configured to use a sampling method where it first identifies the smallest set of the most probable tokens whose cumulative probability is at least 0.9. It then discards all other tokens and randomly selects the final output from this reduced set. Based on this process, what is the outcome?

A language model is using a probabilistic method to generate the next word in a sentence. Arrange the following descriptions of the steps involved in this method into the correct chronological order.
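The process the first question describes (Top-p, or nucleus, sampling with p = 0.9) can be sketched as follows. This is a minimal illustration under the question's stated probabilities; the names `nucleus` and `cumulative` are illustrative.

```python
import random

# Candidate tokens and probabilities from the question.
tokens = [("mat", 0.5), ("floor", 0.3), ("rug", 0.1), ("table", 0.05)]

# Rank in descending order of probability.
tokens.sort(key=lambda kv: kv[1], reverse=True)

# Accumulate probabilities until the cumulative total reaches 0.9;
# this yields the smallest set of most probable tokens.
nucleus, cumulative = [], 0.0
for tok, p in tokens:
    nucleus.append((tok, p))
    cumulative += p
    if cumulative >= 0.9:
        break

# All tokens outside the nucleus are discarded; the final output is
# drawn at random from the reduced set, weighted by probability.
choice = random.choices(
    [t for t, _ in nucleus], weights=[p for _, p in nucleus]
)[0]
```

Here the nucleus ends up containing mat, floor, and rug (0.5 + 0.3 + 0.1 = 0.9), so table is excluded before the random draw.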
Applying Probabilistic Text Generation
Learn After
A language model has generated the following potential next tokens and their associated probabilities. Arrange these tokens in the correct order as they would appear after being sorted in descending order based on their probability.
A language model has produced a set of potential next tokens with their corresponding probabilities: 'cat' (0.22), 'dog' (0.35), 'bird' (0.15), 'fish' (0.18), and 'hamster' (0.10). Which option below correctly shows these tokens sorted in descending order of their probability?
Debugging a Text Generation Pipeline