Learn Before
Ranking Stage in K-Best Selection
The ranking stage, labeled step ② in the process, follows the expansion of candidate tokens. In this step, all generated candidates are sorted in descending order of probability. For example, the candidates 'cute' (Pr=0.34), 'on' (Pr=0.32), 'sick' (Pr=0.21), 'are' (Pr=0.12), and '.' (Pr=0.01) are ranked in that order. Ranking prepares for the final selection step, in which the top candidates are kept and the rest are discarded, or 'pruned'.
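The ranking step described above can be sketched in a few lines of Python. This is a minimal illustration, not the book's implementation; the candidate tokens and probabilities are the ones from the example, and the variable names are hypothetical.

```python
# Candidate tokens and probabilities from the example above (assumed inputs).
candidates = {"cute": 0.34, "on": 0.32, "sick": 0.21, "are": 0.12, ".": 0.01}

# Ranking stage (step ②): sort candidates in descending order of probability.
ranked = sorted(candidates.items(), key=lambda kv: kv[1], reverse=True)

print(ranked)
# The subsequent selection step would keep only the top-k entries of
# `ranked` (e.g. ranked[:k]) and prune the rest.
```

In the selection step that follows, slicing the ranked list with `ranked[:k]` keeps the k most probable candidates and implicitly prunes the others.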

Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Example of K-Best Selection with a Beam Width of 3
Ranking Stage in Beam Search
Expansion Stage in K-Best Selection
A language model generates the following potential next words and their corresponding probabilities: 'house' (0.25), 'car' (0.40), 'boat' (0.15), 'plane' (0.18), and 'train' (0.02). If a selection process is used to keep only the top 3 most probable words, which set of words will be chosen?
A language model is generating the next word in a sentence. Arrange the following actions into the correct sequence for selecting the most promising candidates.
Ranking Stage in K-Best Selection
Output Stage in K-Best Selection
Impact of Selection Parameter on Text Generation
Learn After
A language model is generating the next word in a sequence. After an initial step, it has produced a set of candidate words, each with an associated probability. Arrange these candidates in the correct order as they would appear after the ranking stage, from highest probability to lowest.
A language model is generating the next token for a sequence and has produced the following candidate tokens with their associated probabilities: 'mat' (0.45), 'floor' (0.20), 'couch' (0.30), 'window' (0.05). After the ranking stage is completed, which of the following statements best describes the state of these candidates?
Debugging the Token Selection Process