Short Answer

Dynamic Candidate Set in Probabilistic Text Generation

A text generation model uses nucleus (top-p) sampling as its decoding method: it selects the next word from the smallest set of most likely words whose combined probability exceeds a fixed threshold, p = 0.9. Describe two different scenarios for the model's predicted word probabilities that would result in (a) a very small set of candidate words and (b) a very large set of candidate words, even though the threshold p is the same in both cases.
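The effect the question asks about can be illustrated concretely. Below is a minimal sketch of the candidate-set selection step (the helper name `top_p_set` and the example distributions are illustrative, not from the source): a peaked distribution yields a tiny set, while a flat distribution needs many words to exceed the same threshold.

```python
import numpy as np

def top_p_set(probs, p=0.9):
    """Return the smallest set of word indices whose cumulative probability exceeds p."""
    order = np.argsort(probs)[::-1]      # word indices sorted by descending probability
    cum = np.cumsum(probs[order])
    k = np.searchsorted(cum, p) + 1      # smallest prefix whose cumulative prob exceeds p
    return order[:k]

# (a) Peaked distribution: one word dominates, so the candidate set is tiny.
peaked = np.array([0.92, 0.03, 0.02, 0.01, 0.01, 0.01])
print(len(top_p_set(peaked)))   # → 1 (the top word alone already exceeds p = 0.9)

# (b) Flat distribution: probability mass is spread over many words,
# so many words are needed before the cumulative sum exceeds p.
flat = np.full(64, 1 / 64)
print(len(top_p_set(flat)))     # → 58 (58/64 ≈ 0.906 is the first sum above 0.9)
```

Note that the threshold p is identical in both calls; only the shape of the predicted distribution changes the set size.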


Updated 2025-10-06


Tags

Data Science

Ch.5 Inference - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science