Learn Before
Multiple Choice

A language model is generating a two-token sequence. At the first step, it computes probabilities for the next token: 'Token A' has a probability of 0.6, and 'Token B' has a probability of 0.4. If the model chooses 'Token A', the most probable subsequent token is 'Token C' (with a conditional probability of 0.5). If the model had chosen 'Token B', the most probable subsequent token would be 'Token D' (with a conditional probability of 0.9). A text generation algorithm is used that, at every step, commits to the single token with the highest immediate probability. Based on this process, which sequence will be generated and why?
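The decoding rule described above (committing to the highest-probability token at each step) is greedy decoding. A minimal sketch of the scenario, using the probabilities stated in the question — the token names come from the question, while the leftover probability mass for the second step is illustrative filler, not given in the original:

```python
# Step-1 distribution over the first token (from the question).
step1 = {"Token A": 0.6, "Token B": 0.4}

# Conditional distributions for the second token given the first.
# Only the top token's probability is given in the question; the
# remaining mass is split over hypothetical filler tokens so that
# each distribution sums to 1 without ties.
step2 = {
    "Token A": {"Token C": 0.5, "<filler-1>": 0.3, "<filler-2>": 0.2},
    "Token B": {"Token D": 0.9, "<filler-1>": 0.1},
}

def greedy_decode():
    """Commit to the single highest-probability token at each step."""
    first = max(step1, key=step1.get)                  # 'Token A' (0.6 > 0.4)
    second = max(step2[first], key=step2[first].get)   # 'Token C' (0.5 is the max)
    return [first, second]

sequence = greedy_decode()
print(sequence)  # ['Token A', 'Token C']

# Joint probabilities show why greedy decoding is not globally optimal here:
p_greedy = step1["Token A"] * step2["Token A"]["Token C"]  # 0.6 * 0.5 = 0.30
p_alt    = step1["Token B"] * step2["Token B"]["Token D"]  # 0.4 * 0.9 = 0.36
print(p_greedy, p_alt)
```

Greedy decoding yields 'Token A' then 'Token C', even though the sequence 'Token B' then 'Token D' has a higher joint probability (0.36 vs. 0.30) — the trade-off this question is probing.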


Updated 2025-09-26

Tags

Ch.5 Inference - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science
