Multiple Choice

An engineer is using a text generation model with a beam search decoding strategy where the beam width is set to 3. The goal is to generate a list of possible sentence completions. At a certain step, the algorithm has produced the following partial sentences (hypotheses) with their associated scores (higher is better):

  1. "The cat sat on the mat" (Score: -0.8) [This is a complete sentence]
  2. "The cat sat on the rug" (Score: -1.2)
  3. "The cat sat on the chair" (Score: -1.5)
  4. "The cat sat on the table" (Score: -1.9)

Given that the first hypothesis is a complete sentence, how does the algorithm proceed to generate a final list of multiple, distinct outputs?
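In standard beam search, a hypothesis that has produced the end-of-sequence token is removed from the active beam and set aside in a pool of finished outputs, while the remaining incomplete hypotheses continue to be expanded. A minimal sketch of one such step, assuming a hypothetical `expand_fn` callback that proposes continuations with updated scores:

```python
import heapq

def beam_step(hypotheses, expand_fn, beam_width=3, eos="<eos>"):
    """One step of beam search with completed-hypothesis handling.

    `hypotheses` is a list of (text, score) pairs; higher score is better.
    `expand_fn(text, score)` is a hypothetical callback returning candidate
    (new_text, new_score) continuations for an incomplete hypothesis.
    """
    finished, candidates = [], []
    for text, score in hypotheses:
        if text.endswith(eos):
            # Complete sentence: move to the finished pool, stop expanding it.
            finished.append((text, score))
        else:
            # Incomplete: generate continuation candidates.
            candidates.extend(expand_fn(text, score))
    # Keep only the top `beam_width` incomplete continuations.
    active = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    return finished, active
```

Repeating such steps until enough hypotheses finish (or a length limit is hit) yields the final list of multiple distinct completions, typically ranked by score.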

Updated 2025-09-29
