Case Study

Impact of Search Width on Text Generation

A language model is generating a sentence completion. In one configuration (Scenario A), it considers multiple potential next words but keeps only the single most probable word at each step to continue the sentence. In another configuration (Scenario B), it keeps the top 3 most probable words at each step. Scenario A produces the sentence 'The cat sat on the mat.' Scenario B produces the more creative sentence 'The cat sat on the windowsill.' What is the primary drawback of the process used in Scenario A, and how does it relate to which candidate words are kept versus discarded at each step?
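Scenario A describes greedy decoding and Scenario B a beam search of width 3. A minimal sketch of both, using an invented toy next-word table (every word and probability below is hypothetical, chosen only to make the contrast visible), shows how greedy decoding's discard of all but the top candidate at each step can lock it out of a sequence with higher overall probability:

```python
import heapq

# Hypothetical conditional next-word distributions for a toy model.
# All probabilities are invented for illustration.
NEXT = {
    ("on",): {"the": 0.6, "a": 0.4},
    ("on", "the"): {"mat": 0.5, "rug": 0.5},
    ("on", "a"): {"windowsill": 0.95, "chair": 0.05},
}

def greedy(prefix, steps):
    """Keep only the single most probable word at each step (Scenario A)."""
    seq, score = list(prefix), 1.0
    for _ in range(steps):
        word, p = max(NEXT[tuple(seq)].items(), key=lambda kv: kv[1])
        seq.append(word)
        score *= p
    return seq, score

def beam_search(prefix, steps, width=3):
    """Keep the `width` highest-scoring partial sequences at each step (Scenario B)."""
    beams = [(1.0, list(prefix))]
    for _ in range(steps):
        candidates = []
        for score, seq in beams:
            for word, p in NEXT[tuple(seq)].items():
                candidates.append((score * p, seq + [word]))
        beams = heapq.nlargest(width, candidates, key=lambda c: c[0])
    return beams[0]  # (score, sequence) of the best surviving beam

# Greedy commits to "the" (0.6) and ends at "the mat": 0.6 * 0.5 = 0.30.
# Beam search also keeps "a" (0.4), whose continuation is far more
# probable: "a windowsill" scores 0.4 * 0.95 = 0.38, beating greedy.
print(greedy(("on",), 2))
print(beam_search(("on",), 2, width=3))
```

The drawback this illustrates: greedy decoding is locally optimal, so a candidate discarded early is gone forever, even when its continuations would have produced a higher joint probability than the path greedy committed to.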


Updated 2025-10-05


Tags

Ch.5 Inference - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science