Learn Before
A team is refining a language model's story-generation capabilities. Their primary strategy is to increase the maximum number of tokens the model can produce in a single output, aiming for more comprehensive and detailed narratives. What is the most significant potential downside the team should anticipate as a direct result of only extending the generation length?
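One concrete downside worth anticipating is compute cost: in standard autoregressive decoding, each new token attends to all tokens already in context, so total attention work grows roughly quadratically with generation length. The sketch below is a toy cost model, not a real inference stack; `decode_cost` is a hypothetical helper assuming one unit of work per attended token.

```python
# Illustrative sketch (not a real model): per-token decoding cost grows with
# the number of tokens already in context, so raising max_new_tokens alone
# inflates total attention work roughly quadratically.

def decode_cost(prompt_len: int, max_new_tokens: int) -> int:
    """Total attention operations, assuming each new token attends to every
    previous token (prompt plus tokens generated so far)."""
    return sum(prompt_len + t for t in range(max_new_tokens))

short = decode_cost(prompt_len=100, max_new_tokens=256)
long = decode_cost(prompt_len=100, max_new_tokens=1024)
print(short, long, long / short)  # 4x more tokens -> ~10.7x more attention work
```

Under these assumptions, quadrupling the token budget costs far more than 4x the decode work, on top of any quality issues (repetition, coherence drift) that longer unguided generations tend to exhibit.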
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Critique of Generation Length Strategy
Improving LLM Summarization Quality
Generating and Verifying Thinking Paths