Enhancing Reasoning Output in a Language Model
An engineer is using a pre-trained language model to solve multi-step mathematical word problems. The model consistently provides the correct final numerical answer but fails to output the intermediate calculations and logical steps it used to arrive at the solution. This makes it impossible to verify the model's reasoning process. What is one specific adjustment the engineer can make to the text generation process itself (without retraining the model) to encourage the model to produce a more detailed, step-by-step output? Explain the rationale behind your suggested adjustment.
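One family of decoding-time adjustments is to constrain or penalize the end-of-sequence (EOS) token so the model cannot terminate before emitting intermediate steps. The sketch below is a minimal, self-contained illustration of that idea using a toy vocabulary and greedy decoding; the token ids, scores, and function names are invented for the example and do not come from any particular library.

```python
import math

EOS = 0  # hypothetical end-of-sequence token id in a toy vocabulary

def constrain_logits(logits, generated_len, min_new_tokens, eos_id=EOS):
    """Return a copy of `logits` with the EOS token masked out until at
    least `min_new_tokens` tokens have been generated. Masking EOS forces
    decoding to continue, giving the model room to emit intermediate
    calculations instead of jumping straight to the final answer."""
    out = list(logits)
    if generated_len < min_new_tokens:
        out[eos_id] = -math.inf  # EOS can never be the argmax
    return out

def greedy_token(logits):
    """Pick the highest-scoring token id (greedy decoding)."""
    return max(range(len(logits)), key=lambda i: logits[i])

# Toy example: EOS (id 0) has the highest raw score, so unconstrained
# greedy decoding would stop immediately.
raw = [2.0, 1.5, 0.5]
print(greedy_token(raw))                           # 0 -> stops at once
print(greedy_token(constrain_logits(raw, 3, 10)))  # 1 -> forced to continue
```

Production libraries expose the same mechanism through generation parameters such as a minimum-new-tokens setting or a length penalty; the rationale is identical: the adjustment acts purely on the decoding process, so no retraining is required.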
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Enhancing Reasoning Output in a Language Model
An engineer is using a large language model to generate detailed, step-by-step tutorials for a complex software library. They find that the model's generated tutorials are accurate but often too concise, omitting crucial explanatory details. To elicit a more thorough and explicit reasoning path in the output, which of the following decoding adjustments is the most direct and effective strategy?
Mechanism of Decoding Penalties for Reasoning