A developer is using a large language model to solve a complex, multi-step reasoning problem. The goal is for the model to first break the problem down into a sequence of simpler sub-problems and then solve them in order. The developer provides the model with the complex problem and the simple instruction: 'Here is a problem. Solve it.' The model attempts to answer directly but fails. Which of the following best explains why the model failed to break the problem down as intended?
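The scenario above hinges on the instruction itself: a bare "Solve it" gives the model no signal to decompose first. A minimal sketch (hypothetical helper name, no real API calls) of how the instruction could instead make the decompose-then-solve structure explicit:

```python
def build_decomposition_prompt(problem: str) -> str:
    """Build a least-to-most style prompt: rather than 'Solve it',
    explicitly instruct the model to list sub-problems first and
    solve them in order, carrying each answer forward as context."""
    return (
        "Break the following problem into a numbered list of simpler "
        "sub-problems. Then solve the sub-problems in order, using each "
        "answer as context for the next, and state the final answer last.\n\n"
        f"Problem: {problem}"
    )

# Example usage: the resulting string would be sent to the model
# in place of the original 'Here is a problem. Solve it.' instruction.
prompt = build_decomposition_prompt(
    "A train departs at 3:15 pm and the trip takes 2 h 50 min. "
    "When does it arrive?"
)
print(prompt)
```

This only illustrates the prompt construction; the choice of wording and the example problem are assumptions, not taken from the course material.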
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of an Instructional Prompt in a Few-Shot Setting for Sub-Problem Decomposition
Example of LLM Generating Sub-Problems for a Duration Question
Sequential Sub-Problem Solving with Contextual QA Pairs
A developer wants to guide a Large Language Model to break down a complex problem into simpler sub-problems. Arrange the following components into the most effective and logical sequence for a one-shot prompt to accomplish this task.
Guiding an LLM for Problem Decomposition
Formula for Least-to-Most Sub-Problem Generation