A developer is prompting a large language model to solve multi-step logic puzzles. They are comparing two few-shot prompting strategies. Strategy A provides examples showing only the puzzle and the final answer. Strategy B provides examples showing the puzzle, a step-by-step reasoning process, and then the final answer. Which strategy is more likely to yield consistently accurate results for new, complex puzzles, and why?
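The contrast between the two strategies can be sketched as prompt-construction code. This is a minimal illustration, not the developer's actual setup: the puzzle texts, the worked reasoning, and the function names are all hypothetical, and the only point is the structural difference between an answer-only demonstration and a demonstration that includes intermediate reasoning steps (chain-of-thought style).

```python
# Hypothetical demonstrations for each strategy; puzzle content is illustrative.

# Strategy A: each demonstration pairs a puzzle with only its final answer.
ANSWER_ONLY_EXAMPLES = [
    ("If all bloops are razzies and all razzies are lazzies, "
     "are all bloops lazzies?", "Yes"),
]

# Strategy B: each demonstration also shows the step-by-step reasoning.
CHAIN_OF_THOUGHT_EXAMPLES = [
    ("If all bloops are razzies and all razzies are lazzies, "
     "are all bloops lazzies?",
     "Every bloop is a razzie. Every razzie is a lazzie. "
     "So, by transitivity, every bloop is a lazzie.",
     "Yes"),
]


def build_prompt_a(puzzle: str) -> str:
    """Strategy A: demonstrations contain only puzzle and final answer."""
    parts = [f"Puzzle: {q}\nAnswer: {a}" for q, a in ANSWER_ONLY_EXAMPLES]
    parts.append(f"Puzzle: {puzzle}\nAnswer:")
    return "\n\n".join(parts)


def build_prompt_b(puzzle: str) -> str:
    """Strategy B: demonstrations include reasoning before the answer,
    so the model is cued to produce its own reasoning for the new puzzle."""
    parts = [f"Puzzle: {q}\nReasoning: {r}\nAnswer: {a}"
             for q, r, a in CHAIN_OF_THOUGHT_EXAMPLES]
    parts.append(f"Puzzle: {puzzle}\nReasoning:")
    return "\n\n".join(parts)


new_puzzle = "If no glorks are fleebs and Tim is a glork, is Tim a fleeb?"
print(build_prompt_b(new_puzzle))
```

Note that prompt B ends with `Reasoning:` rather than `Answer:`, so the model is steered to generate intermediate steps before committing to an answer; this is the structural difference the question asks the reader to evaluate.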
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Example of a Mathematical Reasoning Task for LLMs
Constructing a Few-Shot Prompt for Multi-Step Reasoning
Limitation of Question-Answer Pair Demonstrations in Few-Shot Prompting
Diagnosing Prompting Failures in Multi-Step Tasks