Multiple Choice

A user gives a large language model a complex multi-step logic puzzle. They try two different prompts:

Prompt A: "What is the final answer to the following puzzle? [Puzzle text]" Result A: The model provides a single, incorrect answer.

Prompt B: "Think step-by-step to solve the following puzzle. First, break down the problem into smaller parts. Then, reason through each part before providing the final answer. [Puzzle text]" Result B: The model provides a detailed, step-by-step breakdown of its reasoning, arriving at the correct final answer.
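The two prompting styles in the scenario can be sketched in code. This is a minimal illustration, assuming a generic text-completion function `complete(prompt) -> str` (a hypothetical stub here; in practice you would substitute a real model client):

```python
# Sketch of the two prompting styles from the scenario above.
# `complete` is a hypothetical placeholder, NOT a real library call;
# replace it with your actual LLM client.

PUZZLE = "[Puzzle text]"  # elided in the original question

def complete(prompt: str) -> str:
    """Stub standing in for a real LLM call."""
    return f"[model output for a {len(prompt)}-character prompt]"

# Prompt A: ask directly for the final answer.
prompt_a = f"What is the final answer to the following puzzle? {PUZZLE}"

# Prompt B: prepend step-by-step (chain-of-thought) instructions.
prompt_b = (
    "Think step-by-step to solve the following puzzle. "
    "First, break down the problem into smaller parts. "
    "Then, reason through each part before providing the final answer. "
    f"{PUZZLE}"
)

print(complete(prompt_a))
print(complete(prompt_b))
```

Only the wording of the instruction changes between the two prompts; the puzzle text itself is identical.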

Based on these results, what is the most accurate explanation for the difference in the model's performance?


Updated 2025-10-02


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science