Learn Before
Multiple Choice

A developer is trying to improve a language model's ability to solve multi-step word problems. They compare two prompting strategies.

Strategy 1: Provide the model with a new word problem and ask for the final answer directly.

Strategy 2: Provide the model with a new word problem, but first show it an example of a similar problem where the solution is explicitly broken down into logical, sequential steps before reaching the final conclusion.

Why is Strategy 2 generally more effective for improving the model's reasoning on complex tasks?
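The two strategies can be sketched as prompt templates. This is a minimal illustration; the worked exemplar problem and its wording are invented for demonstration, not taken from the question above:

```python
# Sketch of the two prompting strategies from the question.
# The exemplar word problem below is illustrative only.

NEW_PROBLEM = (
    "A baker sells 3 trays of 12 muffins and then bakes 18 more. "
    "How many muffins does the baker have in total?"
)

def direct_prompt(problem: str) -> str:
    """Strategy 1: ask for the final answer directly, no worked example."""
    return f"Q: {problem}\nA:"

# Strategy 2 prepends a similar problem whose solution is explicitly
# broken into sequential steps (few-shot chain-of-thought prompting).
COT_EXEMPLAR = (
    "Q: A shop stocks 2 boxes of 10 pens and then receives 5 more pens. "
    "How many pens does it have in total?\n"
    "A: Step 1: 2 boxes of 10 pens is 2 * 10 = 20 pens.\n"
    "Step 2: Adding the 5 new pens gives 20 + 5 = 25.\n"
    "The answer is 25.\n"
)

def cot_prompt(problem: str) -> str:
    """Strategy 2: show a step-by-step exemplar before the new problem."""
    return COT_EXEMPLAR + "\n" + f"Q: {problem}\nA:"

print(direct_prompt(NEW_PROBLEM))
print("---")
print(cot_prompt(NEW_PROBLEM))
```

The only difference between the two prompts is the prepended worked example; the step-by-step demonstration conditions the model to generate intermediate reasoning before committing to a final answer.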


Updated 2025-09-26

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.3 Prompting - Foundations of Large Language Models

Ch.5 Inference - Foundations of Large Language Models

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science
