Multiple Choice

A user is trying to get a Large Language Model (LLM) to solve a complex word problem that involves multiple calculations. Their initial prompt, 'What is the answer to this problem? [Problem text]', results in a quick but incorrect numerical answer. The user then revises the prompt to: 'First, break down the problem into the necessary steps. Then, solve each step, showing your work. Finally, state the final answer. [Problem text]'. This revised prompt leads to a correct solution. Which principle of interacting with LLMs does this scenario best illustrate?
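The revision described above is an instance of step-by-step (chain-of-thought) prompting: asking the model to decompose the problem before answering. A minimal sketch of how such a prompt template might be built, where `build_cot_prompt` is a hypothetical helper name, not part of any real library:

```python
def build_cot_prompt(problem_text: str) -> str:
    """Wrap a word problem in step-by-step (chain-of-thought) instructions."""
    return (
        "First, break down the problem into the necessary steps. "
        "Then, solve each step, showing your work. "
        "Finally, state the final answer.\n\n"
        + problem_text
    )

# Example usage with a placeholder problem text.
prompt = build_cot_prompt("[Problem text]")
print(prompt)
```

The template simply prepends the decomposition instructions to the problem, mirroring the user's revised prompt in the scenario.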


Updated 2025-10-01


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science