Short Answer

Diagnosing LLM Prompting Failures

A developer is trying to get a large language model to solve multi-step physics problems. Their prompt includes one example problem together with only its final numerical answer. However, when given a new problem, the model consistently calculates the wrong answer. Based on this scenario, explain the primary limitation of the developer's example.
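The setup described in the question can be sketched concretely. The snippet below contrasts the developer's answer-only exemplar with an exemplar that also shows the intermediate working (the chain-of-thought style covered in Ch.3 Prompting). The physics problem, its numbers, and the `<new problem>` placeholder are all hypothetical, chosen only to illustrate the two prompt shapes.

```python
# Hypothetical example problem used as the in-context exemplar.
example_problem = (
    "Q: A ball is dropped from a height of 45 m. How long does it take "
    "to reach the ground? (Take g = 10 m/s^2.)"
)

# Style 1: the developer's prompt — only the final numerical answer,
# with no intermediate reasoning for the model to imitate.
answer_only_prompt = (
    f"{example_problem}\n"
    "A: 3 s\n\n"
    "Q: <new problem>\n"
    "A:"
)

# Style 2: the same exemplar with the worked steps spelled out,
# so the model can imitate the step-by-step derivation.
chain_of_thought_prompt = (
    f"{example_problem}\n"
    "A: Using h = (1/2) g t^2, solve for t: "
    "t^2 = 2h/g = (2 * 45) / 10 = 9, so t = 3 s. The answer is 3 s.\n\n"
    "Q: <new problem>\n"
    "A:"
)

print(answer_only_prompt)
print(chain_of_thought_prompt)
```

The only difference between the two prompts is whether the exemplar's answer line shows the intermediate steps; everything else is held fixed.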


Updated 2025-10-10


Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.3 Prompting - Foundations of Large Language Models

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science