Case Study

Troubleshooting a Prompting Strategy

A developer wants a large language model to extract the main subject of a user's question and return it as a single keyword. They provide the model with several worked examples in the prompt (few-shot prompting) before giving it a new, unseen question, yet the model's output for the new question is unreliable. Analyze the examples provided and identify the most significant flaw that prevents the model from learning the desired task. Explain your reasoning.
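To make the scenario concrete, here is a minimal sketch of how such a few-shot prompt might be assembled. The `build_prompt` helper, the `Question:`/`Keyword:` labels, and the sample questions are all hypothetical, not taken from the case study; the point is that the model infers the task from the pattern the demonstrations establish, so demonstrations with inconsistent labels or output formats give it no single pattern to follow.

```python
def build_prompt(examples, new_question):
    """Assemble a few-shot prompt using one uniform Question/Keyword pattern."""
    lines = []
    for question, keyword in examples:
        lines.append(f"Question: {question}")
        lines.append(f"Keyword: {keyword}")
    # End with the new question and a bare label, so the model's most
    # natural continuation is a single keyword in the demonstrated format.
    lines.append(f"Question: {new_question}")
    lines.append("Keyword:")
    return "\n".join(lines)

# Consistent demonstrations: same labels, same single-keyword output format.
good_examples = [
    ("What is the capital of France?", "France"),
    ("How tall is Mount Everest?", "Mount Everest"),
]

# Contrast: inconsistent demonstrations like the ones below mix labels
# ("Q:" vs "Question:") and output styles (a full sentence vs a keyword),
# so the model cannot infer one reliable output format.
bad_examples_text = (
    "Q: What is the capital of France?\n"
    "The subject is France.\n"
    "Question: How tall is Mount Everest?\n"
    "Keyword: Mount Everest\n"
)

print(build_prompt(good_examples, "When did the Berlin Wall fall?"))
```

A reasonable analysis of the case study would check the provided examples against exactly this kind of uniformity: identical labels, identical delimiters, and a single-keyword answer in every demonstration.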

Updated 2025-09-26

Tags: Ch.4 Alignment - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Ch.5 Inference - Foundations of Large Language Models; Ch.1 Pre-training - Foundations of Large Language Models; Analysis in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science