Debugging an In-Context Learning Prompt
A data scientist is using a large language model to classify customer reviews as 'Positive', 'Negative', or 'Neutral'. They provide the model with several examples in the prompt to guide its behavior, but the model's classifications are inconsistent and frequently incorrect. Analyze the provided prompt structure and explain the most likely reason for the model's poor performance.
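For reference, below is a minimal sketch (in Python) of what a consistently formatted few-shot classification prompt for this task could look like. The example reviews, the "Review:/Sentiment:" layout, and the build_prompt helper are illustrative assumptions, not the data scientist's actual prompt.

    # Minimal sketch of a consistently formatted few-shot classification prompt.
    # The demonstrations and layout below are assumptions for illustration only.
    demonstrations = [
        ("The checkout process was quick and the product arrived early.", "Positive"),
        ("The item broke after two days and support never replied.", "Negative"),
        ("The package arrived on the date stated in the order confirmation.", "Neutral"),
    ]

    def build_prompt(new_review: str) -> str:
        lines = ["Classify each customer review as Positive, Negative, or Neutral.", ""]
        # Each demonstration pairs an input with its label in the same format,
        # so the model can infer both the task and the expected label set.
        for review, label in demonstrations:
            lines.append(f"Review: {review}")
            lines.append(f"Sentiment: {label}")
            lines.append("")
        # The new query follows the identical pattern and stops at the label cue.
        lines.append(f"Review: {new_review}")
        lines.append("Sentiment:")
        return "\n".join(lines)

    print(build_prompt("Delivery was fine, but the color was slightly different."))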
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A developer is trying to get a language model to solve multi-step arithmetic word problems by providing it with an example. Consider the example they use: 'Tom has 12 marbles. He wins 7 more marbles in a game with his friend but then loses 5 marbles the next day. His brother gives him another 3 marbles as a gift. How many marbles does Tom have now?' Which statement best analyzes why this is an effective example for this purpose?
When constructing a prompt to guide a language model, a single demonstration is considered complete if it only presents the problem that the model is expected to solve.