Diagnosing Model Instability in a Sentiment Analyzer
A company, 'GenAI Solutions', has built a sentiment analysis tool. They observe that the tool's accuracy fluctuates dramatically depending on the exact instruction given to the underlying language model. For example, using the instruction 'Classify the sentiment of this text:' yields 90% accuracy, while 'Is this review positive or negative?' drops the accuracy to 65% on the same dataset. Based on the principles of building robust predictive systems, explain the fundamental flaw in an approach that relies on a single, fixed instruction. Then, describe how a methodology that formally accounts for the uncertainty in the choice of instruction would lead to a more consistently reliable model.
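The instability described above can be illustrated with a toy sketch of prompt ensembling: instead of trusting one fixed instruction, query the model with several phrasings and take a majority vote. The `mock_classify` function below is a hypothetical stand-in for a real LLM call (in practice this would be an API request); its behavior is contrived purely to simulate the prompt sensitivity the question describes.

```python
from collections import Counter

# Hypothetical stand-in for a real LLM call. Each instruction induces a
# slightly different classifier; the weaker phrasing is deliberately made
# to miss cues, simulating the 90% vs 65% accuracy gap in the scenario.
def mock_classify(instruction: str, text: str) -> str:
    positive_cues = {"great", "love", "excellent"}
    if "positive or negative" in instruction:
        positive_cues = {"great"}  # brittle phrasing -> weaker classifier
    words = set(text.lower().split())
    return "positive" if words & positive_cues else "negative"

# A small pool of instruction phrasings (the first two come from the scenario).
INSTRUCTIONS = [
    "Classify the sentiment of this text:",
    "Is this review positive or negative?",
    "Label the following review's sentiment:",
]

def ensemble_sentiment(text: str) -> str:
    """Majority vote across instructions instead of trusting any single one."""
    votes = Counter(mock_classify(instr, text) for instr in INSTRUCTIONS)
    return votes.most_common(1)[0][0]
```

Here a single unlucky instruction choice flips the label, but the ensemble's vote absorbs that variance, which is the intuition behind treating the instruction as a source of uncertainty rather than a fixed input.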
Tags
- Ch.3 Prompting - Foundations of Large Language Models
- Foundations of Large Language Models
- Foundations of Large Language Models Course
- Computing Sciences
- Evaluation in Bloom's Taxonomy
- Cognitive Psychology
- Psychology
- Social Science
- Empirical Science
- Science
Related
A research team is developing a system to generate summaries of scientific articles. They are concerned that the quality of the summary is highly sensitive to the specific phrasing of the instruction given to the language model. They compare two methods to address this sensitivity:
- Method A: The team manually creates 10 different, high-quality instructions, generates a summary for each, and then averages the results to produce a final summary.
- Method B: The team uses a model that mathematically treats the instruction as a variable and integrates over the entire distribution of all possible instructions to produce a single, final summary.
Based on these descriptions, which method is inherently more robust against variations in instruction phrasing, and why?
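The contrast between the two methods can be made concrete with a sketch of Method B's core computation: treat the instruction t as a latent variable with a prior p(t) and marginalize, p(y | x) = Σ_t p(y | x, t) p(t). Everything below is illustrative — the prompt prior, the candidate outputs, and `mock_output_probs` (a stand-in for the model's per-instruction distribution p(y | x, t)) are assumptions, not a real API.

```python
# Assumed prior over instruction phrasings, p(t). Method A's uniform average
# over 10 hand-written prompts is the special case of a finite uniform prior.
PROMPT_PRIOR = {
    "Summarize the key findings:": 0.5,
    "Give a one-sentence abstract:": 0.3,
    "What did the authors conclude?": 0.2,
}

def mock_output_probs(instruction: str, text: str) -> dict[str, float]:
    """Hypothetical stand-in for p(y | x, t): each instruction yields a
    different distribution over candidate outputs."""
    if "abstract" in instruction:
        return {"candidate_A": 0.4, "candidate_B": 0.6}
    return {"candidate_A": 0.8, "candidate_B": 0.2}

def marginal_probs(text: str) -> dict[str, float]:
    """Integrate out the instruction: p(y | x) = sum_t p(y | x, t) p(t)."""
    out = {"candidate_A": 0.0, "candidate_B": 0.0}
    for instr, prior in PROMPT_PRIOR.items():
        cond = mock_output_probs(instr, text)
        for y, p in cond.items():
            out[y] += prior * p
    return out
```

In practice the space of instructions is too large to enumerate, so Method B is typically approximated by sampling instructions from p(t) (a Monte Carlo estimate), which is why a finite ensemble like Method A can be seen as a crude approximation to the full marginalization.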
Mechanism of Robustness in Bayesian Prompt Ensembling