Learn Before
Conceptual Shift in Prompt Handling
A team is using a large language model for a classification task. Their current method involves trying out a few hand-crafted prompts and aggregating the model's outputs. A consultant suggests they should instead adopt a framework where the prompt is treated as an unobserved variable, and the final prediction is derived by considering the entire space of possible prompts. Contrast these two approaches. Specifically, explain the fundamental difference in how the 'prompt' is conceptualized in the consultant's suggested framework compared to the team's current method.
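The shift the consultant describes can be sketched numerically: instead of picking one prompt, treat the prompt z as a latent variable with a prior p(z) and marginalize it out to get p(y | x) = Σ_z p(y | x, z) p(z). The following is a minimal illustrative sketch, not a real pipeline — the prompt strings and per-prompt probabilities are made-up stand-ins for what an LLM call would return.

```python
# Hypothetical per-prompt predictive distributions p(y | x, z) for a
# binary classification task (labels 0 and 1). In practice each entry
# would come from querying the LLM with prompt z on input x.
likelihoods = {
    "Classify the sentiment:":    [0.70, 0.30],
    "Is this review positive?":   [0.60, 0.40],
    "Label the text's polarity:": [0.80, 0.20],
}

# Prior p(z) over prompts -- uniform here, matching the common
# uniform-prior assumption in prompt ensembling.
prior = {z: 1.0 / len(likelihoods) for z in likelihoods}

def marginal_prediction(likelihoods, prior):
    """Bayesian predictive distribution: p(y | x) = sum_z p(y | x, z) p(z)."""
    num_labels = len(next(iter(likelihoods.values())))
    p_y = [0.0] * num_labels
    for z, p_y_given_z in likelihoods.items():
        for y in range(num_labels):
            p_y[y] += p_y_given_z[y] * prior[z]
    return p_y

print(marginal_prediction(likelihoods, prior))
```

Under a uniform prior this reduces to averaging the per-prompt distributions, which is exactly what the team's ad-hoc aggregation approximates; the Bayesian framing makes the prior explicit and, in principle, extends the sum over the entire prompt space rather than a small hand-picked set.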
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Computing Sciences
Foundations of Large Language Models Course
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Formula for the Predictive Distribution in Bayesian Prompt Ensembling
Robustness of the Bayesian Prompt Ensembling Model
An AI development team observes that their model's performance on a specific problem is highly dependent on the exact phrasing of the input prompt. Their current strategy involves testing a small, fixed set of prompts and aggregating the outputs. To build a more fundamentally robust system that is less sensitive to these variations, which of the following represents the most effective conceptual shift in their approach?
Conceptual Shift in Prompt Handling
According to the Bayesian view of prompt ensembling, the process is fundamentally about identifying the single best prompt that maximizes the likelihood of the desired output for a given problem.
Uniform Prior Assumption in NLP Prompting