Calculating Predictive Probability with Prompt Priors
A research team is using a language model to solve a complex reasoning problem, p. They have two different prompts they can use, x_A and x_B. Based on prior experience, they believe prompt x_A is much more likely to be suitable for this type of problem than prompt x_B. After running the model, they want to determine the overall probability of getting the specific correct answer, y_correct. Using the principles of probabilistic integration over different information sources, calculate the final predictive probability for the correct answer, Pr(y_correct|p). Show your work.
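Since the question states no numeric values, the calculation can only be illustrated under assumed ones. The sketch below uses hypothetical numbers (a prior of 0.9/0.1 reflecting the team's belief that x_A is much more suitable, and made-up per-prompt likelihoods) to show the discrete form of the marginalization Pr(y_correct|p) = Σ_x Pr(y_correct|x) Pr(x|p):

```python
# Discrete marginalization over the two candidate prompts:
#   Pr(y_correct|p) = sum_x Pr(y_correct|x) * Pr(x|p)
# All numeric values are hypothetical, chosen only to illustrate
# the arithmetic; the question itself supplies none.

prior = {"x_A": 0.9, "x_B": 0.1}        # assumed Pr(x|p): x_A strongly favored
likelihood = {"x_A": 0.8, "x_B": 0.3}   # assumed Pr(y_correct|x) per prompt

pr_y_given_p = sum(likelihood[x] * prior[x] for x in prior)
print(pr_y_given_p)  # 0.8*0.9 + 0.3*0.1 = 0.75
```

Note how the heavily weighted prior makes the final probability track the favored prompt's likelihood: the answer lands near 0.8, not near the 0.55 midpoint of the two likelihoods.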
Tags
Ch.3 Prompting - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Computational Infeasibility of the Bayesian Predictive Distribution Integral
A team is using a probabilistic method to combine the outputs from a language model for a variety of different prompts (x) to solve a single problem (p). The final probability of a specific output (y) is calculated by integrating over all possible prompts. The formula for this is: Pr(y|p) = ∫ Pr(y|x) Pr(x|p) dx. In this formula, Pr(y|x) is the model's likelihood of the output given a prompt, and Pr(x|p) is a prior distribution representing the assumed suitability of a prompt for the problem. How would the calculation of Pr(y|p) be affected if the prior distribution Pr(x|p) was assumed to be uniform, meaning every possible prompt is considered equally suitable?
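With a uniform prior over N candidate prompts, Pr(x|p) = 1/N for every x, so the integral (sum, in the discrete case) collapses to the plain unweighted average of the per-prompt likelihoods: no prompt's output is favored over any other's. A minimal sketch with hypothetical likelihood values:

```python
# Under a uniform prior Pr(x|p) = 1/N over N candidate prompts,
#   Pr(y|p) = sum_x Pr(y|x) * (1/N)
# which equals the plain average of the per-prompt likelihoods.
# The likelihood values below are hypothetical.

likelihoods = [0.8, 0.3, 0.5, 0.6]   # assumed Pr(y|x) for four prompts
n = len(likelihoods)

uniform_prior = [1.0 / n] * n
weighted = sum(l * w for l, w in zip(likelihoods, uniform_prior))
plain_average = sum(likelihoods) / n

print(weighted, plain_average)  # both 0.55
```

The weighted sum and the plain average coincide exactly, which is the point: a uniform prior removes any prior-driven preference among prompts, leaving the model's own per-prompt likelihoods to determine Pr(y|p) equally.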
Calculating Predictive Probability with Prompt Priors
A probabilistic approach to combining outputs from different prompts for a single problem involves the following formula: Pr(y|p) = ∫ Pr(y|x) Pr(x|p) dx. Match each mathematical term from the formula with its correct conceptual description.