Concept

Uniform Prior Assumption in NLP Prompting

While the Bayesian treatment of prompt ensembling relies on a prior distribution over prompts $\Pr(\mathbf{x} \mid p)$ for a given problem $p$, it is common practice in Natural Language Processing (NLP) to assume a non-informative, uniform prior. Consequently, instead of computing the full predictive distribution by marginalizing over all prompts, practitioners construct a set of diverse prompts and combine the resulting outputs with simple combination models, such as averaging the per-prompt output distributions.
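Concretely, the predictive distribution for an output $\mathbf{y}$ marginalizes over prompts, and a uniform prior over a finite set of $K$ constructed prompts $\{\mathbf{x}_1, \dots, \mathbf{x}_K\}$ collapses the marginalization into a plain average (a sketch of the reduction; the symbols $\mathbf{y}$ and $K$ are notation introduced here, not taken from the source):

$$\Pr(\mathbf{y} \mid p) = \sum_{\mathbf{x}} \Pr(\mathbf{y} \mid \mathbf{x}) \, \Pr(\mathbf{x} \mid p) \;\approx\; \frac{1}{K} \sum_{k=1}^{K} \Pr(\mathbf{y} \mid \mathbf{x}_k),$$

since the uniform prior assigns $\Pr(\mathbf{x}_k \mid p) = 1/K$ to each prompt in the set.

A minimal sketch of this uniform-weight combination in Python, assuming a hypothetical model_predict(prompt, candidates) function (not part of any real library) that returns the model's distribution $\Pr(\mathbf{y} \mid \mathbf{x}_k)$ over a fixed list of candidate answers:

import numpy as np

def uniform_prompt_ensemble(prompts, candidates, model_predict):
    """Average per-prompt output distributions under a uniform prior.

    model_predict is a hypothetical stand-in for an LLM call: it takes
    one prompt string plus the candidate answers and returns a NumPy
    array of probabilities Pr(y | x_k) over those candidates.
    """
    # One distribution per prompt, stacked into a (K, num_candidates) array.
    dists = np.stack([model_predict(x, candidates) for x in prompts])
    # Uniform prior Pr(x_k | p) = 1/K turns the marginalization into a mean.
    return dists.mean(axis=0)

For example, three paraphrases of the same question could be passed as prompts, and the ensemble's answer is the candidate with the highest averaged probability.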



Tags

Foundations of Large Language Models

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences