Short Answer

Determining Individual Predictions in Prompt Ensembling

A data scientist is using a language model to classify a user comment: 'The interface is a bit clunky, but the features are powerful.' They use two different prompts to get a prediction.

  • Prompt 1 (x₁): 'Classify the sentiment of this comment: ...'
  • Prompt 2 (x₂): 'Is the following user feedback positive, negative, or neutral? ...'

The model returns the following conditional probabilities for each prompt:

  • For x₁: Pr(Positive|x₁)=0.6, Pr(Negative|x₁)=0.3, Pr(Neutral|x₁)=0.1
  • For x₂: Pr(Positive|x₂)=0.5, Pr(Negative|x₂)=0.2, Pr(Neutral|x₂)=0.3

Applying the rule ŷᵢ = argmax_y Pr(y|xᵢ), i.e., choosing the label with the highest conditional probability under each prompt, what is the individual prediction (ŷᵢ) for Prompt 1 and Prompt 2?
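The per-prompt selection rule can be sketched in a few lines of Python. The dictionaries below simply encode the probability tables given above; the variable names are illustrative, not from any particular library:

```python
# Conditional probabilities reported by the model for each prompt (from the question)
probs = {
    "x1": {"Positive": 0.6, "Negative": 0.3, "Neutral": 0.1},
    "x2": {"Positive": 0.5, "Negative": 0.2, "Neutral": 0.3},
}

# Individual prediction per prompt: y_hat_i = argmax over labels y of Pr(y | x_i)
predictions = {prompt: max(dist, key=dist.get) for prompt, dist in probs.items()}

print(predictions)  # both prompts select "Positive" as the highest-probability label
```

Since Pr(Positive|x₁)=0.6 and Pr(Positive|x₂)=0.5 are the largest entries in their respective distributions, both individual predictions are ŷ₁ = ŷ₂ = Positive.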


Updated 2025-10-03


Tags

  • Ch.3 Prompting - Foundations of Large Language Models
  • Foundations of Large Language Models
  • Foundations of Large Language Models Course
  • Computing Sciences
  • Application in Bloom's Taxonomy
  • Cognitive Psychology
  • Psychology
  • Social Science
  • Empirical Science
  • Science