Learn Before
A language model is used for a sentiment classification task. To improve reliability, two different instructions are given to the model for the same input text, resulting in two sets of output probabilities for the possible classes (Positive, Neutral, Negative).
- Output from Instruction 1: {Positive: 0.7, Neutral: 0.2, Negative: 0.1}
- Output from Instruction 2: {Positive: 0.5, Neutral: 0.4, Negative: 0.1}
If you combine these two outputs by taking the simple average of their probabilities for each class, what is the final combined probability for the 'Positive' class?
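The averaging described above can be checked with a short sketch (a minimal illustration, not platform code; the variable names are my own):

```python
# Class probabilities from the two instructions, as given in the question.
out1 = {"Positive": 0.7, "Neutral": 0.2, "Negative": 0.1}
out2 = {"Positive": 0.5, "Neutral": 0.4, "Negative": 0.1}

# Simple (unweighted) average of the two distributions, per class.
avg = {c: (out1[c] + out2[c]) / 2 for c in out1}
print(avg["Positive"])  # (0.7 + 0.5) / 2 = 0.6
```

Because both inputs are valid probability distributions, their simple average also sums to 1, so no renormalization is needed.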
Tags
Data Science
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
A language model is used for a sentiment classification task. To improve reliability, three different instructions are given to the model for the same input text, resulting in three sets of output probabilities for the classes 'Positive', 'Neutral', and 'Negative'. The outputs are as follows:
- Output 1: {Positive: 0.6, Neutral: 0.3, Negative: 0.1}
- Output 2: {Positive: 0.7, Neutral: 0.2, Negative: 0.1}
- Output 3: {Positive: 0.5, Neutral: 0.4, Negative: 0.1}
If the final prediction is determined by taking a simple average of the probabilities for each class across all three outputs, what will be the final predicted class?
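The three-output version follows the same pattern: average each class's probability across outputs, then take the class with the highest average. A minimal sketch (variable names are my own):

```python
# The three output distributions listed in the question.
outputs = [
    {"Positive": 0.6, "Neutral": 0.3, "Negative": 0.1},
    {"Positive": 0.7, "Neutral": 0.2, "Negative": 0.1},
    {"Positive": 0.5, "Neutral": 0.4, "Negative": 0.1},
]

# Simple average per class across all outputs.
avg = {c: sum(o[c] for o in outputs) / len(outputs) for c in outputs[0]}

# Final prediction: the class with the highest averaged probability.
pred = max(avg, key=avg.get)
print(pred)  # Positive (average 0.6 vs. 0.3 Neutral and 0.1 Negative)
```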
Effect of Averaging on Prediction Confidence