A language model is being evaluated. For a given input sequence x and a potential output sequence y, the model calculates log Pr([x, y]) = -3.5 and log Pr(x) = -5.2. Based on these values, it is reasonable to conclude that the model's probability calculations are functioning correctly.
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.5 Inference - Foundations of Large Language Models
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
SFT as Language Model Training on Concatenated Sequences
Calculating Conditional Log-Probability Using an LLM
Selective Loss Computation in Joint Probability Language Modeling
Calculating Conditional Log-Probability
An engineer is evaluating a language model and calculates the following log-probabilities for an input sequence x and an output sequence y: the joint log-probability log Pr([x, y]) and the marginal log-probability log Pr(x). They observe that log Pr([x, y]) is significantly more negative than log Pr(x). Based on the fundamental relationship between joint, conditional, and marginal probabilities, what is the most accurate conclusion?
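The relationship both cards test can be sketched as a small Python check. This is an illustrative sketch (the helper name `conditional_log_prob` is an assumption, not from the source): since Pr([x, y]) = Pr(y | x) · Pr(x) and Pr(y | x) ≤ 1, the joint log-probability can never exceed the marginal log-probability of its prefix, so log Pr([x, y]) ≤ log Pr(x).

```python
def conditional_log_prob(log_joint: float, log_marginal: float) -> float:
    """Return log Pr(y | x) given log Pr([x, y]) and log Pr(x).

    Because log Pr([x, y]) = log Pr(y | x) + log Pr(x) and
    log Pr(y | x) <= 0, a well-formed pair must satisfy
    log Pr([x, y]) <= log Pr(x).
    """
    if log_joint > log_marginal:
        raise ValueError(
            "Inconsistent values: log Pr([x, y]) must be <= log Pr(x)"
        )
    return log_joint - log_marginal


# The values from the first card violate the inequality (-3.5 > -5.2):
# they would imply log Pr(y | x) = +1.7, i.e. Pr(y | x) = e^1.7 > 1,
# which is impossible, so the model's calculations cannot be correct.
try:
    conditional_log_prob(-3.5, -5.2)
except ValueError as err:
    print("Sanity check failed:", err)

# A consistent pair, e.g. log Pr([x, y]) = -5.2 and log Pr(x) = -3.5,
# yields a valid conditional log-probability of approximately -1.7.
print(conditional_log_prob(-5.2, -3.5))
```

Note the sign of the check: because the stated joint log-probability (-3.5) is *greater* than the marginal (-5.2), the correct answer to the first card is that the calculations are *not* functioning correctly.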