Learn Before
A language model is given the context: 'The chef carefully added the final, crucial ingredient to the simmering stew: a pinch of...'. The model must predict the next word. Below are the conditional probabilities, Pr(next_word | context), calculated by two different models for four possible next words.
| Next Word | Model A Probability | Model B Probability |
|---|---|---|
| salt | 0.65 | 0.20 |
| concrete | 0.02 | 0.45 |
| laughter | 0.03 | 0.15 |
| thyme | 0.30 | 0.20 |
Based on these probabilities, which of the following statements is the most accurate analysis of each model's understanding of the context?
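One way to ground the analysis is to compare how much probability mass each model places on contextually plausible continuations. The sketch below assumes "salt" and "thyme" are the plausible stew ingredients and "concrete" and "laughter" are not; that grouping is an assumption for illustration, not part of the question.

```python
# Conditional probabilities Pr(next_word | context) from the table above.
model_a = {"salt": 0.65, "concrete": 0.02, "laughter": 0.03, "thyme": 0.30}
model_b = {"salt": 0.20, "concrete": 0.45, "laughter": 0.15, "thyme": 0.20}

# Assumption: "salt" and "thyme" are plausible ingredients for a stew,
# while "concrete" and "laughter" are semantically incongruous.
plausible = {"salt", "thyme"}

def plausible_mass(model):
    """Total probability the model assigns to plausible continuations."""
    return round(sum(p for word, p in model.items() if word in plausible), 2)

print(plausible_mass(model_a))  # 0.95
print(plausible_mass(model_b))  # 0.4
```

By this measure, Model A concentrates almost all of its probability on context-appropriate words, while Model B places most of its mass on an incongruous continuation.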
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Ch.5 Inference - Foundations of Large Language Models
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Fundamental LLM Training Objective
LLM Policy as a Probability Distribution
Mathematical Notation for Text Generation Probability
Evaluating Language Model Suitability
Predicting Next-Word Likelihood
Loss Function for Language Modeling