An autoregressive language model has generated the phrase 'After the long hike, we were all very'. To determine the next word, the model evaluates several options from its vocabulary. Which of the following calculations best represents the core principle the model uses to decide which word (e.g., 'tired', 'hungry', 'happy') is most likely to come next?
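The principle this question probes is that an autoregressive model assigns each candidate continuation a conditional probability P(word | context) and prefers the candidate with the highest score. Below is a minimal sketch of that comparison; the probability values and the candidate list are invented for illustration only, not the output of any actual model.

```python
# Toy illustration of next-token prediction:
# the model scores each candidate word with a conditional probability
# P(word | context) and the most likely continuation is the argmax.
# The numbers below are made up purely for illustration.

context = "After the long hike, we were all very"

# Hypothetical conditional probabilities P(word | context)
candidate_probs = {
    "tired": 0.62,
    "hungry": 0.21,
    "happy": 0.09,
}

# The core calculation: compare P(w | context) across candidates
# and pick the word with the maximum conditional probability.
best_word = max(candidate_probs, key=candidate_probs.get)
print(f"P({best_word!r} | {context!r}) is highest -> predict {best_word!r}")
```

In practice the model produces this distribution over its entire vocabulary (typically via a softmax over logits), but the decision rule being tested is the same: compare the conditional probabilities and take the most likely word.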
Related
Deconstructing Next-Token Prediction
An autoregressive model is in the process of generating a sentence. So far, it has produced the sequence of words: 'The cat sat on the'. The model is now trying to determine the most probable next word. Which of the following mathematical expressions correctly represents the probability the model is calculating for the specific word 'mat' to be the next word in the sequence?
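For reference, the quantity this question describes can be written as the conditional probability of 'mat' given the five tokens already generated. The token-index notation (w_1, ..., w_6) below is one common convention, used here as a sketch rather than quoted from the question's answer options.

```latex
% Probability the model evaluates for 'mat' as the next token,
% conditioned on the five tokens generated so far.
\[
P(\text{mat} \mid \text{The},\ \text{cat},\ \text{sat},\ \text{on},\ \text{the})
  \;=\; P(w_6 = \text{mat} \mid w_1, w_2, w_3, w_4, w_5)
\]
```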