Learn Before
Comparison of Permuted and Causal Language Modeling
Both Permuted Language Modeling and Causal Language Modeling are autoregressive objectives: tokens are predicted one at a time, each conditioned on the tokens already predicted. The primary distinction is that Causal Language Modeling fixes the prediction order to the text's natural sequence (e.g., strictly left-to-right), whereas Permuted Language Modeling allows tokens to be predicted in any arbitrarily sampled order over the positions.
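The difference in factorization can be sketched in a few lines of Python. The helper below is hypothetical (not from any library): it lists, for each prediction step, which tokens are visible as context. A causal order is just the identity permutation over positions, while a permuted objective may sample any permutation.

```python
# Sketch: which tokens are visible at each prediction step under a
# causal (fixed left-to-right) vs. a permuted (arbitrary) order.
# `prediction_contexts` is a hypothetical helper for illustration.

def prediction_contexts(tokens, order):
    """Return (target token, visible context) pairs for a prediction order."""
    steps, seen = [], []
    for pos in order:  # positions are 0-indexed here
        steps.append((tokens[pos], tuple(seen)))
        seen.append(tokens[pos])
    return steps

tokens = ["The", "model", "learns", "from", "text"]

# Causal LM: the order is always the natural left-to-right sequence.
causal = prediction_contexts(tokens, [0, 1, 2, 3, 4])

# Permuted LM: any sampled permutation of the positions is valid.
permuted = prediction_contexts(tokens, [4, 0, 2, 1, 3])
```

Under the permuted order above, "learns" is predicted from the context ("text", "The"), even though "model" precedes it in the surface text.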
Tags
Foundations of Large Language Models
Ch.1 Pre-training - Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Probability Factorization for Arbitrary Order Token Prediction
A language model is pre-trained using an objective where, for the input sentence 'The model learns from text', it might be tasked with predicting the word 'learns' based on the context of 'text' and 'The', while the word 'model' is not yet visible to it. In the next step, it might predict 'model' from 'text', 'The', and the newly predicted 'learns'. What is the primary advantage of this training approach compared to a standard left-to-right sequential prediction?
A language model is being pre-trained on the sentence 'The quick brown fox jumps' using a permuted objective. The model is given a random permutation of the token positions: (3, 5, 1, 4, 2). Arrange the words from the sentence in the order they will be auto-regressively predicted during this training step.
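The mechanics of the exercise above can be sketched directly: given a one-indexed position permutation, the prediction order is just an indexed lookup into the token list (a minimal illustration, not training code).

```python
# Sketch: map a 1-indexed position permutation to the order in which
# words are auto-regressively predicted during a permuted-LM step.

tokens = ["The", "quick", "brown", "fox", "jumps"]
permutation = (3, 5, 1, 4, 2)  # 1-indexed token positions

# Subtract 1 to convert to Python's 0-indexed list positions.
prediction_order = [tokens[p - 1] for p in permutation]
```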
Pre-training Objective Selection
Comparison of Permuted and Causal Language Modeling
Implementing Permutation via Self-Attention Masks