Learn Before
An autoregressive language model is being trained on a single data instance. The model is provided with the input context tokens ['The', 'quick', 'brown'] and is trained to generate the target completion tokens ['fox', 'jumps']. During the backward pass for this specific training step, from which token positions will the error signals (gradients) used to update the model's weights primarily originate?
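A minimal PyTorch sketch of this training step (the toy vocabulary, the random logits standing in for model outputs, and the -100 masking convention are illustrative assumptions, not part of the question): because only the completion tokens are supervised, nonzero gradients appear only at the positions whose targets are 'fox' and 'jumps'.

```python
import torch
import torch.nn.functional as F

# Toy vocabulary and random logits stand in for a real tokenizer/model.
vocab = {'The': 0, 'quick': 1, 'brown': 2, 'fox': 3, 'jumps': 4}
seq = ['The', 'quick', 'brown', 'fox', 'jumps']   # prompt + completion

# Position i predicts token i+1. Only the positions whose targets are
# the completion tokens ('fox', 'jumps') carry labels; all other
# targets are set to -100 so cross_entropy ignores them.
labels = torch.tensor([-100, -100, vocab['fox'], vocab['jumps'], -100])

logits = torch.randn(len(seq), len(vocab), requires_grad=True)
loss = F.cross_entropy(logits, labels, ignore_index=-100)
loss.backward()

# Per-position gradient magnitude: nonzero only at positions 2 and 3,
# the positions that predict 'fox' and 'jumps'.
print(logits.grad.abs().sum(dim=-1))
```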
Tags
Ch.4 Alignment - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Debugging Language Model Training
When fine-tuning an autoregressive language model on a dataset where each example consists of an input prompt and a target completion, the training loss is calculated across all tokens in the combined sequence (prompt + completion), so that every next-token prediction, including those within the prompt itself, contributes to the gradient.
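A minimal sketch of this full-sequence variant, assuming a PyTorch-style cross-entropy (the toy token ids and random logits are stand-ins for a real tokenizer and model):

```python
import torch
import torch.nn.functional as F

# Full-sequence loss: every next-token prediction over prompt +
# completion contributes to the gradient.
sequence = torch.tensor([0, 1, 2, 3, 4])        # 'The quick brown fox jumps'
logits = torch.randn(len(sequence) - 1, 5, requires_grad=True)

# Standard shift: position i predicts token i+1, so the final token
# has no prediction target of its own.
targets = sequence[1:]
loss = F.cross_entropy(logits, targets)         # averaged over all 4 positions
loss.backward()                                 # gradients flow from every position

# Note: many instruction-tuning pipelines instead mask the prompt
# positions by setting their targets to ignore_index (-100), supervising
# only the completion; which convention applies depends on the recipe.
```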
Example of Loss Calculation in Instruction Fine-Tuning