Short Answer

Parameter Updates in Supervised LLM Training

Imagine a language model is being trained on the following example:

Input: 'The capital of France is'
Target next token: 'Paris'

During this training step, the model's current prediction for the next token assigns 'London' a probability of 0.3, 'Paris' a probability of 0.2, and 'Berlin' a probability of 0.1 (with the remaining probability mass spread over all other tokens).

Based on the standard objective of maximizing the likelihood of the correct output, describe how the model's internal parameters will be adjusted in response to this specific training example.
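
To make the update direction concrete, here is a minimal sketch in PyTorch, assuming a toy four-token vocabulary that stands in for the full one (the vocabulary, learning rate, and choice of PyTorch are illustrative assumptions, not part of the question). With a softmax output and the cross-entropy loss -log p('Paris'), the gradient of the loss with respect to each logit is p_i - y_i, so a gradient-descent step raises the logit for 'Paris' and lowers the logits of every other token:

```python
import torch
import torch.nn.functional as F

# Toy vocabulary and the model's current next-token probabilities
# from the question: London 0.3, Paris 0.2, Berlin 0.1, rest 0.4.
vocab = ["London", "Paris", "Berlin", "<other>"]
probs = torch.tensor([0.3, 0.2, 0.1, 0.4])

# Recover logits consistent with those probabilities
# (softmax(log p) = p whenever p sums to 1).
logits = probs.log().requires_grad_(True)

target = torch.tensor(vocab.index("Paris"))

# Standard maximum-likelihood objective: cross-entropy, i.e. the
# negative log-probability of the correct token 'Paris'.
loss = F.cross_entropy(logits.unsqueeze(0), target.unsqueeze(0))
loss.backward()

# For softmax + cross-entropy, d(loss)/d(logit_i) = p_i - y_i:
# negative for 'Paris' (its logit gets pushed up), positive for
# every other token (their logits get pushed down).
print(loss.item())   # -log(0.2) ≈ 1.609
print(logits.grad)   # ≈ [ 0.3, -0.8, 0.1, 0.4 ]

# One gradient-descent step on the logits (learning rate 0.5, arbitrary):
with torch.no_grad():
    logits -= 0.5 * logits.grad
print(F.softmax(logits, dim=-1))  # 'Paris' probability rises above 0.2
```

In a real model the logits are produced by the whole network, so this same gradient is propagated back through every layer by backpropagation; the sketch treats the logits themselves as the trainable parameters only to keep the arithmetic visible.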


Tags: Ch.4 Alignment - Foundations of Large Language Models; Application in Bloom's Taxonomy