True/False

When fine-tuning an autoregressive language model on a dataset where each example consists of an input prompt and a target completion, the training loss is calculated across all tokens in the combined sequence (prompt + completion) to ensure the model understands the full context.

False

True
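The statement turns on whether the cross-entropy loss is taken over every position in the concatenated sequence or only over the completion positions. Below is a minimal PyTorch sketch contrasting the two variants; the token ids, the fake logits, and the prompt/completion split are placeholders for illustration, not any particular library's fine-tuning API.

```python
import torch
import torch.nn.functional as F

# Hypothetical tokenized example: prompt tokens followed by completion tokens.
prompt_ids = torch.tensor([101, 2054, 2003, 1996])      # placeholder token ids
completion_ids = torch.tensor([3007, 1997, 2605, 102])  # placeholder token ids
input_ids = torch.cat([prompt_ids, completion_ids]).unsqueeze(0)  # (1, seq_len)

# Pretend these are next-token logits from some causal LM,
# shape (1, seq_len, vocab_size); random here purely for illustration.
vocab_size = 30000
logits = torch.randn(1, input_ids.size(1), vocab_size)

# Standard causal-LM shift: the logit at position t predicts the token at t+1.
shift_logits = logits[:, :-1, :]
shift_labels = input_ids[:, 1:].clone()

# Variant A: loss over ALL tokens (prompt + completion), as the statement claims.
loss_all = F.cross_entropy(
    shift_logits.reshape(-1, vocab_size), shift_labels.reshape(-1)
)

# Variant B: loss over completion tokens only. Positions whose target is a
# prompt token are set to ignore_index=-100 so they contribute nothing.
masked_labels = shift_labels.clone()
masked_labels[:, : prompt_ids.numel() - 1] = -100
loss_completion_only = F.cross_entropy(
    shift_logits.reshape(-1, vocab_size),
    masked_labels.reshape(-1),
    ignore_index=-100,
)
```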

Updated 2025-10-06


Tags: Ch.4 Alignment - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Application in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science