Multiple Choice

A machine learning engineer is fine-tuning a pre-trained language model to function as a helpful assistant. The training data consists of pairs of instructions and desired responses. For each pair, the instruction and response are combined into a single sequence, and the model is trained to predict the next token at each position. However, due to a configuration error, the training loss is calculated across the entire combined sequence (both the instruction and the response tokens), instead of only on the response tokens. What is the most likely undesirable outcome of this training setup?
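The setup in the question hinges on which positions contribute to the next-token loss. A minimal sketch in plain Python (with hypothetical token ids and the common convention of a `-100` ignore label, as used by frameworks such as Hugging Face Transformers) illustrates the intended masking versus the misconfiguration:

```python
IGNORE_INDEX = -100  # conventional label value that is excluded from the loss

def build_labels(instruction_ids, response_ids, mask_instruction=True):
    """Return (input_ids, labels) for one instruction/response pair."""
    input_ids = list(instruction_ids) + list(response_ids)
    if mask_instruction:
        # Correct setup: loss is computed only on the response tokens.
        labels = [IGNORE_INDEX] * len(instruction_ids) + list(response_ids)
    else:
        # The misconfiguration in the question: loss on every token,
        # so the model is also trained to reproduce the instruction text.
        labels = list(input_ids)
    return input_ids, labels

# Hypothetical token ids for one training pair.
instruction = [101, 102, 103]
response = [201, 202]

ids, labels = build_labels(instruction, response)
trained_on = sum(1 for t in labels if t != IGNORE_INDEX)
print(trained_on)  # 2 -- only the response tokens contribute to the loss
```

With `mask_instruction=False`, all five positions would carry loss, which spends model capacity on predicting instruction tokens rather than only on producing good responses.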

Updated 2025-09-28

Tags: Ch.4 Alignment - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Analysis in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science