Short Answer

Analyzing a Flawed Pre-training Strategy

A data scientist is pre-training an encoder-decoder model on a large text corpus. For each document, they create a training example by selecting a single, random sentence as the input for the encoder and the immediately following sentence as the target for the decoder. After extensive training, they observe that the model is very good at generating a plausible next sentence, but it fails to generate long, coherent multi-paragraph continuations that rely on the broader context of the original document. Based on the principles of this training approach, explain the most likely flaw in the data preparation strategy that is causing this specific performance issue.
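For concreteness, here is a minimal sketch (not part of the original question) of the data preparation strategy being described, assuming a simple regex-based sentence splitter; the function name and helper logic are hypothetical. It illustrates that each training example exposes the model to only a single sentence of context on the encoder side and a single sentence as the target.

```python
import random
import re

def make_sentence_pair_examples(documents):
    """Hypothetical sketch of the strategy described in the question:
    one random sentence as encoder input, the immediately following
    sentence as the decoder target. No example ever contains more than
    one sentence of context on either side."""
    examples = []
    for doc in documents:
        # Naive sentence split on ., !, or ? followed by whitespace (illustrative only).
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", doc) if s.strip()]
        if len(sentences) < 2:
            continue
        # Pick a random sentence that still has a following sentence.
        i = random.randrange(len(sentences) - 1)
        examples.append({
            "encoder_input": sentences[i],       # a single sentence of context
            "decoder_target": sentences[i + 1],  # only the next sentence
        })
    return examples

if __name__ == "__main__":
    docs = ["The sky darkened. Rain began to fall. Streets emptied quickly."]
    print(make_sentence_pair_examples(docs))
```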

Updated 2025-10-10

Tags

Ch.1 Pre-training - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science