Short Answer

Unintended Learning in Sentence Relationship Models

A language model is trained on a task where it must determine whether Sentence B is the sentence that actually follows Sentence A. For negative examples (where B is not the next sentence), the training data is constructed by always pairing Sentence A with a random sentence drawn from a completely different document. Explain a potential superficial shortcut the model might learn from this setup, and why that shortcut fails to capture a true understanding of sentence coherence.
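To make the setup concrete, below is a minimal Python sketch. The toy documents, the make_pairs helper, and the topic_overlap heuristic are hypothetical illustrations, not part of the question. It builds positive pairs from adjacent sentences and negative pairs from a random sentence of the other document, as described above, and then shows that a rule which only checks for shared topical vocabulary can already label every pair correctly without modeling coherence at all.

```python
import random

# Hypothetical two-document toy corpus illustrating the pair-construction
# scheme in the question: positives are adjacent sentences from the same
# document, negatives pair sentence A with a sentence from a different document.
doc_cooking = [
    "Preheat the oven to 200 degrees before roasting the vegetables.",
    "Spread the vegetables on a tray and place them in the oven.",
    "Roast the vegetables until the edges turn golden.",
]
doc_astronomy = [
    "The telescope collects light from distant galaxies.",
    "Astronomers study that light to learn how galaxies form.",
]

def make_pairs():
    pairs = []
    # Positives: the true next sentence from the same document.
    for doc in (doc_cooking, doc_astronomy):
        pairs += [(a, b, 1) for a, b in zip(doc, doc[1:])]
    # Negatives: a random sentence drawn from the *other* document.
    pairs += [(a, random.choice(doc_astronomy), 0) for a in doc_cooking]
    pairs += [(a, random.choice(doc_cooking), 0) for a in doc_astronomy]
    return pairs

STOPWORDS = {"the", "a", "an", "to", "of", "on", "and", "in", "that", "them"}

def topic_overlap(a, b):
    """Jaccard overlap of content words: a crude topical-similarity signal."""
    wa = {w.strip(".,").lower() for w in a.split()} - STOPWORDS
    wb = {w.strip(".,").lower() for w in b.split()} - STOPWORDS
    return len(wa & wb) / max(len(wa | wb), 1)

# A "classifier" that only checks whether the two sentences share any topical
# vocabulary labels every pair in this toy corpus correctly, because
# cross-document negatives almost never share content words. It never has to
# model whether sentence B is a coherent continuation of sentence A.
for a, b, label in make_pairs():
    pred = 1 if topic_overlap(a, b) > 0 else 0
    print(f"label={label} pred={pred} overlap={topic_overlap(a, b):.2f} | {a!r} -> {b!r}")
```

Because negatives are always drawn from a different document, "do these two sentences talk about the same thing?" is usually enough to separate the classes, which is exactly the kind of superficial shortcut the question asks you to identify and critique.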

Updated 2025-10-06

Tags: Ch.1 Pre-training - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science