Explaining Extrapolation Failure in Positional Embeddings

Short Answer

A language model is designed to understand the order of words by learning a unique numerical representation for each position in a sequence (e.g., position 1, position 2, etc.). This model is trained exclusively on documents that are 512 words long or shorter. Explain the fundamental reason why this model would likely struggle to process a document that is 800 words long.
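To make the failure concrete, here is a minimal sketch using a PyTorch-style learned embedding table (the table size, embedding dimension, and variable names are illustrative assumptions, not part of the question). A model with learned absolute positional embeddings has exactly one trainable vector per position index it saw during training; positions 512 through 799 simply have no row in the table, so the model has no representation for them.

```python
import torch
import torch.nn as nn

MAX_TRAIN_LEN = 512  # longest document length seen during training

# Learned absolute positional embeddings: one trainable vector per position index.
pos_embedding = nn.Embedding(num_embeddings=MAX_TRAIN_LEN, embedding_dim=64)

# A document within the training range works: every index 0..511 has a learned row.
short_doc = torch.arange(512)
print(pos_embedding(short_doc).shape)  # torch.Size([512, 64])

# An 800-word document fails: indices 512..799 have no row in the table.
long_doc = torch.arange(800)
try:
    pos_embedding(long_doc)
except IndexError as err:
    print("No learned representation beyond position 511:", err)
```

Even if the table were allocated with extra rows up front, those rows would stay at their random initialization, because no training document ever exercised them; at inference time the model would receive position vectors it never learned to interpret.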

Updated 2025-10-02

Tags: Ch.2 Generative Models - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Analysis in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science