Essay

Diagnosing Long-Context Failures Across Positional Schemes

You are on-call for an internal LLM platform. A model trained with a 2k-token context is being deployed for 16k-token customer documents. After the change, offline evals show two distinct failure modes: (1) the model increasingly confuses repeated section headers and cross-references that are ~6k–12k tokens apart (it treats far-apart repeats as if they were closer than they are), and (2) the model’s attention becomes overly local, missing long-range dependencies even when the relevant evidence is clearly present earlier in the document. The team is considering three interventions without full retraining: (A) extend RoPE via position interpolation (i.e., rescale position indices, or equivalently adjust the RoPE frequency base, so that longer positions map into the trained angular range), relying on the fact that a scaled RoPE can be expressed as the original rotation with a transformed angle; (B) replace positional handling with ALiBi (fixed linear distance penalties in attention scores); (C) replace positional handling with a T5-style relative position bias (learned bucketed biases shared across many offsets).
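As a reference point for part (i), the three mechanisms can be sketched in a few lines each. This is a minimal illustration under simplified assumptions; the function names, default dimensions, and the simplified unidirectional T5 bucketing are illustrative, not taken from any particular codebase.

```python
import math

def rope_angle(pos, dim_pair, d_model=64, base=10000.0, scale=1.0):
    # RoPE rotates each 2-D feature pair by theta = pos * base**(-2i/d).
    # Position interpolation with factor `scale` (e.g. 16k/2k = 8) shrinks
    # the angle so that position 16000 "looks like" position 2000:
    # rope_angle(p, scale=s) == rope_angle(p/s, scale=1), i.e. the original
    # rotation with a transformed angle.
    inv_freq = base ** (-2.0 * dim_pair / d_model)
    return (pos / scale) * inv_freq

def alibi_bias(distance, head_slope):
    # ALiBi adds a fixed, non-learned penalty -slope * distance to each
    # attention score; it extrapolates to any length, but always prefers
    # nearby tokens, which is relevant to failure mode (2).
    return -head_slope * distance

def t5_bucket(relative_position, num_buckets=32, max_distance=128):
    # Simplified T5-style bucketing: exact buckets for small offsets,
    # log-spaced buckets for mid-range offsets, and a single shared final
    # bucket for everything beyond max_distance -- so a 6k offset and a
    # 12k offset receive the identical learned bias (failure mode (1)).
    n = max(relative_position, 0)
    half = num_buckets // 2
    if n < half:
        return n
    log_ratio = math.log(n / half) / math.log(max_distance / half)
    bucket = half + int(log_ratio * (num_buckets - half))
    return min(bucket, num_buckets - 1)
```

Note how `t5_bucket(200)` and `t5_bucket(2000)` collapse to the same bucket: with the defaults above, any offset past a few hundred tokens is positionally indistinguishable to the learned bias.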

Write a recommendation memo that: (i) explains, using the mechanisms of RoPE rotation/angle transformation, ALiBi’s linear bias, and T5’s bucketed relative bias, which intervention(s) are most likely to mitigate each failure mode and why; (ii) identifies at least one tradeoff or new risk introduced by your chosen approach (e.g., distortion of relative distances under interpolation, loss of expressivity vs learnability, behavior on very large offsets); and (iii) proposes one concrete diagnostic you would run to validate that the positional method is behaving as intended at 16k (describe what you would measure and what outcome would support your hypothesis).
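For part (iii), one concrete diagnostic is to probe whether the positional signal still discriminates offsets in the ~6k–12k range where failure mode (1) appears. A minimal sketch for the RoPE case, assuming a pure phase comparison with no learned weights (the function name and dimensions are hypothetical):

```python
import math

def rope_positional_similarity(p, q, d_model=64, base=10000.0, scale=1.0):
    # Average cosine of the per-pair RoPE phase difference between
    # positions p and q. If far-apart positions (e.g. 6k vs 12k) score
    # nearly as high as adjacent ones, the scheme is aliasing distances --
    # consistent with confusing far-apart repeated headers.
    sims = []
    for i in range(d_model // 2):
        inv_freq = base ** (-2.0 * i / d_model)
        delta = (p - q) / scale * inv_freq
        sims.append(math.cos(delta))
    return sum(sims) / len(sims)
```

The measurement: sweep offsets from 0 to 16k and plot this similarity (or, on the real model, the attention logit between a fixed query/key pair at those offsets). A roughly monotone decay would support the hypothesis that the scheme is behaving as intended at 16k; non-monotone spikes at large offsets would indicate aliasing. Checking that the scaled curve reproduces the unscaled curve at proportionally smaller offsets (e.g. similarity at offset 8000 with scale 8 matching offset 1000 unscaled) would confirm the interpolation is mapping long positions into the trained range.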


Updated 2026-02-06

