Case Study

Post-Retrofit Regression: Separating Positional-Method Effects from Scaling Choices

You are on-call for an internal LLM platform team. A decoder-only model was trained with RoPE for a 4k-token context. To support 32k tokens without full retraining, the team shipped a retrofit that (a) scales the RoPE base (i.e., changes the RoPE frequency base parameter by a factor λ) and (b) adds a relative positional bias term to the attention scores. Two variants were A/B tested:
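
To make the retrofit concrete, here is a minimal sketch of how the per-dimension rotation angles depend on the base, assuming the standard RoPE parameterization θ_d = base^(-2d/D); the function name and the λ = 8 example value are illustrative, not the team's actual code.

```python
import numpy as np

def rope_angles(pos, head_dim, base=10000.0, lam=1.0):
    """Rotation angle of each channel pair d at position `pos`, with the
    frequency base scaled by `lam`: theta_d = (lam * base) ** (-2d / head_dim).

    The attention logit between positions i and j depends only on the angle
    differences (i - j) * theta_d, which is how RoPE encodes relative
    position via rotations.
    """
    d = np.arange(head_dim // 2)
    theta = (lam * base) ** (-2.0 * d / head_dim)
    return pos * theta

# Base scaling is non-uniform across dimensions: the highest-frequency channel
# (d = 0) is untouched, while the lowest-frequency channel rotates roughly
# `lam` times more slowly.
for lam in (1.0, 8.0):
    a = rope_angles(32000, head_dim=128, lam=lam)
    print(lam, a[0], a[-1])
```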

Variant A: Adds a fixed, non-learned linear distance penalty to attention scores (the bias becomes more negative as |i−j| grows).

Variant B: Adds a learned relative bias that buckets offsets into a limited number of bins, sharing one parameter per bucket.
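
To see how the two bias shapes differ at very large offsets, here is a minimal sketch, assuming Variant A behaves like an ALiBi-style linear penalty and Variant B like a T5-style log-bucketed table; the slope, bucket count, max_distance, and function names are illustrative assumptions rather than the shipped code.

```python
import numpy as np

def linear_distance_bias(seq_len, slope=0.0625):
    """Variant A (sketch): fixed linear penalty, more negative as |i - j| grows.

    Resolution is uniform: each extra token of distance shifts the bias by the
    same amount, so offsets 20000 and 20001 still receive different biases,
    but the penalty keeps growing without bound.
    """
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return -slope * np.abs(i - j)

def bucketed_bias(seq_len, num_buckets=32, max_distance=128, table=None):
    """Variant B (sketch): one shared scalar per bucket, log-spaced buckets.

    Small offsets get their own bucket; every offset beyond `max_distance`
    falls into the last bucket, so relative offsets of 5,000 and 25,000
    receive the same bias and are indistinguishable to this term.
    """
    if table is None:
        table = np.zeros(num_buckets)  # stands in for the learned parameters
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    dist = np.abs(i - j)
    exact = num_buckets // 2  # small offsets map one-to-one onto buckets
    log_part = exact + (
        np.log(np.maximum(dist, 1) / exact) / np.log(max_distance / exact)
        * (num_buckets - 1 - exact)
    ).astype(int)
    bucket = np.where(dist < exact, dist, np.minimum(log_part, num_buckets - 1))
    return table[bucket]
```

The contrast at 32k: the linear penalty still distinguishes far-apart offsets (at the cost of an ever-growing penalty on distant tokens), while the bucketed bias saturates and carries no relative-position information beyond its last bucket.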

After rollout, both variants pass short-context evals (≤4k). At 32k, you see a specific regression: the model can still retrieve facts from far earlier in the prompt, but it increasingly mis-orders events and confuses “which clause modifies which” in long legal/contract sentences (the errors look like degraded relative-position precision rather than pure forgetting). Latency and memory budgets are tight, so you can only change ONE thing quickly:

(1) Remove the added attention bias and rely only on RoPE base scaling.

(2) Keep the added bias but revert the RoPE base scaling (λ back to 1).

(3) Keep both, but change how RoPE is scaled by exploiting the idea that a scaled RoPE can be implemented as the original RoPE with a transformed rotation angle (sketched below).
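
For option (3), here is a minimal sketch of the angle-transform view, assuming the same θ_d = base^(-2d/D) parameterization as above; the scale factor and function names are illustrative.

```python
import numpy as np

def angles_original(pos, head_dim, base=10000.0):
    """Original RoPE: angle_d = pos * base ** (-2d / head_dim)."""
    d = np.arange(head_dim // 2)
    return pos * base ** (-2.0 * d / head_dim)

def angles_position_interpolation(pos, head_dim, base=10000.0, scale=8.0):
    """Interpolation view: evaluate the *original* RoPE at the transformed
    position pos / scale, shrinking every dimension's angle by the same
    factor (unlike base scaling, which barely changes the high-frequency
    channels)."""
    return angles_original(pos / scale, head_dim, base)

# A 32k position under an 8x interpolation yields exactly the angles the
# original 4k-trained model saw at position 4k, so no channel leaves its
# trained range.
print(np.allclose(angles_position_interpolation(32000, 128, scale=8.0),
                  angles_original(4000, 128)))
```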

Which option (1/2/3) is the best first fix to try? Justify your choice by explicitly linking: (i) how RoPE encodes relative position via rotations, (ii) what base scaling/interpolation changes about those rotations across dimensions, and (iii) how a linear bias (Variant A) versus a bucketed learned bias (Variant B) affects relative-position resolution at very large offsets.
