Short Answer

Mechanism of RoPE Base Scaling

A language model, originally trained with rotary position embeddings (RoPE) on sequences of up to 2048 tokens, needs to be adapted to handle sequences of 8192 tokens. An engineer proposes to achieve this by increasing the base parameter used to calculate the rotational frequencies. Explain the underlying mechanism that makes this approach effective. Specifically, how does modifying the base parameter change the position encodings to accommodate the longer context?
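
For readers working through the question, here is a minimal numerical sketch of the mechanism being asked about. It is written in Python/NumPy with hypothetical names (`rope_frequencies`), assumes the common RoPE default base of 10000 and a head dimension of 128, and uses the NTK-aware base-scaling rule as one representative recipe for choosing the new base; treat it as an illustration under those assumptions, not a unique implementation.

```python
import numpy as np

def rope_frequencies(base: float, head_dim: int) -> np.ndarray:
    """Per-pair RoPE rotation frequencies theta_i = base^(-2i/d), i = 0..d/2-1."""
    i = np.arange(0, head_dim, 2)          # even dimension indices 0, 2, ..., d-2
    return base ** (-i / head_dim)

head_dim = 128                              # assumed attention-head size
old_len, new_len = 2048, 8192
scale = new_len / old_len                   # 4x context extension

# NTK-aware base scaling (one published recipe, assumed here):
# base' = base * scale^(d / (d - 2)). With this choice, the lowest-frequency
# pair rotates through the same angle at position 8192 that the original
# setup produced at position 2048.
base = 10000.0                              # common RoPE default, an assumption
new_base = base * scale ** (head_dim / (head_dim - 2))

theta_old = rope_frequencies(base, head_dim)
theta_new = rope_frequencies(new_base, head_dim)

# Every frequency shrinks, so every wavelength 2*pi/theta_i stretches;
# high-frequency pairs barely change, low-frequency pairs stretch the most.
ratios = theta_new / theta_old
print(ratios[0], ratios[-1])                # 1.0 ... 0.25 (= 1/scale)

# The defining property: angles at the new maximum length stay within the
# range the model saw during 2048-token training.
print(new_len * theta_new[-1], old_len * theta_old[-1])  # equal by construction
```

The sketch shows the core effect a correct answer should identify: raising the base lowers each dimension's rotation frequency, stretching the rotational wavelengths so that positions out to 8192 map onto angles the model already encountered within its 2048-token training window.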

Updated 2025-10-06

Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science