True/False

A language model's context window is being extended from an original maximum length m_l to a new, larger maximum length m. The technique modifies the rotary position embedding (RoPE) function by scaling the position index i according to the formula: new effective position = (m_l / m) * i. This formula implies that the effective position of a token at the very end of the new, extended context (position m - 1) is mapped to a position that falls outside the range of the original model's trained positions, [0, m_l - 1].

0

1
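A quick numeric check of the scaling formula may help here; the concrete values m_l = 2048 and m = 4096 below are assumptions chosen for illustration, not taken from the question:

```python
# Numeric check of the position-interpolation scaling formula
# new_position = (m_l / m) * i, with assumed example lengths.
m_l = 2048  # assumed original maximum context length
m = 4096    # assumed extended maximum context length

i = m - 1                     # index of the last token in the extended context
effective = (m_l / m) * i     # scaled (interpolated) position index

print(effective)              # scaled position of the final token
print(effective < m_l)        # scaled positions always stay below m_l
```

Because the scale factor m_l / m is strictly less than 1, the largest scaled index (m_l / m) * (m - 1) = m_l - m_l / m is always strictly below m_l, which is the point this question probes.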

Updated 2025-10-08


Tags

Ch.3 Prompting - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science