Application of RoPE to d-dimensional Embeddings

To apply Rotary Positional Embeddings (RoPE) to a d-dimensional token embedding, the vector is reinterpreted as a complex vector with d/2 components. This is achieved by grouping consecutive pairs of elements of the original vector, so that each pair (x_{2i}, x_{2i+1}) forms the complex number x_{2i} + i·x_{2i+1}. A rotation is then applied to each of these d/2 complex numbers independently: the pair with index i is multiplied by e^{i·m·θ_i}, where m is the token's position and θ_i is a fixed per-pair frequency, so that different pairs rotate at different rates.
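A minimal NumPy sketch of this procedure for a single token embedding is below. The function name `apply_rope` and the default `base = 10000.0` (which yields the frequencies θ_i = base^(−2i/d) used in the original RoPE formulation) are illustrative choices, not part of this document:

```python
import numpy as np

def apply_rope(x: np.ndarray, position: int, base: float = 10000.0) -> np.ndarray:
    """Sketch: rotate a d-dimensional embedding x at the given token position.

    Consecutive pairs (x[2i], x[2i+1]) are viewed as complex numbers and each
    is rotated independently by the angle position * theta_i.
    """
    d = x.shape[-1]
    assert d % 2 == 0, "embedding dimension must be even"
    # One frequency per complex pair: theta_i = base^(-2i/d), i = 0 .. d/2 - 1
    theta = base ** (-np.arange(0, d, 2) / d)
    angles = position * theta                 # rotation angle for each pair
    # Group consecutive elements into d/2 complex numbers: x[2i] + 1j*x[2i+1]
    z = x[0::2] + 1j * x[1::2]
    z_rot = z * np.exp(1j * angles)           # rotate each pair independently
    # Interleave real and imaginary parts back into a real d-vector
    out = np.empty_like(x)
    out[0::2] = z_rot.real
    out[1::2] = z_rot.imag
    return out
```

Because each step is a pure rotation, the transformation preserves the norm of the embedding, and applying it at position 0 leaves the vector unchanged.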

Updated 2026-04-29

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Related