Formula

Application of RoPE to Token Embeddings

The final embedding for a token at position $i$, denoted $\mathbf{e}_i$, is obtained by applying the Rotary Positional Embedding (RoPE) transformation to the token's original embedding $\mathbf{x}_i$. This is written as the function $\mathrm{Ro}(\mathbf{x}_i, i\theta)$, where $i$ is the position and $\theta$ denotes the rotational frequency parameters. The formula is:

$$\mathbf{e}_i = \mathrm{Ro}(\mathbf{x}_i, i\theta)$$
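
Since the card itself gives no implementation, here is a minimal NumPy sketch of the transformation above. The function name `rope`, the pairing of adjacent embedding dimensions, and the frequency base 10000 follow the original RoPE formulation and are assumptions, not details stated on this card.

```python
import numpy as np

def rope(x: np.ndarray, pos: int, base: float = 10000.0) -> np.ndarray:
    """Apply Rotary Positional Embedding to a single token embedding.

    x    : embedding vector of even dimension d (the x_i above)
    pos  : token position i
    base : frequency base (10000 in the original RoPE paper; an assumption here)
    """
    d = x.shape[-1]
    assert d % 2 == 0, "RoPE requires an even embedding dimension"
    # Rotational frequencies theta_k = base^(-2k/d), one per 2D pair of dimensions
    theta = base ** (-np.arange(0, d, 2) / d)
    angles = pos * theta                      # the i*theta argument of Ro
    cos, sin = np.cos(angles), np.sin(angles)
    x_even, x_odd = x[0::2], x[1::2]          # pair up adjacent dimensions
    # Rotate each (x_even, x_odd) pair by its angle
    e_even = x_even * cos - x_odd * sin
    e_odd = x_even * sin + x_odd * cos
    e = np.empty_like(x)
    e[0::2], e[1::2] = e_even, e_odd
    return e

# Example: e_i = Ro(x_i, i*theta) for a token at position 5
x = np.random.randn(8)
e = rope(x, pos=5)
```

Because each 2D pair is rotated by an angle proportional to the position, the dot product between two rotated embeddings depends only on their relative offset, which is the property that motivates RoPE.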

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Ch.3 Prompting - Foundations of Large Language Models
