Concept

Rotary Positional Embeddings

Like sinusoidal embeddings, rotary positional embeddings (RoPE) use fixed, hard-coded values to represent positions. However, instead of adding positional vectors to token embeddings, RoPE encodes position by rotating the token embeddings in a complex vector space: each pair of embedding dimensions is rotated by an angle proportional to the token's position. This multiplicative integration of positional information distinguishes RoPE from the additive approach common in other methods, and it makes the attention score between a rotated query and key depend only on their relative offset rather than on their absolute positions.
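
The rotation can be sketched in a few lines of NumPy. The helper below (`rope_rotate`, an illustrative name not taken from the text) rotates consecutive feature pairs by angles proportional to the position, and the check at the end shows why the multiplicative scheme matters: the dot product between a rotated query and key is unchanged when both positions shift by the same amount. This is a minimal sketch under common RoPE conventions, not a reference implementation.

```python
# Minimal NumPy sketch of RoPE: consecutive feature pairs are rotated by
# position-dependent angles, so query-key dot products depend only on the
# relative offset between positions. Illustrative code, not from the text.
import numpy as np

def rope_rotate(x, pos, base=10000.0):
    """Rotate each (even, odd) feature pair of x by an angle that grows with pos."""
    half = x.shape[-1] // 2                     # embedding dimension must be even
    freqs = base ** (-np.arange(half) / half)   # one frequency per feature pair
    angles = pos * freqs                        # rotation angle for each pair
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]         # split into feature pairs
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin        # apply a 2-D rotation to each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.normal(size=8), rng.normal(size=8)
# Positions (3, 7) and (13, 17) share the same offset of 4, so the scores match.
s1 = rope_rotate(q, 3) @ rope_rotate(k, 7)
s2 = rope_rotate(q, 13) @ rope_rotate(k, 17)
print(np.isclose(s1, s2))                       # True
```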
