Explaining Positional Invariance in Rotational Embeddings

In a system that uses rotational transformations to encode token positions, an engineer observes that the inner product between the representations of the words 'the' (at position 3) and 'cat' (at position 5) is identical to the inner product between the same words when they appear at positions 10 and 12. Briefly explain the mathematical property of this encoding scheme that accounts for this observation.
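
Short Answer

The encoding rotates each token's vector by an angle proportional to its absolute position, and rotations compose so that only the difference of the angles survives an inner product: for rotation matrices, R(a)^T R(b) = R(b - a). The inner product between 'the' rotated to position m and 'cat' rotated to position n is therefore a function of the two word vectors and the relative offset n - m alone, not of m and n individually. Since 5 - 3 = 12 - 10 = 2, both pairs share the same offset and yield the same score. This relative-position property is the defining feature of rotary position embeddings (RoPE).

A minimal numerical check of this property, sketched in NumPy under simplifying assumptions: a single 2-D coordinate pair, random stand-in vectors for the two words, and an arbitrary illustrative rotation rate FREQ (full RoPE splits the embedding into many 2-D pairs, each rotated at its own fixed frequency):

```python
import numpy as np

def rotation(theta: float) -> np.ndarray:
    """2-D rotation matrix R(theta)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Assumed illustrative rotation rate: position m is rotated by m * FREQ.
FREQ = 0.1

rng = np.random.default_rng(0)
the = rng.normal(size=2)  # stand-in 2-D vector for 'the'
cat = rng.normal(size=2)  # stand-in 2-D vector for 'cat'

def score(m: int, n: int) -> float:
    """Inner product of the two vectors after position-dependent rotation."""
    return float(np.dot(rotation(m * FREQ) @ the,
                        rotation(n * FREQ) @ cat))

# Positions (3, 5) and (10, 12) share the relative offset 2,
# so the scores match to floating-point precision.
print(score(3, 5), score(10, 12))
assert np.isclose(score(3, 5), score(10, 12))
```

Shifting both positions by any common amount leaves the score unchanged, because the two rotations compose into a single rotation by the offset (n - m) * FREQ.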

Updated 2025-10-06

Tags: Ch.2 Generative Models - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science