Consider two token embeddings, x and y, encoded with a rotational position scheme at positions 10 and 7, respectively. Their resulting inner product is calculated. If these same two tokens are instead placed at positions 25 and 22, how would the new inner product compare to the original one?
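The key property being tested is that rotary encodings make the inner product depend only on the *relative* offset between positions (10 − 7 = 25 − 22 = 3), so the value is unchanged. A minimal numeric sketch, assuming the simplest single 2D rotary pair with an arbitrary base angle `theta` (both names are illustrative, not from the original):

```python
import numpy as np

def rotate(v, pos, theta=0.5):
    # Apply a 2D rotation by angle pos * theta -- the simplest rotary
    # position encoding, acting on one pair of embedding dimensions.
    angle = pos * theta
    R = np.array([[np.cos(angle), -np.sin(angle)],
                  [np.sin(angle),  np.cos(angle)]])
    return R @ v

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])

# <R(m*theta)x, R(n*theta)y> = x^T R((n-m)*theta) y, which depends only
# on the offset n - m. Here both placements have offset 3.
a = np.dot(rotate(x, 10), rotate(y, 7))
b = np.dot(rotate(x, 25), rotate(y, 22))
print(np.isclose(a, b))  # → True: the inner product is identical
```

The same cancellation holds per 2D pair in a full rotary embedding, so the answer generalizes from this sketch to the complete scheme.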
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Invariance in Rotational Position Encodings
Evaluating a Positional Encoding Implementation