Analysis of a Positional Encoding Method
Consider an information-processing mechanism in which each input item is represented by a 'query' vector and a 'key' vector. Before these vectors are compared, each is rotated in a 2D plane; the angle of rotation for an item at position 'p' is directly proportional to 'p'. The comparison score between a query from position 't' and a key from position 's' is then the dot product of their rotated vectors. A key mathematical property of this operation is that the resulting dot product depends only on the original, un-rotated vectors and the difference in their positions, (t-s).
Based on this description, explain why this mechanism is said to capture relative positional information rather than absolute positional information.
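A minimal numeric sketch of the property the question relies on (not part of the source; the angle scale theta=0.1, the specific positions, and the example vectors are arbitrary assumptions). Both queries and keys are rotated by an angle proportional to their absolute position, yet two query/key pairs with the same positional offset t-s produce the same score:

```python
import numpy as np

def rotate(v, pos, theta=0.1):
    """Rotate a 2D vector by an angle proportional to its position (assumed scale theta)."""
    angle = theta * pos
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ v

q = np.array([1.0, 2.0])   # un-rotated query vector (example values)
k = np.array([0.5, -1.0])  # un-rotated key vector (example values)

# Same relative offset (t - s = 3) at two very different absolute positions.
score_a = rotate(q, 7)   @ rotate(k, 4)    # t = 7,   s = 4
score_b = rotate(q, 103) @ rotate(k, 100)  # t = 103, s = 100

print(np.isclose(score_a, score_b))  # True: the score depends only on t - s
```

Algebraically, this follows because the product of two rotation matrices satisfies R(a)^T R(b) = R(b - a), so the absolute positions cancel and only their difference remains.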
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
An attention mechanism incorporates positional information by applying a unique rotation to each query and key vector based on its absolute position in a sequence. The attention score between a query from position 't' and a key from position 's' is then computed. A key property of this rotation is that the dot product between the rotated query and key vectors is a function of the original vectors and the difference in their positions (t-s). Based on this information, what can be concluded about the attention scores produced by this mechanism?
Analysis of a Positional Encoding Method
Evaluating a Model's Performance Discrepancy