In a system that encodes sequential position, an initial vector x is transformed to represent position m by applying a cumulative rotation, resulting in vector v_m. Similarly, the vector for position m+1 is v_{m+1}. Based on this mechanism, what is the direct geometric transformation that relates v_m to v_{m+1}?
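A brief worked sketch of the relation, assuming the cumulative rotation is expressed with the standard 2D rotation matrix R(θ) (one of the related definitions listed below): since v_m = R(mθ) x and R((m+1)θ) = R(θ) R(mθ), the two vectors differ by a single additional rotation:

v_{m+1} = R((m+1)θ) x = R(θ) R(mθ) x = R(θ) v_m,

where R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]]. In other words, advancing one position applies one more rotation by the fixed angle θ to v_m.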
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Composition Property of Rotations
Representing 2D Vector Rotation in Complex Space
Definition of the 2D Rotation Matrix
Consider a system where the position of a token in a sequence is encoded by rotating its initial vector embedding, x. The total angle of rotation is directly proportional to the token's position, m. If the vector for a token at position 3 is obtained by rotating x by a total angle of 3θ, what is the correct transformation to find the vector for the same token at position 9?

A positional encoding system represents a token's position by sequentially rotating its initial vector embedding, denoted as x, by a fixed angle θ for each step forward in a sequence. Arrange the following vector states to show the correct order of transformations for a token as its position advances from 1 to 3.
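A minimal numerical sketch of the same composition property, assuming a 2D embedding; the vector x, the angle theta, and the use of NumPy here are illustrative placeholders, not part of the original questions:

import numpy as np

def rot(angle):
    # Standard 2D rotation matrix R(angle).
    return np.array([[np.cos(angle), -np.sin(angle)],
                     [np.sin(angle),  np.cos(angle)]])

theta = 0.1                # illustrative per-step rotation angle
x = np.array([1.0, 2.0])   # illustrative initial 2D embedding

v3 = rot(3 * theta) @ x    # position 3: cumulative rotation by 3*theta
v9 = rot(9 * theta) @ x    # position 9: cumulative rotation by 9*theta

# Composition: rotating the position-3 vector by a further 6*theta
# reproduces the position-9 vector.
assert np.allclose(rot(6 * theta) @ v3, v9)

# Step relation: v_{m+1} is v_m rotated once more by theta.
v4 = rot(4 * theta) @ x
assert np.allclose(rot(theta) @ v3, v4)

Running this confirms both ideas behind the questions above: moving from position 3 to position 9 is a single additional rotation by 6θ, and each one-step advance is a single rotation by θ.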