Multiple Choice

A language model encodes token positions by applying a unique, position-dependent rotational transformation to each token's initial embedding. The final, position-aware embedding for a token is the result of this transformation. If the exact same token (e.g., 'model') appears at position 4 and later at position 12 in a sequence, which statement best describes the relationship between their final embeddings, $\mathbf{e}_4$ and $\mathbf{e}_{12}$?
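The key property behind the question can be checked numerically. The following is a minimal sketch, not the model's actual implementation: it assumes a toy 2-D rotary scheme in which position $p$ rotates the initial embedding by a fixed angle $p\theta$ (the hypothetical `rotate` helper and the value of `theta` are illustrative). Because rotation preserves length, $\mathbf{e}_4$ and $\mathbf{e}_{12}$ have the same norm but point in different directions, and $\mathbf{e}_{12}$ is $\mathbf{e}_4$ rotated by the 8-position offset.

```python
import numpy as np

def rotate(vec, pos, theta=0.1):
    # Toy stand-in for a rotary position encoding:
    # rotate the 2-D embedding by an angle proportional to its position.
    angle = pos * theta
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ vec

x = np.array([1.0, 0.0])   # initial (position-free) embedding of the token 'model'
e4 = rotate(x, 4)          # token at position 4
e12 = rotate(x, 12)        # same token at position 12

# Same magnitude: rotation never changes the vector's norm.
print(np.isclose(np.linalg.norm(e4), np.linalg.norm(e12)))

# Different direction: the embeddings themselves are not equal.
print(np.allclose(e4, e12))

# Related by the relative offset: rotating e4 by 8 more positions yields e12.
print(np.allclose(rotate(e4, 8), e12))
```

Under this sketch, the two embeddings are equal in magnitude but differ in orientation, related by a rotation that depends only on the positional offset.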


Updated 2025-09-29


Tags: Ch.2 Generative Models - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Ch.3 Prompting - Foundations of Large Language Models, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science