Formula

Input Embedding with Positional Encoding

To encode positional context into a sequence, a positional embedding is combined with the token embedding. For a token at position $i$, given its position-independent token embedding $\mathbf{x}_i \in \mathbb{R}^{d}$ and its positional embedding $\mathrm{PE}(i) \in \mathbb{R}^{d}$, the final input representation $\mathbf{e}_i$ is obtained by adding them together: $\mathbf{e}_i = \mathbf{x}_i + \mathrm{PE}(i)$.
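
The formula only requires that token and positional embeddings share the width $d$ so they can be added element-wise. As a minimal sketch, the NumPy snippet below assumes the sinusoidal encoding from the original Transformer for $\mathrm{PE}(i)$; a learned position-embedding table would plug into the same addition, and the names `positional_encoding`, `x`, and `pe` are illustrative.

```python
import numpy as np

def positional_encoding(max_len: int, d: int) -> np.ndarray:
    """Sinusoidal PE(i) for positions 0..max_len-1 (assumes even d)."""
    positions = np.arange(max_len)[:, None]           # shape (max_len, 1)
    dims = np.arange(0, d, 2)[None, :]                # shape (1, d/2)
    angles = positions / np.power(10000.0, dims / d)  # shape (max_len, d/2)
    pe = np.zeros((max_len, d))
    pe[:, 0::2] = np.sin(angles)  # even dimensions
    pe[:, 1::2] = np.cos(angles)  # odd dimensions
    return pe

# Position-independent token embeddings x_i for a 4-token sequence, d = 8.
rng = np.random.default_rng(0)
seq_len, d = 4, 8
x = rng.normal(size=(seq_len, d))     # stand-in for looked-up token embeddings
pe = positional_encoding(seq_len, d)  # PE(i) for each position i

e = x + pe        # e_i = x_i + PE(i)
print(e.shape)    # (4, 8): one position-aware input vector per token
```

Because the sum keeps the dimensionality at $d$, the resulting $\mathbf{e}_i$ can be fed directly into the first attention layer without any projection.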

