Concept

Contextual Token Representation in Sub-layers

In a Transformer architecture, both the input and output of a sub-layer are structured as an m × d matrix, where m denotes the sequence length and d the dimensionality of the representation. Within these matrices, the i-th row serves as a contextual representation of the i-th token in the sequence, encoding its meaning relative to the surrounding tokens.
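The shape-preserving property described above can be sketched with a minimal self-attention sub-layer (an illustrative example, not from the source; the parameter names W_q, W_k, W_v are hypothetical): the input and output are both m × d, and each output row mixes information from every input row, making it contextual.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_sublayer(X, W_q, W_k, W_v):
    """X: (m, d) input matrix; returns an (m, d) output of the same shape.

    Row i of the output is a weighted mix of all rows of X, so it is a
    *contextual* representation of token i."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v       # each (m, d)
    scores = Q @ K.T / np.sqrt(X.shape[1])    # (m, m) token-to-token weights
    return softmax(scores, axis=-1) @ V       # (m, d), same shape as X

rng = np.random.default_rng(0)
m, d = 5, 8                                   # sequence length, dimensionality
X = rng.normal(size=(m, d))
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Y = self_attention_sublayer(X, W_q, W_k, W_v)
print(X.shape, Y.shape)                       # both (5, 8)
```

Because the output shape matches the input shape, such sub-layers can be stacked freely, with each layer refining the per-token rows of the same m × d matrix.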


Updated 2026-04-18


Tags

Foundations of Large Language Models

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences
