Multi-Head Attention Output Calculation
Given a representation matrix $\mathbf{H}$, the multi-head self-attention function computes its output by concatenating the results from multiple individual attention heads. This relationship is formalized as:

$$\mathrm{Att}_{\mathrm{MultiHead}}(\mathbf{H}) \;=\; \mathrm{Merge}\big(\mathrm{head}_1, \dots, \mathrm{head}_\tau\big)\, \mathbf{W}^{\mathrm{head}}$$

In this equation, $\mathrm{Merge}(\cdot)$ signifies the concatenation of its inputs. Each element $\mathrm{head}_j$ represents the output derived from applying Query-Key-Value (QKV) attention to a specific sub-space of the initial representation. Finally, the concatenated results are projected via multiplication with a parameter matrix $\mathbf{W}^{\mathrm{head}}$ to yield the final sequence representation.
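To make the computation concrete, here is a minimal NumPy sketch of the formula above. It is an illustration only: the names (`multi_head_attention`, `W_head`, `tau`) and the per-head projection matrices are assumptions chosen to match the notation here, not a reference implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(H, Wq, Wk, Wv, W_head):
    """Merge(head_1, ..., head_tau) @ W_head for a representation matrix H."""
    heads = []
    for Wq_j, Wk_j, Wv_j in zip(Wq, Wk, Wv):
        # Project H into this head's query/key/value sub-space.
        Q, K, V = H @ Wq_j, H @ Wk_j, H @ Wv_j
        d_head = Q.shape[-1]
        # Scaled dot-product (QKV) attention for head j.
        scores = softmax(Q @ K.T / np.sqrt(d_head))
        heads.append(scores @ V)                     # head_j: (seq_len, d_head)
    merged = np.concatenate(heads, axis=-1)          # Merge(...): (seq_len, tau * d_head)
    return merged @ W_head                           # projection back to d_model

# Toy shapes: d_model = 8 split across tau = 2 heads of size 4.
rng = np.random.default_rng(0)
seq_len, d_model, tau = 3, 8, 2
d_head = d_model // tau
H = rng.standard_normal((seq_len, d_model))
Wq = [rng.standard_normal((d_model, d_head)) for _ in range(tau)]
Wk = [rng.standard_normal((d_model, d_head)) for _ in range(tau)]
Wv = [rng.standard_normal((d_model, d_head)) for _ in range(tau)]
W_head = rng.standard_normal((tau * d_head, d_model))
print(multi_head_attention(H, Wq, Wk, Wv, W_head).shape)  # (3, 8)
```

Note that the output shape matches the input shape: concatenating the heads restores the model dimension, and the final projection mixes information across the heads' sub-spaces.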

Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Related
Multi-Head Attention Output Calculation
Causal Attention Output for a Single Head and Token
In a multi-head attention mechanism, each individual attention head computes its output using its own unique Query, Key, and Value matrices, which are distinct linear projections of the same input. What is the primary functional consequence of this design choice?
Debugging an Attention Head
Dimensionality of an Attention Head Output
You are examining the computation for a single attention head within a multi-head attention layer. Arrange the following steps in the correct chronological order to produce the output for this individual head.
Autoregressive Individual Attention Head Computation
In a multi-head attention mechanism, the model's overall embedding dimension is 768. If this mechanism is configured with 12 separate, parallel attention heads, what is the dimension of the output vector produced by a single one of these heads?
Relationship Between Head and Model Dimensions
In a multi-head attention mechanism where the overall model dimension is d_model and there are τ parallel attention heads (where τ > 1), the output vector of a single attention head has a dimension of d_model/τ.
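As a worked instance of the statement above (assuming, as is standard, that the model dimension is split evenly across the heads):

$$d_{\mathrm{head}} \;=\; \frac{d_{\mathrm{model}}}{\tau}, \qquad \text{for example} \quad \frac{768}{12} = 64.$$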
Learn After
A multi-head attention layer in a model has 8 parallel attention heads. For a single input token, the output from each of these 8 heads is a vector with 64 dimensions. The mechanism's next step is to concatenate these 8 vectors into a single, larger vector. This larger vector is then multiplied by a final weight matrix to produce the layer's final output vector for that token. What is the dimensionality of the single vector that results from the concatenation step, before the final matrix multiplication is applied?
After each parallel attention head has computed its individual output vector, what is the correct sequence of operations to produce the final output of the multi-head attention layer?
Determining Weight Matrix Dimensions in Multi-Head Attention
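A worked check for the dimension questions above (assuming the usual convention that the final weight matrix projects the concatenated vector back to the model dimension):

$$\dim\big(\mathrm{Merge}(\mathrm{head}_1, \dots, \mathrm{head}_8)\big) \;=\; 8 \times 64 \;=\; 512, \qquad \mathbf{W}^{\mathrm{head}} \in \mathbb{R}^{512 \times d_{\mathrm{model}}}.$$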