Learn Before
  • Multi-Head Attention Output Calculation

Sequence Ordering

After each parallel attention head has computed its individual output vector, what is the correct sequence of operations to produce the final output of the multi-head attention layer?
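For context, the standard ordering in the Transformer architecture (Vaswani et al., 2017) is: concatenate the per-head output vectors, then apply a single final output projection. Below is a minimal PyTorch sketch of that ordering; the sizes (8 heads of 64 dimensions each) and the names `head_outputs` and `W_O` are illustrative assumptions, not part of the question.

```python
# Sketch of the final steps of multi-head attention: concatenate the
# per-head outputs, then multiply by the output projection W_O.
# All sizes here are assumed for illustration (8 heads, 64 dims each).
import torch

num_heads, head_dim = 8, 64
d_model = num_heads * head_dim  # 512

# Hypothetical per-head output vectors for a single token.
head_outputs = [torch.randn(head_dim) for _ in range(num_heads)]

# Step 1: concatenate along the feature dimension -> shape (512,)
concatenated = torch.cat(head_outputs, dim=0)

# Step 2: multiply by the final weight matrix W_O -> shape (512,)
W_O = torch.randn(d_model, d_model)
final_output = concatenated @ W_O

print(concatenated.shape, final_output.shape)
# torch.Size([512]) torch.Size([512])
```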


Updated 2025-10-04

Contributors:

Gemini AI (Google)

Tags
  • Ch.2 Generative Models - Foundations of Large Language Models
  • Foundations of Large Language Models
  • Foundations of Large Language Models Course
  • Computing Sciences
  • Comprehension in Revised Bloom's Taxonomy
  • Cognitive Psychology
  • Psychology
  • Social Science
  • Empirical Science
  • Science

Related
  • A multi-head attention layer has 8 parallel attention heads. For a single input token, each head outputs a 64-dimensional vector. The mechanism then concatenates these 8 vectors into a single, larger vector, which is multiplied by a final weight matrix to produce the layer's final output for that token. What is the dimensionality of the concatenated vector, before the final matrix multiplication is applied? (See the dimensionality check after this list.)

  • Determining Weight Matrix Dimensions in Multi-Head Attention
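For the first related question above, the concatenation size follows directly from the head count and per-head width: 8 heads × 64 dimensions = 512 dimensions. A quick check under those stated numbers:

```python
# Dimensionality check for concatenating 8 per-head vectors of 64 dims
# each: the result has 8 * 64 = 512 dimensions before the final
# weight matrix is applied.
import torch

heads = [torch.randn(64) for _ in range(8)]
concatenated = torch.cat(heads)
print(concatenated.shape)  # torch.Size([512])
```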
