Concept

Self-Attention in the Transformer

In the encoder, the input first passes through a module called 'self-attention', which produces a weighted feature representation of x: each token's output is a weighted sum over all tokens, with the weights computed from the tokens' pairwise similarity.
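A minimal sketch of that weighting step, using NumPy and scaled dot-product attention (the projection matrices `w_q`, `w_k`, `w_v` and all shapes here are illustrative assumptions, not values from the note):

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    # Project the input tokens into queries, keys, and values.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    # Attention weights: scaled dot-product similarity between all token pairs.
    weights = softmax(q @ k.T / np.sqrt(d_k), axis=-1)
    # Each output row is a weighted sum of the value vectors.
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))    # 4 tokens, embedding dimension 8 (toy sizes)
w_q = rng.normal(size=(8, 8))
w_k = rng.normal(size=(8, 8))
w_v = rng.normal(size=(8, 8))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)  # each token gets a new 8-dim weighted feature
```

The output has the same shape as the input, so the block can be stacked; the softmax guarantees each row of attention weights sums to 1.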

Updated 2026-04-15

Tags

Data Science
