Multiple Choice

An engineer observes that during the training of a transformer-based model, several attention heads within the same layer consistently produce nearly identical attention patterns for a wide variety of inputs. Despite the model having many heads, this redundancy seems to limit the model's ability to capture diverse linguistic features. This scenario highlights a key motivation for developing more advanced attention mechanisms. What is the most direct problem with the standard multi-head attention design that this observation reveals?
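The redundancy described above can be measured directly. As an illustrative sketch (not from the course material; the function name and threshold are hypothetical), one can flatten each head's attention map and flag pairs of heads whose maps are nearly parallel under cosine similarity:

```python
import numpy as np

def head_redundancy(attn, threshold=0.95):
    """Flag pairs of heads with nearly identical attention patterns.

    attn: array of shape (num_heads, seq_len, seq_len), rows summing to 1.
    Returns a list of (i, j, cosine_similarity) for pairs above threshold.
    """
    num_heads = attn.shape[0]
    flat = attn.reshape(num_heads, -1)
    # Normalize each head's flattened attention map to unit length,
    # so the dot product of two rows is their cosine similarity.
    unit = flat / np.linalg.norm(flat, axis=1, keepdims=True)
    sim = unit @ unit.T
    return [(i, j, float(sim[i, j]))
            for i in range(num_heads)
            for j in range(i + 1, num_heads)
            if sim[i, j] >= threshold]

# Toy example: heads 0 and 1 share one pattern; head 2 attends diagonally.
rng = np.random.default_rng(0)
base = rng.random((4, 4))
base /= base.sum(axis=1, keepdims=True)  # make rows sum to 1
attn = np.stack([base, base, np.eye(4)])
print(head_redundancy(attn))  # only the (0, 1) pair is flagged
```

In a real transformer, heads flagged this way contribute little beyond one another, which is exactly the capacity loss the question points to and a common argument for head pruning or diversity-encouraging attention variants.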

Updated 2025-10-02

Tags

Data Science

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science