Multiple Choice

A self-attention model incorporates positional awareness by adding a bias term directly to the query-key dot product for each pair of positions (i, j); the value of this bias depends on the relative distance between i and j. What is the primary implication of this approach compared to the alternative of adding positional vectors to the input token embeddings?
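To make the setup concrete, here is a minimal NumPy sketch contrasting the two schemes the question describes: a relative-position bias added to the query-key scores versus absolute positional vectors added to the input embeddings. The sequence length, embedding size, random weights, and the learned bias table are illustrative assumptions, not part of the question itself.

```python
import numpy as np

np.random.seed(0)
seq_len, d = 4, 8
X = np.random.randn(seq_len, d)        # token embeddings (illustrative)
Wq = np.random.randn(d, d)
Wk = np.random.randn(d, d)

# Scheme A: bias added to the query-key scores.
# b[i, j] is looked up by the offset (i - j), so the same entry is reused
# for every pair of positions at that distance.
max_offset = seq_len - 1
bias_table = np.random.randn(2 * max_offset + 1)   # one scalar per offset
offsets = np.arange(seq_len)[:, None] - np.arange(seq_len)[None, :]
rel_bias = bias_table[offsets + max_offset]          # (seq_len, seq_len)

Q, K = X @ Wq, X @ Wk
scores_relative = Q @ K.T / np.sqrt(d) + rel_bias

# Scheme B: absolute positional vectors added to the input embeddings.
# Each position i gets its own vector p_i before the projections are applied.
P = np.random.randn(seq_len, d)                      # positional embeddings
Xp = X + P
Qp, Kp = Xp @ Wq, Xp @ Wk
scores_absolute = Qp @ Kp.T / np.sqrt(d)

print(scores_relative.shape, scores_absolute.shape)  # (4, 4) (4, 4)
```

In this sketch, Scheme A's bias is a function of the offset alone, while Scheme B mixes position information into the token representations themselves; the question asks what follows from that difference.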


Updated 2025-10-01


Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science