Learn Before
Calculating a Pre-Softmax Attention Score with Linear Bias
In a transformer model that incorporates a linear positional bias directly into its attention mechanism, you need to compute the pre-Softmax attention score between a query vector at position 3 and a key vector at position 1. The formula for this score involves adding a positional bias term to the standard scaled dot-product of the query and key. Use the provided information to calculate this final score.
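A minimal sketch of the computation, assuming an ALiBi-style bias of the form PE(i, j) = -m * (i - j) with a head-specific slope m; the vectors, dimensionality, and slope below are illustrative placeholders rather than the figures provided in the exercise:

    import numpy as np

    d = 4                                  # query/key dimensionality (placeholder)
    q = np.array([1.0, 0.0, 2.0, 1.0])     # query vector at position i = 3 (placeholder values)
    k = np.array([0.5, 1.0, 1.0, 0.0])     # key vector at position j = 1 (placeholder values)
    m = 0.5                                # head-specific slope for the linear bias (placeholder)
    i, j = 3, 1

    semantic = (q @ k) / np.sqrt(d)        # standard scaled dot-product term
    bias = -m * (i - j)                    # linear positional bias PE(i, j)
    score = semantic + bias                # pre-Softmax attention score
    print(semantic, bias, score)           # 1.25, -1.0, 0.25 with these placeholder values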
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Complete ALiBi Attention Formula
Calculating a Pre-Softmax Attention Score with Linear Bias
In a model that adds a linear positional bias to its attention calculation, a query at position i = 10 attends to two keys at positions j1 = 5 and j2 = 2. Assuming the scaled dot-product portion of the score is identical for both keys, how will the addition of the positional bias term PE(i, j) affect their final pre-Softmax attention scores?
Interaction of Semantic and Positional Scores
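For the related question above, a worked comparison under the same assumed ALiBi-style bias PE(i, j) = -m * (i - j) with a positive slope m: with i = 10, the key at j1 = 5 receives a bias of -m * (10 - 5) = -5m, while the key at j2 = 2 receives -m * (10 - 2) = -8m. Since the scaled dot-products are identical, the more distant key's pre-Softmax score is pushed down further, and the nearer key at j1 = 5 ends up with the higher score.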