Case Study

Calculating a Pre-Softmax Attention Score with Linear Bias

In a transformer model that incorporates a linear positional bias directly into its attention mechanism, you need to compute the pre-Softmax attention score between a query vector at position 3 and a key vector at position 1. The score is the standard scaled dot product of the query and key plus a positional bias term that depends linearly on the distance between the two positions. Use the provided information to calculate this final score.
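Although the exercise's specific query, key, and slope values are not reproduced here, a linear-bias (ALiBi-style) score generally takes the form score(q_i, k_j) = (q_i · k_j) / sqrt(d_k) + m(j - i), where d_k is the query/key dimensionality and m is a fixed bias slope (some texts write the bias as -m(i - j), which is the same quantity). The short Python sketch below works through one fully hypothetical example; the vectors, slope, and dimensionality are illustrative assumptions, not the exercise's data.

    import math

    # All values below are hypothetical, chosen only to illustrate the computation.
    q = [1.0, 0.0, 2.0, 1.0]   # query vector at position i = 3 (assumed)
    k = [0.5, 1.0, 1.0, 0.0]   # key vector at position j = 1 (assumed)
    m = 0.5                    # linear-bias slope (assumed)
    d_k = len(q)               # query/key dimensionality = 4

    dot = sum(qi * ki for qi, ki in zip(q, k))  # q . k = 2.5
    scaled = dot / math.sqrt(d_k)               # 2.5 / 2 = 1.25
    bias = m * (1 - 3)                          # linear positional bias = -1.0
    score = scaled + bias                       # pre-Softmax score = 0.25
    print(score)                                # prints 0.25

With the actual numbers from the exercise substituted in, the same three steps (scaled dot product, linear bias from the position difference, and their sum) give the required pre-Softmax score.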

Updated 2025-09-29

Tags: Ch.2 Generative Models - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Application in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science