Short Answer

Analyzing the Role of the Mask in Attention

In the scaled dot-product attention formula, Attention(Q, K, V) = Softmax((QK^T / sqrt(d)) + Mask) * V, the Mask matrix is optional. For a task like generating a sentence one word at a time, where the prediction for a word can only depend on the words that came before it, explain the specific role of the Mask matrix. Describe how it is structured and how it mathematically prevents the model from attending to future positions.
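For reference alongside the question, here is a minimal NumPy sketch of the formula exactly as written above, using an additive causal mask (0 where a position may attend, -inf above the diagonal). The function and variable names (`scaled_dot_product_attention`, `causal_mask`) and the toy sizes are illustrative only, not part of the question.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, mask=None):
    """Compute Softmax((Q K^T / sqrt(d)) + Mask) V, as in the question.

    Q, K, V: arrays of shape (seq_len, d).
    mask: additive matrix of shape (seq_len, seq_len); 0 where attention
    is allowed, -inf where it must be blocked.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)      # (seq_len, seq_len) similarity scores
    if mask is not None:
        scores = scores + mask         # -inf entries get zero weight after softmax
    # numerically stable softmax over each row
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Illustrative usage: a causal (lower-triangular) additive mask for 4 tokens.
seq_len, d = 4, 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((seq_len, d)) for _ in range(3))
causal_mask = np.triu(np.full((seq_len, seq_len), -np.inf), k=1)  # -inf strictly above the diagonal
out = scaled_dot_product_attention(Q, K, V, causal_mask)
print(out.shape)  # (4, 8)
```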

Updated 2025-10-09


Tags: Ch.2 Generative Models - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Ch.5 Inference - Foundations of Large Language Models, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science