Tags
Data Science
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Content-based Sparse Attention
Positional-based Sparse Attention
Classifying a Novel Sparse Attention Mechanism
An engineer develops a sparse attention mechanism where, for any given token, the set of other tokens it can attend to is defined by a pre-determined, structured pattern based on their relative distance in the sequence. For example, a token might only attend to the 8 tokens immediately preceding it. This attention pattern does not change, regardless of the specific words or meaning of the input text. Based on how the set of attended-to indices is defined, how should this mechanism be classified?
Because the set of attended-to indices is fixed by a structured pattern over relative positions and does not depend on the tokens' content, this mechanism should be classified as positional-based sparse attention. Content-based sparse attention, by contrast, determines the attended-to indices dynamically for each input by finding the other tokens with the most similar content.
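A minimal sketch of the fixed positional pattern described in the question, written in NumPy. The window size of 8, the tensor shapes, and the decision to let each token also attend to itself (so the first token has at least one allowed position) are illustrative assumptions, not details from the source; the point is only that the mask depends on positions alone and is identical for every input of the same length.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int = 8) -> np.ndarray:
    """Boolean mask: mask[i, j] is True iff token i may attend to token j."""
    i = np.arange(seq_len)[:, None]   # query positions
    j = np.arange(seq_len)[None, :]   # key positions
    # Allow only positions at most `window` steps back (causal, positional);
    # j == i keeps the current token attendable so no row is fully masked.
    return (j <= i) & (i - j <= window)

def positional_sparse_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray,
                                window: int = 8) -> np.ndarray:
    """Scaled dot-product attention restricted to a fixed positional window."""
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)              # (seq_len, seq_len) similarities
    mask = sliding_window_mask(seq_len, window)
    scores = np.where(mask, scores, -np.inf)   # block all disallowed pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# The mask never looks at q, k, or v: the sparsity pattern is the hallmark of
# a positional scheme. A content-based scheme would instead pick attended
# indices per input, e.g. the top-k keys by similarity to each query.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 4))
out = positional_sparse_attention(x, x, x, window=8)
print(out.shape)  # (16, 4)
```

Note that the content scores still influence the attention *weights* inside the window; what makes the mechanism positional is that the *set* of candidate positions is decided before any content is seen.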