A k-NN-based attention model produces outputs identical to those of a sparse attention model restricted to previous tokens when its datastore is populated with the key-value pairs of the current input sequence's preceding tokens, not with pairs drawn from a large external corpus that differs from the current sequence.
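A minimal sketch of this equivalence, assuming a dot-product similarity for retrieval and a datastore rebuilt per token from the same sequence (all function names here are illustrative): when the datastore holds exactly the keys and values of tokens 0..t and k covers every stored entry, k-NN retrieval attention coincides with causal attention.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def causal_attention(Q, K, V):
    """Standard attention where token t attends to tokens 0..t."""
    out = np.zeros_like(Q)
    for t in range(len(Q)):
        scores = Q[t] @ K[: t + 1].T
        out[t] = softmax(scores) @ V[: t + 1]
    return out

def knn_attention(Q, K, V, k):
    """Token t retrieves the k nearest stored keys (dot-product
    similarity) from a datastore built from tokens 0..t of the
    same sequence, then attends over only the retrieved pairs."""
    out = np.zeros_like(Q)
    for t in range(len(Q)):
        store_K, store_V = K[: t + 1], V[: t + 1]  # datastore = preceding tokens
        scores = Q[t] @ store_K.T
        if k < len(scores):
            top = np.argsort(scores)[-k:]          # indices of k nearest keys
        else:
            top = np.arange(len(scores))           # k covers the whole datastore
        out[t] = softmax(scores[top]) @ store_V[top]
    return out

rng = np.random.default_rng(0)
Q = rng.standard_normal((6, 4))
K = rng.standard_normal((6, 4))
V = rng.standard_normal((6, 4))

# With k >= sequence length, retrieval returns the entire datastore
# and the two mechanisms produce identical outputs.
assert np.allclose(knn_attention(Q, K, V, k=6), causal_attention(Q, K, V))
```

If the datastore were instead filled from an external corpus, the retrieved keys and values would generally differ from those of the current sequence, so the outputs would not match.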
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
An engineer is designing a language model that uses a retrieval-based component for its attention mechanism. They observe that under a specific configuration, this retrieval-based model behaves identically to a sparse attention model that only considers previous tokens within the same input sequence. Which of the following configurations of the retrieval component's datastore would cause this functional equivalence?
Condition for Equivalence in Attention Models
Architectural Trade-offs in Attention Mechanisms