Short Answer

Computational Cost Scaling in Attention Mechanisms

Consider two language models processing a very long sequence of text one token at a time. Model A uses an attention mechanism in which the memory it attends to has a constant, predetermined size. Model B uses standard attention, in which the memory grows to include every previous token. Compare how, for Model A versus Model B, the computational cost of calculating attention for each new token changes as the sequence gets longer, and explain the fundamental reason for the difference.
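
The contrast the question asks about can be made concrete with a small sketch. Below is a minimal NumPy illustration, not part of the original question; the names (`attend`, `window`, the dimensions) are all illustrative assumptions. It runs the same single-query attention computation for both models at each step: Model B attends over a cache that grows with the position, while Model A attends over at most a fixed number of cached tokens.

```python
# Minimal sketch (illustrative, not from the question): per-token attention
# cost for a fixed-size memory (Model A) vs. a growing cache (Model B).
import numpy as np

d = 64          # hidden dimension (illustrative)
window = 128    # Model A's constant memory size (illustrative)

def attend(query, keys, values):
    """Scaled dot-product attention for one query vector.
    Cost is proportional to len(keys): one score and one weighted
    value per cached token."""
    scores = keys @ query / np.sqrt(d)     # O(len(keys) * d)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values                # O(len(keys) * d)

rng = np.random.default_rng(0)
cache_k, cache_v = [], []

for t in range(1, 1001):
    q = rng.standard_normal(d)
    cache_k.append(rng.standard_normal(d))
    cache_v.append(rng.standard_normal(d))

    # Model B: attends over every previous token, so the work per new
    # token grows linearly with the position t.
    _ = attend(q, np.stack(cache_k), np.stack(cache_v))

    # Model A: attends over at most `window` tokens, so the work per new
    # token is bounded by a constant regardless of t.
    _ = attend(q, np.stack(cache_k[-window:]), np.stack(cache_v[-window:]))

    if t in (128, 512, 1000):
        print(f"t={t:4d}  Model B memory={len(cache_k):4d}  "
              f"Model A memory={min(t, window):4d}")
```

In this sketch, Model B's per-token cost grows linearly with position (O(t·d) at step t, hence O(n²·d) over a sequence of length n), while Model A's stays constant (O(m·d) for a fixed memory size m), because the work of attention scales with the number of entries being attended to.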

Updated 2025-10-02

Tags

Ch.2 Generative Models - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science