Short Answer

Comparing Memory Usage of Attention Mechanisms

A large language model is processing a very long document (sequence length m). Compare the growth of the Key-Value (KV) cache's memory footprint as m increases for two scenarios: (1) the model uses standard attention, and (2) the model uses sliding window attention with a fixed window size m_w. Explain which approach is more scalable for processing extremely long sequences and why.
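To make the asymptotics concrete, here is a minimal Python sketch (not part of the original question) that estimates KV cache size under both scenarios. The hyperparameters n_layers, n_heads, head_dim, the 2-byte (fp16) element size, and the window size m_w = 4096 are illustrative assumptions, not values given in the question.

```python
def kv_cache_bytes(seq_len, window=None, n_layers=32, n_heads=32,
                   head_dim=128, dtype_bytes=2):
    """Estimate the bytes held by the KV cache for one sequence.

    Standard attention caches keys and values for every past token, so the
    number of cached positions equals seq_len. Sliding window attention only
    keeps the most recent `window` positions, so the cache is capped at
    min(seq_len, window).
    """
    cached_positions = seq_len if window is None else min(seq_len, window)
    # 2 tensors (K and V) per layer, each of shape
    # [n_heads, cached_positions, head_dim]
    return 2 * n_layers * n_heads * head_dim * dtype_bytes * cached_positions

if __name__ == "__main__":
    m_w = 4096  # assumed fixed window size
    for m in (4_096, 65_536, 1_048_576):
        std = kv_cache_bytes(m)
        swa = kv_cache_bytes(m, window=m_w)
        print(f"m={m:>9,}: standard {std / 2**30:8.2f} GiB | "
              f"sliding window {swa / 2**30:8.2f} GiB")
```

Running this shows the standard attention cache growing linearly, O(m), while the sliding window cache plateaus at O(m_w) once m exceeds the window, which is the basis for arguing that sliding window attention is the more scalable choice for extremely long sequences.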

Tags: Ch.2 Generative Models - Foundations of Large Language Models; Foundations of Large Language Models Course
