Learn Before
KV Cache Memory Scaling
A machine learning engineer is profiling a large language model's memory usage during text generation. They observe that when they double the number of decoder layers in the model, the total memory consumed by the key-value (KV) cache also approximately doubles, even when the input text length remains the same. Based on the structural organization of the KV cache across the model's architecture, explain this observation.
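The linear relationship the engineer observes can be sketched with a small sizing helper. This is an illustrative formula, not any framework's API: the KV cache holds one key tensor and one value tensor per decoder layer, so total memory grows in direct proportion to the layer count.

```python
def kv_cache_bytes(num_layers, seq_len, num_heads, head_dim, bytes_per_elem=2):
    """Approximate KV cache size in bytes.

    Each decoder layer stores 2 tensors (K and V), each of shape
    (seq_len, num_heads, head_dim), at bytes_per_elem per value
    (2 bytes for fp16). Illustrative names; parameters are assumptions.
    """
    per_layer = 2 * seq_len * num_heads * head_dim * bytes_per_elem
    return num_layers * per_layer

# Doubling the layer count doubles the cache, with seq_len unchanged:
base    = kv_cache_bytes(num_layers=24, seq_len=2048, num_heads=32, head_dim=128)
doubled = kv_cache_bytes(num_layers=48, seq_len=2048, num_heads=32, head_dim=128)
assert doubled == 2 * base
```

Because the per-layer term contains no dependence on the number of layers, total cache memory is simply `num_layers × per_layer`, which is exactly the doubling behavior observed in the profile.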
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
KV Cache Memory Scaling
A developer is examining the internal state of a 12-layer Transformer decoder after it has processed an input prompt. They notice that the generated Key-Value (KV) cache is not a single, large data structure, but is instead organized as a collection of 12 separate caches. What is the fundamental reason for this layer-wise organization?
Accessing a Specific Layer's KV Cache
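The layer-wise organization asked about in the related questions can be sketched in plain Python (shape tuples stand in for real tensors; all names are illustrative, not from a specific framework). Each decoder layer applies its own key/value projections, so its cached keys and values are unusable by any other layer, and the natural structure is one (K, V) pair per layer.

```python
num_layers, seq_len, num_heads, head_dim = 12, 16, 4, 8

def empty_kv(seq_len, num_heads, head_dim):
    """Placeholder for one key or value tensor, represented by its shape."""
    return (seq_len, num_heads, head_dim)

# One (K, V) pair per decoder layer: layer i's projections produce keys
# and values specific to that layer, so the cache is a collection of
# num_layers separate per-layer caches, not one monolithic structure.
kv_cache = [
    (empty_kv(seq_len, num_heads, head_dim),
     empty_kv(seq_len, num_heads, head_dim))
    for _ in range(num_layers)
]

assert len(kv_cache) == num_layers  # 12 separate caches for 12 layers
k3, v3 = kv_cache[3]                # accessing layer 3's cache is indexing
```

Under this organization, accessing a specific layer's cache is a constant-time index into the collection, which is the pattern the last related card points at.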