Case Study

KV Cache Memory Scaling

A machine learning engineer is profiling a large language model's memory usage during text generation. They observe that when they double the number of decoder layers in the model, the total memory consumed by the key-value (KV) cache also approximately doubles, even when the input text length remains the same. Based on the structural organization of the KV cache across the model's architecture, explain this observation.
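The observation follows from the KV cache being organized per layer: every decoder layer maintains its own key and value tensors, so total cache memory is the per-layer cost multiplied by the layer count. A minimal sketch of this accounting (illustrative shapes and sizes, not any specific model's numbers):

```python
# Sketch: KV cache memory as a function of model depth (illustrative).
# Each decoder layer stores its own K and V tensors, so total cache
# memory grows linearly with the number of layers.

def kv_cache_bytes(num_layers, num_heads, head_dim, seq_len,
                   batch=1, bytes_per_elem=2):
    """Total KV cache size in bytes (fp16 by default)."""
    # 2 tensors (K and V) per layer, each of shape
    # (batch, num_heads, seq_len, head_dim)
    per_layer = 2 * batch * num_heads * seq_len * head_dim * bytes_per_elem
    return num_layers * per_layer

base = kv_cache_bytes(num_layers=32, num_heads=32, head_dim=128, seq_len=2048)
deep = kv_cache_bytes(num_layers=64, num_heads=32, head_dim=128, seq_len=2048)
print(deep / base)  # 2.0 — doubling the layers doubles the cache
```

Since sequence length, head count, and head dimension are unchanged, the per-layer term is constant and memory scales linearly in `num_layers`, matching the engineer's observation.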


Updated 2025-10-02


Tags

Ch.5 Inference - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy
