Essay

Diagnosing and Redesigning KV-Cache Memory Behavior in a Multi-Tenant LLM Serving Stack

You operate a GPU-based LLM inference service that uses continuous batching to serve many concurrent chat sessions. Each request has (a) a prompt that must be processed before generation starts and (b) a variable-length generated response. Production telemetry shows two symptoms: (1) latency spikes occur when many new requests arrive with long prompts that share a common system prefix (e.g., the same 200-token policy header), and (2) after several hours of mixed traffic, the service sometimes fails to admit a new long request even though ~25–35% of GPU memory is reported free.
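
For scale, it helps to ground these symptoms with a back-of-envelope estimate of KV-cache growth. The sketch below assumes a hypothetical 7B-class model (32 layers, 32 KV heads, head dimension 128, fp16 cache) and 64 concurrent sessions purely for illustration; the real constants depend on the served model and its parallelism layout.

```python
# Back-of-envelope KV-cache sizing. The model shape is an assumption, not the
# production model: a hypothetical 7B-class configuration with an fp16 cache.
BYTES_FP16 = 2
N_LAYERS, N_KV_HEADS, HEAD_DIM = 32, 32, 128

def kv_bytes_per_token() -> int:
    # Each token stores one K vector and one V vector per KV head per layer.
    return 2 * N_LAYERS * N_KV_HEADS * HEAD_DIM * BYTES_FP16

per_token = kv_bytes_per_token()        # 512 KiB per token for this config
shared_prefix = 200 * per_token         # the 200-token policy header
concurrent = 64                         # assumed number of live sessions

print(f"{per_token / 2**10:.0f} KiB of KV cache per token")
print(f"{shared_prefix / 2**20:.0f} MiB of prefix KV per request")
print(f"{concurrent * shared_prefix / 2**30:.2f} GiB duplicated across "
      f"{concurrent} sessions if the prefix is recomputed per request")
```

Under these assumptions the shared prefix alone accounts for roughly 100 MiB per request, and unless its KV states are reused, those same 200 tokens are also reprocessed during prompt handling on every arrival, which is why a burst of long shared-prefix prompts can show up as both a prefill latency spike and a memory spike.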

Write an engineering recommendation memo that proposes a coherent end-to-end approach to reduce both the latency spikes and the admission failures. Your memo must explicitly connect: how the KV cache is created and grows across the prompt-processing stage versus token-by-token generation; how reusing KV states for shared prompt prefixes changes the amount of prompt work performed; why the observed “free memory but cannot allocate” symptom can occur in KV-cache allocation; and how a paged/block-based KV-cache allocator would change the failure mode and memory utilization. Conclude by stating at least two concrete tradeoffs/risks (e.g., memory overhead, eviction policy complexity, access patterns) and how you would validate the improvement with metrics or experiments.
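
For the paged-allocator part of the memo, a minimal sketch of a block-based KV-cache allocator with reference-counted prefix sharing is shown below. Names such as BLOCK_TOKENS, BlockAllocator, and admit are hypothetical illustrations, not any particular engine's API; the idea follows the paged-attention design, in which per-sequence block tables map logical token positions onto fixed-size physical blocks.

```python
# Minimal sketch of a paged (block-based) KV-cache allocator with prefix
# sharing. All names, the block size, and the pool size are assumptions.
from dataclasses import dataclass
from typing import List

BLOCK_TOKENS = 16  # tokens per physical KV block (assumed)

@dataclass
class Block:
    block_id: int
    refcount: int = 0

class BlockAllocator:
    def __init__(self, num_blocks: int):
        self.free_blocks: List[Block] = [Block(i) for i in range(num_blocks)]

    def allocate(self) -> Block:
        # Any free block satisfies any request: there is no contiguity
        # requirement, so "free memory but cannot allocate" collapses to
        # the simpler condition "no free blocks left".
        if not self.free_blocks:
            raise MemoryError("out of KV blocks")
        blk = self.free_blocks.pop()
        blk.refcount = 1
        return blk

    def share(self, blk: Block) -> Block:
        # Shared prefix blocks are reference-counted instead of copied.
        blk.refcount += 1
        return blk

    def release(self, blk: Block) -> None:
        blk.refcount -= 1
        if blk.refcount == 0:
            self.free_blocks.append(blk)

def admit(allocator: BlockAllocator,
          cached_prefix: List[Block],
          prompt_tokens: int) -> List[Block]:
    """Build a per-sequence block table: reuse the cached prefix blocks and
    allocate fresh blocks only for the non-shared tail of the prompt."""
    table = [allocator.share(b) for b in cached_prefix]
    tail = max(0, prompt_tokens - len(cached_prefix) * BLOCK_TOKENS)
    for _ in range((tail + BLOCK_TOKENS - 1) // BLOCK_TOKENS):
        table.append(allocator.allocate())
    return table
```

In a design along these lines, each decoding sequence appends new tokens into its last partially filled block and requests a fresh block only every BLOCK_TOKENS tokens, so internal fragmentation is bounded by at most one block per sequence. The external fragmentation that a contiguous per-request allocator accumulates over hours of mixed traffic (a plausible cause of "free but unallocatable" memory) cannot occur, and the failure mode becomes a clean pool-exhaustion signal that an admission controller or eviction policy can act on.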

Updated 2026-02-06

Tags: Ch.2 Generative Models - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Ch.5 Inference - Foundations of Large Language Models
