Case Study

Post-incident analysis: KV-cache growth, fragmentation, and shared-prefix reuse in a streaming LLM service

You are the on-call engineer for an internal LLM gateway that serves two workloads on the same GPU pool: (A) a chat product where every request begins with the same 600-token system prompt, and (B) an agent workflow that sends highly variable prompts (50–4000 tokens) and often streams 800–1500 generated tokens. The serving stack uses continuous batching and stores each sequence’s KV cache in GPU memory during generation.
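To get a feel for the memory stakes in this setup, here is a back-of-envelope KV-cache sizing sketch. The model dimensions (32 layers, 32 KV heads, head dim 128, fp16) are illustrative assumptions, not the gateway's actual model; only the token counts come from the scenario above.

```python
# Back-of-envelope KV-cache footprint per sequence.
# Model dimensions below are illustrative assumptions, not the gateway's model.
def kv_cache_bytes(num_tokens, n_layers=32, n_kv_heads=32, head_dim=128, dtype_bytes=2):
    # 2x for keys and values, stored for every layer and every KV head.
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * num_tokens

chat_prefix = kv_cache_bytes(600)          # the shared 600-token system prompt
agent_long = kv_cache_bytes(4000 + 1500)   # worst-case agent prompt + generation

print(f"600-token prefix: {chat_prefix / 2**20:.0f} MiB")        # 300 MiB
print(f"5500-token agent sequence: {agent_long / 2**20:.0f} MiB") # 2750 MiB
```

Under these assumed dimensions, a single worst-case agent sequence needs roughly nine times the memory of the shared chat prefix, which is why the two workloads stress the allocator so differently.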

After a traffic spike, you observe the following symptoms over a 30-minute window:

  1. Median time-to-first-token (TTFT) increases sharply, but tokens/second during streaming generation degrades only mildly.
  2. GPU memory monitoring shows ~25% free memory, yet new long agent requests frequently fail to start with an out-of-memory allocation error.
  3. When you temporarily disable reuse of the shared 600-token system prompt (i.e., you always recompute it per request), TTFT gets worse but the OOM allocation failures become less frequent.
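Symptom 2 is worth modeling before answering. The toy allocator below (a sketch, not the serving stack's actual allocator; block counts are arbitrary) shows how a contiguous-block allocation policy can report plenty of free memory in total while no single free run is large enough for a long sequence.

```python
# Toy model of symptom 2: a contiguous allocator can report lots of free
# memory while no single free run is big enough for a long request.
free = [True] * 100  # 100 memory blocks, all free initially

def alloc_contiguous(free, size):
    """Return the start index of a free run of `size` blocks, or None (OOM)."""
    run = 0
    for i, f in enumerate(free):
        run = run + 1 if f else 0
        if run == size:
            start = i - size + 1
            for j in range(start, i + 1):
                free[j] = False
            return start
    return None

# Many short sequences come and go, leaving live allocations interleaved
# with freed memory: even-indexed blocks busy, odd-indexed blocks free.
for i in range(0, 100, 2):
    free[i] = False

print(sum(free))                  # 50 -> half of all memory is free
print(alloc_contiguous(free, 3))  # None -> yet a 3-block request OOMs
```

Half the pool is free, yet any request needing even three contiguous blocks fails: external fragmentation, not exhaustion.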

Assume the model is a standard autoregressive Transformer with a KV cache; inference consists of an initial prompt-processing stage that populates the KV cache followed by token-by-token generation that appends to the KV cache.
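The two stages described above have very different allocation patterns, which the following sketch makes concrete by tracking only KV-cache shape (the dimensions are small illustrative values, not a real model's):

```python
import numpy as np

# Sketch of the two inference stages: prefill writes the whole prompt's
# K/V in one pass; decode appends one position per generated token.
# Dims (2 layers, 4 heads, head_dim 8) are illustrative only.
n_layers, n_heads, head_dim = 2, 4, 8

def prefill(prompt_len):
    # Prompt processing populates K/V for every prompt token at once.
    k = np.zeros((n_layers, n_heads, prompt_len, head_dim))
    v = np.zeros_like(k)
    return k, v

def decode_step(k, v):
    # Each generated token appends exactly one position to the cache.
    new = np.zeros((n_layers, n_heads, 1, head_dim))
    return np.concatenate([k, new], axis=2), np.concatenate([v, new], axis=2)

k, v = prefill(600)        # e.g. the shared 600-token system prompt
for _ in range(3):
    k, v = decode_step(k, v)
print(k.shape[2])          # 603: one new cache slot per decode step
```

Note the asymmetry: prefill needs a large allocation up front whose size is known from the prompt, while decode grows the cache incrementally to a length that is unknown in advance. That asymmetry is central to answering the question below.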

As the incident owner, propose ONE coherent serving change (a single design choice, not a list) that best explains and addresses all three symptoms at once. Your answer must (i) identify the most likely root cause linking TTFT behavior and the “free memory but OOM” paradox, and (ii) justify why your chosen change improves the situation by explicitly referencing how it affects KV-cache allocation during prompt processing vs. token-by-token generation, and how it interacts with shared-prefix reuse.

Updated 2026-02-06
