Case Study

Root-cause and mitigation plan for OOMs and latency spikes during shared-prefix, long-generation traffic

You are the on-call engineer for an internal LLM gateway that serves two high-volume products on the same GPU pool: (A) a customer-support chat agent and (B) a report generator. Both products use the same 220-token system prompt, but user prompts range from 20 to 2,000 tokens. Typical outputs are 50 tokens for (A) and 1,500 tokens for (B). The serving stack uses continuous batching and stores each request’s KV cache in a single contiguous allocation that grows as decoding proceeds.
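For a sense of scale, here is a back-of-envelope estimate of per-request KV cache growth. The model configuration (32 layers, 32 heads, head dimension 128, fp16) is a hypothetical 7B-class setup, and the 1,000-token prompt is just a mid-range example; none of these numbers come from the incident itself.

```python
# Rough per-request KV cache footprint, assuming a hypothetical 7B-class
# model: 32 layers, 32 KV heads, head_dim 128, fp16 (2 bytes per element).
LAYERS, HEADS, HEAD_DIM, DTYPE_BYTES = 32, 32, 128, 2

def kv_bytes(seq_len: int) -> int:
    """KV cache size for one request at a given sequence length.

    The factor of 2 covers the K and V tensors: one K and one V vector
    per layer, per head, per token processed so far.
    """
    return 2 * LAYERS * HEADS * HEAD_DIM * DTYPE_BYTES * seq_len

SYSTEM = 220
for name, prompt, output in [("A (chat)", 1000, 50), ("B (report)", 1000, 1500)]:
    after_prefill = kv_bytes(SYSTEM + prompt)          # cache right after prefill
    after_decode = kv_bytes(SYSTEM + prompt + output)  # cache at the last decode step
    print(f"{name}: {after_prefill / 2**20:.0f} MiB -> {after_decode / 2**20:.0f} MiB")
```

Under these assumptions, product A’s cache barely grows after prefill (about 610 MiB to 635 MiB), while product B’s more than doubles (about 610 MiB to 1,360 MiB), so B’s long generations dominate memory growth during decoding.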

Over the last week, you observe two symptoms that often occur together during peak hours:

  1. New long requests fail to start with an out-of-memory error even when monitoring shows ~25% of GPU memory is free (see the fragmentation sketch after this list).
  2. P99 token latency during streaming generation increases steadily over time, especially when many long outputs are in flight.
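Symptom 1 is the classic signature of external fragmentation under contiguous allocation: total free memory is plentiful, but no single free region is large enough for a new request’s contiguous reservation. The toy allocator below reproduces the effect; the sizes are arbitrary units chosen for illustration, not real GPU measurements.

```python
# Toy first-fit contiguous allocator showing external fragmentation.
# `free` is a sorted list of (offset, length) holes.

def first_fit(free, size):
    """Carve `size` units out of the first hole that fits; return the offset."""
    for i, (off, length) in enumerate(free):
        if length >= size:
            if length == size:
                free.pop(i)
            else:
                free[i] = (off + size, length - size)
            return off
    return None  # OOM: no single hole is big enough

def release(free, off, size):
    """Return a region to the hole list, merging adjacent holes."""
    free.append((off, size))
    free.sort()
    merged = [free[0]]
    for o, l in free[1:]:
        mo, ml = merged[-1]
        if mo + ml == o:
            merged[-1] = (mo, ml + l)  # coalesce with the hole to the left
        else:
            merged.append((o, l))
    free[:] = merged

free = [(0, 100)]  # 100 units of memory, initially one big hole
# Interleave long (20-unit) and short (5-unit) requests, then finish the long ones.
offsets = [first_fit(free, s) for s in (20, 5, 20, 5, 20, 5)]
for off in (offsets[0], offsets[2], offsets[4]):
    release(free, off, 20)

total_free = sum(length for _, length in free)
largest_hole = max(length for _, length in free)
print(total_free, largest_hole)  # 85 free units, but the largest hole is only 25
# A new request needing a 30-unit contiguous slab now fails to start,
# even though 85% of memory is free: the same shape as symptom 1.
```

The contiguous-growth design can also feed symptom 2: depending on how slabs grow, a request that outgrows its allocation forces the system to find a larger region and copy the cache over, and per-token attention cost rises with sequence length anyway, so latency creeps up as long outputs accumulate.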

A teammate proposes a quick fix: “Enable prefix caching for the shared system prompt; that will reduce compute and should also fix the memory issues.” Another teammate proposes: “Switch KV cache allocation to a paged/block-based scheme (PagedAttention-style) to eliminate fragmentation; prefix caching is optional.”
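To make the second proposal concrete, the sketch below shows the core of a paged, block-based KV layout with copy-on-write sharing of the 220-token system prompt. The block size, class names, and methods are illustrative assumptions, not any particular library’s API (vLLM’s real block manager is considerably more involved).

```python
# Minimal sketch of a PagedAttention-style block pool with prefix sharing.
# BLOCK, BlockPool, and its methods are hypothetical names for illustration.
BLOCK = 16  # tokens per KV block (an assumed block size)

class BlockPool:
    def __init__(self, num_blocks: int):
        self.free = list(range(num_blocks))
        self.refs = [0] * num_blocks  # reference counts for copy-on-write sharing

    def alloc(self) -> int:
        block = self.free.pop()
        self.refs[block] = 1
        return block

    def share(self, block: int) -> int:
        self.refs[block] += 1  # another request reuses this cached prefix block
        return block

    def release(self, block: int) -> None:
        self.refs[block] -= 1
        if self.refs[block] == 0:
            self.free.append(block)

pool = BlockPool(num_blocks=1024)

# Prefill the shared 220-token system prompt once: ceil(220 / 16) = 14 blocks.
# (The 14th block is only partially full; a real system would either share
# only the 13 full blocks or handle the partial block specially.)
prefix_blocks = [pool.alloc() for _ in range(-(-220 // BLOCK))]

def new_request(prompt_tokens: int) -> list[int]:
    """Build a block table: shared prefix blocks plus private prompt blocks."""
    table = [pool.share(b) for b in prefix_blocks]
    table += [pool.alloc() for _ in range(-(-prompt_tokens // BLOCK))]
    return table

table = new_request(prompt_tokens=1000)
# During decoding, a request appends one new block every BLOCK tokens instead
# of growing a contiguous slab, so any free block anywhere in the pool can
# serve any request and external fragmentation disappears.
```

Note how the two proposals compose: the block table is also a natural place to hang prefix caching, since a cached prefix is just a set of blocks whose reference counts stay above zero.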

As the incident lead, choose which proposal you would implement first (prefix caching or paged KV caching), and justify your decision by explicitly connecting: (i) what happens in prefilling vs. decoding, (ii) how the KV cache grows and is reused across decoding steps, (iii) why the system can OOM despite free memory (fragmentation), and (iv) how your chosen change affects both memory behavior and end-to-end latency for these two products. Your answer should also name one concrete risk or tradeoff introduced by your chosen change.
