Essay

Evaluating a serving design that combines prefix caching with paged KV memory under mixed prompt lengths

You operate an internal LLM inference service for employees. Traffic has two dominant patterns: (1) many requests start with the same 200-token “policy + tool instructions” prefix and then diverge, and (2) a smaller number of ad‑hoc requests have long, unique prompts (2,000–6,000 tokens). The service uses continuous batching and must keep p95 latency stable.

A proposed redesign includes: (a) prefix caching that stores the KV cache state for the shared 200-token prefix so future requests can skip recomputing that portion of the prompt, and (b) PagedAttention (paged KV caching) so each sequence’s KV cache grows in fixed-size pages rather than requiring a single contiguous allocation.
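To make the mechanics concrete, here is a minimal sketch (all names hypothetical, not any real serving framework) of paged KV allocation with a reference-counted shared prefix. Each sequence holds a block table of physical page ids; a prefix-cache hit reuses the prefix's pages rather than recomputing and re-storing them, and pages are only freed when their refcount drops to zero.

```python
PAGE_TOKENS = 16  # tokens per KV page (assumption; fixed-size pages as in PagedAttention)

class PagedKVPool:
    """Toy pool of fixed-size KV pages with reference counting for sharing."""
    def __init__(self, num_pages):
        self.free = list(range(num_pages))   # free physical page ids
        self.refcount = [0] * num_pages      # refcount > 1 means the page is shared

    def alloc_page(self):
        if not self.free:
            raise MemoryError("KV pool exhausted")
        page = self.free.pop()
        self.refcount[page] = 1
        return page

    def share(self, pages):
        # Prefix-cache hit: bump refcounts instead of recomputing the prefix KV.
        for p in pages:
            self.refcount[p] += 1
        return list(pages)

    def release(self, pages):
        for p in pages:
            self.refcount[p] -= 1
            if self.refcount[p] == 0:
                self.free.append(p)

def pages_needed(num_tokens):
    return -(-num_tokens // PAGE_TOKENS)  # ceiling division

pool = PagedKVPool(num_pages=64)
# The cached 200-token prefix occupies ceil(200/16) = 13 pages, owned by the cache.
prefix_pages = [pool.alloc_page() for _ in range(pages_needed(200))]

# Two requests hit the prefix cache: they share the 13 prefix pages and allocate
# fresh pages only for their divergent suffixes.
req_a = pool.share(prefix_pages) + [pool.alloc_page() for _ in range(pages_needed(40))]
req_b = pool.share(prefix_pages) + [pool.alloc_page() for _ in range(pages_needed(300))]
```

Note that releasing `req_a` returns only its private suffix pages to the pool; the prefix pages stay resident as long as the cache or any other request still references them, which is exactly the eviction-behavior question the essay asks about.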

Write an evaluation that explains, in one coherent argument, how this redesign changes GPU compute and memory behavior across BOTH the prefilling phase and the token-by-token decoding phase. Your answer must:

  • Explain what work is avoided (and what is not avoided) when a request hits the prefix cache, and how that changes prefilling cost and time-to-first-token.
  • Explain why decoding still depends on the KV cache and how KV cache growth during decoding interacts with variable output lengths.
  • Analyze how memory fragmentation can arise in a standard contiguous KV allocation scheme under this workload, and how paged KV allocation changes the failure/throughput profile.
  • Identify at least two concrete tradeoffs/risks introduced by combining prefix caching with paged KV caching (e.g., memory overhead, eviction behavior, page table/indirection costs, cache hit-rate sensitivity), and recommend one operational policy (e.g., what to cache/evict or when to disable caching) to keep p95 latency stable.
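For the fragmentation point, a toy calculation (illustrative numbers only, not any real allocator) shows the failure mode: after short requests finish and free their contiguous KV slabs, total free memory can be ample while no single gap fits a long unique prompt. A paged scheme only needs enough free pages in aggregate, wherever they sit.

```python
def largest_free_gap(total, allocations):
    """Largest contiguous free run, given sorted (start, length) occupied slabs."""
    gaps, cursor = [], 0
    for start, length in sorted(allocations):
        gaps.append(start - cursor)
        cursor = start + length
    gaps.append(total - cursor)
    return max(gaps)

TOTAL = 10_000  # KV token slots on the GPU (toy figure)
# Surviving sequences left scattered across memory after many short requests freed:
survivors = [(0, 900), (2_000, 900), (4_000, 900), (6_000, 900), (8_000, 900)]

free_total = TOTAL - sum(length for _, length in survivors)  # 5,500 slots free overall
contiguous_fit = largest_free_gap(TOTAL, survivors)          # but only 1,100 contiguous

long_prompt = 4_000  # a unique ad-hoc prompt
print(free_total >= long_prompt)      # enough memory exists in total
print(contiguous_fit >= long_prompt)  # yet contiguous allocation cannot place it
# Paged allocation needs ceil(4000/16) = 250 free 16-token pages,
# which the 5,500 scattered free slots easily cover.
```

The paged allocator converts this hard admission failure into a soft capacity limit: requests are rejected or preempted only when free pages are genuinely exhausted, which is the throughput-profile change the prompt asks you to analyze.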

Assume the model is an autoregressive Transformer decoder and that the KV cache stores keys/values for all previously processed tokens (prompt + generated tokens).
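A back-of-envelope sizing sketch may help calibrate the argument. The dimensions below are assumed (a 7B-class model: 32 layers, 32 heads, head dimension 128, fp16), not given in the prompt. The key observation: a prefix-cache hit skips the prefill *compute* for the prefix, but the prefix's KV entries still occupy *memory*, and decoding adds a fixed amount of KV per generated token regardless of caching.

```python
LAYERS, HEADS, HEAD_DIM, DTYPE_BYTES = 32, 32, 128, 2  # assumed 7B-class, fp16

def kv_bytes(num_tokens):
    # Factor of 2 covers both keys and values at every layer.
    return 2 * LAYERS * HEADS * HEAD_DIM * DTYPE_BYTES * num_tokens

print(f"{kv_bytes(1) / 2**20:.2f} MiB per token")   # 0.50 MiB
print(f"200-token prefix: {kv_bytes(200) / 2**20:.0f} MiB")  # 100 MiB, cached or not

# A long unique prompt plus a long generation dominates the memory budget:
print(f"6k prompt + 2k output: {kv_bytes(8_000) / 2**30:.2f} GiB")  # 3.91 GiB
```

Numbers like these make the variable-output-length interaction tangible: two requests with identical prompts can end up with very different KV footprints, which is why page-granular growth matters for keeping p95 latency stable.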

Updated 2026-02-06

Tags: Ch.2 Generative Models - Foundations of Large Language Models; Ch.5 Inference - Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences