Case Study

Attention Redesign for a Long-Context Customer-Support Copilot Under GPU Memory Pressure

You own inference performance for an internal customer-support copilot that must answer with citations drawn from the running conversation and the attached policy documents. In production, the model runs autoregressively with a KV cache and must support up to 64k tokens of context. After a traffic spike, you observe two issues: (1) GPU memory is the primary limiter (OOMs occur before compute saturates), and (2) quality regressions appear specifically when the answer depends on a few far-back policy passages rather than on recent chat turns.
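For a sense of scale, here is a back-of-the-envelope estimate of KV-cache size under standard multi-head attention. This is a minimal sketch: the layer count, head configuration, fp16 precision, and batch size are illustrative assumptions, not the copilot's actual configuration.

```python
# Rough KV-cache size under standard multi-head attention.
# All model dimensions, the precision, and the batch size are illustrative.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, batch: int, bytes_per_elem: int = 2) -> int:
    """Bytes held by the cache: a K and a V tensor for every layer."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical 7B-class decoder: 32 layers, 32 KV heads of dim 128, fp16.
total = kv_cache_bytes(n_layers=32, n_kv_heads=32, head_dim=128,
                       seq_len=64_000, batch=8)
print(f"{total / 2**30:.0f} GiB")  # ~250 GiB at 64k context with batch 8
```

At these assumed dimensions the cache alone is roughly 31 GiB per 64k-token sequence, which is why memory, not compute, becomes the limiter once several requests are batched.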

You are allowed to change ONLY the attention mechanism in the decoder blocks (no retrieval system changes, no extra memory modules). You can choose among: standard scaled dot-product attention, sparse attention (restricting each query to attend to a subset of past tokens), linear attention (kernel feature map with no softmax, enabling re-ordered multiplications), multi-query attention (shared K/V across all heads), or grouped-query attention (K/V shared within groups of heads).
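To make the memory implications of the KV-sharing variants concrete, the sketch below compares the per-layer KV-cache footprint of one 64k-token sequence. The head counts and group size are the same illustrative values used above, not prescribed by the case study.

```python
# Per-layer KV-cache footprint for one 64k-token sequence.
# Only the number of cached K/V heads changes across the variants;
# all dimensions are illustrative.

SEQ_LEN, N_QUERY_HEADS, HEAD_DIM = 64_000, 32, 128

variants = {
    "standard (per-head K/V)": 32,   # one K/V head per query head
    "grouped-query (8 groups)": 8,   # K/V shared within groups of 4 query heads
    "multi-query (shared K/V)": 1,   # a single K/V head for all query heads
}

baseline = 2 * SEQ_LEN * N_QUERY_HEADS * HEAD_DIM
for name, n_kv_heads in variants.items():
    elems = 2 * SEQ_LEN * n_kv_heads * HEAD_DIM  # K and V: [seq_len, n_kv_heads, head_dim]
    print(f"{name:26s} {elems:>13,} elems/layer  (reduction: {baseline / elems:.0f}x)")
```

Note that sparse attention shrinks the cache only if out-of-pattern entries are actually evicted, and linear attention replaces the per-token cache with a fixed-size recurrent state; both points bear on part (a) below.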

Propose a single attention design (you may combine at most TWO of the listed techniques in the same layer, e.g., a KV-sharing variant plus a sparsity pattern), and justify it by explicitly explaining: (a) how your choice changes KV-cache memory growth and/or size compared with standard scaled dot-product attention, (b) how it affects the model’s ability to use a small number of long-range tokens (the far-back policy passages) during generation, and (c) one concrete trade-off or failure mode you would monitor after deployment.
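As an illustration of the kind of sparsity pattern mentioned in the example above (a local window plus a few pinned long-range positions), here is a minimal sketch of a per-step attention mask. The window size and pinned positions are made-up values for illustration only, not a prescribed answer.

```python
import numpy as np

def step_mask(cache_len: int, window: int, pinned: list[int]) -> np.ndarray:
    """Boolean mask over cached positions the current query may attend to:
    a sliding local window plus a small set of pinned far-back positions."""
    mask = np.zeros(cache_len, dtype=bool)
    mask[max(0, cache_len - window):] = True  # recent chat turns
    mask[pinned] = True                       # e.g., tokens of a cited policy passage
    return mask

m = step_mask(cache_len=64_000, window=4_096, pinned=[1_200, 1_201, 1_202])
print(f"attended: {int(m.sum()):,} of {m.size:,} cached positions")  # 4,099 of 64,000
```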
