Case Study

Attention Redesign for a Multi-Tenant LLM with Long Context and Strict KV-Cache Budgets

You own the inference architecture for a multi-tenant internal LLM that serves two workloads: (A) interactive chat (typical prompt 2–8k tokens, strict latency SLO) and (B) long-document analysis (typical prompt 64–128k tokens, looser latency requirements, but each request must fit on a single GPU). The current model uses standard scaled dot-product multi-head self-attention with a causal mask and a conventional KV cache. In production you observe two issues: (1) KV-cache memory grows linearly with context length, causing out-of-memory failures for workload B, and (2) for workload A, decoding throughput is bound by KV-cache memory bandwidth. You are allowed to change ONLY the attention mechanism (you may choose among sparse attention, linear attention, multi-query attention (MQA), grouped-query attention (GQA), or keeping dense scaled dot-product attention), and you must justify the choice in terms of both compute/memory behavior and expected quality risks.
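To make the memory behavior concrete, the sketch below estimates the per-sequence KV-cache footprint under dense multi-head attention versus GQA/MQA-style KV sharing. The dimensions (32 layers, 128-dim heads, fp16) are illustrative assumptions for a 7B-class model, not the production configuration:

# Minimal sketch, assuming illustrative 7B-class dimensions (not the
# actual production config). Per-sequence KV-cache bytes =
#   2 (K and V) x layers x KV heads x head_dim x seq_len x bytes/element.
def kv_cache_bytes(seq_len, n_layers=32, n_kv_heads=32,
                   head_dim=128, bytes_per_elem=2):  # fp16
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

GiB = 1024 ** 3
for label, kv_heads in [("MHA, 32 KV heads", 32),
                        ("GQA,  8 KV heads", 8),
                        ("MQA,  1 KV head ", 1)]:
    print(label,
          f"8k: {kv_cache_bytes(8_192, n_kv_heads=kv_heads) / GiB:5.2f} GiB,",
          f"128k: {kv_cache_bytes(131_072, n_kv_heads=kv_heads) / GiB:6.2f} GiB")

Under these assumed dimensions, a single 128k-token sequence costs about 64 GiB of KV cache with 32 KV heads, 16 GiB with 8 (GQA), and 2 GiB with 1 (MQA). The same reduction factor applies to the KV bytes streamed per decode step, which is the bandwidth bottleneck in workload A.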

Case Study Prompt: Propose a single attention design (it can be one mechanism or a combination, e.g., “X + Y”) that you would deploy across both workloads, and explain why it best addresses BOTH observed issues. Your answer must explicitly connect: (i) how scaled dot-product attention’s softmax-based QK^T computation and masking relate to the bottlenecks, (ii) how your chosen mechanism(s) change the attention computation pattern (dense vs sparse vs kernelized/linearized) and/or KV sharing (MQA/GQA), and (iii) the main quality trade-off(s) you would monitor (e.g., loss of long-range dependencies, reduced head diversity) and why they arise from the mechanism.
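As a reference point for (i) and (ii), here is a minimal NumPy sketch of causal scaled dot-product attention with grouped-query KV sharing. It is illustrative only: the shapes and the grouping factor are assumptions, and setting the number of KV heads equal to the number of query heads recovers standard MHA, while setting it to 1 gives MQA.

import numpy as np

def gqa_causal_attention(q, k, v):
    # q: (n_q_heads, T, d); k, v: (n_kv_heads, T, d), where
    # n_q_heads % n_kv_heads == 0.
    n_q, T, d = q.shape
    group = n_q // k.shape[0]
    # Each group of query heads reads one shared K/V head, so the KV
    # cache holds n_kv_heads slabs per layer instead of n_q_heads.
    k = np.repeat(k, group, axis=0)
    v = np.repeat(v, group, axis=0)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)    # QK^T / sqrt(d)
    causal = np.triu(np.ones((T, T), dtype=bool), 1)  # mask out future keys
    scores = np.where(causal, -np.inf, scores)
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # row-wise softmax
    return w @ v

rng = np.random.default_rng(0)
T, d = 6, 8
q = rng.standard_normal((8, T, d))  # 8 query heads
k = rng.standard_normal((2, T, d))  # only 2 KV heads cached (4-way grouping)
v = rng.standard_normal((2, T, d))
out = gqa_causal_attention(q, k, v)  # -> (8, T, d)

Note how the softmax row over QK^T / sqrt(d) ties every query to all cached keys up to the causal boundary: that coupling is the root of both observed bottlenecks. KV sharing (MQA/GQA) shrinks the cache-size and bandwidth term, while sparse or linearized attention changes the quadratic score computation itself.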
