Essay

Choosing an Attention Stack for a Regulated, Long-Document Review Assistant

You are leading model architecture decisions for an internal LLM that helps legal/compliance teams review very long contracts (up to 200k tokens) in a regulated environment. Two hard constraints: (1) the system must support streaming generation with a strict causal mask (no looking ahead), and (2) the serving budget is dominated by GPU memory, especially the KV cache, because many users keep long sessions open. A third requirement is quality: the model must reliably connect distant definitions and cross-references, not just local context.
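
For scale, here is a back-of-the-envelope sketch of why the KV cache dominates the serving budget; the model dimensions below (48 layers, 40 heads of dimension 128, fp16) are illustrative assumptions, not a target spec:

```python
# Hypothetical sizing sketch: KV-cache footprint for one long session.
# All model dimensions are illustrative assumptions.
n_layers = 48
n_kv_heads = 40          # full multi-head baseline: one KV head per query head
head_dim = 128
seq_len = 200_000        # a maximum-length contract held in context
bytes_per_elem = 2       # fp16/bf16

# K and V each store (seq_len, n_kv_heads, head_dim) per layer.
kv_bytes = 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem
print(f"KV cache per session: {kv_bytes / 2**30:.1f} GiB")   # ~183 GiB

# The same formula with GQA-style sharing (e.g., 8 KV heads serving all
# 40 query heads) cuts the footprint 5x without changing the Q-K-V math.
gqa_bytes = 2 * n_layers * 8 * head_dim * seq_len * bytes_per_elem
print(f"With 8 shared KV heads (GQA): {gqa_bytes / 2**30:.1f} GiB")  # ~36.6 GiB
```

Numbers like these are why the memo should treat per-token KV bytes, not just attention FLOPs, as the first-order serving cost.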

Write a recommendation memo (as if to engineering leadership) proposing a concrete attention design for inference-time decoding, one that coherently combines at least two of the following ideas: scaled dot-product attention, sparse attention, linear attention, multi-query attention (MQA), and grouped-query attention (GQA). Your memo must:

  • Explain how your design preserves the core Q–K–V weighted-sum behavior under a causal mask, and what changes (if any) you are making to the softmax-based scaled dot-product formulation (a minimal baseline sketch follows this list).
  • Analyze the memory and latency implications, explicitly addressing both (a) the quadratic attention-matrix cost and (b) the KV-cache footprint during long-session decoding.
  • Justify how your design maintains long-range retrieval quality (e.g., cross-references) despite efficiency changes, and identify at least one failure mode or trade-off you would monitor in production.
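
For reference, the baseline being preserved in the first bullet can be stated concretely. Below is a minimal sketch of one streaming decode step of causal scaled dot-product attention with GQA-shared KV heads; the shapes, names, and grouping scheme are illustrative assumptions, not a proposed implementation.

```python
# Minimal sketch (NumPy): one decode step of causal scaled dot-product
# attention with grouped-query (GQA) KV heads. All names and shapes are
# illustrative assumptions.
import numpy as np

def decode_step(q, k_cache, v_cache, k_new, v_new, n_groups):
    """q: (n_q_heads, d); k_new, v_new: (n_groups, d);
    k_cache, v_cache: (t, n_groups, d) for the t tokens seen so far."""
    # Append this step's K/V to the cache. Causality needs no explicit mask
    # at decode time: the cache contains only past tokens.
    k_cache = np.concatenate([k_cache, k_new[None]], axis=0)  # (t+1, g, d)
    v_cache = np.concatenate([v_cache, v_new[None]], axis=0)

    n_q_heads, d = q.shape
    heads_per_group = n_q_heads // n_groups
    out = np.empty_like(q)
    for h in range(n_q_heads):
        g = h // heads_per_group                 # query heads share a KV group
        scores = k_cache[:, g] @ q[h] / np.sqrt(d)    # (t+1,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                 # softmax over past tokens only
        out[h] = weights @ v_cache[:, g]         # unchanged Q-K-V weighted sum
    return out, k_cache, v_cache
```

Note that the per-head computation is still softmax(QKᵀ/√d)·V; GQA only changes which cached K/V tensors a query head reads, so the weighted-sum semantics and the causal property survive while the cache shrinks by a factor of n_q_heads / n_groups.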

Assume you can change the attention implementation but not the overall product requirement of streaming, token-by-token generation.
