Case Study

Attention Architecture Choice for On-Device Meeting Summarization with 60k Context

You are leading an applied research team shipping an on-device meeting summarization feature. The model must handle up to 60,000 tokens of context and generate summaries autoregressively. On the target device, the dominant bottleneck is inference-time memory (KV cache), and you have a strict cap that rules out storing a full per-head KV cache for all layers. However, product quality requirements include: (1) reliably capturing a few “global” decisions made early in the meeting, and (2) accurately summarizing local details from the last ~2,000 tokens. You can change only the attention mechanism (not the tokenizer, not the number of layers), and you must keep causal masking.
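For a sense of scale, here is a rough back-of-the-envelope sketch of the KV-cache sizes involved. The model configuration below (layers, heads, head dimension, precision) is assumed for illustration only and is not specified in the prompt:

```python
# Hypothetical model configuration, for illustration only.
num_layers = 32
num_heads = 32
head_dim = 128
seq_len = 60_000          # context length from the prompt
bytes_per_value = 2       # fp16 / bf16

# Full per-head KV cache: keys + values for every layer and head.
full_kv_bytes = 2 * num_layers * num_heads * head_dim * seq_len * bytes_per_value
print(f"Dense MHA KV cache: {full_kv_bytes / 1e9:.1f} GB")     # ~31.5 GB

# Sliding window over the last 2,000 tokens plus a handful of global tokens,
# with GQA sharing K/V across query heads (4 KV heads instead of 32).
window = 2_000
num_global = 64
num_kv_heads = 4
sparse_gqa_bytes = (2 * num_layers * num_kv_heads * head_dim
                    * (window + num_global) * bytes_per_value)
print(f"Windowed + global + GQA KV cache: {sparse_gqa_bytes / 1e6:.0f} MB")  # ~135 MB
```

Under these assumed numbers, the dense cache is hundreds of times larger than the windowed-plus-global GQA cache, which is the gap the design question is asking you to close.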

Propose a concrete attention design for the self-attention layers that combines (a) the core scaled dot-product attention idea with (b) one long-context efficiency strategy (sparse attention or linear attention) and (c) one KV-sharing strategy (MQA or GQA). In your answer, justify how your design simultaneously addresses the KV-cache memory cap and the two quality requirements, and explicitly call out at least one trade-off or failure mode your design introduces compared with standard dense multi-head attention.
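As a starting point (not a reference answer), here is a minimal PyTorch sketch of one way the three ingredients can fit together: causal scaled dot-product attention restricted to a sliding window plus a few global anchor tokens (a sparse pattern), with grouped-query KV sharing. The function name, window size, and global-token count are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def sparse_gqa_attention(q, k, v, window: int = 2_000, num_global: int = 64):
    """Causal scaled dot-product attention with a sliding-window + global-token
    sparsity pattern and grouped-query (GQA) KV sharing.

    q: (batch, num_q_heads, seq, head_dim)
    k, v: (batch, num_kv_heads, seq, head_dim), where num_kv_heads divides num_q_heads.
    """
    b, hq, t, d = q.shape
    hkv = k.shape[1]
    group = hq // hkv

    # GQA: each group of query heads attends over the same shared K/V head.
    k = k.repeat_interleave(group, dim=1)
    v = v.repeat_interleave(group, dim=1)

    # Core scaled dot-product scores.
    scores = (q @ k.transpose(-2, -1)) / d ** 0.5

    i = torch.arange(t).unsqueeze(1)   # query positions
    j = torch.arange(t).unsqueeze(0)   # key positions
    causal = j <= i                    # required causal mask
    local = (i - j) < window           # recent-token window (last ~2,000 tokens)
    global_keys = j < num_global       # earliest tokens kept as global anchors
    mask = causal & (local | global_keys)

    scores = scores.masked_fill(~mask, float("-inf"))
    return F.softmax(scores, dim=-1) @ v

# Example: 8 query heads sharing 2 KV heads (GQA group size 4).
q = torch.randn(1, 8, 4096, 64)
k = torch.randn(1, 2, 4096, 64)
v = torch.randn(1, 2, 4096, 64)
out = sparse_gqa_attention(q, k, v)    # -> (1, 8, 4096, 64)
```

Note that this sketch materializes the full seq-by-seq score matrix for clarity; an on-device implementation would compute scores only against keys inside the window and the global set, and would cache only those K/V entries, which is where the memory saving actually comes from.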
