Essay

Choosing a Memory Architecture for Long-Context Enterprise Summarization

You are deploying an LLM to generate an executive summary and a risk register from a 200-page contract plus a 6-month email thread. The system must run on a fixed GPU budget with predictable latency, but it must also correctly reference obligations introduced early in the contract when they become relevant later (e.g., a definition on page 3 that changes the meaning of a clause on page 180).

Write a recommendation memo (as if to engineering leadership) that evaluates two candidate designs for the model’s context encoding during inference:

A) A fixed-size sliding-window attention cache that retains only the most recent N tokens (local attention).

B) A dual-memory “Compressive Transformer”-style cache with a fixed-size high-fidelity local memory (Mem) plus a fixed-size compressed long-term memory (CMem), updated recurrently as the document is processed in segments.
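
To make the two designs concrete, here is a minimal single-layer Python sketch of each cache's update rule. It is an illustration under stated assumptions, not a reference implementation: each cache is flattened to one tensor (a real KV cache holds per-layer, per-head key/value pairs), all function names are illustrative, and mean-pooling stands in for the Compressive Transformer's learned compression function.

```python
import torch

def sliding_window_update(cache: torch.Tensor, new_kv: torch.Tensor,
                          window: int) -> torch.Tensor:
    """Design A: append the new segment's activations, then keep only the
    most recent `window` positions. Everything older is discarded."""
    cache = torch.cat([cache, new_kv], dim=0)
    return cache[-window:]

def compressive_update(mem: torch.Tensor, cmem: torch.Tensor,
                       new_kv: torch.Tensor, mem_size: int,
                       cmem_size: int, rate: int):
    """Design B: entries evicted from the high-fidelity memory (Mem) are
    compressed into the long-term memory (CMem). Mean-pooling over groups
    of `rate` entries stands in for a learned compression function."""
    mem = torch.cat([mem, new_kv], dim=0)
    evicted, mem = mem[:-mem_size], mem[-mem_size:]
    n = (evicted.size(0) // rate) * rate   # compress whole groups only
    if n > 0:
        pooled = evicted[:n].reshape(-1, rate, evicted.size(-1)).mean(dim=1)
        cmem = torch.cat([cmem, pooled], dim=0)[-cmem_size:]
    return mem, cmem
```

The contrast is the point: design A deletes old entries outright, while design B trades fidelity for reach by keeping a lossy, fixed-size record of what it evicts.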

In your memo, explain how each design encodes context, how segment-based recurrent updates would work operationally, and the key trade-offs you expect in (1) memory footprint/latency predictability, (2) ability to use distant context at the right time, and (3) failure modes (what kinds of important information are most likely to be lost or misused). Conclude with a justified choice for this use case and one concrete mitigation you would add to address the chosen design’s biggest weakness.
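
For the operational question, the sketch below shows how segment-based recurrence might look at inference time, reusing compressive_update from the sketch above. model.hidden_size and model.forward_segment are hypothetical stand-ins for whatever the serving stack actually exposes, and the default sizes are arbitrary.

```python
import torch

def summarize_long_document(model, tokens, segment_len=1024,
                            mem_size=1024, cmem_size=1024, rate=4):
    """Design B, operationally: consume the contract one fixed-size
    segment at a time; each segment attends over [CMem | Mem | segment],
    then both memories are updated before the next segment arrives.
    Peak cache size is bounded by mem_size + cmem_size + segment_len,
    independent of document length, so latency stays predictable."""
    d = model.hidden_size                        # hypothetical attribute
    mem = torch.empty(0, d)
    cmem = torch.empty(0, d)
    for start in range(0, len(tokens), segment_len):
        segment = tokens[start:start + segment_len]
        context = torch.cat([cmem, mem], dim=0)  # fixed-size attention span
        hidden = model.forward_segment(segment, context)  # hypothetical API
        mem, cmem = compressive_update(mem, cmem, hidden,
                                       mem_size, cmem_size, rate)
    # Effective temporal range with these defaults:
    #   segment_len + mem_size + rate * cmem_size = 1024 + 1024 + 4096 tokens,
    # versus exactly `window` tokens for design A at a comparable footprint.
    return mem, cmem
```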
