Multiple Choice

An LLM inference system is tasked with generating a lengthy, multi-paragraph response where the final output length is unpredictable. The system manages its key-value (KV) cache by partitioning it into a collection of non-contiguous, fixed-size blocks. What is the most significant advantage of this memory management strategy specifically for handling the dynamic growth of the sequence during this task?
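For intuition, here is a minimal, hypothetical sketch of the block-based strategy the question describes: each sequence keeps a block table of fixed-size, non-contiguous KV blocks, and a new block is claimed from a shared pool only when the sequence actually grows into it. The class name `BlockKVCache`, the `BLOCK_SIZE` of 16 tokens, and all method names are illustrative assumptions, not any particular engine's API.

```python
# Illustrative sketch of on-demand, block-based KV-cache growth.
# Names and sizes are assumptions for the example, not a real library's API.

BLOCK_SIZE = 16  # tokens per KV block (assumed value)


class BlockKVCache:
    def __init__(self, num_blocks: int):
        # Pool of free physical block ids shared across all sequences.
        self.free_blocks = list(range(num_blocks))
        # Per-sequence block tables: sequence id -> list of physical block ids.
        self.block_tables: dict[int, list[int]] = {}
        # Number of tokens written so far per sequence.
        self.lengths: dict[int, int] = {}

    def append_token(self, seq_id: int) -> tuple[int, int]:
        """Reserve a KV slot for one new token; allocate a block only when needed.

        Returns (physical_block_id, offset_within_block) where the token's
        key/value vectors would be written.
        """
        table = self.block_tables.setdefault(seq_id, [])
        length = self.lengths.get(seq_id, 0)
        if length % BLOCK_SIZE == 0:
            # Current block is full (or the sequence is new): grab one more block.
            if not self.free_blocks:
                raise MemoryError("KV block pool exhausted")
            table.append(self.free_blocks.pop())
        self.lengths[seq_id] = length + 1
        return table[-1], length % BLOCK_SIZE

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the shared pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))
        self.lengths.pop(seq_id, None)


if __name__ == "__main__":
    cache = BlockKVCache(num_blocks=8)
    # Simulate an unpredictably long generation: memory grows one block at a time.
    for step in range(40):
        block_id, offset = cache.append_token(seq_id=0)
    print(cache.block_tables[0])  # e.g. [7, 6, 5] -- three non-contiguous blocks
    cache.free(seq_id=0)
```

Because blocks are claimed on demand and returned to the pool when a sequence finishes, the system never has to reserve a contiguous region sized for the worst-case output length up front.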

Updated 2025-10-02

Tags: Ch.5 Inference - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science