Concept

Memory Fragmentation in LLM Inference

During text generation, language models continuously allocate and free memory, particularly for the KV cache, which grows with every decoded token. This dynamic usage can fragment memory: the free space ends up split into many small, non-contiguous blocks. The diagram below illustrates this with interspersed used and free memory blocks. Fragmentation is a significant challenge because it can prevent the allocation of the large, contiguous regions needed for new or growing sequences, even when the total amount of free memory would be sufficient, which reduces throughput and memory utilization.

[Figure: a memory pool with interspersed used and free KV-cache blocks]
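To make the failure mode concrete, here is a minimal sketch of fragmentation under a simple first-fit allocator that hands out contiguous runs of fixed-size slots. The `ContiguousPool` class, the slot counts, and the sequence sizes are all hypothetical illustrations, not any particular inference engine's allocator:

```python
# Minimal sketch of KV-cache fragmentation under a contiguous, first-fit
# allocator. All names and sizes here are hypothetical illustrations.

class ContiguousPool:
    """A memory pool that hands out contiguous runs of fixed-size slots."""

    def __init__(self, num_slots):
        self.free = [True] * num_slots  # True = slot is free

    def alloc(self, size):
        """First-fit: return the start of a free run of `size` slots, or None."""
        run = 0
        for i, is_free in enumerate(self.free):
            run = run + 1 if is_free else 0
            if run == size:
                start = i - size + 1
                self.free[start:i + 1] = [False] * size
                return start
        return None  # no contiguous run is large enough

    def release(self, start, size):
        self.free[start:start + size] = [True] * size


pool = ContiguousPool(16)

# Three sequences allocate contiguous KV-cache regions.
a = pool.alloc(6)   # slots 0-5
b = pool.alloc(4)   # slots 6-9
c = pool.alloc(6)   # slots 10-15

# Sequences a and c finish; their slots are freed.
pool.release(a, 6)
pool.release(c, 6)

# 12 of 16 slots are now free, but the largest contiguous run is only 6,
# so a new sequence needing 8 contiguous slots cannot be placed.
print(pool.alloc(8))   # None -> allocation fails despite ample free memory
print(sum(pool.free))  # 12 free slots in total
```

The failed `alloc(8)` call is the situation the diagram depicts: plenty of free memory overall, but none of it in one piece. Techniques such as paging the KV cache into small non-contiguous blocks address exactly this gap between free and usable memory.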
