Learn Before
Accessing a Specific Layer's KV Cache
A developer is debugging a 24-layer Transformer decoder during inference. They suspect an issue with the self-attention mechanism in the 10th layer. Given that the complete KV cache is a collection of individual caches, one per layer, describe the specific data structure the developer must access to inspect the keys and values computed by the 10th layer alone.
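A minimal sketch of the idea, assuming the full KV cache is modeled as a list with one `(keys, values)` entry per layer (as in Hugging Face's `past_key_values` convention); the tensor contents here are placeholder strings rather than real attention tensors:

```python
# Hypothetical 24-layer KV cache: kv_cache[i] holds the (keys, values)
# pair produced by layer i (0-based). In a real framework each element
# would be a pair of tensors of shape (batch, num_heads, seq_len, head_dim).
NUM_LAYERS = 24
kv_cache = [(f"K_layer{i}", f"V_layer{i}") for i in range(NUM_LAYERS)]

# The 10th layer sits at index 9 under 0-based indexing, so inspecting
# only that layer means indexing the per-layer collection once.
keys_10, values_10 = kv_cache[9]
print(keys_10, values_10)  # K_layer9 V_layer9
```

The point is that no single monolithic structure holds all keys and values; the developer indexes the per-layer collection at position 9 to isolate the 10th layer's cache.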
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
KV Cache Memory Scaling
A developer is examining the internal state of a 12-layer Transformer decoder after it has processed an input prompt. They notice that the generated Key-Value (KV) cache is not a single, large data structure, but is instead organized as a collection of 12 separate caches. What is the fundamental reason for this layer-wise organization?