Learn Before
Calculating KV Cache Size per Token
Consider a Transformer-based model with the following specifications: 12 layers, 8 attention heads per layer, and a key/value vector dimensionality of 64 for each head. When processing a single new token, what is the total number of floating-point values that must be added to the model's entire key-value cache? Show the formula you used for your calculation.
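A minimal worked sketch of the calculation, assuming standard multi-head attention where each layer appends one key vector and one value vector per head for every new token (the function and parameter names below are illustrative, not from the original card):

# Per-token KV cache growth, in floating-point values.
def kv_cache_values_per_token(num_layers: int, num_heads: int, head_dim: int) -> int:
    # Factor of 2: each head stores one key vector AND one value vector.
    return 2 * num_layers * num_heads * head_dim

# Specifications from the question: 12 layers, 8 heads, head dimension 64.
print(kv_cache_values_per_token(12, 8, 64))  # -> 12288

So the formula is 2 x layers x heads x head_dim = 2 x 12 x 8 x 64 = 12,288 values per token.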
Tags
Ch.2 Generative Models - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Application in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
An engineer modifies a large language model by doubling the number of attention heads per layer while simultaneously halving the dimensionality of each head's key/value vectors. Assuming all other parameters (like the number of layers and sequence length) remain constant, how does this architectural change affect the multi-dimensional structure of the model's key-value (KV) cache?
KV Cache Structure Trade-offs
Calculating KV Cache Size per Token
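For the related trade-off question above, a hedged sketch of how the cache's shape changes, assuming a layout of (num_layers, 2, seq_len, num_heads, head_dim) where the axis of size 2 separates keys from values (this layout and the sequence length of 1024 are assumptions for illustration, not given in the card):

import math

def kv_cache_shape(num_layers: int, seq_len: int, num_heads: int, head_dim: int) -> tuple:
    # Assumed layout: layers x {key, value} x positions x heads x per-head dim.
    return (num_layers, 2, seq_len, num_heads, head_dim)

before = kv_cache_shape(num_layers=12, seq_len=1024, num_heads=8, head_dim=64)
after = kv_cache_shape(num_layers=12, seq_len=1024, num_heads=16, head_dim=32)

print(before, math.prod(before))  # (12, 2, 1024, 8, 64) -> 12582912 values
print(after, math.prod(after))    # (12, 2, 1024, 16, 32) -> 12582912 values

Doubling the heads while halving the per-head dimension reshapes the heads and head_dim axes but leaves the total number of cached values unchanged, since heads x head_dim is constant.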