Learn Before
Computational Cost of Autoregressive Generation
An autoregressive Transformer model is generating a long sequence of text, one token at a time. If this model does not store the intermediate 'key' and 'value' states from its attention mechanism, describe the primary computational inefficiency that arises with each new token generated. Explain how this inefficiency scales as the sequence grows longer.
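The inefficiency the question targets can be made concrete with a small counting sketch. Assuming, for illustration, that each decoding step without a cache must re-project the key/value vectors for every token seen so far, while a cached model projects only the newest token, the totals below contrast the two regimes (the function name `kv_projections` is a hypothetical helper, not from the original):

```python
def kv_projections(num_tokens: int, use_cache: bool) -> int:
    """Count key/value projection operations across a full generation run.

    Without a KV cache, step t must recompute K and V for all t tokens
    seen so far; with a cache, only the newest token is projected.
    """
    total = 0
    for t in range(1, num_tokens + 1):
        if use_cache:
            total += 1   # project K/V for the single new token
        else:
            total += t   # re-project K/V for all t tokens at this step
    return total

# With a cache: linear in sequence length, O(T).
print(kv_projections(100, use_cache=True))    # 100

# Without a cache: 1 + 2 + ... + T = T(T+1)/2, i.e. O(T^2).
print(kv_projections(100, use_cache=False))   # 5050
```

This toy count ignores the attention score computation itself and all per-projection constants; its point is only the asymptotic gap, which is the answer the card is asking the learner to articulate.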
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
Memory Bottleneck from KV Cache in LLMs
An autoregressive language model is generating text and has already produced a sequence of 100 tokens. To generate the 101st token, it must calculate self-attention. If the model stores the 'key' and 'value' vectors for the first 100 tokens, which of the following best describes the computational steps required for the self-attention mechanism at this specific step?
Optimizing Chatbot Inference Speed
Computational Cost of Autoregressive Generation