Multiple Choice

During autoregressive text generation, a model has already processed N tokens and stored their corresponding key and value vectors in a cache. When the model processes the (N+1)-th token, how is this cache utilized and modified to compute the output for this new step?
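The mechanics the question asks about can be sketched in a minimal single-head attention step (NumPy; all names such as `attend_with_cache` and the dict-based cache layout are illustrative assumptions, not any particular framework's API). The cache is *modified* by appending the new token's key and value, and *utilized* by attending the new token's query over all N+1 cached entries:

```python
import numpy as np

def attend_with_cache(x_new, W_q, W_k, W_v, cache):
    """One decoding step: x_new is the (N+1)-th token's hidden vector.

    cache["k"], cache["v"] hold the N previously computed key/value
    vectors, shape (N, d) each. (Illustrative sketch, not a real API.)
    """
    q = x_new @ W_q                      # query for the new token only
    k = x_new @ W_k                      # key for the new token
    v = x_new @ W_v                      # value for the new token
    # Modify the cache: append the new key/value pair (now N+1 rows).
    cache["k"] = np.vstack([cache["k"], k])
    cache["v"] = np.vstack([cache["v"], v])
    # Utilize the cache: the new query attends over all N+1 keys,
    # so no key/value is recomputed for the first N tokens.
    d = q.shape[-1]
    scores = cache["k"] @ q / np.sqrt(d)    # (N+1,) attention logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                # softmax over N+1 positions
    return weights @ cache["v"]             # context vector for new token
```

The key point this illustrates: only one query/key/value projection is computed per step, while attention still spans the full prefix via the cache.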

Updated 2025-09-28


Tags: Ch.5 Inference - Foundations of Large Language Models; Foundations of Large Language Models; Foundations of Large Language Models Course; Computing Sciences; Analysis in Bloom's Taxonomy; Cognitive Psychology; Psychology; Social Science; Empirical Science; Science