Short Answer

Analysis of Computational Costs in Transformer Inference

Explain why the decoding phase of a Transformer model's inference is typically more computationally expensive than the prefilling phase. Go beyond simply stating that it is a sequential process: identify at least two distinct contributing factors.
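One way to make the two phases concrete is a toy NumPy sketch (a single attention layer; all dimensions and variable names are illustrative, not from the source, and the causal mask is omitted for brevity). Prefill processes every prompt token in one large, parallel matrix multiply, while decode must run one step per generated token, rereading the entire KV cache each time.

```python
# Hypothetical toy sketch: prefill vs. decode in one attention layer.
# Names (d, n_prompt, W_qkv, attention) are illustrative assumptions.
import numpy as np

d = 64          # model dimension (illustrative)
n_prompt = 128  # prompt length (illustrative)

rng = np.random.default_rng(0)
W_qkv = rng.standard_normal((d, 3 * d)) / np.sqrt(d)

def attention(q, k, v):
    # q: (t, d); k, v: (s, d) -> (t, d). Softmax over keys.
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Prefill: all n_prompt tokens go through one big, parallel matmul
# (high arithmetic intensity -> compute-bound, GPU-friendly).
x = rng.standard_normal((n_prompt, d))
q, k, v = np.split(x @ W_qkv, 3, axis=-1)
out = attention(q, k, v)
kv_cache = (k, v)

# Decode: one token at a time. Each step (a) depends on the previous
# step's output, so steps cannot run in parallel, and (b) rereads the
# whole growing KV cache for a single query row -> memory-bound.
for step in range(4):
    x_t = out[-1:]                          # stand-in for the newest token
    q_t, k_t, v_t = np.split(x_t @ W_qkv, 3, axis=-1)
    kv_cache = (np.vstack([kv_cache[0], k_t]),
                np.vstack([kv_cache[1], v_t]))
    out_t = attention(q_t, *kv_cache)       # (1, s) attention per step
    out = np.vstack([out, out_t])
```

The loop highlights the two factors the question asks for beyond sequentiality: each decode step streams the full KV cache through memory to do only a tiny amount of arithmetic (low arithmetic intensity, so hardware utilization is poor), and the cache grows with every generated token, so per-step memory traffic increases over the course of generation.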


Updated 2025-10-05


Tags

Ch.5 Inference - Foundations of Large Language Models

Foundations of Large Language Models

Foundations of Large Language Models Course

Computing Sciences

Analysis in Bloom's Taxonomy

Cognitive Psychology

Psychology

Social Science

Empirical Science

Science