Efficiency of Concurrent LLM Operations
An LLM inference system is actively generating tokens for several ongoing requests. A new request with a long prompt arrives. The system's scheduler decides to process the new request's initial, compute-heavy phase (the prefill) in the same batch in which it generates the next single token for each ongoing request (a decode step). Explain why this concurrent approach is more efficient for the overall system than processing these two types of work sequentially.
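To make the scheduling decision concrete, here is a minimal sketch of one such mixed iteration. It assumes a fixed per-step token budget as a stand-in for the accelerator's capacity; the names Request, schedule_step, and TOKEN_BUDGET are hypothetical illustrations, not the API of any real inference engine.

```python
from dataclasses import dataclass

# Hypothetical per-step token budget: how many tokens' worth of work
# one batched forward pass can absorb. (Illustrative, not a real limit.)
TOKEN_BUDGET = 8

@dataclass
class Request:
    rid: str
    prompt_len: int   # prompt tokens still awaiting prefill
    generated: int = 0

def schedule_step(decoding, prefilling):
    """Build one mixed batch: one decode token per in-flight request,
    then fill the leftover budget with a chunk of the new prompt."""
    batch = []
    # Decode work: one token per ongoing request. This is
    # memory-bandwidth-bound and, alone, leaves compute idle.
    for req in decoding:
        batch.append((req.rid, "decode", 1))
        req.generated += 1
    # Prefill work: soak up the remaining budget with prompt tokens.
    # This is compute-bound, so it uses the capacity the decodes leave free.
    budget_left = TOKEN_BUDGET - len(batch)
    for req in prefilling:
        chunk = min(req.prompt_len, budget_left)
        if chunk > 0:
            batch.append((req.rid, "prefill", chunk))
            req.prompt_len -= chunk
            budget_left -= chunk
    return batch

if __name__ == "__main__":
    ongoing = [Request("A", 0), Request("B", 0), Request("C", 0)]
    new = [Request("D", prompt_len=20)]
    print(schedule_step(ongoing, new))
    # [('A', 'decode', 1), ('B', 'decode', 1), ('C', 'decode', 1), ('D', 'prefill', 5)]
```

In this toy model, the three ongoing requests each advance by one token in the same pass that processes a 5-token chunk of the new prompt; run sequentially, those decodes would instead stall until the entire 20-token prefill finished.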
Tags
Ch.5 Inference - Foundations of Large Language Models
Computing Sciences
Analysis in Bloom's Taxonomy
Related
Example of a Request Completing in Continuous Batching (Iteration 5)
An LLM inference system is actively generating tokens for two separate user requests that are already in progress. A third user submits a new request to the system. To maximize overall throughput by overlapping different types of computation, what actions will the system perform in the next single computational step?
LLM Inference Scheduling Decision