Short Answer

Analyzing Performance Trade-offs in LLM Serving

An LLM inference system uses a scheduling strategy that prioritizes running the prefill (prompt-processing) computation of new, incoming requests in order to keep the hardware as busy as possible. If a very long new request (e.g., summarizing a large document) is added to a batch that also contains several shorter requests already in the process of generating their output, explain the mechanism by which the shorter requests experience an increase in their token-by-token generation time (inter-token latency).
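
The mechanism hinges on prefill and decode sharing the same accelerator: when the scheduler prioritizes the new request, the long prompt's compute-heavy prefill runs before the next decode step, so every in-flight request waits and its next token is delayed by roughly the prefill time. The sketch below makes this concrete with a toy cost model; the function name inter_token_gaps, the per-token costs, and the request sizes are all assumptions invented for illustration, not values or APIs from any real serving system.

```python
# Toy cost model of prefill-prioritizing scheduling. All constants are
# illustrative assumptions, not measurements from a real system.

PREFILL_MS_PER_TOKEN = 0.05  # assumed cost to process one prompt token
DECODE_MS_PER_STEP = 8.0     # assumed cost of one decode step for the batch


def inter_token_gaps(long_prompt_tokens: int, arrival_step: int, total_steps: int):
    """Time gap (ms) before each output token of an in-flight short request.

    The scheduler runs pending prefills before the next decode step, so the
    step at which the long request arrives is stretched by its prefill cost.
    """
    gaps = []
    for step in range(total_steps):
        gap = DECODE_MS_PER_STEP
        if step == arrival_step:
            # The whole batch stalls while the new prompt is prefilled;
            # every in-flight request's next token is delayed by this much.
            gap += long_prompt_tokens * PREFILL_MS_PER_TOKEN
        gaps.append(gap)
    return gaps


if __name__ == "__main__":
    for i, gap in enumerate(inter_token_gaps(32_000, arrival_step=5, total_steps=10)):
        print(f"token {i}: {gap:8.1f} ms")
```

Under these assumed costs, every token arrives after about 8 ms except the one issued right after the long request shows up, whose gap balloons to roughly 1608 ms because the 32k-token prefill ran between two decode steps.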

Updated 2025-10-06

Tags: Ch.5 Inference - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science