Multiple Choice

An engineer is monitoring a text generation inference server that groups incoming requests into batches. They observe that while the time-to-completion for any single request within a running batch is very fast, the server's overall throughput (requests processed per hour) is low, with significant periods of hardware idleness. What is the most likely cause of this performance profile?
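The profile described above can be reproduced with a toy simulation of a server that batches statically: it waits until a fixed number of requests has arrived before launching, and the hardware then stays occupied until the slowest request in the batch finishes. All names and numbers below are hypothetical, chosen only to illustrate the symptom of fast in-batch completion but low overall throughput.

```python
# Hypothetical static-batching simulation. Requests trickle in slowly,
# so the server spends most of its wall time idle, waiting to fill a
# batch, even though each batch itself runs quickly once launched.

BATCH_SIZE = 8

def simulate(arrival_times, gen_times):
    """Return (total wall time, time the hardware was actually busy)."""
    t = 0.0       # current wall-clock time
    busy = 0.0    # accumulated compute time
    for i in range(0, len(arrival_times), BATCH_SIZE):
        batch_arrivals = arrival_times[i:i + BATCH_SIZE]
        batch_gen = gen_times[i:i + BATCH_SIZE]
        # The batch cannot start until its last member has arrived;
        # the hardware sits idle from t until then.
        start = max(t, batch_arrivals[-1])
        # The batch occupies the hardware until its slowest request ends.
        run = max(batch_gen)
        t = start + run
        busy += run
    return t, busy

# 16 requests arriving one per second; each takes only 0.5 s to generate.
arrivals = [float(i) for i in range(16)]
gens = [0.5] * 16
total, busy = simulate(arrivals, gens)
idle_fraction = 1 - busy / total   # ≈ 0.94: hardware idle most of the time
```

In this sketch each request finishes within 0.5 s of its batch starting, yet 16 requests take 15.5 s of wall time end to end, because the server idles while waiting for each batch to fill.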


Updated 2025-09-28


Tags

Ch.5 Inference - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science