Multiple Choice

An engineering team is analyzing the performance of a new LLM inference server that batches incoming requests for efficient processing. They observe that the server's hardware is consistently busy, indicating high throughput. However, user feedback is negative: many users complain that response times are extremely unpredictable; a short question may be answered instantly one moment, while a similar short question takes many seconds the next. The server handles a mix of long document-analysis requests and short conversational queries. What is the most probable explanation for this high variability in response time for short queries?
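
A minimal sketch of one commonly cited mechanism behind this pattern, assuming the server uses static batching with a fixed batch size and a constant per-token decode time; the batch size, token counts, and per-token latency below are illustrative assumptions, not values from the question. Under static batching, a batch occupies the hardware until its longest request finishes, so a short query that lands in a batch with a long request inherits the long request's completion time:

```python
BATCH_SIZE = 4                # hypothetical fixed batch size
DECODE_MS_PER_TOKEN = 20      # assumed constant per-token decode latency

def simulate_static_batching(requests):
    """Static batching: each batch holds the hardware until its longest
    request finishes, so every member waits for the slowest one."""
    latencies = {}
    for i in range(0, len(requests), BATCH_SIZE):
        batch = requests[i:i + BATCH_SIZE]
        batch_ms = max(tokens for _, tokens in batch) * DECODE_MS_PER_TOKEN
        for req_id, _ in batch:
            latencies[req_id] = batch_ms
    return latencies

# Illustrative workload (output-token counts are made up): the first four
# short queries happen to form an all-short batch; the next batch mixes
# short queries with long document-analysis requests.
workload = [
    ("short-0", 30), ("short-1", 25), ("short-2", 40), ("short-3", 20),
    ("short-4", 30), ("long-0", 2000), ("long-1", 1800), ("short-5", 35),
]

for req_id, ms in simulate_static_batching(workload).items():
    print(f"{req_id}: {ms} ms")
# short-0..3 finish in 800 ms, but short-4 and short-5 wait 40,000 ms
# because their batch cannot return until long-0 completes: identical
# queries, wildly different latencies, while the hardware stays busy.
```

The key observation this sketch exposes is that the hardware is never idle (high throughput), yet an individual short query's latency depends entirely on which requests it was grouped with.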

Tags

Ch.5 Inference - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science