A company deploys a real-time translation service powered by a large language model. Their server fleet is composed of a mix of new, high-speed processing units and older, slower units. Despite optimizing for parallel computation, they observe that system-wide performance is poor and response times are highly inconsistent, failing to meet their latency service-level agreement. Which statement best analyzes the root cause of this performance issue?
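The scenario hinges on the straggler effect: when a request is sharded across a heterogeneous fleet and results are gathered synchronously, per-request latency is set by the slowest unit, not the average one. A minimal sketch of that reasoning, using hypothetical per-shard processing times for the fast and slow units (all numbers are illustrative assumptions, not measurements):

```python
import statistics

# Hypothetical per-shard processing times (seconds) for a mixed fleet:
# six new, high-speed units and two older, slower units.
fast_units = [0.01] * 6
slow_units = [0.05] * 2
fleet = fast_units + slow_units

# Average unit time suggests the fleet is fast on paper.
mean_unit_time = statistics.mean(fleet)

# But with synchronized parallel execution, the request only completes
# when the slowest shard finishes, so the stragglers set the latency.
synchronized_latency = max(fleet)

print(f"mean unit time:        {mean_unit_time:.3f}s")
print(f"synchronized latency:  {synchronized_latency:.3f}s")
```

Here the observed request latency is 2.5x the mean unit time, which matches the symptom in the question: aggregate hardware looks adequate, yet responses are slow and inconsistent because every request waits on the oldest unit it touches.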
Tags
Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Analysis in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science
Related
LLM Deployment Strategy Evaluation
Analyzing Fleet Design for Low-Latency LLM Inference