Case Study

LLM Inference Performance Analysis

An engineering team is analyzing the performance of their new LLM inference server, which uses a continuous batching scheduler that prioritizes starting new requests to maximize hardware utilization. They observe high overall throughput but receive user complaints about unpredictable response times. Based on the simplified performance log below, analyze the data and explain the most likely reason for the high and variable latency experienced by shorter requests.
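
To make the scheduling behavior concrete, the sketch below is a minimal, self-contained simulation of a prefill-prioritizing continuous-batching scheduler. It is an illustration of the general technique, not the team's actual server: the Request fields, the arrival pattern, and all prompt and output lengths are invented for the example. The key modeled assumption is that admitting a new request runs its prefill immediately, and no in-flight request decodes a token while a prefill is running.

```python
import random
from dataclasses import dataclass

@dataclass
class Request:
    arrival: int        # iteration at which the request arrives
    prefill_iters: int  # iterations spent prefilling its prompt (assumed cost)
    decode_tokens: int  # output tokens it must generate
    remaining: int = 0
    finished_at: int = -1

def simulate(requests):
    """Prefill-prioritizing continuous batching: every new arrival is
    admitted at once, and its prefill stalls all in-flight decodes."""
    for r in requests:
        r.remaining = r.decode_tokens
    t = 0
    pending = sorted(requests, key=lambda r: r.arrival)
    running = []
    while pending or running:
        # Scheduler policy under test: start new requests first. While a
        # prompt is being prefilled, running requests emit nothing.
        while pending and pending[0].arrival <= t:
            req = pending.pop(0)
            t += req.prefill_iters
            running.append(req)
        # One batched decode iteration: each running request emits one token.
        t += 1
        for req in running:
            req.remaining -= 1
            if req.remaining == 0:
                req.finished_at = t
        running = [r for r in running if r.remaining > 0]

random.seed(0)
# Invented workload: a mix of short and long prompts/outputs, one arrival
# every 3 iterations. None of these numbers come from the case study's log.
reqs = [Request(arrival=3 * i,
                prefill_iters=random.choice([1, 10]),
                decode_tokens=random.choice([5, 60]))
        for i in range(12)]
simulate(reqs)
for r in reqs:
    print(f"arrival={r.arrival:3d}  out_tokens={r.decode_tokens:2d}  "
          f"latency={r.finished_at - r.arrival:3d}")
```

Running it shows the pattern the case study asks about: a short request that happens to be decoding while several long-prompt requests arrive is repeatedly stalled by their prefills, so its end-to-end latency varies widely even though aggregate token throughput stays high.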

