Short Answer

Scheduling Overhead in LLM Inference

An LLM inference system is modified to process long user prompts. Instead of handling each prompt as a single, monolithic computational task, the system now divides each prompt into several smaller, sequential segments. Explain why this modification increases the computational overhead specifically for the system's task scheduler.
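
For intuition, here is a minimal, illustrative sketch in Python. It is not any real inference engine's scheduler; the Scheduler class, its submit/schedule_step methods, and the 8192-token prompt with 512-token segments are all hypothetical. It only demonstrates the counting argument: splitting one prompt into N segments turns one scheduling decision into N, and the fixed per-decision bookkeeping is paid each time.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Scheduler:
    """Toy scheduler: pays a fixed bookkeeping cost per scheduling decision."""
    decisions: int = 0                      # scheduler invocations so far
    queue: list = field(default_factory=list)

    def submit(self, task_tokens: int) -> None:
        self.queue.append(task_tokens)

    def schedule_step(self) -> int:
        # Every invocation re-examines the queue, selects work, and updates
        # per-request state; this overhead is paid regardless of task size.
        self.decisions += 1
        return self.queue.pop(0)

def run(prompt_tokens: int, segment_tokens: Optional[int]) -> int:
    """Return how many scheduler invocations one prompt costs."""
    sched = Scheduler()
    if segment_tokens is None:
        sched.submit(prompt_tokens)         # monolithic: one task, one decision
    else:
        for start in range(0, prompt_tokens, segment_tokens):
            sched.submit(min(segment_tokens, prompt_tokens - start))
    while sched.queue:
        sched.schedule_step()
    return sched.decisions

print(run(8192, None))                      # monolithic prompt  -> 1 decision
print(run(8192, 512))                       # 512-token segments -> 16 decisions

The total prefill compute over the prompt's tokens is unchanged; what grows is the fixed per-decision cost (queue scans, batch formation, per-request state updates), which is now paid once per segment instead of once per prompt, so scheduler overhead scales with the number of segments.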

Tags: Ch.5 Inference - Foundations of Large Language Models, Foundations of Large Language Models, Foundations of Large Language Models Course, Computing Sciences, Analysis in Bloom's Taxonomy, Cognitive Psychology, Psychology, Social Science, Empirical Science, Science