Multiple Choice

A team has successfully pre-trained a 100-billion-parameter language model across a cluster of GPUs using a combination of tensor and pipeline parallelism. They are now tasked with deploying this model for a high-throughput, low-latency inference service. Which of the following approaches represents the most sound and efficient strategy for deploying the model?
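To ground the question, a back-of-the-envelope memory estimate shows why a 100B-parameter model cannot simply be served on a single GPU and why precision choices matter at inference time. The byte counts per parameter (2 for FP16/BF16, 1 for INT8) and the 80 GB GPU size are illustrative assumptions, not values from the question:

```python
# Illustrative serving-memory sketch for a 100B-parameter model.
# Assumptions: 2 bytes/param for FP16/BF16, 1 byte/param for INT8,
# and hypothetical 80 GB accelerators; weight memory only (no KV cache).
import math

PARAMS = 100e9          # 100 billion parameters
GPU_MEMORY_GB = 80.0    # assumed per-device memory

def weight_memory_gb(bytes_per_param: float) -> float:
    """Weight memory in GB for the given bytes-per-parameter precision."""
    return PARAMS * bytes_per_param / 1e9

fp16_gb = weight_memory_gb(2.0)  # 200.0 GB in half precision
int8_gb = weight_memory_gb(1.0)  # 100.0 GB after INT8 quantization

# Minimum devices needed just to hold the weights at each precision.
fp16_gpus = math.ceil(fp16_gb / GPU_MEMORY_GB)
int8_gpus = math.ceil(int8_gb / GPU_MEMORY_GB)

print(fp16_gb, int8_gb, fp16_gpus, int8_gpus)  # → 200.0 100.0 3 2
```

Even before accounting for KV-cache growth with batch size and sequence length, the weights alone exceed a single device, which is why the answer choices here revolve around parallelism and quantization strategy.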

Updated 2025-09-26

Tags

Ch.5 Inference - Foundations of Large Language Models
Foundations of Large Language Models
Foundations of Large Language Models Course
Computing Sciences
Evaluation in Bloom's Taxonomy
Cognitive Psychology
Psychology
Social Science
Empirical Science
Science