Case Study

Designing a Speculative Decoding Control Policy for a Latency-Sensitive Product

You are implementing speculative decoding for a customer-facing writing assistant. You have two models available: a small, fast draft model (cheap per token but less accurate) and a large verification model (expensive per forward pass but accurate). The verification model can score an entire drafted block of tokens in one parallel forward pass. The system may append only the longest consecutively accepted prefix of the drafted block; at the first rejected token, the remaining drafted tokens are discarded and the verification model generates the next token so decoding can continue.
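
For concreteness, here is a minimal sketch of one draft-and-verify cycle under a simplifying assumption: greedy acceptance, where a drafted token is kept only if the verification model's top token at that position agrees with it (production systems typically use a probabilistic accept/reject test instead). The function name and the `verifier_next` input are illustrative placeholders, not part of the case description.

```python
from typing import List, Tuple

def verify_and_accept(draft_tokens: List[int],
                      verifier_next: List[int]) -> Tuple[List[int], int]:
    """One speculative-decoding cycle under greedy acceptance.

    draft_tokens  : the tau tokens proposed by the draft model.
    verifier_next : the verification model's top token at each drafted
                    position plus one position beyond the draft, i.e.
                    tau + 1 entries, all produced by a single parallel
                    forward pass over the prefix and the drafted block.

    Returns (accepted prefix, next token to append after that prefix).
    """
    assert len(verifier_next) == len(draft_tokens) + 1
    accepted: List[int] = []
    for i, drafted in enumerate(draft_tokens):
        if drafted == verifier_next[i]:
            accepted.append(drafted)      # consecutive acceptance continues
        else:
            # First rejection: everything after it is discarded, and the
            # verification model's own token at this position continues decoding.
            return accepted, verifier_next[i]
    # Whole draft accepted: the same parallel forward pass also yields the
    # verifier's prediction one position beyond the draft, appended for free.
    return accepted, verifier_next[len(draft_tokens)]
```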

Your SLO is p95 end-to-end latency < 250 ms, and you have a hard budget of at most 2 verification-model forward passes per user request on average. In production you observe that for long prompts, the draft model often proposes 8 tokens, but the first rejection frequently happens at token 2 or 3, causing many discarded tokens and little speedup.
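
A quick back-of-the-envelope check of that observation (assumed numbers only) shows why throughput suffers: each cycle costs a full parallel verification pass but yields only the accepted prefix plus one correction token.

```python
tau = 8                          # drafted tokens per cycle (from the observation above)
first_rejection_at = 3           # 1-indexed position of the first rejected token
accepted = first_rejection_at - 1
tokens_appended = accepted + 1   # accepted prefix + the verifier's own next token
tokens_discarded = tau - accepted
print(tokens_appended, tokens_discarded)  # 3 tokens gained per verification pass, 6 drafted tokens wasted
```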

Create a concrete control policy (describe it as pseudocode or a step-by-step algorithm) that dynamically chooses (a) how many tokens the draft model should propose each cycle (τ), and (b) when to fall back to using the verification model directly, in order to maximize throughput while respecting the verification-pass budget and the “consecutively accepted tokens only” rule. Your policy must explicitly use the fact that verification is parallel, and it must specify what signals you track online (e.g., recent consecutive-acceptance lengths) and how those signals change τ and/or trigger fallback.
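
One possible shape for such a policy, offered only as a hedged sketch rather than a reference answer: track an exponential moving average (EMA) of recent consecutive-acceptance lengths, size τ slightly above that average, and fall back to direct verification-model decoding when the EMA stays low or the per-request verification-pass budget is nearly exhausted. The class name, all thresholds, the EMA decay, and the interpretation of the pass budget below are illustrative assumptions.

```python
class AdaptiveTauPolicy:
    """Sketch of an adaptive-tau controller for speculative decoding."""

    def __init__(self, tau_min=1, tau_max=8, ema_decay=0.8,
                 fallback_threshold=1.0, pass_budget=2):
        self.tau = tau_max                  # start optimistic
        self.tau_min = tau_min
        self.tau_max = tau_max
        self.ema_decay = ema_decay
        self.accept_ema = float(tau_max)    # EMA of consecutive-acceptance length
        self.fallback_threshold = fallback_threshold
        # Assumed interpretation: the budget counts parallel verification
        # passes spent verifying drafts for this request.
        self.pass_budget = pass_budget
        self.passes_used = 0

    def next_action(self):
        """Decide what to do for the next decoding cycle."""
        if (self.passes_used >= self.pass_budget
                or self.accept_ema < self.fallback_threshold):
            return ("fallback", None)       # decode directly with the verification model
        return ("draft", self.tau)          # draft tau tokens, verify them in one parallel pass

    def observe_cycle(self, accepted_len):
        """Update online signals after one draft-and-verify cycle."""
        self.passes_used += 1               # each cycle costs one parallel verification pass
        self.accept_ema = (self.ema_decay * self.accept_ema
                           + (1.0 - self.ema_decay) * accepted_len)
        # Propose only slightly beyond what has recently been accepted:
        # everything after the first rejection is discarded, so drafting far
        # past the typical rejection point wastes draft-model work.
        self.tau = max(self.tau_min,
                       min(self.tau_max, int(round(self.accept_ema)) + 1))
```

The EMA of consecutive-acceptance lengths is the key online signal here: because rejected suffixes are thrown away, τ chases the recently observed acceptance length instead of staying fixed at 8, and a persistently low EMA (or an exhausted pass budget) triggers the fallback to plain verification-model decoding.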
