Multi-Draft Speculative Sampling: Canonical Architectures and Theoretical Limits
Abstract
We consider multi-draft speculative sampling, where the proposal sequences are sampled independently from different draft models. At each step, a token-level draft selection scheme takes a list of valid tokens as input and produces an output token whose distribution matches that of the target model. Previous works have demonstrated that the optimal scheme (which maximizes the probability of accepting one of the input tokens) can be cast as a solution to a linear program. In this work we show that the optimal scheme can be decomposed into a two-step solution: in the first step an importance sampling (IS) type scheme is used to select one intermediate token; in the second step (single-draft) speculative sampling is applied to generate the output token. For the case of two identical draft models we further 1) establish a necessary and sufficient condition on the distributions of the target and draft models for the acceptance probability to equal one and 2) provide an explicit expression for the optimal acceptance probability. Our theoretical analysis also motivates a new class of token-level selection schemes based on weighted importance sampling. Our experimental results demonstrate consistent improvements in the achievable block efficiency and token rates over baseline schemes in a number of scenarios.
Community
We show that the optimal scheme for multi-draft speculative decoding can be decomposed into a two-step solution: token selection, followed by (single-draft) speculative sampling. Based on this result, we provide an explicit expression for the optimal acceptance probability.
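The two-step structure can be illustrated with a minimal sketch. The helper below is standard single-draft speculative sampling (accept a draft token with probability min(1, p/q), else resample from the normalized residual), which is exactly target-preserving; the multi-draft wrapper then shows the decomposition described above: an importance-sampling-style selection of one intermediate token among k drafts, followed by the single-draft accept/reject step. This simplified wrapper only illustrates the structure, not the paper's optimal LP solution; all function names and the use of plain i.i.d. drafts are assumptions for the sketch.

```python
import random

def speculative_accept(token, p, q, rng):
    """Single-draft speculative sampling: accept `token` (drawn from q)
    with probability min(1, p/q); otherwise sample from the normalized
    residual max(p - q, 0). The output is exactly distributed as p."""
    if rng.random() < min(1.0, p[token] / q[token]):
        return token
    residual = [max(pi - qi, 0.0) for pi, qi in zip(p, q)]
    return rng.choices(range(len(p)), weights=residual, k=1)[0]

def two_step_multi_draft(p, q, k, rng):
    """Illustrative two-step scheme (not the optimal scheme from the paper):
    step 1 draws k i.i.d. draft tokens from q and resamples one of them with
    importance weights p/q; step 2 applies single-draft speculative sampling
    to the selected intermediate token."""
    drafts = rng.choices(range(len(q)), weights=q, k=k)
    is_weights = [p[t] / q[t] for t in drafts]
    chosen = rng.choices(drafts, weights=is_weights, k=1)[0]
    return speculative_accept(chosen, p, q, rng)
```

The helper always emits a valid token, so the wrapper never rejects outright; the quantity the paper optimizes is the probability that the emitted token equals one of the draft tokens.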
The following papers were recommended by the Semantic Scholar API
- ParallelSpec: Parallel Drafter for Efficient Speculative Decoding (2024)
- Dynamic-Width Speculative Beam Decoding for Efficient LLM Inference (2024)
- DySpec: Faster Speculative Decoding with Dynamic Token Tree Structure (2024)
- Improving Multi-candidate Speculative Decoding (2024)
- Learning Harmonized Representations for Speculative Sampling (2024)