# EAGLE3 Draft Head for Qwen3-32B
A speculative decoding draft head for Qwen/Qwen3-32B, trained using the EAGLE3 method on Google Cloud TPU with the SpecJAX framework.
EAGLE3 draft heads accelerate autoregressive generation by proposing multiple tokens per step that a target model then verifies in parallel, typically achieving 2-3x throughput gains with no change in output quality.
This is the first EAGLE3 draft head trained with TP=8 tensor parallelism spanning multiple hosts.
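To make the draft-and-verify step concrete, here is a minimal greedy speculative-decoding loop. This is an illustrative sketch only: `draft_next` and `target_logits` are hypothetical stand-in callables, not SGLang or SpecJAX APIs.

```python
# Illustrative sketch of greedy speculative decoding; not a real API.
# `draft_next` and `target_logits` are hypothetical stand-in callables.

def speculative_step(tokens, draft_next, target_logits, k=5):
    """Propose k draft tokens, then keep the longest target-verified prefix."""
    proposals, ctx = [], list(tokens)
    for _ in range(k):                 # draft head proposes k tokens cheaply
        nxt = draft_next(ctx)
        proposals.append(nxt)
        ctx.append(nxt)

    # One batched target forward pass scores all proposed positions.
    logits = target_logits(tokens, proposals)   # shape: [k+1, vocab]
    accepted = []
    for i, tok in enumerate(proposals):
        if int(logits[i].argmax()) == tok:      # greedy match => accept
            accepted.append(tok)
        else:
            break
    # The target's own token at the first mismatch is kept, so each step
    # emits >= 1 token and the output is identical to plain target decoding.
    bonus = int(logits[len(accepted)].argmax())
    return tokens + accepted + [bonus]
```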
## Usage
### SGLang (GPU)
Qwen3 EAGLE3 is natively supported in SGLang.
```bash
python -m sglang.launch_server \
  --model Qwen/Qwen3-32B \
  --speculative-algorithm EAGLE3 \
  --speculative-draft-model-path thoughtworks/Qwen3-32B-Eagle3 \
  --speculative-num-steps 5 \
  --speculative-eagle-topk 4 \
  --dtype bfloat16
```
#### Thinking mode
Qwen3 supports an optional thinking mode (`/think` and `/no_think` tokens). This draft head was trained on generic instruction-following data and is compatible with both modes:
```bash
# Disable thinking mode for pure instruction-following workloads
python -m sglang.launch_server \
  --model Qwen/Qwen3-32B \
  --speculative-algorithm EAGLE3 \
  --speculative-draft-model-path thoughtworks/Qwen3-32B-Eagle3 \
  --speculative-num-steps 5 \
  --speculative-eagle-topk 4 \
  --dtype bfloat16 \
  --chat-template qwen3-instruct-no-thinking
```
### sglang-jax (TPU)
Qwen3 EAGLE3 is natively supported in sglang-jax. Note: sglang-jax's EAGLE3 pipeline is functional but not yet performance-optimized.
```bash
python -m sgl_jax.launch_server \
  --model-path Qwen/Qwen3-32B \
  --speculative-algorithm EAGLE3 \
  --speculative-draft-model-path thoughtworks/Qwen3-32B-Eagle3 \
  --speculative-eagle-topk 1 \
  --speculative-num-steps 3 \
  --speculative-num-draft-tokens 4 \
  --tp-size 8 \
  --dtype bfloat16
```
### Python (SGLang client)
```python
import sglang as sgl

# The offline engine accepts the same speculative-decoding arguments as the server.
llm = sgl.Engine(
    model_path="Qwen/Qwen3-32B",
    speculative_algorithm="EAGLE3",
    speculative_draft_model_path="thoughtworks/Qwen3-32B-Eagle3",
    speculative_num_steps=5,
    speculative_eagle_topk=4,
    dtype="bfloat16",
)
```
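A generation call on the engine above. The sampling-parameter dictionary follows SGLang's offline `Engine.generate` convention; exact field names may vary across versions, so treat them as assumptions.

```python
# Hedged usage sketch; sampling-parameter fields may differ by SGLang version.
prompt = "Explain speculative decoding in one paragraph."
output = llm.generate(prompt, {"temperature": 0.0, "max_new_tokens": 128})
print(output)
```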
## Training Details
| Parameter | Value |
|---|---|
| Framework | SpecJAX (pure JAX, no Flax/PyTorch) |
| Hardware | Google Cloud TPU v4-32 (4 hosts x 4 chips, TP=8, DP=2) |
| Dataset | 54K mixed: ShareGPT (45%) + UltraChat-200K (35%) + Open-PerfectBlend (20%) |
| Epochs | 3 |
| Steps | 9,966 total |
| Optimizer | AdamW, cosine LR decay, 3% warmup |
| Learning rate | 1.5e-4 |
| Batch size | B=1, sequence length T=2048, gradient accumulation 8 |
| TTT length | 7 (multi-step speculative rollout) |
| Training time | ~12 hours |
| Precision | bfloat16 |
## Training Method
This model uses EAGLE3's Test-Time Training (TTT) objective with a rollout length of 7. At each training step, the draft head autoregressively proposes 7 tokens; the target model provides ground-truth hidden states and logits for all positions; a geometric loss (0.8^k weighting) trains the draft to match the target at each position.
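A minimal JAX sketch of this weighted objective; the function name and array shapes are illustrative assumptions, not SpecJAX's actual training code.

```python
import jax
import jax.numpy as jnp

def ttt_loss(draft_logits, target_logits, gamma=0.8):
    """Geometric-weighted distillation loss over a TTT rollout.

    draft_logits, target_logits: [T_rollout, seq, vocab] stacks, where
    T_rollout = 7 in this training run. Illustrative sketch only.
    """
    log_p_draft = jax.nn.log_softmax(draft_logits, axis=-1)
    p_target = jax.nn.softmax(target_logits, axis=-1)
    # Soft cross-entropy between target and draft at each rollout position.
    ce = -jnp.sum(p_target * log_p_draft, axis=-1).mean(axis=-1)  # [T_rollout]
    weights = gamma ** jnp.arange(ce.shape[0])                    # 0.8^k
    return jnp.sum(weights * ce) / jnp.sum(weights)
```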
Qwen3's architecture includes per-head QK RMSNorm and tied word embeddings. The draft head is trained to match Qwen3's output distribution at every speculative position.
**Note on learning rate:** The standard LR of 3e-4 used for smaller models diverged at 32B scale; a reduced LR of 1.5e-4 (linear batch-size scaling) was required for stable convergence.
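For reference, this schedule can be written with optax to match the table above (9,966 total steps, 3% warmup, cosine decay). The zero init/end values and default weight decay are assumptions:

```python
import optax

TOTAL_STEPS = 9_966
WARMUP_STEPS = int(0.03 * TOTAL_STEPS)   # 3% warmup (~299 steps)

# AdamW with cosine decay, peaking at the reduced 1.5e-4 learning rate.
schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,
    peak_value=1.5e-4,     # 3e-4 diverged at 32B scale
    warmup_steps=WARMUP_STEPS,
    decay_steps=TOTAL_STEPS,
)
optimizer = optax.adamw(learning_rate=schedule)
```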
## Performance
Token acceptance rates on generic instruction-following data (ShareGPT-style prompts):
| Position | Acceptance Rate |
|---|---|
| acc_0 (1st draft token) | 59.3% |
| acc_1 (2nd draft token) | 55.7% |
| acc_2 (3rd draft token) | 53.9% |
Measured on held-out evaluation data. Actual throughput gains depend on hardware, prompt distribution, and runtime version.
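As a rough back-of-envelope, if these per-position rates behave like conditional acceptance probabilities along a single greedy chain (an assumption; the table does not specify the conditioning), the expected output is about 2.1 tokens per verification step:

```python
# Back-of-envelope only: treats each rate as conditional on the previous
# draft token being accepted, which the table above does not guarantee.
acc = [0.593, 0.557, 0.539]

expected, survive = 1.0, 1.0   # the target always emits at least one token
for a in acc:
    survive *= a
    expected += survive
print(f"~{expected:.2f} tokens per verification step")   # ~2.10
```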
## Model Architecture
The draft head is a single-layer transformer that operates on the target model's hidden states:
| Parameter | Value |
|---|---|
| Architecture | LlamaForCausalLM (1 decoder layer) |
| Hidden size | 5120 |
| Attention heads | 40 (GQA: 8 KV heads) |
| Vocabulary size | 151,936 (full target vocab) |
| Draft vocab size | 32,000 (top tokens by training frequency) |
| Parameters | ~530M |
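Because the draft head scores only a reduced 32K vocabulary, its predictions must be mapped back to full target token ids before verification. EAGLE3 checkpoints conventionally ship draft-to-target index buffers for this; the sketch below assumes such a `d2t` lookup table (the buffer name and shape are assumptions).

```python
import jax.numpy as jnp

# Hypothetical buffer: maps each of the 32,000 draft-vocab slots to a token
# id in the full 151,936-entry target vocabulary (loaded from the checkpoint).
d2t = jnp.zeros(32_000, dtype=jnp.int32)

def draft_to_target_ids(draft_logits):
    """Map draft-head argmax ids (reduced vocab) to target-vocab token ids."""
    draft_ids = jnp.argmax(draft_logits, axis=-1)   # ids in [0, 32_000)
    return d2t[draft_ids]                           # ids in [0, 151_936)
```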
## Limitations
- Trained on English-dominant instruction data; performance may degrade on non-English inputs or highly domain-specific content.
- Acceptance rates are measured on generic chat data (non-thinking mode) and may differ under extended thinking prompts.
- This is a v1 checkpoint trained on generic data. A v2 with target-model-regenerated training data is planned.
## License
This model is released under the Apache License 2.0, consistent with the base model's license.
## References
```bibtex
@article{li2025eagle3,
  title={EAGLE-3: Scaling up Inference Acceleration of Large Language Models via Training-Time Test},
  author={Li, Yuhui and Wei, Fangyun and Zhang, Chao and Zhang, Hongyang},
  journal={arXiv preprint arXiv:2503.01840},
  year={2025}
}
```