d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation
Paper: arXiv:2601.07568
This repository contains the d3LLM-Dream model, an ultra-fast diffusion language model introduced in the paper d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation.
The model achieves high generation speed while maintaining competitive accuracy, balancing parallelism and quality through pseudo-trajectory distillation during training and entropy-based multi-block decoding at inference time.
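As a rough illustration of entropy-based parallel decoding (an illustrative sketch only, not the paper's exact algorithm; the threshold value and fallback rule below are assumptions), each step commits all masked positions whose predicted token distribution has low enough entropy, and otherwise falls back to the single most confident position:

```python
import torch
import torch.nn.functional as F

def entropy_decode_step(logits, masked, threshold=1.0):
    """One parallel decoding step: commit low-entropy masked positions.

    logits:    [seq_len, vocab] model predictions for every position
    masked:    [seq_len] bool, True where the token is still masked
    threshold: entropy (in nats) below which a position is committed (assumed value)
    Returns (token_ids, commit), where commit marks positions to unmask this step.
    """
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-9).log()).sum(dim=-1)  # [seq_len]

    # Commit every still-masked position whose entropy is below the threshold.
    commit = masked & (entropy < threshold)
    if not commit.any():
        # Fallback: unmask only the single most confident masked position,
        # so decoding always makes progress.
        entropy = entropy.masked_fill(~masked, float("inf"))
        commit = F.one_hot(entropy.argmin(), masked.numel()).bool()

    token_ids = probs.argmax(dim=-1)
    return token_ids, commit
```

Running such a rule over several blocks of the response at once is what allows many tokens to be filled in per step, instead of one token at a time as in autoregressive decoding.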
For more chat examples and evaluation scripts, visit the official repository.
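The checkpoint can presumably be loaded with the standard Hugging Face `trust_remote_code` flow used by its base model Dream-org/Dream-v0-Instruct-7B. The repository id below is a placeholder, and the `diffusion_generate` call mirrors the base Dream model's remote-code interface, so confirm the exact arguments against the official repository:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder repository id -- replace with the actual d3LLM-Dream checkpoint.
model_id = "your-org/d3LLM-Dream"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
).to("cuda").eval()

messages = [{"role": "user", "content": "Explain diffusion language models in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# diffusion_generate follows the base Dream model's remote-code interface;
# the steps value here is an assumed setting, not the paper's default.
output = model.diffusion_generate(
    input_ids,
    max_new_tokens=256,
    steps=64,
    return_dict_in_generate=True,
)
print(tokenizer.decode(output.sequences[0][input_ids.shape[-1]:], skip_special_tokens=True))
```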
@article{arxiv'26:d3llm,
title = {d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation},
author = {Yu-Yang Qian and Junda Su and Lanxiang Hu and Peiyuan Zhang and Zhijie Deng and Peng Zhao and Hao Zhang},
journal = {ArXiv preprint},
volume = {arXiv:2601.07568},
year = {2026}
}
Base model: Dream-org/Dream-v0-Instruct-7B