Scaling Behavior of Discrete Diffusion Language Models

Dimitri von Rütte, Janis Fluri, Antonio Orvieto, Omead Pooladzandi, Bernhard Schölkopf, Thomas Hofmann

This repository contains the model checkpoints from the paper "Scaling Behavior of Discrete Diffusion Language Models".

In our paper, we investigate the scaling behavior of discrete diffusion language models (DLMs) for different noise types (masking, uniform, and hybrid noise), and find that all of them scale well in compute-bound settings and especially in token-bound settings, where uniform noise comes out on top. To confirm these findings, we train scaled-up models to compute optimality. Specifically, we train two 3B-parameter models (masked and uniform diffusion) as well as a 10B-parameter uniform diffusion model, which, to the best of our knowledge, is the largest public uniform diffusion model to date.
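For intuition, the noise types differ only in what a corrupted token is replaced with: masking noise replaces it with a dedicated [MASK] token, uniform noise replaces it with a token drawn uniformly at random from the vocabulary, and hybrid noise mixes the two. The snippet below is a minimal illustrative sketch of a single forward corruption step under these assumptions; the function name, the direct use of a corruption probability t, and the mask_id argument are illustrative and do not come from the repository's code.

import torch

def corrupt(tokens, t, vocab_size, mask_id, noise="uniform"):
    # Each token is independently replaced with probability t and kept otherwise.
    replace = torch.rand(tokens.shape, device=tokens.device) < t
    if noise == "mask":
        # Masking noise: corrupted positions become the [MASK] token.
        noised = torch.full_like(tokens, mask_id)
    else:
        # Uniform noise: corrupted positions become a uniformly random vocabulary token.
        noised = torch.randint_like(tokens, vocab_size)
    return torch.where(replace, noised, tokens)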

| Model | Size | Train. PPL | Diffusion type | HuggingFace Link |
|---|---|---|---|---|
| gidd-unif-10b | 10B | 9.15 | uniform | https://huggingface.co/dvruette/gidd-unif-10b |
| gidd-mask-3b | 3B | 11.3 | masked | https://huggingface.co/dvruette/gidd-mask-3b |
| gidd-unif-3b | 3B | 11.7 | uniform | https://huggingface.co/dvruette/gidd-unif-3b |

Quick Start

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
model_name = "dvruette/gidd-unif-10b"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.bfloat16)
model.eval().to(device)

prompt = "In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English."
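# Encode the prompt; the trailing slice below drops the last token of the
# encoding (presumably a trailing special token), so that generation continues
# the prompt rather than treating it as a finished sequence.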
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=True).input_ids[:, :-1].to(device)

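# The keyword arguments below are handled by the checkpoint's custom generation
# code (hence trust_remote_code=True above): steps presumably sets the number of
# denoising steps, block_length the block size used during sampling, and
# temperature=0.0 selects greedy (argmax) decoding.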
generated_ids = model.generate(
    inputs=inputs,
    max_length=128,
    block_length=128,
    steps=256,
    sampling_method="adaptive",
    temperature=0.0,
    show_progress=True,
)

print(tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0])

Training

  • This model was trained with a diffusion language modeling objective using either masking or uniform noise. During pre-training, 20% of sequences had a random fraction of the context left unperturbed, which makes the model capable of conditional generation (prompt completion).
  • The training data is a random subset of Nemotron-CC without quality filtering.
  • The tokenizer is a BPE tokenizer trained on a 256 GB subset of Nemotron-CC using the HF tokenizers library (a generic training sketch is shown after this list).
  • Please refer to the paper for further training details.
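
The tokenizer-training setup above can be reproduced in spirit with a few lines of the HF tokenizers library. The sketch below is generic: the vocabulary size, special tokens, and file paths are placeholders, not the settings used for this model.

from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Byte-level BPE tokenizer; all settings below are placeholders rather than the
# exact configuration used for this model's tokenizer.
tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)

trainer = trainers.BpeTrainer(
    vocab_size=32_000,                   # placeholder vocabulary size
    special_tokens=["[MASK]", "[EOS]"],  # placeholder special tokens
)

# Train on plain-text shards of the corpus (file names are placeholders).
tokenizer.train(["nemotron_cc_shard_000.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")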

Evaluation

Benchmarks were run with lm-evaluation-harness on ARC-E, ARC-C, WinoGrande, PIQA, OpenBookQA, BoolQ, and GSM8k; all tasks use likelihood-based multiple-choice scoring except GSM8k.

| Model | Train FLOPs | ARC-E | ARC-C | WinoG | PIQA | OBQA | BoolQ | GSM8k |
|---|---|---|---|---|---|---|---|---|
| gidd-mask-3b | 1e21 | 49.9 | 29.4 | 51.6 | 64.8 | 30.6 | 60.9 | 1.67 |
| gidd-unif-3b | 1e21 | 50.6 | 29.4 | 51.1 | 63.5 | 28.8 | 56.4 | 2.05 |
| gidd-unif-10b | 1e22 | 61.8 | 35.7 | 55.5 | 66.3 | 32.8 | 60.3 | 2.43 |
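
For reference, the likelihood-based multiple-choice protocol scores each answer option by the log-likelihood the model assigns to it given the question and picks the highest-scoring option. The sketch below illustrates the idea with a hypothetical score_loglikelihood(model, context, continuation) helper; for diffusion models, this score is typically an ELBO-based estimate rather than an exact autoregressive log-likelihood.

def pick_answer(model, question, choices, score_loglikelihood):
    # score_loglikelihood is a hypothetical helper returning the model's
    # log-likelihood (e.g. an ELBO estimate for diffusion models) of a
    # continuation given a context.
    scores = [score_loglikelihood(model, question, " " + choice) for choice in choices]
    return max(range(len(choices)), key=lambda i: scores[i])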

Risks and limitations

  • This is a base model trained on Nemotron-CC, a large-scale web corpus. Due to this, it can reproduce social biases and generate toxic or harmful content.
  • Performance on math and coding tasks is limited due to the lack of dedicated math/coding data in the training mix.

Citation

@article{von2025scaling,
  title={Scaling Behavior of Discrete Diffusion Language Models},
  author={von R{\"u}tte, Dimitri and Fluri, Janis and Pooladzandi, Omead and Sch{\"o}lkopf, Bernhard and Hofmann, Thomas and Orvieto, Antonio},
  journal={arXiv preprint arXiv:2512.10858},
  year={2025}
}