eaddario posted an update 8 days ago
Experimental global target bits-per-weight quantization of Qwen/Qwen3.5-4B and Qwen/Qwen3.5-9B

Unlike standard llama.cpp quantizations that rely on fixed type heuristics (e.g., Q4_K_M), the Target BPW approach optimizes per-tensor precision where it matters most, and produces high-quality models that meet a precise global file size target.

Key Advantages:
- VRAM Maximization: Can generate high quality models sized exactly to fit hardware constraints (e.g., fitting the model into exactly 24GB VRAM).
- Data-Driven Precision: Quantization mix is determined by actual weight error sensitivity rather than hardcoded rules, often yielding better PPL/KLD size trade-offs.
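To make the idea concrete, here is a minimal sketch of how a global bpw budget can drive per-tensor precision. This is not the actual llama-quantize logic; the tensor sizes, candidate bpw levels, and error estimates are hypothetical, and a real implementation would use measured weight-error sensitivity (e.g. from an imatrix) rather than the toy numbers here. The scheme is a greedy knapsack: start every tensor at the lowest precision, then repeatedly upgrade the tensor with the best error reduction per extra bit until the budget is spent.

```python
def allocate_bits(tensors, target_bpw):
    """Greedy per-tensor bit allocation under a global bpw budget (illustrative).

    tensors: list of dicts with 'n' (weight count) and 'error'
             (dict mapping candidate bpw level -> error estimate).
    Assumes all tensors share the same candidate levels.
    """
    levels = sorted(tensors[0]["error"])            # e.g. [2.0, 4.0, 8.0]
    total_weights = sum(t["n"] for t in tensors)
    budget = target_bpw * total_weights             # total bit budget for the file
    # Start everything at the lowest precision.
    choice = {i: levels[0] for i in range(len(tensors))}
    used = sum(t["n"] * levels[0] for t in tensors)
    while True:
        best = None
        for i, t in enumerate(tensors):
            cur = choice[i]
            nxt = next((l for l in levels if l > cur), None)
            if nxt is None:
                continue                            # already at max precision
            extra_bits = t["n"] * (nxt - cur)
            if used + extra_bits > budget:
                continue                            # upgrade would bust the budget
            # Error reduction per extra bit spent: upgrade the best bargain first.
            gain = (t["error"][cur] - t["error"][nxt]) / extra_bits
            if best is None or gain > best[0]:
                best = (gain, i, nxt, extra_bits)
        if best is None:
            return choice                           # budget exhausted (or all maxed)
        _, i, nxt, extra_bits = best
        choice[i] = nxt
        used += extra_bits
```

With this kind of allocation, a sensitive tensor (large error drop per bit) ends up at higher precision while a robust one stays low, and the average bpw never exceeds the target, which is what lets the file size land exactly on the requested budget.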

Full benchmarks (PPL, KLD, ARC, MMLU, etc.) and methodology are in the models' cards.

eaddario/Qwen3.5-4B-GGUF
eaddario/Qwen3.5-9B-GGUF

Yes, but is a difference in size expected? I can't see any difference in file size.


On this occasion, no difference in size is expected.

I'm benchmarking quality instead of size, so to facilitate apples-to-apples comparisons, the IQ1_M, IQ2_M, Q3_K, Q4_K, Q5_K, Q6_K and Q8_0 models were quantized at the same bits-per-weight (bpw) as the naive models, and Q4_K-B and Q4_K-U were matched to the ones produced by Bartowski and Unsloth respectively.

The file sizes are the same, but the quality is better.

You're welcome to use the enhanced versions of llama-imatrix and llama-quantize if you require a particular size. If that's not practical, let me know which ones you need and I'll be happy to upload them.

Bro tip: the mmlu dataset for llama.cpp is pretty bad, you can use https://huggingface.co/datasets/Green-Sky/mmlu-redux-2.0-for-llama.cpp/blob/main/mmlu-redux-2-ok%2Bexpert.bin instead. The data is both of higher quality (mmlu redux based) AND the context is better. I give it all choices and then let it decide with the ABCD letter.

While looking at the original MMLU conversion for llama.cpp, I noticed that some answers are like "both a and c" or similar, which a model that was never fed all the choices in the first place could probably never get right.


Thank you @Green-Sky ! I'm planning to have a go at the Gemma 4s over the weekend and I'll take your dataset for a spin.