About

weighted/imatrix quants of https://huggingface.co/zai-org/GLM-4.6V

For a convenient overview and download list, visit our model page for this model.

static quants are available at https://huggingface.co/mradermacher/GLM-4.6V-GGUF

This is a vision model - mmproj files (if any) will be in the static repository.

Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
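
As a minimal sketch (not part of the original card), the Python snippet below downloads a quant from this repository with huggingface_hub and shows how the larger, multi-part quants can be concatenated back into a single GGUF file. The exact filenames and part counts are assumptions; check the repository's file list for the real names.

```python
import shutil
from huggingface_hub import hf_hub_download

REPO = "mradermacher/GLM-4.6V-i1-GGUF"

# Single-file quant: one download call is enough.
# The filename is an assumption -- check the repo's file list.
single = hf_hub_download(repo_id=REPO, filename="GLM-4.6V.i1-IQ3_M.gguf")
print("downloaded:", single)

# Larger quants are split into parts (names assumed, e.g. *.part1of2).
# Download each part and append it in order; the concatenated result is a
# normal GGUF file that llama.cpp can load directly.
part_names = [f"GLM-4.6V.i1-Q4_K_M.gguf.part{i}of2" for i in (1, 2)]
with open("GLM-4.6V.i1-Q4_K_M.gguf", "wb") as merged:
    for name in part_names:
        part_path = hf_hub_download(repo_id=REPO, filename=name)
        with open(part_path, "rb") as part:
            shutil.copyfileobj(part, merged)  # stream copy, avoids loading GBs into RAM
```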

Provided Quants

(sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | imatrix | 0.3 | imatrix file (for creating your own quants; see the sketch after the table) |
| GGUF | i1-IQ1_S | 34.0 | for the desperate |
| GGUF | i1-IQ1_M | 35.6 | mostly desperate |
| GGUF | i1-IQ2_XXS | 38.3 | |
| GGUF | i1-IQ2_XS | 40.6 | |
| GGUF | i1-IQ2_S | 41.0 | |
| GGUF | i1-IQ2_M | 43.1 | |
| GGUF | i1-Q2_K | 43.7 | IQ3_XXS probably better |
| GGUF | i1-Q2_K_S | 43.8 | very low quality |
| GGUF | i1-IQ3_XXS | 47.3 | lower quality |
| GGUF | i1-IQ3_XS | 48.3 | |
| GGUF | i1-Q3_K_S | 50.8 | IQ3_XS probably better |
| GGUF | i1-IQ3_S | 50.8 | beats Q3_K* |
| GGUF | i1-IQ3_M | 51.5 | |
| GGUF | i1-Q3_K_M | 55.4 | IQ3_S probably better |
| GGUF | i1-Q3_K_L | 57.7 | IQ3_M probably better |
| GGUF | i1-IQ4_XS | 58.2 | |
| GGUF | i1-Q4_0 | 60.6 | fast, low quality |
| GGUF | i1-Q4_K_S | 64.8 | optimal size/speed/quality |
| GGUF | i1-Q4_1 | 67.1 | |
| GGUF | i1-Q4_K_M | 70.5 | fast, recommended |
| GGUF | i1-Q5_K_S | 75.8 | |
| GGUF | i1-Q5_K_M | 80.7 | |
| GGUF | i1-Q6_K | 96.0 | practically like static Q6_K |
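
The imatrix entry at the top of the table is not a model but the importance matrix used to produce these weighted quants; it can also be used to make your own. As a hedged sketch (not part of the original card), the snippet below shows one way to feed that file to llama.cpp's quantize tool together with a full-precision GGUF of the base model. The binary name (`llama-quantize` in recent llama.cpp builds), the file names, and the target quant type are assumptions to adapt to your setup.

```python
import subprocess

# Assumed file names -- adjust to the files you actually have.
IMATRIX = "GLM-4.6V.imatrix"        # the 0.3 GB imatrix file from this repo (name assumed)
SOURCE  = "GLM-4.6V-f16.gguf"       # full-precision GGUF converted from zai-org/GLM-4.6V
OUTPUT  = "GLM-4.6V.i1-IQ3_M.gguf"  # where to write the new quant
QTYPE   = "IQ3_M"                   # any quant type supported by llama.cpp

# llama.cpp's quantize tool accepts an importance matrix via --imatrix and
# uses it to weight the quantization toward the most important values.
subprocess.run(
    ["llama-quantize", "--imatrix", IMATRIX, SOURCE, OUTPUT, QTYPE],
    check=True,
)
```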

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

Thanks

I thank my company, nethype GmbH, for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
