Qwen3-VL-4B-Instruct-Unredacted-MAX-GGUF

Qwen3-VL-4B-Instruct-Unredacted-MAX is an unredacted evolution of the original Qwen3-VL-4B-Instruct model, fine-tuned with abliterated training techniques that reduce or neutralize the internal refusal mechanisms which typically limit model responses, while retaining the core multimodal reasoning and understanding capabilities of the Qwen3-VL architecture. The result is a capable 4-billion-parameter vision-language model that processes complex visual inputs and generates unrestricted, detailed, and contextually rich descriptions, captions, and analyses across a wide variety of domains, including artistic, technical, forensic, scientific, and abstract content. This enables use cases such as advanced data annotation, accessibility enhancements, creative storytelling, historical or medical dataset curation, and rigorous red-teaming studies, while maintaining a balance between high-fidelity output, nuanced reasoning, and computational efficiency on modern GPU hardware.

Qwen3-VL-4B-Instruct-Unredacted-MAX [GGUF]

| File Name | Quant Type | File Size | File Link |
|---|---|---|---|
| Qwen3-VL-4B-Instruct-Unredacted-MAX.BF16.gguf | BF16 | 8.05 GB | Download |
| Qwen3-VL-4B-Instruct-Unredacted-MAX.F16.gguf | F16 | 8.05 GB | Download |
| Qwen3-VL-4B-Instruct-Unredacted-MAX.Q8_0.gguf | Q8_0 | 4.28 GB | Download |
| Qwen3-VL-4B-Instruct-Unredacted-MAX.mmproj-bf16.gguf | mmproj-bf16 | 839 MB | Download |
| Qwen3-VL-4B-Instruct-Unredacted-MAX.mmproj-f16.gguf | mmproj-f16 | 839 MB | Download |
| Qwen3-VL-4B-Instruct-Unredacted-MAX.mmproj-q8_0.gguf | mmproj-q8_0 | 454 MB | Download |
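A vision GGUF needs both the model file and the matching mmproj projector file from the table above. The following is a minimal sketch of local usage with llama.cpp's multimodal CLI; the exact binary name and flags depend on your llama.cpp build, and the image path and prompt are placeholders:

```shell
# Fetch the Q8_0 model and its matching mmproj projector from the Hub
# (assumes huggingface-cli is installed; ./models is an arbitrary local dir).
huggingface-cli download prithivMLmods/Qwen3-VL-4B-Instruct-Unredacted-MAX-GGUF \
  Qwen3-VL-4B-Instruct-Unredacted-MAX.Q8_0.gguf \
  Qwen3-VL-4B-Instruct-Unredacted-MAX.mmproj-q8_0.gguf \
  --local-dir ./models

# Run a single image-description query with llama.cpp's multimodal CLI.
# photo.jpg is a placeholder for your own input image.
llama-mtmd-cli \
  -m ./models/Qwen3-VL-4B-Instruct-Unredacted-MAX.Q8_0.gguf \
  --mmproj ./models/Qwen3-VL-4B-Instruct-Unredacted-MAX.mmproj-q8_0.gguf \
  --image photo.jpg \
  -p "Describe this image in detail."
```

The mmproj quant level (bf16/f16/q8_0) can be chosen independently of the model quant; pairing Q8_0 with mmproj-q8_0 keeps total footprint lowest.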

Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable to similarly sized non-IQ quants.)
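To make the size/quality tradeoff concrete, here is a pure-Python sketch of Q8_0-style block quantization. This is an illustrative approximation, not llama.cpp's exact on-disk layout: each block of 32 weights stores one scale (fp16 on disk) plus 32 signed 8-bit values, roughly 8.5 bits per weight instead of 16.

```python
import random

def quantize_q8_0(weights, block_size=32):
    # Each block keeps one per-block scale plus block_size int8 values.
    # The scale maps the block's largest magnitude onto the int8 range.
    out = []
    for i in range(0, len(weights), block_size):
        block = weights[i:i + block_size]
        amax = max(abs(w) for w in block)
        scale = amax / 127.0 if amax else 1.0  # avoid div-by-zero on all-zero blocks
        q = [max(-127, min(127, round(w / scale))) for w in block]
        out.append((scale, q))
    return out

def dequantize_q8_0(blocks):
    # Reconstruct approximate weights: quantized value times block scale.
    return [scale * v for scale, q in blocks for v in q]

random.seed(0)
weights = [random.gauss(0.0, 1.0) for _ in range(1024)]
blocks = quantize_q8_0(weights)
restored = dequantize_q8_0(blocks)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

The per-block rounding error is bounded by half the block scale, which is why Q8_0 is nearly lossless in practice while halving file size relative to F16/BF16 (8.05 GB to 4.28 GB in the table above).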

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):


Downloads last month: 2,794
Format: GGUF
Model size: 4B params
Architecture: qwen3vl

Model repository: prithivMLmods/Qwen3-VL-4B-Instruct-Unredacted-MAX-GGUF