Qwen 2 VL 2B Instruct (GGUF, Q4_K_M)
Production-ready GGUF quantization of Qwen/Qwen2-VL-2B-Instruct for visual understanding and reasoning, powered by the Aether distributed edge inference runtime.
Highlights
- 2B parameters: a compact, second-generation Qwen vision-language model suited to lightweight visual understanding
- ~1.2 GB at Q4_K_M quantization: optimized for distributed edge inference
- Qwen2-VL architecture: proven, stable, and well-tested
- Aether runtime compatible: layer-sharded across distributed nodes via Edgework.ai
Model Details
| Property | Value |
|---|---|
| Base model | Qwen/Qwen2-VL-2B-Instruct |
| Parameters | 2B |
| Architecture | Qwen2-VL |
| Quantization | Q4_K_M |
| Format | GGUF |
| Size | ~1.2 GB |
| License | apache-2.0 |
Usage
With llama.cpp
```bash
./llama-cli -m qwen2-vl-2b-instruct-q4_k_m.gguf -p "Your prompt here" -n 256
```
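The command above runs text-only generation. For image input, a Qwen2-VL GGUF needs the vision projector (mmproj) file loaded alongside the language model; recent llama.cpp builds handle this through the multimodal CLI. A minimal sketch, assuming an mmproj GGUF is available (the mmproj filename below is illustrative):

```bash
# Multimodal inference: --mmproj supplies the vision encoder/projector weights.
# The mmproj filename is an assumption; substitute the file shipped with this repo.
./llama-mtmd-cli \
  -m qwen2-vl-2b-instruct-q4_k_m.gguf \
  --mmproj mmproj-qwen2-vl-2b-instruct-f16.gguf \
  --image photo.jpg \
  -p "Describe this image." \
  -n 256
```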
With Aether (Distributed Inference)
This model is deployed across the Aether distributed inference network, with weights layer-sharded across multiple edge nodes for parallel inference.
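This card does not document the network-facing API, so the request below is purely illustrative: a sketch assuming the Aether coordinator exposes an OpenAI-compatible chat endpoint, with a hypothetical gateway URL and model identifier.

```bash
# Hypothetical endpoint and model name -- replace with the actual Aether/Edgework values.
curl https://gateway.edgework.example/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "qwen2-vl-2b-instruct-q4_k_m",
        "messages": [{"role": "user", "content": "Summarize this model card in one sentence."}],
        "max_tokens": 128
      }'
```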
Deployment Architecture
This model runs on the Aether distributed inference runtime, our custom engine that shards model layers across multiple nodes for parallel execution (a conceptual sketch follows the list below):
- Coordinator receives requests and manages token generation
- Layer nodes each hold a subset of model layers
- Hidden states flow between nodes via gRPC
- Zero cold start via warm pool scheduling
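To make the hidden-state flow concrete, here is a minimal, self-contained sketch of layer-sharded pipeline inference. It is a conceptual illustration of the scheme above, not Aether's implementation: every name is hypothetical, and plain method calls stand in for the gRPC hops between coordinator and layer nodes.

```python
import numpy as np

class LayerNode:
    """Holds one shard: a contiguous slice of the model's layers (hypothetical)."""

    def __init__(self, layers):
        self.layers = layers  # this node's share of the weights

    def forward(self, hidden):
        # Run the incoming hidden state through this node's layers.
        # In Aether this call would arrive over gRPC; only activations
        # cross the network, the sharded weights never move.
        for w in self.layers:
            hidden = np.tanh(hidden @ w)  # toy layer; real ones are transformer blocks
        return hidden

def coordinator_step(nodes, hidden):
    """One token step: stream the hidden state through each node in order."""
    for node in nodes:
        hidden = node.forward(hidden)
    return hidden

# Example: 8 toy layers sharded across 2 nodes, one token's hidden state.
rng = np.random.default_rng(0)
layers = [rng.standard_normal((64, 64)) * 0.1 for _ in range(8)]
nodes = [LayerNode(layers[:4]), LayerNode(layers[4:])]
out = coordinator_step(nodes, rng.standard_normal(64))
print(out.shape)  # (64,)
```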
Deployed via Edgework.ai, bringing fast, cheap, and private inference as close to the user as possible.
About
Published by AFFECTIVELY · Managed by @buley
We quantize and publish production-ready models for distributed edge inference via the Aether runtime. Every release is tested for correctness and stability before publication.
- All models · GitHub · Edgework.ai