# Outlier-70B V3.3
Ternary Mixture-of-Experts overlay for Qwen2.5-32B-Instruct. Sparse architecture: shared full-precision FFN plus a gated ternary expert FFN per layer. Built by a solo founder on a Mac Studio as part of the Outlier research line feeding the Outlier desktop app for Apple Silicon.
## Quick facts
- Scale: 70B total / ~32B active
- Architecture: Ternary MoE overlay (shared FFN + α · expert FFN, TQ1_0 packing)
- Experts: 280
- Frozen base: Qwen2.5-32B-Instruct
- MMLU 5-shot: 83.10% ± 0.30% (n = 14042, lm-evaluation-harness v0.4.9.1)
- License: Apache 2.0
- Intended use: research, benchmarking, derivative fine-tunes
- Production Mac deployment: use an MLX 4-bit shipping tier instead
V3.3 ("alpha-fixed") gains +1.61 pp over V3.2 from a 15 KB overlay: 280 per-expert alpha scalars, trained in 18 minutes on a single B200.
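The per-layer dataflow described above can be sketched as follows. Function names, the tiny expert count, and the toy weights are illustrative stand-ins, not the actual runtime API:

```python
# Sketch of the per-layer forward pass: a frozen full-precision shared FFN
# plus one routed ternary expert FFN, scaled by a learned per-expert alpha
# scalar (the V3.3 "alpha-fix"). All names here are hypothetical.

def shared_ffn(x):
    # stands in for the full-precision FFN of the frozen base
    return [2.0 * v for v in x]

def make_expert(sign):
    # stands in for a ternary expert: weights in {-1, 0, +1}
    return lambda x: [sign * v for v in x]

NUM_EXPERTS = 4                        # the real model uses 280
experts = [make_expert(s) for s in (1, -1, 1, -1)]
alpha = [0.5, 0.25, 1.0, 0.75]         # 280 such scalars -> ~15 KB overlay

def overlay_layer(x, expert_idx):
    base_out = shared_ffn(x)
    exp_out = experts[expert_idx](x)
    # output = shared FFN + alpha * expert FFN
    return [b + alpha[expert_idx] * e for b, e in zip(base_out, exp_out)]

print(overlay_layer([1.0, -2.0], expert_idx=0))  # [2.5, -5.0]
```

Because only the 280 alpha scalars changed between V3.2 and V3.3, the entire update fits in a kilobyte-scale overlay.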
## What this is
An overlay checkpoint. This repo contains expert delta weights, router, modeling file, and config. Inference loads the base model separately and applies this overlay via the reference runtime.
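As a rough illustration of the overlay idea (tensor names and values here are hypothetical, not the real checkpoint keys), composition adds expert, router, and alpha tensors alongside the untouched base weights:

```python
# Illustrative only: an overlay contributes new tensors; the frozen base
# is loaded separately and never modified.
base_state = {"layers.0.shared_ffn.weight": [0.1, 0.2]}    # frozen base
overlay_state = {
    "layers.0.expert_ffn.0.weight": [1, 0, -1],            # ternary expert deltas
    "layers.0.router.weight": [0.3, 0.7],
    "layers.0.alpha": [0.5],                               # per-expert scalar
}

# Compose: base tensors pass through unchanged, overlay tensors are added.
merged = {**base_state, **overlay_state}
```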
## What this is NOT
- Not a standalone checkpoint; the frozen base above is required
- Not calibrated for production throughput; shipping-tier MLX 4-bit variants exist for that
- Not claiming to beat the base Qwen on raw MMLU at every scale; the product thesis is MMLU per GB of RAM, not raw MMLU (see comparison below)
## Honest comparison vs base Qwen
| Metric | Outlier-70B V3.3 | Qwen2.5-32B-Instruct FP16 |
|---|---|---|
| MMLU 5-shot | 83.10% | Qwen published values (see base card) |
The overlay compresses the expert path to ~1.6 bits per weight via TQ1_0 while preserving the shared FFN at full precision. This trades some raw accuracy for a smaller expert memory footprint when paired with int4 base quantization.
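The ~1.6 bits/weight figure can be illustrated with naive base-3 packing: five ternary values fit in one byte since 3^5 = 243 ≤ 256, giving 8/5 = 1.6 bits per weight. The actual TQ1_0 layout (llama.cpp's GGUF ternary format) is block-based and differs in detail; this sketch only demonstrates the counting argument:

```python
# Pack five ternary weights {-1, 0, +1} into one byte via base-3 coding.
# 3^5 = 243 <= 256, so 5 trits per byte -> 8/5 = 1.6 bits per weight.

def pack5(trits):
    """Pack five values in {-1, 0, +1} into one byte (0..242)."""
    assert len(trits) == 5 and all(t in (-1, 0, 1) for t in trits)
    code = 0
    for t in reversed(trits):
        code = code * 3 + (t + 1)      # map {-1, 0, 1} -> {0, 1, 2}
    return code

def unpack5(code):
    """Recover the five ternary values from one packed byte."""
    trits = []
    for _ in range(5):
        code, digit = divmod(code, 3)
        trits.append(digit - 1)
    return trits

w = [-1, 0, 1, 1, -1]
assert unpack5(pack5(w)) == w
print(8 / 5, "bits per weight")        # 1.6
```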
## Quickstart
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_id = "Qwen/Qwen2.5-32B-Instruct"        # frozen base (fetched separately)
overlay_id = "Outlier-Ai/Outlier-70B-V3.3"   # this overlay repo

tok = AutoTokenizer.from_pretrained(base_id)

# trust_remote_code is required: the overlay ships a custom modeling file
# that loads the frozen base and applies the expert/router weights on top.
model = AutoModelForCausalLM.from_pretrained(
    overlay_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
)

prompt = "Explain ternary mixture-of-experts in one paragraph."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))
```
For Apple Silicon production use, pick an MLX 4-bit repo from the Outlier-Ai organization; those load via `mlx_lm.generate`.
## Benchmarks (verified)
| Benchmark | Score | n | Harness |
|---|---|---|---|
| MMLU 5-shot | 83.10% ± 0.30% | 14042 | lm-evaluation-harness v0.4.9.1 |
| HellaSwag 10-shot | 85.95% | 14042 | lm-evaluation-harness v0.4.9.1 |
| ARC-Challenge 25-shot | 73.46% | 14042 | lm-evaluation-harness v0.4.9.1 |
| ARC-Easy 25-shot | 91.62% | 14042 | lm-evaluation-harness v0.4.9.1 |
| Winogrande 5-shot | 81.29% | 14042 | lm-evaluation-harness v0.4.9.1 |
| TruthfulQA 0-shot | 67.12% | 14042 | lm-evaluation-harness v0.4.9.1 |
Evaluation artifacts retained on Outlier infrastructure; ground-truth summary at outlier.host.
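As a quick sanity check (not part of the card's own evaluation artifacts), the reported ±0.30% is in line with one binomial standard error at p = 0.8310, n = 14042:

```python
# One binomial standard error for the MMLU point estimate.
import math

p, n = 0.8310, 14042
se = math.sqrt(p * (1 - p) / n)   # ~0.0032, i.e. roughly +/-0.3 pp
print(f"{100 * se:.2f}%")
```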
## Provenance
- Harness: lm-evaluation-harness v0.4.9.1
- Sample size: n = 14042
- Config: 5-shot, bf16, batch_size=1
- Date produced: 2026-04-13
- Canonical ground truth: OUTLIER_GROUND_TRUTH_v13.md
## Supersedes
## Related
- Production Mac shipping tiers: Outlier-Ai organization
- Consumer Edition collection: Outlier-Ai/outlier-consumer-edition-69e2fb4a0df119ea1747275e
- Research collection: Outlier-Ai/outlier-research-69e2fb3a71984614b3c7a279
- Server V3.2 collection: Outlier-Ai/outlier-server-v32-69e2fb4b71984614b3c7a4a3
- Desktop app: https://outlier.host
- Founders lifetime ($200, 500-seat cap): https://buy.polar.sh/polar_cl_mJfYZsEpEMDcYrgxzvTdnahSeSQNq1UYLqV0l08CUhW
- Discord: discord.gg/Hapennmdn9
## What is Outlier?
Outlier is a Mac-native, offline-by-default AI platform: one desktop app, a curated model library, bring-your-own-model support, projects with codebase indexing, a 9-tool coding agent, an artifacts panel, SQLite memory, and an OpenAI-compatible local API. All Apache 2.0, all local, no cloud round-trip, no per-token billing.
- Desktop app: https://outlier.host
- Founders lifetime (500-seat cap, $200 one-time): support Outlier
- Discord: discord.gg/Hapennmdn9
- Org: huggingface.co/Outlier-Ai
Built solo in Grand Rapids, Michigan. 19 days to 35+ HF repos and 8,305 downloads, under $1,200 in compute spend. Mission: a local AI model that codes as well as the cloud flagships, funded by users who want it - not investors who need exits.
## Known limits
- Overlay checkpoint; the frozen base above is required.
- Shared FFN runs at full precision on load - plan RAM accordingly if you compose this without int4 quantization on the base.
- Raw MMLU at 83.10% is compared above against the same-scale Qwen FP16 release; the product thesis is MMLU per GB of RAM, not raw MMLU.
- English-tuned. Multilingual behavior inherits the base model directly.
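A back-of-envelope estimate of the "MMLU per GB" framing follows, assuming the parameter split implied by 70B total / ~32B active (~32B base plus ~38B ternary expert parameters; these splits are assumptions for illustration, not published numbers):

```python
# Rough RAM estimate: int4 base + ~1.6 bpw ternary experts (assumed split).
GB = 1e9

base_params, expert_params = 32e9, 38e9
base_int4 = base_params * 0.5 / GB           # int4: 4 bits = 0.5 bytes/weight
experts_tq1 = expert_params * 1.6 / 8 / GB   # ternary: ~1.6 bits/weight
print(f"base ~{base_int4:.0f} GB + experts ~{experts_tq1:.1f} GB")
```

Composing the overlay without quantizing the base (FP16 shared FFN) roughly quadruples the base term, which is why the limits above say to plan RAM accordingly.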
## Attribution
Base weights by the Qwen team at Alibaba, released under Apache 2.0. Outlier contributes the ternary MoE overlay, alpha-fix scalars, and training pipeline. All capability credit for the base is upstream. Qwen team: qwenlm.github.io.
## Patents and citation
Architecture, training pipeline, and inference engine covered by US provisional patents 64/026,886, 64/030,368, and 64/034,028 (Kerr & Company LLC, 2026).
```bibtex
@misc{kerr2026outlier70bv3.3,
  title        = {Outlier-70B V3.3: Ternary Mixture-of-Experts Overlay for Qwen2.5},
  author       = {Kerr, Matthew},
  year         = {2026},
  howpublished = {\url{https://huggingface.co/Outlier-Ai/Outlier-70B-V3.3}},
}
```
## Contact
Matt Kerr - outlier.host - @outlier_ai