Outlier-40B V3.3

Ternary Mixture-of-Experts overlay for Qwen2.5-14B-Instruct. Sparse architecture: shared full-precision FFN plus a gated ternary expert FFN per layer. Built by a solo founder on a Mac Studio as part of the Outlier research line feeding the Outlier desktop app for Apple Silicon.
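The shared-plus-gated-expert structure can be sketched as follows. This is a toy NumPy illustration of the architecture described above, not the reference runtime: the sizes, the ReLU FFN, the softmax top-k router, and the single `alpha` scalar are all simplifying assumptions (the real model uses 224 experts and ships per-layer alpha-fix scalars).

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_ff, n_experts, top_k = 64, 256, 8, 2  # toy sizes; the real model has 224 experts
alpha = 0.5  # hypothetical alpha-fix scalar; actual per-layer values ship with the overlay

# Shared full-precision FFN weights (always active)
W_in = rng.standard_normal((d_model, d_ff))
W_out = rng.standard_normal((d_ff, d_model))
# Ternary expert FFN weights: entries restricted to {-1, 0, +1}
E_in = rng.integers(-1, 2, size=(n_experts, d_model, d_ff)).astype(np.float64)
E_out = rng.integers(-1, 2, size=(n_experts, d_ff, d_model)).astype(np.float64)
router = rng.standard_normal((d_model, n_experts))

def ffn(x, w_in, w_out):
    return np.maximum(x @ w_in, 0.0) @ w_out  # simple ReLU FFN

def overlay_layer(x):
    # Dense shared path runs for every token
    y = ffn(x, W_in, W_out)
    # Router selects top-k ternary experts; their outputs are blended in, scaled by alpha
    logits = x @ router
    top = np.argsort(logits)[-top_k:]
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()
    for w, e in zip(weights, top):
        y = y + alpha * w * ffn(x, E_in[e], E_out[e])
    return y

x = rng.standard_normal(d_model)
out = overlay_layer(x)
print(out.shape)  # (64,)
```

The point of the split: every parameter that is always active (the shared FFN) stays at full precision, while the sparsely activated expert parameters absorb the aggressive ternary quantization.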

Quick facts

  • Scale: 40B total / ~14B active
  • Architecture: Ternary MoE overlay (shared FFN + α × expert FFN, TQ1_0 packing)
  • Experts: 224
  • Frozen base: Qwen2.5-14B-Instruct
  • MMLU 5-shot: 77.80% ± 0.33% (n=14042, lm-evaluation-harness v0.4.9.1)
  • License: Apache 2.0
  • Intended use: research, benchmarking, derivative fine-tunes
  • Production Mac deployment: use an MLX 4-bit shipping tier instead

V3.3 alpha-fixed overlay on frozen Qwen2.5-14B-Instruct.

What this is

An overlay checkpoint. This repo contains expert delta weights, router, modeling file, and config. Inference loads the base model separately and applies this overlay via the reference runtime.

What this is NOT

  • Not a standalone checkpoint; the frozen base above is required
  • Not calibrated for production throughput; shipping-tier MLX 4-bit variants exist for that
  • Not claiming to beat the base Qwen on raw MMLU at every scale; the product thesis is MMLU per GB of RAM, not raw MMLU (see comparison below)

Honest comparison vs base Qwen

Metric         Outlier-40B V3.3   Qwen2.5-14B-Instruct FP16
MMLU 5-shot    77.80%             published values (see base model card)

The overlay compresses the expert path to ~1.6 bits per weight via TQ1_0 while preserving the shared FFN at full precision. This trades some raw accuracy for a smaller expert memory footprint when paired with int4 base quantization.
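The ~1.6 bits-per-weight figure falls out of base-3 arithmetic: a ternary weight is one trit, and five trits fit in one byte because 3^5 = 243 ≤ 256, giving 8/5 = 1.6 bits per weight. The sketch below demonstrates that packing arithmetic only; it is not the actual TQ1_0 block layout (which also carries scales).

```python
def pack_trits(trits):
    """Pack ternary weights (-1/0/+1) five to a byte via base-3 encoding."""
    assert len(trits) % 5 == 0
    out = bytearray()
    for i in range(0, len(trits), 5):
        byte = 0
        for t in reversed(trits[i:i + 5]):
            byte = byte * 3 + (t + 1)  # map {-1, 0, +1} -> {0, 1, 2}
        out.append(byte)               # max value is 3**5 - 1 = 242, fits in a byte
    return bytes(out)

def unpack_trits(data, n):
    """Recover the first n ternary weights from packed bytes."""
    trits = []
    for byte in data:
        for _ in range(5):
            trits.append(byte % 3 - 1)
            byte //= 3
    return trits[:n]

w = [1, -1, 0, 0, 1, -1, -1, 1, 0, 1]
packed = pack_trits(w)
assert unpack_trits(packed, len(w)) == w
print(len(packed) * 8 / len(w))  # 1.6 bits per weight
```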

Quickstart

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

base_id = "Qwen/Qwen2.5-14B-Instruct"
overlay_id = "Outlier-Ai/Outlier-40B"

# Tokenizer comes from the frozen base model
tok = AutoTokenizer.from_pretrained(base_id)
# The overlay repo's custom modeling code loads the base separately
# and applies the expert deltas, so trust_remote_code is required
model = AutoModelForCausalLM.from_pretrained(
    overlay_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
)

prompt = "Explain ternary mixture-of-experts in one paragraph."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=256)
print(tok.decode(out[0], skip_special_tokens=True))

Apple Silicon production use: pick an MLX 4-bit repo from the Outlier-Ai organization. Those load via mlx_lm.generate.

Benchmarks (verified)

Benchmark                 Score             n       Harness
MMLU (5-shot)             77.80% ± 0.33%    14042   lm-evaluation-harness v0.4.9.1
HellaSwag (10-shot)       84.64%            14042   lm-evaluation-harness v0.4.9.1
ARC-Challenge (25-shot)   73.12%            14042   lm-evaluation-harness v0.4.9.1
ARC-Easy (25-shot)        91.29%            14042   lm-evaluation-harness v0.4.9.1
Winogrande (5-shot)       80.98%            14042   lm-evaluation-harness v0.4.9.1
TruthfulQA (0-shot)       67.49%            14042   lm-evaluation-harness v0.4.9.1

Evaluation artifacts retained on Outlier infrastructure; ground-truth summary at outlier.host.

Provenance

  • Harness: lm-evaluation-harness v0.4.9.1
  • Sample size: n = 14042
  • Config: 5-shot, bf16, batch_size=1
  • Date produced: 2026-04-13
  • Canonical ground truth: OUTLIER_GROUND_TRUTH_v13.md

What is Outlier?

Outlier is a Mac-native, offline-by-default AI platform: one desktop app, a curated model library, Bring-Your-Own-Model support, projects with codebase indexing, a 9-tool coding agent, an artifacts panel, SQLite memory, and an OpenAI-compatible local API - all Apache 2.0, all local, no cloud round-trip, no per-token billing.

Built solo in Grand Rapids, Michigan. 19 days to 35+ HF repos and 8,305 downloads, under $1,200 in compute spend. Mission: a local AI model that codes as well as the cloud flagships, funded by users who want it - not investors who need exits.

Known limits

  • Overlay checkpoint; the frozen base above is required.
  • Shared FFN runs at full precision on load - plan RAM accordingly if you compose this without int4 quantization on the base.
  • Raw MMLU at 77.80% is compared above against the same-scale Qwen FP16 release; the product thesis is MMLU per GB of RAM, not raw MMLU.
  • English-tuned. Multilingual behavior inherits the base model directly.
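A back-of-envelope sketch of the memory planning above. The parameter split is an assumption for illustration (14B shared base, 26B expert parameters, i.e. 40B total minus the 14B base); consult the repo config for exact counts.

```python
GIB = 2 ** 30

def gib(params, bits):
    """Convert a parameter count at a given bit width to GiB."""
    return params * bits / 8 / GIB

base_params = 14e9    # frozen Qwen2.5-14B base (approximate)
expert_params = 26e9  # assumption: 40B total minus the 14B base

# Everything at fp16 vs. int4 base composed with 1.6-bit ternary experts
fp16 = gib(base_params + expert_params, 16)
composed = gib(base_params, 4) + gib(expert_params, 1.6)
print(f"fp16 everywhere:            {fp16:.1f} GiB")
print(f"int4 base + ternary experts: {composed:.1f} GiB")
```

Under these assumed counts the composed configuration lands around 11-12 GiB versus roughly 75 GiB at fp16, which is the "MMLU per GB of RAM" framing in concrete terms.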

Attribution

Base weights by the Qwen team at Alibaba, released under Apache 2.0. Outlier contributes the ternary MoE overlay, alpha-fix scalars, and training pipeline. All capability credit for the base is upstream. Qwen team: qwenlm.github.io.

Patents and citation

Architecture, training pipeline, and inference engine covered by US provisional patents 64/026,886, 64/030,368, and 64/034,028 (Kerr & Company LLC, 2026).

@misc{kerr2026outlier40b,
  title={Outlier-40B V3.3: Ternary Mixture-of-Experts Overlay for Qwen2.5},
  author={Kerr, Matthew},
  year={2026},
  howpublished={\url{https://huggingface.co/Outlier-Ai/Outlier-40B}},
}

Contact

Matt Kerr - outlier.host - @outlier_ai
