Claude Code in a Box

From the collection on replacing Claude Code with a Mac Studio: https://spicyneuron.substack.com/p/a-mac-studio-for-local-ai-6-months
GLM 5.1 quantized to run comfortably on a Mac Studio M3 Ultra with 512 GB of unified memory. This is the smaller, compact version; a quality-first (3.6-bit) version is also available.
```shell
# Start server at http://localhost:8080/v1/chat/completions
uvx --from mlx-lm mlx_lm.server \
  --host 127.0.0.1 \
  --port 8080 \
  --model spicyneuron/GLM-5.1-MLX-2.9bit
```
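Once the server is running, you can talk to it with any OpenAI-compatible client. As a minimal sketch (assuming mlx_lm.server's standard OpenAI-style chat completions endpoint; the endpoint path and response shape are not taken from this card):

```python
import json
import urllib.request

# Assumed endpoint for mlx_lm.server's OpenAI-compatible API.
SERVER_URL = "http://localhost:8080/v1/chat/completions"


def build_chat_request(prompt, model="spicyneuron/GLM-5.1-MLX-2.9bit",
                       max_tokens=512, temperature=0.7):
    """Build an OpenAI-style chat completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": temperature,
    }


def chat(prompt):
    """POST the request to the local server (requires the server to be running)."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        SERVER_URL, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

For example, `chat("Write a haiku about local LLMs.")` should return the model's reply as a string.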
| metric | baa-ai/GLM-5.1-RAM-270GB-MLX | 2.9-bit (this model) | 3.6-bit |
|---|---|---|---|
| bits per weight (bpw) | 3.110 | 2.906 | 3.645 |
| base memory (GB) | 269.303 | 251.702 | 315.648 |
| peak memory, GB (1024 prompt / 512 gen) | 291.257 | 272.358 | 341.020 |
| prompt tok/s (1024 tokens) | 194.958 ± 0.075 | 194.216 ± 0.167 | 190.508 ± 0.880 |
| gen tok/s (512 tokens) | 21.381 ± 0.050 | 19.527 ± 0.035 | 17.873 ± 0.156 |
| KL mean | 0.686 ± 0.054 | 0.268 ± 0.009 | 0.117 ± 0.004 |
| KL p95 | 1.478 ± 0.054 | 0.537 ± 0.009 | 0.236 ± 0.004 |
| perplexity | 4.780 ± 0.020 | 4.118 ± 0.016 | 3.945 ± 0.016 |
| PIQA (0-shot accuracy) | 0.776 ± 0.010 | 0.794 ± 0.009 | 0.820 ± 0.017 |
Tested on a Mac Studio M3 Ultra with:

```shell
mlx_lm.kld --baseline-model path/to/mlx-full-precision
mlx_lm.perplexity --sequence-length 2048 --seed 123
mlx_lm.benchmark --prompt-tokens 1024 --generation-tokens 512 --num-trials 5
mlx_lm.evaluate --tasks piqa --seed 123 --num-shots 0 --limit 500
Note: mlx_lm.kld is approximate; it compares top-k probabilities rather than the full logits.

Quantized with an mlx-lm fork, drawing inspiration from Unsloth/AesSedai/ubergarm-style mixed-precision GGUFs. MLX quantization options differ from llama.cpp's, but the principles are the same.
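To make the top-k approximation concrete, here is an illustrative sketch of the idea (not the actual mlx_lm.kld code): KL is summed over the baseline's top-k tokens, with all remaining probability mass lumped into a single bucket.

```python
import math


def topk_kl(baseline_probs, test_probs, k=100):
    """Approximate KL(baseline || test) using only the baseline's top-k tokens.

    baseline_probs / test_probs: dicts mapping token id -> probability for one
    position. Mass outside the top-k is collapsed into one remainder bucket,
    which is what makes the estimate approximate.
    """
    topk = sorted(baseline_probs, key=baseline_probs.get, reverse=True)[:k]
    kl = 0.0
    p_tail = 1.0  # baseline mass not yet accounted for
    q_tail = 1.0  # test mass not yet accounted for
    for tok in topk:
        p = baseline_probs[tok]
        q = max(test_probs.get(tok, 0.0), 1e-12)  # avoid log(0)
        kl += p * math.log(p / q)
        p_tail -= p
        q_tail -= q
    # Lump the remaining mass into one bucket (coarse, but keeps KL defined).
    if p_tail > 1e-12:
        kl += p_tail * math.log(p_tail / max(q_tail, 1e-12))
    return kl
```

Identical distributions give a KL of zero; the approximation error grows as more baseline mass falls outside the top-k.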
Base model: zai-org/GLM-5.1
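As an illustration of the mixed-precision idea, mlx-lm's Python convert API accepts a quantization predicate that can return per-layer settings. The sketch below is a hypothetical recipe in that spirit: the layer-name patterns and bit choices are assumptions for illustration, not the recipe used for this model.

```python
def mixed_precision_predicate(path, module=None, config=None):
    """Hypothetical per-layer quantization recipe (illustrative only).

    Returns quantization kwargs for a layer given its parameter path,
    spending more bits where quality is most sensitive.
    """
    # Embeddings and the output head are small relative to the model but
    # disproportionately affect quality: keep them at higher precision.
    if "embed" in path or "lm_head" in path:
        return {"bits": 6, "group_size": 64}
    # Attention projections are also comparatively sensitive.
    if "self_attn" in path:
        return {"bits": 4, "group_size": 64}
    # Expert/FFN weights dominate the parameter count: quantize them hardest.
    return {"bits": 2, "group_size": 64}
```

With mlx-lm's Python API, a predicate of this shape can be passed to the convert function to produce a mixed-precision model; check the installed version's signature, since the exact interface may differ.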