danielhanchen posted an update 14 days ago
100,000+ models trained with Unsloth have now been open-sourced on 🤗 Hugging Face! 🦥

Here are the most popular ones you can run locally:
1. TeichAI - GLM-4.7-Flash distilled from Claude 4.5 Opus (high)
2. Zed - Qwen Coder 7B fine-tuned for stronger coding
3. DavidAU - Llama-3.3-8B distilled from Claude 4.5 Opus (high)
4. huihui - gpt-oss made "abliterated"

Links to models:
1. TeichAI: TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF
2. Zed: zed-industries/zeta
3. DavidAU: DavidAU/Llama3.3-8B-Instruct-Thinking-Claude-4.5-Opus-High-Reasoning
4. huihui: huihui-ai/Huihui-gpt-oss-20b-BF16-abliterated

See all the 100K latest models fine-tuned with Unsloth here: https://huggingface.co/models?other=u
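If you want to try one of these from Python, a minimal sketch with llama-cpp-python looks like this (my illustration, not from the post; the quant filename is an assumption, so check the repo's file list first):

```python
# Minimal sketch: load one of the GGUFs listed above via llama-cpp-python.
# Assumption: the repo ships a Q4_K_M quant; adjust the glob to a file that
# actually exists in the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="TeichAI/GLM-4.7-Flash-Claude-Opus-4.5-High-Reasoning-Distill-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant name
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In two sentences, what is a MoE model?"}],
)
print(out["choices"][0]["message"]["content"])
```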
danielhanchen posted an update 22 days ago
You can now run Qwen3.5 locally! 💜
Qwen3.5-397B-A17B is an open MoE vision reasoning LLM for agentic coding & chat. It performs on par with Gemini 3 Pro, Claude Opus 4.5 & GPT-5.2.

GGUF: unsloth/Qwen3.5-397B-A17B-GGUF
Run the Dynamic 3-bit quant on a 192GB Mac at 20 tokens/s.

Guide: https://unsloth.ai/docs/models/qwen3.5
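To grab only the Dynamic 3-bit files instead of the whole repo, a huggingface_hub sketch like the one below should work (the "UD-Q3_K_XL" pattern is my assumption about the Dynamic 3-bit file naming; verify against the repo's file list):

```python
# Download just the assumed Dynamic 3-bit shards, then point llama.cpp
# (llama-cli / llama-server) at the downloaded files to run them.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="unsloth/Qwen3.5-397B-A17B-GGUF",
    allow_patterns=["*UD-Q3_K_XL*"],  # assumed pattern for the Dynamic 3-bit quant
    local_dir="Qwen3.5-397B-A17B-GGUF",
)
```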
danielhanchen posted an update 28 days ago
We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗

Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe
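For orientation, a typical Unsloth QLoRA setup looks roughly like the sketch below; the checkpoint name and hyperparameters are my assumptions, and the linked notebooks are the authoritative recipe:

```python
# Rough QLoRA setup with Unsloth (assumed model name and hyperparameters).
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",  # assumed repo name; see the notebooks
    max_seq_length=2048,
    load_in_4bit=True,                 # 4-bit base weights keep VRAM low
)

# Attach LoRA adapters; only these small matrices are trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
# From here, hand `model`, `tokenizer`, and a dataset to trl's SFTTrainer
# as the notebooks do.
```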
danielhanchen posted an update about 1 month ago
We created a tool-calling guide for local LLMs!

Learn how to use any open model like Qwen3-Coder-Next and GLM-4.7-Flash for function calling.

Guide: https://unsloth.ai/docs/basics/tool-calling-guide-for-local-llms

We provide hands-on examples for: story writing, Python execution, terminal tool calls, maths and more.
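As a rough illustration of the pattern, tool calling against a local OpenAI-compatible endpoint (e.g. llama-server or vLLM) looks like the sketch below; the endpoint, model alias, and example tool are all placeholders, so follow the guide for the real setups:

```python
# Sketch of OpenAI-style function calling against a local server.
# base_url, model name, and the tool definition are illustrative placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="local")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="qwen3-coder-next",  # assumed local model alias
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)  # the tool call the model requested
```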
danielhanchen posted an update about 2 months ago
You can now fine-tune embedding models in our free Unsloth notebook! 🤗

Fine-tuning embedding models aligns vectors with your domain-specific notion of similarity, improving retrieval & RAG as well as search, clustering, and recommendations on your data.

⭐ Blog + Notebooks: https://unsloth.ai/docs/new/embedding-finetuning

Unsloth trains embedding models 1.8-3.3x faster with 20% less VRAM, 2x longer context & no accuracy loss vs. FA2 setups.

We'd like to thank Hugging Face and Unsloth contributor electroglyph for making this possible!
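To make "aligning vectors to your domain" concrete, here is a bare-bones contrastive fine-tuning sketch with plain sentence-transformers; the base model and toy pairs are my own illustration, and the Unsloth notebook above is the faster, memory-optimized path:

```python
# Toy contrastive fine-tuning: pairs of texts that should embed close together.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # generic base model

train_examples = [
    InputExample(texts=["how do I reset my password?", "steps to change your account password"]),
    InputExample(texts=["refund policy", "how to get money back for an order"]),
]
loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch negatives: each pair is pulled together, other pairs pushed apart.
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
```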
danielhanchen posted an update about 2 months ago
You can now do reinforcement learning training with 7× longer context and no accuracy loss, via our new batching algorithms.

Long reasoning chains in RL are costly, but now we enable you to train gpt-oss with GRPO & reach 380K context on a 192GB GPU.

Blog: https://unsloth.ai/docs/new/grpo-long-context
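For orientation, the bare-bones TRL GRPO setup looks roughly like this; the tiny model, toy reward, and dataset are placeholders, and the blog covers the Unsloth-side batching that makes the 380K-context runs feasible:

```python
# Minimal GRPO sketch with TRL (toy prompt set and reward for illustration).
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

train_dataset = Dataset.from_dict({"prompt": ["Explain KV caching briefly."]})

def reward_short(completions, **kwargs):
    # Toy reward: prefer shorter completions.
    return [-len(c) / 1000.0 for c in completions]

args = GRPOConfig(
    output_dir="grpo-demo",
    num_generations=4,             # completions sampled per prompt (the "group")
    max_completion_length=512,
    per_device_train_batch_size=4,
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",  # assumed small model for illustration
    reward_funcs=reward_short,
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```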
danielhanchen posted an update 3 months ago
Mistral's new Ministral 3 models can now be run & fine-tuned locally! (16GB RAM)
Ministral 3 models have vision support and best-in-class performance for their sizes.
14B Instruct GGUF: unsloth/Ministral-3-14B-Instruct-2512-GGUF
14B Reasoning GGUF: unsloth/Ministral-3-14B-Reasoning-2512-GGUF

🐱 Step-by-step Guide: https://docs.unsloth.ai/new/ministral-3
All GGUF, BnB, FP8, etc. variant uploads: https://huggingface.co/collections/unsloth/ministral-3
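If you want to fine-tune rather than just run the GGUFs, a LoRA setup would look roughly like the sketch below; the non-GGUF repo name and FastVisionModel support for Ministral 3 are assumptions based on the post's vision claim, so follow the guide for the verified path:

```python
# Rough LoRA setup for a vision-capable model with Unsloth.
# Assumptions: the 16-bit repo name and that Ministral 3 loads via FastVisionModel.
from unsloth import FastVisionModel

model, tokenizer = FastVisionModel.from_pretrained(
    "unsloth/Ministral-3-14B-Instruct-2512",  # assumed repo name (GGUFs are inference-only)
    load_in_4bit=True,                        # keeps memory within small-GPU budgets
)

model = FastVisionModel.get_peft_model(
    model,
    finetune_vision_layers=True,   # adapt the vision tower as well as the LM
    finetune_language_layers=True,
    r=16,
    lora_alpha=16,
)
```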