This model was trained on a large reasoning dataset derived from Pony Alpha (an early checkpoint of GLM-5).

  • 🧬 Datasets
    • TeichAI/Pony-Alpha-15k
  • 🏗 Base Model
    • LiquidAI/LFM2.5-1.2B-Thinking
  • Use Cases
    • Coding
    • Science
    • Deep research
  • Dataset Stats
    • Cost: $0 USD
    • Total tokens (input + output): 43.3M

Sampling Parameters

Liquid AI recommends the following sampling parameters:

| Setting        | Value |
|----------------|-------|
| temperature    | 0.05  |
| top_k          | 50    |
| repeat_penalty | 1.05  |
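
As a minimal sketch, these values can be passed to a standard `transformers` generation call. The chat-template usage and the mapping of `repeat_penalty` to transformers' `repetition_penalty` are assumptions; llama.cpp users can pass the table values as-is.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TeichAI/LFM2.5-1.2B-Thinking-Pony-Alpha-Distill"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Assumed chat-style prompt; as a thinking model it is expected to ship a chat template.
messages = [{"role": "user", "content": "Explain how binary search works."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

output_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.05,         # recommended value
    top_k=50,                 # recommended value
    repetition_penalty=1.05,  # "repeat_penalty" in llama.cpp terms
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```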

This LFM2.5 model was trained 2× faster using Unsloth and Hugging Face's TRL library.
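
The sketch below shows roughly what such a fine-tuning run looks like with Unsloth and TRL. The hyperparameters, LoRA settings, target modules, and dataset handling are illustrative assumptions, not the actual training recipe.

```python
# Illustrative Unsloth + TRL SFT sketch; values below are assumptions.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="LiquidAI/LFM2.5-1.2B-Thinking",
    max_seq_length=4096,
    load_in_4bit=True,
)
# Target modules are assumed; adjust to the layer names of the base model.
model = FastLanguageModel.get_peft_model(
    model, r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# The dataset may need mapping into the model's chat template before training.
dataset = load_dataset("TeichAI/Pony-Alpha-15k", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,  # `processing_class` in newer TRL versions
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,
        output_dir="outputs",
    ),
)
trainer.train()
```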

Made with Unsloth
