Mistral Nemo Instruct trained on a Claude Deep Reasoning dataset (compact thinking output), combined with Brainstorm 20x (also trained).
This model's "thinking" is temperature-stable (0.1 to 2.5+) and variable in length.
Thinking blocks are compact: 2-6 paragraphs on average.
If the model already knows the answer, it skips thinking; otherwise, thinking auto-activates.
You can also trigger thinking explicitly (see the sketch below):

Think deeply: [your prompt here]
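For illustration, a minimal sketch of calling the model through the Hugging Face transformers chat pipeline with the "Think deeply:" prefix. The repo id, sampling settings, and token budget are assumptions for demonstration, not values from this card:

```python
# Minimal sketch; repo id and sampling settings are assumptions.
from transformers import pipeline

# Hypothetical repo id -- replace with this model's actual repository.
pipe = pipeline("text-generation", model="your-org/mistral-nemo-deep-reasoning")

# Prefixing the request with "Think deeply:" triggers a thinking block
# even when the model would otherwise answer directly.
messages = [
    {"role": "user", "content": "Think deeply: Why do mirrors flip left/right but not up/down?"},
]

# The card states thinking is stable from temperature 0.1 to 2.5+;
# 0.6 here is an arbitrary mid-range choice.
out = pipe(messages, max_new_tokens=1024, do_sample=True, temperature=0.6)
print(out[0]["generated_text"][-1]["content"])
```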
The model also supports a system prompt, which affects both thinking and output generation; see the sketch below.
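Continuing the sketch above, a hedged example of adding a system prompt. The prompt text is an assumption (any instruction works), and this assumes the tokenizer's chat template accepts a system role:

```python
# A system prompt shapes both the thinking block and the final answer.
messages = [
    {"role": "system", "content": "Reason step by step, but keep thinking blocks short."},  # hypothetical prompt
    {"role": "user", "content": "Explain how tides work."},
]
out = pipe(messages, max_new_tokens=1024, do_sample=True, temperature=0.8)
print(out[0]["generated_text"][-1]["content"])
```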
[ more to come ]