GLM-5-Turbo just dropped: the "OpenClaw Native" model
Z.ai just shipped a 744B MoE beast that's 2-3x faster than GLM-5, with 200K context + 128K max output.
What's different:
- Tool call stability (no more random failures mid-chain)
- Complex instruction decomposition (breaks down messy prompts)
- Time-aware execution (understands scheduled/persistent tasks)
- High-throughput long-chain efficiency (doesn't choke on 50-step workflows)
- ZClawBench: leads mainstream models in OpenClaw scenarios
- Trade-off: +20% price vs GLM-5
Anyone trying it yet?
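For anyone curious what the tool-calling + scheduled-task angle might look like in practice, here's a minimal sketch of building a request payload. This assumes an OpenAI-compatible chat-completions API; the model identifier, the `schedule_task` tool, and the token limit are all assumptions based on the post, not confirmed API details.

```python
import json

# Post claims a 128K max output; treat as an assumed limit.
MAX_OUTPUT_TOKENS = 128_000


def build_request(prompt: str) -> dict:
    """Build a chat-completions-style payload with one hypothetical tool,
    illustrating the kind of scheduled/persistent task the post mentions."""
    return {
        "model": "glm-5-turbo",  # assumed model identifier
        "max_tokens": MAX_OUTPUT_TOKENS,
        "messages": [{"role": "user", "content": prompt}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "schedule_task",  # hypothetical tool for illustration
                "description": "Schedule a persistent or recurring task.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "cron": {"type": "string"},
                        "command": {"type": "string"},
                    },
                    "required": ["cron", "command"],
                },
            },
        }],
    }


payload = build_request("Back up my notes every night at 2am.")
print(json.dumps(payload, indent=2))
```

Whether the long-chain stability claims hold up would only show after running multi-step workflows against the real endpoint.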