DISABLE REPEAT PENALTY or set = 1.0

#13
by danielhanchen - opened

I've seen and spoken to over 40 people, and a lot of them were still experiencing issues with GLM-4.7-Flash - but after they disabled repeat penalty (or set it to 1.0), the problems were solved.

So please turn it off, as it badly degrades the model - and check whether your tooling enables it by default! https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF

Remember

  • For general use-case: --temp 1.0 --top-p 0.95
  • For tool-calling: --temp 0.7 --top-p 1.0
  • If using llama.cpp, set --min-p 0.01 as llama.cpp's default is 0.1
  • Repeat penalty: Disable it, or set --repeat-penalty 1.0
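Put together, a llama.cpp invocation with these settings might look like the following (the model filename is just an example; swap in your own GGUF):

```shell
# General-use settings from this thread; repeat penalty explicitly neutral.
llama-cli -m GLM-4.7-Flash-UD-Q4_K_XL.gguf \
  --temp 1.0 --top-p 0.95 --min-p 0.01 \
  --repeat-penalty 1.0 --jinja

# For tool-calling, swap in: --temp 0.7 --top-p 1.0
```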

Let us know if you're still receiving bad outputs after this. Keep in mind you may occasionally get bad outputs or looping - as with any other model, like GPT-5 or Gemini, that's normal - but if it happens a lot, that isn't normal.
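To see why a penalty of 1.0 is equivalent to "off", here is a simplified sketch of how a llama.cpp-style repetition penalty rescales logits (the real implementation also handles frequency/presence penalties; this is illustrative only):

```python
# Sketch: repetition penalty rescales the logits of recently seen tokens.
def apply_repeat_penalty(logits, recent_tokens, penalty):
    out = dict(logits)
    for tok in set(recent_tokens):
        if tok in out:
            l = out[tok]
            # Positive logits are divided, negative ones multiplied,
            # so a repeated token always becomes less likely when penalty > 1.
            out[tok] = l / penalty if l > 0 else l * penalty
    return out

logits = {"the": 2.0, "cat": -1.0, "sat": 0.5}
# penalty = 1.0 is a no-op: every logit is unchanged
assert apply_repeat_penalty(logits, ["the", "cat"], 1.0) == logits
# penalty = 1.1 pushes recently seen tokens down
penalized = apply_repeat_penalty(logits, ["the", "cat"], 1.1)
assert penalized["the"] < logits["the"] and penalized["cat"] < logits["cat"]
```

With penalty > 1, tokens the model just emitted get suppressed on every step - which is exactly what mangles models whose natural output legitimately repeats structure (code, lists, chat templates).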


It's so confusing... I spent a lot of time with different parameters using llama-cli, comparing my local UD-Q4_K_XL.gguf to the inference-provider version on their HF page, and my local GGUF was always worse.
Then I just launched it in llama-server mode without any parameters, and suddenly the results seem to match (just llama-server -m GLM-4.7-Flash-UD-Q4_K_XL.gguf). Or maybe it's just randomness, but it can't just answer correctly every time. I don't mean coding, but some logical reasoning questions.


llama-server does not turn on repeat penalty by default, so this makes sense! Glad it's working better now though.

I feel like it's more about the size of the model. While huge proprietary models almost always answer the usual reasoning questions correctly, with models like this it's much more random: on the first try the answer may be correct, on the next it's wrong.


This is a possibility as well. Never-ending loops happen a lot even in GPT-5 and Gemini.

Also, you're using a quantized 4-bit version of a 16-bit model.

I'd say Daniel is right that this is already a reduced version of a larger model, and quantized on top of that. I wouldn't trust a 16-bit 30B-parameter model's reasoning for real-world usage unless some kind of breakthrough happens. I mean, if we get there, great! But I'm not expecting it. I'm more interested in small models giving good token throughput on grunt work (RAG, coding, summarizing) and orchestration, and leaving Master's-thesis-level math reasoning to the >200B-parameter models in 16-bit.

That's my take for 2026, let's see what the year brings.

On looping, I've found GLM 4.7 Flash UD Q4_K_XL and Q6_K_XL to be fairly reliable on llama.cpp with the recommended params: --temp 0.7 --top-p 1.0 --min-p 0.01 --jinja, and repeat penalty disabled for sure. I'm using a context of 65536 for coding - less than that is not enough for my tasks. Over the last 3 days I've switched between those parameters and also tried --temp 0.3 and --min-p 0.0, and I don't see much of a difference. I get more looping on my daily-driver gpt-oss 20b (original, UD Q8, UD Q6) than I do on GLM now.

To detect looping tendencies, I use some basic questions to fill up the context to about 500-1000 tokens produced by the model (so not template or prompt tokens), then I hit it with a rather open-ended listing question that doesn't trigger censorship. The following series was devastating on gemma 3n and gpt-oss 20b. It also quickly triggers looping on GLM 4.7 Flash when params are out of whack:

  • Hi.
  • How did the St. Lawrence river, in Canada, get its name?
  • Tell me more about the saint.
  • What other saints are from the 3rd century?
  • ... ask about any saints from the list ...
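When eyeballing many transcripts gets tedious, a crude automated check can flag the obvious repetition failures. This is a hypothetical helper (thresholds are made up, not from llama.cpp) that reports whether the tail of a completion repeats the same n-gram over and over:

```python
# Rough loop detector: flags text whose tail is the same n-gram
# repeated back-to-back at least min_repeats times.
def looks_loopy(text, n=8, min_repeats=4):
    words = text.split()
    if len(words) < n * min_repeats:
        return False  # too short to judge
    tail = words[-n * min_repeats:]
    first = tail[:n]
    # True only if every n-word window in the tail matches the first one.
    return all(tail[i:i + n] == first for i in range(0, len(tail), n))

assert looks_loopy("the saint was born " * 10)
assert not looks_loopy("St. Lawrence was a 3rd-century deacon of Rome.")
```

It won't catch semantic loops (paraphrased repetition), but it's enough to grade a batch of listing-question answers automatically.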

In my case, the last 2 questions trigger a loop within 2 tries when the model is susceptible. I tried GLM 4.7-REAP UD Q4_K_XL; it passes the looping test with the recommended params, although the answers are always hilariously wrong - but that's fine, since the REAP version is about capability per bit, not general knowledge.

I keep getting gibberish output when asking it to create a minesweeper clone. It outputs a pretty decent looking html page, but keeps adding random characters until at some point it just breaks and starts looping gibberish output.

I'm running the 16-bit version on Strix Halo with:

llama-server --no-mmap -ngl 999 -fa 1 --no-direct-io --host 0.0.0.0 --jinja --models-max 1 --models-preset ./config.ini

config.ini contains:

[unsloth/GLM-4.7-Flash-GGUF:BF16]
temp = 1.0
top-p = 0.95
min-p = 0.01
jinja = true
fit = on
repeat-penalty = 1.0
c = 65536
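Since that preset is plain INI, a quick sanity check with Python's configparser can confirm the sampling values actually say what you intend (the section name below matches the file above; this is just an illustrative check, not part of llama-server):

```python
# Sanity-check the preset INI: repeat penalty neutral, sampling as recommended.
import configparser

cfg = configparser.ConfigParser()
cfg.read_string("""
[unsloth/GLM-4.7-Flash-GGUF:BF16]
temp = 1.0
top-p = 0.95
min-p = 0.01
repeat-penalty = 1.0
""")

s = cfg["unsloth/GLM-4.7-Flash-GGUF:BF16"]
assert float(s["repeat-penalty"]) == 1.0  # penalty disabled
assert float(s["temp"]) == 1.0 and float(s["top-p"]) == 0.95
```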

Re-downloaded it; the sha256 sums match https://huggingface.co/unsloth/GLM-4.7-Flash-GGUF/tree/main/BF16:

$ sha256sum .cache/llama.cpp/unsloth_GLM-4.7-Flash-GGUF_BF16_GLM-4.7-Flash-BF16-00001-of-00002.gguf           
922fc4bbabfc5bfeb100312b7b4d51dd54a45f17d6188ead13c9a1952f061889  .cache/llama.cpp/unsloth_GLM-4.7-Flash-GGUF_BF16_GLM-4.7-Flash-BF16-00001-of-00002.gguf

$ sha256sum .cache/llama.cpp/unsloth_GLM-4.7-Flash-GGUF_BF16_GLM-4.7-Flash-BF16-00002-of-00002.gguf           
fc3b83f60bd00b9b7b045aa0fc6a9f551f65c1e855a0ad8d17fab18391d1b45f .cache/llama.cpp/unsloth_GLM-4.7-Flash-GGUF_BF16_GLM-4.7-Flash-BF16-00002-of-00002.gguf
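For anyone repeating this check on large multi-part GGUFs, hashing in chunks keeps memory flat. A small helper along these lines (file paths are whatever your download cache uses) does the same job as sha256sum:

```python
# Compute the SHA-256 of a (possibly very large) file in 1 MiB chunks.
import hashlib

def sha256_of(path, chunk=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()
```

Compare the returned hex digest against the value shown on the Hugging Face file page for each shard.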

I did notice that the 2nd part of the 16bit model didn't get updated. Is this intended?

llama-server is at commit 8f91ca54e.


Are you guys running locally or in the cloud? And if locally, what's your system config?
