Unexpected loss spikes and performance degradation when fine-tuning Gemma 4 (google/gemma-4-31B-it)

#73
by rstaruch - opened

Have you also encountered problems when fine-tuning Gemma 4 (google/gemma-4-31B-it) on text-only data? When I fine-tune the model on a grammatical error correction task, the loss quickly converges to low values (0.01–0.03), but every 1–2 logging steps it spikes to values like 0.10 or 0.20, whereas smaller models did not exhibit such spikes. Evaluation performance of the fine-tuned model is also worse than that of the original google/gemma-4-31B-it without fine-tuning, which clearly indicates that something is wrong. For other models, fine-tuning yields successful evaluation gains.
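To make the spike pattern concrete, here is a minimal sketch of how one might flag outlier loss values in a training log. The loss numbers and the threshold factor are illustrative assumptions, not values from the actual run:

```python
# Hedged sketch: flag logged loss values that jump well above the
# recent baseline. All numbers below are illustrative, not from the
# actual fine-tuning run described in this thread.
from statistics import median

def find_spikes(losses, window=8, factor=3.0):
    """Return indices where the loss exceeds `factor` times the
    median of the previous `window` logged values."""
    spikes = []
    for i in range(window, len(losses)):
        baseline = median(losses[i - window:i])
        if losses[i] > factor * baseline:
            spikes.append(i)
    return spikes

# Illustrative log: loss converging to ~0.02 with occasional spikes.
log = [0.05, 0.03, 0.02, 0.02, 0.03, 0.02, 0.02, 0.02,
       0.20, 0.02, 0.03, 0.02, 0.02, 0.10, 0.02, 0.02]
print(find_spikes(log))  # indices of the 0.20 and 0.10 spikes: [8, 13]
```

A script like this makes it easy to check whether the spikes coincide with particular batches (e.g. unusually long or malformed examples), which is a common cause of this pattern.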

Google org

Hi @rstaruch
Thanks for the detailed report. I wanted to understand your setup/configs a bit better.
I would like to know

  1. Are you doing full fine-tuning, LoRA/QLoRA, or something else? If LoRA, what are your r, lora_alpha, and target modules?
  2. What learning rate, scheduler, and optimizer are you using? Did you use the same settings when testing the smaller models?
  3. How are you formatting the inputs/outputs before feeding them to the tokenizer?
  4. Can you also please share which smaller models you are comparing against?

Could you also try dropping your learning rate and reducing the number of epochs to see if that helps?

Thanks
