Unexpected loss spikes and performance degradation when fine-tuning Gemma 4 (google/gemma-4-31B-it)
#73
by rstaruch - opened
Have you also encountered problems when fine-tuning Gemma 4 (google/gemma-4-31B-it) on text-only data? When I fine-tune the model on a grammatical error correction task, the loss quickly converges to low values (0.01–0.03), but every 1–2 logging steps it spikes to values like 0.10 or 0.20; smaller models did not exhibit such spikes. The fine-tuned model also evaluates worse than the original google/gemma-4-31B-it without fine-tuning, which clearly indicates that something is wrong. For other models, fine-tuning yields clear evaluation gains.
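To make the pattern concrete, this is roughly how I flag those spikes from the logged loss values (a minimal pure-Python sketch; the window size and threshold factor are arbitrary choices, not tuned values):

```python
# Minimal sketch: flag logged loss values that spike above a rolling median.
# Window size and factor are illustrative, not tuned recommendations.
from statistics import median

def find_loss_spikes(losses, window=10, factor=3.0):
    """Return (step, loss) pairs where loss exceeds factor x the rolling median."""
    spikes = []
    for i, loss in enumerate(losses):
        recent = losses[max(0, i - window):i]
        if recent and loss > factor * median(recent):
            spikes.append((i, loss))
    return spikes

# The kind of trace described above: mostly ~0.02 with spikes to 0.10-0.20
losses = [0.02, 0.03, 0.01, 0.02, 0.20, 0.02, 0.01, 0.10, 0.02]
print(find_loss_spikes(losses))  # -> [(4, 0.2), (7, 0.1)]
```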
Hi @rstaruch
Thanks for the detailed report. I wanted to understand your setup/configs a bit better.
I would like to know:
- Are you doing full fine-tuning, LoRA/QLoRA, or something else? If LoRA, what are your r, lora_alpha, and target modules?
- What learning rate, scheduler, and optimizer are you using? Did you use the same settings when testing the smaller models?
- How are you formatting the inputs/outputs before feeding them to the tokenizer?
- Could you also share which smaller models you are comparing against?
Could you also try lowering your learning rate and reducing the number of epochs to see if that helps?
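As a starting point, something along these lines (a minimal sketch; every value here is an illustrative guess, not a tuned recommendation, and the keys mirror `transformers.TrainingArguments` parameters):

```python
# Illustrative training settings for a more conservative run: lower learning
# rate, fewer epochs, and gradient clipping, which can dampen loss spikes.
# These are starting points to experiment with, not known-good values.
training_kwargs = {
    "learning_rate": 5e-6,        # try an order of magnitude below your current LR
    "num_train_epochs": 1,        # fewer epochs to rule out over-fitting
    "lr_scheduler_type": "cosine",
    "warmup_ratio": 0.03,         # a short warmup often helps stability
    "max_grad_norm": 1.0,         # gradient clipping
    "bf16": True,                 # large models are often unstable in fp16
}
print(training_kwargs)
```

You could pass these as `TrainingArguments(output_dir=..., **training_kwargs)` and compare the resulting loss curve against your current run.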
Thanks