# gpt-gqa-RoPE
This model is a fine-tuned version of an unspecified base model on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 6.0753
## Model description
More information needed
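No architectural details are given in this card; the model name suggests a GPT-style decoder using grouped-query attention (GQA) and rotary position embeddings (RoPE). Purely as illustration, the sketch below shows those two mechanisms in isolation; the dimensions, head counts, and class names are assumptions, not values taken from this model.

```python
# Hypothetical sketch of grouped-query attention (GQA) with rotary position
# embeddings (RoPE), the two techniques implied by the model name.
# All sizes below are illustrative, not taken from this model card.
import torch
import torch.nn as nn
import torch.nn.functional as F


def apply_rope(x, base=10000.0):
    # x: (batch, heads, seq_len, head_dim). Standard RoPE: rotate channel
    # pairs by a position-dependent angle before the attention dot product.
    b, h, t, d = x.shape
    half = d // 2
    freqs = base ** (-torch.arange(0, half, dtype=torch.float32, device=x.device) / half)
    angles = torch.arange(t, dtype=torch.float32, device=x.device)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()          # (seq_len, half)
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)


class GQASelfAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8, n_kv_heads=2):
        super().__init__()
        assert n_heads % n_kv_heads == 0
        self.n_heads, self.n_kv_heads = n_heads, n_kv_heads
        self.head_dim = d_model // n_heads
        self.q_proj = nn.Linear(d_model, n_heads * self.head_dim, bias=False)
        # GQA: fewer key/value heads than query heads; each K/V head is
        # shared by a group of query heads.
        self.k_proj = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.v_proj = nn.Linear(d_model, n_kv_heads * self.head_dim, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_kv_heads, self.head_dim).transpose(1, 2)
        q, k = apply_rope(q), apply_rope(k)
        # Repeat each K/V head so it covers its group of query heads.
        rep = self.n_heads // self.n_kv_heads
        k, v = k.repeat_interleave(rep, dim=1), v.repeat_interleave(rep, dim=1)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(b, t, -1))
```

For example, `GQASelfAttention()(torch.randn(2, 16, 512))` returns a `(2, 16, 512)` tensor.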
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 128
- eval_batch_size: 128
- seed: 20
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 106
- training_steps: 1064
- mixed_precision_training: Native AMP
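
A minimal sketch of how the list above could be expressed as `transformers.TrainingArguments`; `output_dir` and the `fp16` flag are assumptions ("Native AMP" does not specify the dtype), while the remaining values mirror those reported here.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gpt-gqa-RoPE",           # assumed; output directory is not reported
    learning_rate=3e-4,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=20,
    gradient_accumulation_steps=4,       # 128 * 4 = 512 effective batch size
    optim="adamw_torch_fused",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=106,
    max_steps=1064,
    fp16=True,                           # Native AMP; fp16 vs. bf16 is an assumption
)
```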
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 9.5076 | 0.0590 | 106 | 9.2289 |
| 7.6979 | 0.1179 | 212 | 7.6048 |
| 7.0209 | 0.1769 | 318 | 6.9358 |
| 6.6416 | 0.2359 | 424 | 6.5840 |
| 6.4055 | 0.2948 | 530 | 6.3631 |
| 6.2688 | 0.3538 | 636 | 6.2237 |
| 6.1788 | 0.4127 | 742 | 6.1389 |
| 6.1354 | 0.4717 | 848 | 6.0954 |
| 6.1184 | 0.5307 | 954 | 6.0783 |
| 6.1140 | 0.5896 | 1060 | 6.0753 |
| 6.1140 | 0.5919 | 1064 | 6.0753 |
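
As a rough consistency check (not a reported figure), the epoch and step columns together with the effective batch size of 512 imply the approximate number of training examples:

```python
# Back-of-the-envelope estimate: epoch ≈ step * effective_batch / num_examples.
effective_batch = 128 * 4                # per-device batch * gradient accumulation
final_step, final_epoch = 1064, 0.5919   # last row of the table
approx_examples = final_step * effective_batch / final_epoch
print(f"implied training-set size ≈ {approx_examples:,.0f} examples")  # ≈ 920,000
```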
### Framework versions
- Transformers 5.5.4
- Pytorch 2.11.0+cu130
- Datasets 4.8.4
- Tokenizers 0.22.2