This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

This model was merged using the linear merge method, the weight-averaging approach introduced in [Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time](https://arxiv.org/abs/2203.05482) (arXiv:2203.05482).
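Linear merging computes a weighted average of the corresponding parameter tensors of the input checkpoints. With the equal weights used here, and assuming mergekit's default weight normalization, this reduces to a uniform average of the three models:

$$
\theta_{\text{merged}} = \frac{\sum_{i=1}^{3} w_i\,\theta_i}{\sum_{i=1}^{3} w_i} = \frac{\theta_1 + \theta_2 + \theta_3}{3}
$$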
The following models were included in the merge:

* [flammenai/Mahou-1.3-llama3-8B](https://huggingface.co/flammenai/Mahou-1.3-llama3-8B)
* [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO)
* [Weyaxi/Einstein-v6.1-Llama3-8B](https://huggingface.co/Weyaxi/Einstein-v6.1-Llama3-8B)
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: flammenai/Mahou-1.3-llama3-8B
    parameters:
      weight: 1.0
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      weight: 1.0
  - model: Weyaxi/Einstein-v6.1-Llama3-8B
    parameters:
      weight: 1.0
merge_method: linear
tokenizer_source: union
dtype: float16
```
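For illustration, below is a minimal sketch of the same linear average in plain PyTorch and transformers. This is not mergekit's implementation: it omits the union tokenizer handling implied by `tokenizer_source: union`, assumes all three checkpoints share identical architectures and tensor shapes, and the output path `./merged-model` is hypothetical.

```python
# Minimal sketch of a linear merge (uniform model soup). Illustrative only:
# unlike mergekit, it skips tokenizer-union handling and sharded saving, and
# it loads all checkpoints into memory at once.
import torch
from transformers import AutoModelForCausalLM

model_ids = [
    "flammenai/Mahou-1.3-llama3-8B",
    "Danielbrdz/Barcenas-Llama3-8b-ORPO",
    "Weyaxi/Einstein-v6.1-Llama3-8B",
]
weights = [1.0, 1.0, 1.0]  # as in the YAML above; normalized below

# Load each checkpoint's parameters in float32 for a numerically safe average.
state_dicts = [
    AutoModelForCausalLM.from_pretrained(mid, torch_dtype=torch.float32).state_dict()
    for mid in model_ids
]

# Weighted average of every parameter tensor: sum(w_i * theta_i) / sum(w_i).
total = sum(weights)
merged = {
    name: sum(w * sd[name] for w, sd in zip(weights, state_dicts)) / total
    for name in state_dicts[0]
}

# Write the averaged weights back into one model and save in float16,
# mirroring `dtype: float16` in the configuration.
model = AutoModelForCausalLM.from_pretrained(model_ids[0], torch_dtype=torch.float32)
model.load_state_dict(merged)
model.to(torch.float16).save_pretrained("./merged-model")
```

In practice the merge is produced with mergekit's CLI, e.g. `mergekit-yaml config.yaml ./merged-model`, with the configuration above saved as `config.yaml`.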
Detailed Open LLM Leaderboard evaluation results can be found here.
| Metric | Value (%) |
|---|---|
| Avg. | 20.69 |
| IFEval (0-shot) | 41.23 |
| BBH (3-shot) | 30.87 |
| MATH Lvl 5 (4-shot) | 7.10 |
| GPQA (0-shot) | 5.37 |
| MuSR (0-shot) | 9.18 |
| MMLU-PRO (5-shot) | 30.42 |