This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
This model was merged using the linear merge method, described in *Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time* (arXiv:2203.05482).
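For intuition, a linear merge is just a weight-normalized average of corresponding parameter tensors. Here is a minimal sketch in PyTorch; the `linear_merge` helper is illustrative, not mergekit's internal API:

```python
# Minimal sketch of a linear merge: a weight-normalized average of
# corresponding parameter tensors. Illustrative only; mergekit additionally
# handles layer slicing, tokenizers, and dtype management.
import torch

def linear_merge(state_dicts: list[dict], weights: list[float]) -> dict:
    """Average matching tensors across models, normalizing weights to sum to 1."""
    total = sum(weights)
    return {
        key: sum((w / total) * sd[key].to(torch.float32)
                 for sd, w in zip(state_dicts, weights))
        for key in state_dicts[0]
    }

# A source with weight 0 contributes nothing, and a single source at weight 1.0
# passes its tensors through unchanged -- the "no-op like passthrough" noted in
# the config comments below.
```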
The following models were included in the merge:
* meta-llama/Meta-Llama-3-8B-Instruct
* taide/Llama3-TAIDE-LX-8B-Chat-Alpha1
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: linear # use linear so we can include multiple models, albeit at a zero weight
parameters:
  weight: 1.0 # weight everything as 1 unless specified otherwise - linear with one model weighted at 1 is a no-op like passthrough
slices:
- sources:
  - layer_range: [0, 1]
    model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
  - layer_range: [0, 1]
    model: D:/text-generation-webui/models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1
    parameters:
      weight: 0
- sources:
  - layer_range: [1, 24]
    model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
  - layer_range: [1, 24]
    model: D:/text-generation-webui/models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1
- sources:
  - layer_range: [24, 32]
    model: D:/text-generation-webui/models/taide_Llama3-TAIDE-LX-8B-Chat-Alpha1
    parameters:
      weight: 0
  - layer_range: [24, 32]
    model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
```
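A config like this can be executed with mergekit's `mergekit-yaml` CLI (`mergekit-yaml config.yaml ./output-dir`), or from Python. The sketch below assumes mergekit's documented `run_merge`/`MergeOptions` API; the config filename and output path are placeholders:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML config above (saved as merge-config.yaml) into a validated
# MergeConfiguration object.
with open("merge-config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged model to the output directory.
run_merge(
    config,
    "./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the tokenizer into the output
    ),
)
```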
6-bit and 8-bit quantized versions are also available.