Paper: [Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time](https://arxiv.org/abs/2203.05482)
This is another merge that seems promising to me. My goal was to create an RP model with a natural conversational flow.
This model was merged using the Linear merge method.
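For anyone unfamiliar with the method: a linear merge is simply a weighted average of the source checkpoints' parameters, as described in the model soups paper linked above. Below is a minimal sketch of that idea; the function and argument names are mine, not the API of any particular merge tool.

```python
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of parameter tensors (the "model soup" / linear merge idea).

    state_dicts: list of {param_name: tensor} dicts with identical keys and shapes.
    weights: one float per checkpoint; normalized here to sum to 1 (an assumption
             about the tool's behaviour; with 0.4 and 0.6 they already do).
    """
    total = float(sum(weights))
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum((w / total) * sd[name].float()
                           for w, sd in zip(weights, state_dicts))
    return merged

# Example: merged = linear_merge([state_dict_a, state_dict_b], [0.4, 0.6])
```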
The following models were included in the merge:
* Magnum-Picaro-0.7-v2-12b
* MN-Violet-Lotus-12B
The following YAML configuration was used to produce this model:
models:
  - model: Magnum-Picaro-0.7-v2-12b
    parameters:
      weight: 0.4
  - model: MN-Violet-Lotus-12B
    parameters:
      weight: 0.6
merge_method: linear
dtype: float16
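If you want to reproduce or re-weight the merge by hand, the config above boils down to the following rough sketch using transformers. The names in the YAML are not full Hub repo IDs, so substitute the actual ones; a dedicated merge tool would also copy the tokenizer and model config for you.

```python
import torch
from transformers import AutoModelForCausalLM

# Replace with the full Hub repo IDs of the two source models listed above.
A_ID = "Magnum-Picaro-0.7-v2-12b"
B_ID = "MN-Violet-Lotus-12B"

model_a = AutoModelForCausalLM.from_pretrained(A_ID, torch_dtype=torch.float32)
model_b = AutoModelForCausalLM.from_pretrained(B_ID, torch_dtype=torch.float32)

sd_a, sd_b = model_a.state_dict(), model_b.state_dict()

# weight 0.4 for Magnum-Picaro, 0.6 for MN-Violet-Lotus, as in the YAML above;
# non-float buffers are taken from model A unchanged.
merged = {
    name: (0.4 * sd_a[name] + 0.6 * sd_b[name]) if sd_a[name].is_floating_point() else sd_a[name]
    for name in sd_a
}

model_a.load_state_dict(merged)
model_a.to(torch.float16).save_pretrained("merged-model")  # dtype: float16, as in the config
```

If the merge was produced with a tool like mergekit, the more direct route is to save the YAML to a file and run `mergekit-yaml config.yaml ./merged-model`, which handles sharding and tokenizer copying.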