---
license: apache-2.0
tags:
- merge
- mergekit
- CorticalStack/pastiche-crown-clown-7b-dare-dpo
- CultriX/NeuralTrix-7B-dpo
- CorticalStack/neurotic-crown-clown-7b-ties
---
<img src="shadow_clown.png" alt="Shadow clown logo" width="800" style="margin-left:auto; margin-right:auto; display:block"/>
# shadow-clown-7B-dare
shadow-clown-7B-dare is a DARE TIES merge of the following models using [mergekit](https://github.com/cg123/mergekit), with yam-peleg/Experiment26-7B as the base model:
* [CorticalStack/pastiche-crown-clown-7b-dare-dpo](https://huggingface.co/CorticalStack/pastiche-crown-clown-7b-dare-dpo)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)
* [CorticalStack/neurotic-crown-clown-7b-ties](https://huggingface.co/CorticalStack/neurotic-crown-clown-7b-ties)
See the paper [Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch](https://arxiv.org/abs/2311.03099) for more on the method.
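The intuition behind DARE is that most per-parameter changes from fine-tuning are redundant: each contributing model's delta (fine-tuned weights minus base weights) is randomly sparsified and the survivors rescaled before the weighted merge. The following is a minimal illustrative sketch of that drop-and-rescale step, not mergekit's actual implementation; the `dare` helper and the toy tensor are purely hypothetical.

```python
# Minimal sketch of the DARE drop-and-rescale step on a single delta tensor.
# This is an illustration of the idea, not mergekit's implementation.
import torch

def dare(delta: torch.Tensor, density: float) -> torch.Tensor:
    """Randomly keep `density` of the delta parameters and rescale the rest."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density  # rescaling keeps the expected delta unchanged

# With density 0.52 (as in the configuration below), roughly 48% of each
# model's delta parameters are dropped before the weighted TIES merge.
delta = torch.randn(4, 4)
print(dare(delta, density=0.52))
```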
## 🧩 Configuration
```yaml
models:
  - model: yam-peleg/Experiment26-7B
  - model: CorticalStack/pastiche-crown-clown-7b-dare-dpo
    parameters:
      density: 0.52
      weight: 0.4
  - model: CultriX/NeuralTrix-7B-dpo
    parameters:
      density: 0.52
      weight: 0.2
  - model: CorticalStack/neurotic-crown-clown-7b-ties
    parameters:
      density: 0.52
      weight: 0.3
merge_method: dare_ties
base_model: yam-peleg/Experiment26-7B
parameters:
  int8_mask: true
dtype: bfloat16
```
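
## 💻 Usage

The merge itself can be reproduced by saving the configuration above to a YAML file and running it with mergekit (`mergekit-yaml config.yaml ./output-model-directory`). Below is a minimal sketch of loading the merged model with 🤗 Transformers; the repo id is assumed from the model name and may differ, and the prompt is only an example.

```python
# Minimal sketch: loading the merged model with Transformers.
# The repo id below is an assumption based on the model name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CorticalStack/shadow-clown-7B-dare"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

prompt = "Explain the DARE merging method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```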