# pastiche-crown-clown-7B-dare
pastiche-crown-clown-7B-dare is a DARE merge of the following models using [mergekit](https://github.com/cg123/mergekit):
- [bardsai/jaskier-7b-dpo-v5.6](https://huggingface.co/bardsai/jaskier-7b-dpo-v5.6)
- [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
- [mlabonne/NeuralMonarch-7B](https://huggingface.co/mlabonne/NeuralMonarch-7B)
- [macadeliccc/MBX-7B-v3-DPO](https://huggingface.co/macadeliccc/MBX-7B-v3-DPO)

See the paper [*Language Models are Super Mario: Absorbing Abilities from Homologous Models as a Free Lunch*](https://arxiv.org/abs/2311.03099) for more on the DARE method.
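DARE's core operation can be sketched on a toy tensor: randomly drop a fraction `p` of each fine-tuned model's delta from the base weights, then rescale the surviving entries by `1/(1-p)` so the expected delta is preserved. This is a minimal NumPy sketch of that idea, not mergekit's actual implementation; all names here are illustrative.

```python
import numpy as np

def dare(delta, p, rng):
    # Drop each delta entry with probability p and rescale survivors by
    # 1/(1-p), so the sparsified delta matches the original in expectation.
    keep = rng.random(delta.shape) >= p
    return np.where(keep, delta / (1.0 - p), 0.0)

rng = np.random.default_rng(0)
base = np.zeros(6)                                      # stand-in for base model weights
finetuned = np.array([0.4, -0.2, 0.1, 0.3, -0.5, 0.2])  # stand-in for a fine-tuned model
delta = finetuned - base
# A density of 0.53 (as in the configuration) corresponds to a drop rate p = 0.47.
merged = base + dare(delta, p=0.47, rng=rng)
```

In the full `dare_ties` method, the sparsified deltas from all models are additionally sign-elected and combined with the per-model `weight` values before being added back to the base.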
## 🧩 Configuration
```yaml
models:
  - model: bardsai/jaskier-7b-dpo-v5.6
    # no parameters: this is the base model for dare_ties
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.53
      weight: 0.2
  - model: mlabonne/NeuralMonarch-7B
    parameters:
      density: 0.53
      weight: 0.4
  - model: macadeliccc/MBX-7B-v3-DPO
    parameters:
      density: 0.53
      weight: 0.4
merge_method: dare_ties
base_model: bardsai/jaskier-7b-dpo-v5.6
parameters:
  int8_mask: true
dtype: bfloat16
```
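Assuming the configuration above is saved as `config.yaml`, the merge can be reproduced with mergekit's `mergekit-yaml` entry point; the output directory name is illustrative, and `--cuda` can be dropped on CPU-only machines.

```shell
pip install mergekit
mergekit-yaml config.yaml ./pastiche-crown-clown-7B-dare --cuda
```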