Experimental Merge. I have no idea what the hell I'm doing.
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the DARE TIES merge method, with unsloth/Mistral-Nemo-Instruct-2407 as the base.
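In short: DARE randomly drops a fraction (1 - density) of each fine-tune's parameter deltas against the base and rescales the survivors by 1/density, and TIES then resolves sign disagreements between the sparsified deltas before adding them back onto the base. Here is a toy NumPy sketch of that idea using the density/weight pairs from the config below; it is an illustration only, not mergekit's actual implementation (which works tensor by tensor with its own normalization details), and the deltas are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_sparsify(delta, density, rng):
    """DARE: randomly drop (1 - density) of the delta entries, then
    rescale survivors by 1/density to preserve the expected value."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

# Toy "task vectors": each fine-tune's parameters minus the base's.
# (delta, density, weight) triples mirror the YAML config below.
deltas = {
    "mini-magnum":  (rng.normal(size=8), 0.5, 0.2),
    "Lumimaid":     (rng.normal(size=8), 0.5, 0.5),
    "norefuse-OAS": (rng.normal(size=8), 0.6, 1.0),
    "dolphin":      (rng.normal(size=8), 0.6, 0.5),
}

# Apply DARE per model, then scale each delta by its merge weight.
sparse = np.stack([w * dare_sparsify(d, dens, rng)
                   for d, dens, w in deltas.values()])

# TIES: elect a sign per parameter from the weighted sum, keep only
# the deltas that agree with it, and average those into one delta.
elected = np.sign(sparse.sum(axis=0))
agree = np.sign(sparse) == elected
merged_delta = (np.where(agree, sparse, 0.0).sum(axis=0)
                / np.maximum(agree.sum(axis=0), 1))

base = np.zeros(8)  # stand-in for the base model weights
merged = base + merged_delta
print(merged)
```

Intuitively, `density` controls how aggressively each model's delta is sparsified, and `weight` controls how much that model contributes to the merged result.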
### Models Merged
The following models were included in the merge:
- cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
- Kooten/Mistral-Nemo-Instruct-2407-norefuse-OAS
- intervitens/mini-magnum-12b-v1.1
- NeverSleep/Lumimaid-v0.2-12B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: unsloth/Mistral-Nemo-Instruct-2407
models:
  - model: intervitens/mini-magnum-12b-v1.1
    parameters:
      density: 0.5
      weight: 0.2
  - model: NeverSleep/Lumimaid-v0.2-12B
    parameters:
      density: 0.5
      weight: 0.5
  - model: Kooten/Mistral-Nemo-Instruct-2407-norefuse-OAS
    parameters:
      density: 0.6
      weight: 1.0
  - model: cognitivecomputations/dolphin-2.9.3-mistral-nemo-12b
    parameters:
      density: 0.6
      weight: 0.5
merge_method: dare_ties
dtype: float16
```
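If you want to reproduce the merge, here is a sketch using mergekit's Python API. The interface shown follows mergekit's README and may drift between versions; `config.yml` and `./merged-model` are placeholder paths. The CLI equivalent is `mergekit-yaml config.yml ./merged-model`.

```python
# Sketch: run the YAML recipe above through mergekit's Python API.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge recipe (the YAML shown above, saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./merged-model",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```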