# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details
### Merge Method

This model was merged using the DARE TIES merge method, with MrRobotoAI/llama3-8B-Special-Dark-v2.0 as the base.
### Models Merged
The following models were included in the merge:
- OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0
- refuelai/Llama-3-Refueled
- TIGER-Lab/MAmmoTH2-8B-Plus
- SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
- turboderp/llama3-turbcat-instruct-8b
- openchat/openchat-3.6-8b-20240522
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: openchat/openchat-3.6-8b-20240522
    parameters:
      weight: 0.1429
      density: 0.9
  - model: OwenArli/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      weight: 0.1429
      density: 0.9
  - model: refuelai/Llama-3-Refueled
    parameters:
      weight: 0.1429
      density: 0.9
  - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_Alpha
    parameters:
      weight: 0.1429
      density: 0.9
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      weight: 0.1429
      density: 0.9
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      weight: 0.1429
      density: 0.9
  - model: MrRobotoAI/llama3-8B-Special-Dark-v2.0
    parameters:
      weight: 0.1429
      density: 0.9
merge_method: dare_ties
base_model: MrRobotoAI/llama3-8B-Special-Dark-v2.0
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```