---
base_model:
- AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
- DevQuasar/llama3_8b_chat_brainstorm
- NousResearch/Hermes-2-Theta-Llama-3-8B
- chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
- openchat/openchat-3.6-8b-20240522
- nbeerbower/llama-3-gutenberg-8B
- saishf/Neural-SOVLish-Devil-8B-L3
- NousResearch/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the breadcrumbs_ties merge method, with [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct) as the base.
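For intuition: breadcrumbs_ties builds a task vector for each model (its weights minus the base weights), prunes both the largest-magnitude outliers (roughly the `gamma` fraction) and the smallest entries so that about the configured `density` of each delta survives, then combines the surviving deltas with a TIES-style sign consensus before adding them back onto the base. The sketch below illustrates this for a single tensor; it is a simplification under those assumptions, not mergekit's actual implementation, and the helper names are hypothetical.

```python
import torch

def breadcrumbs_mask(delta: torch.Tensor, density: float, gamma: float) -> torch.Tensor:
    # Hypothetical helper, not mergekit's code: rank the entries of a task
    # vector by magnitude, drop the top `gamma` fraction as outliers, and
    # keep the next `density` fraction; everything else is zeroed out.
    flat = delta.abs().flatten()
    order = torch.argsort(flat, descending=True)
    n_outlier = int(gamma * flat.numel())
    n_keep = int(density * flat.numel())
    mask = torch.zeros_like(flat, dtype=torch.bool)
    mask[order[n_outlier:n_outlier + n_keep]] = True
    return mask.view_as(delta)

def merge_breadcrumbs_ties(base: torch.Tensor, deltas: list[torch.Tensor],
                           weights: list[float], density: float,
                           gamma: float) -> torch.Tensor:
    # Sparsify each weighted delta, resolve per-element sign disagreements
    # by majority (the TIES step), then add the agreeing deltas to the base.
    sparse = [breadcrumbs_mask(d, density, gamma) * d * w
              for d, w in zip(deltas, weights)]
    stacked = torch.stack(sparse)
    sign = torch.sign(stacked.sum(dim=0))   # elected elementwise sign
    agree = torch.sign(stacked) == sign     # drop entries fighting the majority
    return base + (stacked * agree).sum(dim=0)
```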
### Models Merged
The following models were included in the merge:
* [AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0](https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0)
* [DevQuasar/llama3_8b_chat_brainstorm](https://huggingface.co/DevQuasar/llama3_8b_chat_brainstorm)
* [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)
* [chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO](https://huggingface.co/chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: NousResearch/Hermes-2-Theta-Llama-3-8B # 6/10
    parameters:
      density: 0.4
      weight: 0.15
  - model: openchat/openchat-3.6-8b-20240522 # 5/10
    parameters:
      density: 0.3
      weight: 0.11
  - model: DevQuasar/llama3_8b_chat_brainstorm # 6/10
    parameters:
      density: 0.4
      weight: 0.15
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0 # 6/10
    parameters:
      density: 0.4
      weight: 0.15
  - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO # 6/10
    parameters:
      density: 0.4
      weight: 0.15
  - model: saishf/Neural-SOVLish-Devil-8B-L3 # 6/10
    parameters:
      density: 0.4
      weight: 0.15
  - model: nbeerbower/llama-3-gutenberg-8B # 6/10
    parameters:
      density: 0.4
      weight: 0.15
merge_method: breadcrumbs_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  rescale: true
  gamma: 0.01
dtype: float16
```
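Saved as, say, `config.yaml`, this configuration should be reproducible with mergekit's CLI (`mergekit-yaml config.yaml ./output-model`); the file name and output path are placeholders. Below is a minimal sketch of loading and prompting the resulting checkpoint with transformers, assuming the merged model inherits the Llama-3 chat template; the model path is a placeholder for the local output directory or this repo's id.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "./output-model"  # placeholder: local merge output or HF repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Format a single-turn prompt with the chat template and generate a reply.
messages = [{"role": "user", "content": "Write a haiku about model merging."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```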