---
base_model:
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- DeepMount00/Llama-3-8b-Ita
- meta-llama/Meta-Llama-3-8B-Instruct
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
|
# dare_linear

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) as the base model. DARE treats each fine-tuned model as a set of deltas from the base, randomly drops a fraction of each delta (controlled by `density`), rescales the surviving entries to compensate, and then combines the sparsified deltas linearly.
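To make the mechanics concrete, here is a minimal PyTorch sketch of the drop-and-rescale step followed by a linear combination of task vectors. This is an illustration only, not mergekit's actual implementation; the helper names, the per-tensor loop, and the weight normalization are simplifying assumptions.

```python
import torch

def dare_sparsify(delta: torch.Tensor, density: float) -> torch.Tensor:
    """DARE drop-and-rescale: keep each delta element with probability
    `density`, then rescale survivors by 1/density so the expected
    magnitude of the task vector is preserved."""
    mask = torch.bernoulli(torch.full_like(delta, density))
    return delta * mask / density

def dare_linear_merge(base, finetuned, weights, density=0.5):
    """Illustrative per-tensor dare_linear merge: base plus a weighted
    average of sparsified deltas. `base` and each entry of `finetuned`
    are name -> tensor state dicts with matching shapes."""
    total = sum(weights)
    merged = {}
    for name, base_t in base.items():
        acc = torch.zeros_like(base_t)
        for sd, w in zip(finetuned, weights):
            acc += w * dare_sparsify(sd[name] - base_t, density)
        # Normalizing by the weight sum is an assumption here; mergekit
        # exposes this behavior via its `normalize` option.
        merged[name] = base_t + acc / total
    return merged
```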
|
|
|
### Models Merged

The following models were included in the merge:

* [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)
* [DeepMount00/Llama-3-8b-Ita](https://huggingface.co/DeepMount00/Llama-3-8b-Ita)
* [nbeerbower/llama-3-gutenberg-8B](https://huggingface.co/nbeerbower/llama-3-gutenberg-8B)
* [jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0](https://huggingface.co/jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0)
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
|
|
|
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.5
      weight: 1.0
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      density: 0.5
      weight: 1.0
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: 1.0
  - model: DeepMount00/Llama-3-8b-Ita
    parameters:
      density: 0.5
      weight: 1.0
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters:
      density: 0.5
      weight: 1.0
  - model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
    parameters:
      density: 0.5
      weight: 1.0
merge_method: dare_linear
tokenizer_source: union
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
  int8_mask: true
dtype: bfloat16
```
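
As a usage sketch (assumptions, not part of the original card): if the configuration above is saved as `config.yaml`, the merge can be reproduced with mergekit's CLI and the resulting checkpoint loaded with `transformers`. The output directory name and the prompt are placeholders, and downloading the gated meta-llama base during the merge requires accepting its license and authenticating with Hugging Face.

```python
# Reproduce the merge (shell), assuming the YAML above is saved as config.yaml:
#   pip install mergekit
#   mergekit-yaml config.yaml ./dare_linear-merged
#
# Then load the merged checkpoint; "./dare_linear-merged" is a placeholder path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./dare_linear-merged")
model = AutoModelForCausalLM.from_pretrained(
    "./dare_linear-merged",
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",           # requires the `accelerate` package
)

# Llama 3 Instruct derivatives expect the chat template.
messages = [{"role": "user", "content": "Summarize the DARE merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```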