---
base_model:
- Undi95/Meta-Llama-3-8B-Instruct-hf
- mpasila/Llama-3-LimaRP-Instruct-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
|
# Llama-3-MetaRP-V2-8B |
|
|
|
This model might have issues with the prompt template, because Unsloth messed up the prompt format for Llama 3 (it added `gpt` and `user` roles that did not exist in the original Llama 3 Instruct format).
|
|
|
That appears to have degraded some of the model's prompt-following ability, so I wonder if there's a better way to merge models with the Instruct model.
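For reference, the original Llama 3 Instruct format wraps each turn in `<|start_header_id|>role<|end_header_id|>` headers and only uses the roles `system`, `user`, and `assistant`. A quick way to see which template a tokenizer actually renders is a sketch like the one below (the repo id `mpasila/Llama-3-MetaRP-V2-8B` is an assumption; adjust it to wherever this model is hosted):

```python
from transformers import AutoTokenizer

# Assumed repo id for this model; change it if the model lives elsewhere.
tokenizer = AutoTokenizer.from_pretrained("mpasila/Llama-3-MetaRP-V2-8B")

messages = [{"role": "user", "content": "Hello!"}]

# Render the chat template as plain text to inspect the raw prompt format.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
# The original Llama 3 Instruct format should look like:
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>
#
# Hello!<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```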
|
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
|
|
## Merge Details |
|
### Merge Method |
|
|
|
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) as a base. |
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [mpasila/Llama-3-LimaRP-Instruct-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-Instruct-8B) |
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml
models:
  - model: mpasila/Llama-3-LimaRP-Instruct-8B
    parameters:
      density: 0.15
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: dare_ties
base_model: Undi95/Meta-Llama-3-8B-Instruct-hf
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
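To reproduce the merge, the config above can be fed to mergekit. The sketch below uses mergekit's documented Python entry point; option names may differ between mergekit versions, and the output path is just an example:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above (saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the DARE-TIES merge; the output directory is an example path.
run_merge(
    merge_config,
    out_path="./Llama-3-MetaRP-V2-8B",
    options=MergeOptions(cuda=torch.cuda.is_available()),
)
```

The same config file also works with mergekit's `mergekit-yaml` command-line tool.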