---
base_model:
- NousResearch/Meta-Llama-3-8B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
|
|
|
This model is intended as a base for further finetuning; as-is, its output quality is inconsistent. It was made using a new slicing structure I call "ripple merge", which passes backwards and forwards through the model's layers.

Other frankenmerge methods were failing at sizes over 11B.

---
|
|
|
# Llama-3-15b-Instruct

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the passthrough merge method.

### Models Merged

The following models were included in the merge:

* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:
|
|
|
```yaml
slices:
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [0, 15]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [14, 15]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [13, 14]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [12, 13]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [11, 12]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [10, 11]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [9, 10]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [8, 23]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [21, 22]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [20, 21]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [19, 20]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [18, 19]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [17, 18]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [16, 17]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [15, 16]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [14, 15]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [13, 14]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [12, 13]
- sources:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
    layer_range: [12, 32]

merge_method: passthrough
dtype: float16
```
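As a sanity check, the slice ranges above can be summed to see how many decoder layers the merged model ends up with. This is a minimal sketch, assuming mergekit's half-open interpretation of `layer_range` (i.e. `[0, 15]` selects layers 0 through 14); the ranges are copied verbatim from the configuration:

```python
# Sum the half-open layer_range spans from the ripple-merge config above
# to count the decoder layers in the merged model.
ranges = [
    (0, 15),                                 # forward: layers 0..14
    (14, 15), (13, 14), (12, 13),            # ripple backward
    (11, 12), (10, 11), (9, 10),
    (8, 23),                                 # forward again: layers 8..22
    (21, 22), (20, 21), (19, 20), (18, 19),  # ripple backward
    (17, 18), (16, 17), (15, 16), (14, 15),
    (13, 14), (12, 13),
    (12, 32),                                # forward to the final layer
]

# Flatten into the order in which source layers appear in the merged stack.
layer_order = [i for lo, hi in ranges for i in range(lo, hi)]
print(len(layer_order))  # 66 decoder layers
```

66 decoder layers of an 8B Llama-3 (32 layers) puts the merged model in the ~15B parameter range, consistent with the model name.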
|
|