---
base_model:
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- meta-llama/Meta-Llama-3-8B-Instruct
- DeepMount00/Llama-3-8b-Ita
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
library_name: transformers
tags:
- mergekit
- merge
---
# linear
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method
This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method, with meta-llama/Meta-Llama-3-8B-Instruct as the base model.
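For intuition, a linear merge computes each parameter of the output model as a weight-normalized average of the corresponding parameters of the source models. The sketch below illustrates that core idea on plain state dicts; it is a simplified illustration, not mergekit's actual implementation, which additionally handles checkpoint sharding, dtype casting, and the union tokenizer.

```python
# Simplified illustration of a linear merge over in-memory state dicts.
# mergekit's real implementation also handles shards, dtypes, and the
# union tokenizer; this shows only the core weighted average.
import torch

def linear_merge(state_dicts: list[dict[str, torch.Tensor]],
                 weights: list[float]) -> dict[str, torch.Tensor]:
    """Merge matching tensors as a weighted average, normalized by total weight."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(
            w * sd[name].float() for sd, w in zip(state_dicts, weights)
        ) / total
    return merged
```

With every weight set to 1.0, as in the configuration below, this reduces to a plain average of the six checkpoints.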
### Models Merged

The following models were included in the merge:
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- nbeerbower/llama-3-gutenberg-8B
- jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
- DeepMount00/Llama-3-8b-Ita
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      density: 0.5
      weight: 1.0
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
    parameters:
      density: 0.5
      weight: 1.0
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: 1.0
  - model: DeepMount00/Llama-3-8b-Ita
    parameters:
      density: 0.5
      weight: 1.0
  - model: nbeerbower/llama-3-gutenberg-8B
    parameters:
      density: 0.5
      weight: 1.0
  - model: jpacifico/French-Alpaca-Llama3-8B-Instruct-v1.0
    parameters:
      density: 0.5
      weight: 1.0
merge_method: linear
tokenizer_source: union
base_model: meta-llama/Meta-Llama-3-8B-Instruct
parameters:
  int8_mask: true
dtype: bfloat16
```
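The merge can in principle be reproduced by saving the configuration above as `config.yaml` and running `mergekit-yaml config.yaml ./output-model`. Once merged (or downloaded), the result loads like any other Llama-3 checkpoint via transformers. The snippet below is a minimal usage sketch; `"path/to/merged-model"` is a placeholder for the actual output directory or Hub repo id.

```python
# Minimal usage sketch; "path/to/merged-model" is a placeholder for the
# merged output directory or the model's Hugging Face Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/merged-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3-Instruct models use a chat template; apply it before generating.
messages = [{"role": "user", "content": "Bonjour ! Qu'est-ce que tu sais faire ?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```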