This model is part of the **llama 3 experiments** collection: merge and finetune experiments with Llama 3, aiming for decensorship and benchmark performance.
This model is based on Llama-3-8b and is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Model Stock merge method, with nbeerbower/llama-3-slerp-kraut-dragon-8B as the base.
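For intuition, here is a heavily simplified per-tensor sketch of the Model Stock idea, not mergekit's actual implementation (the function name and the pairwise-cosine estimate of the angle are assumptions): the fine-tuned weights are averaged, then interpolated back toward the base weights by a ratio derived from the angle between the fine-tuned deltas.

```python
# Illustrative sketch of the Model Stock merge idea (Jang et al., 2024).
# Assumptions: per-tensor merging, k >= 2 fine-tuned models, and cos(theta)
# estimated as the mean pairwise cosine similarity of the fine-tuned deltas.
import torch
import torch.nn.functional as F

def model_stock_layer(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Merge one weight tensor from k fine-tuned models with its base tensor."""
    k = len(finetuned)
    deltas = [w - base for w in finetuned]

    # Estimate cos(theta) between fine-tuned deltas via pairwise cosine similarity.
    cos_vals = [
        F.cosine_similarity(deltas[i].flatten(), deltas[j].flatten(), dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(cos_vals).mean().clamp(min=0.0)

    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = (k * cos_theta) / (1 + (k - 1) * cos_theta)

    # Average the fine-tuned weights, then pull them back toward the base.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```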
The following models were included in the merge:

- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- DeepMount00/Llama-3-8b-Ita
- mlabonne/ChimeraLlama-3-8B-v3
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
  - model: DeepMount00/Llama-3-8b-Ita
  - model: mlabonne/ChimeraLlama-3-8B-v3
merge_method: model_stock
base_model: nbeerbower/llama-3-slerp-kraut-dragon-8B
dtype: bfloat16
```
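A configuration like this can be run with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./output`). Below is a minimal usage sketch for the resulting model with Hugging Face transformers; the repository ID is a placeholder, so substitute the actual model ID for this merge.

```python
# Minimal usage sketch; the repo ID below is a placeholder, not this model's actual ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/your-merged-llama-3-8b"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize the Model Stock merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```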