---
base_model:
- grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
- tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1
library_name: transformers
tags:
- mergekit
- merge
license: cc-by-nc-4.0
pipeline_tag: text-generation
---
# llama-3-Nephilim-v3-8B

This repo contains a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

GGUF quants are available [here](https://huggingface.co/grimjim/llama-3-Nephilim-v3-8B-GGUF).

Although none of the components of this merge were trained or intended for roleplay, the model can be used effectively in that role.

Tested with temperature 1.0 and minP 0.01. This model leans toward being creative, so adjust temperature upward or downward as desired. A minimal inference sketch using these settings appears at the end of this card.

The merged model exhibits some initial format-consistency issues, but these can be mitigated with an Instruct prompt. Additionally, prompt steering was employed to vary the text generation output and avoid some of the common failings observed during text generation with Llama 3 8B models. The complete Instruct prompt used during testing is available below.

- [context template](https://huggingface.co/debased-ai/SillyTavern-settings/blob/main/advanced_formatting/context_template/Llama%203%20Instruct%20Immersed2.json)
- [instruct prompt](https://huggingface.co/debased-ai/SillyTavern-settings/blob/main/advanced_formatting/instruct_mode/Llama%203%20Instruct%20Immersed2.json)

Built with Meta Llama 3.

## Merge Details

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge](https://huggingface.co/grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge) as the base.

### Models Merged

The following models were included in the merge:
* [tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1](https://huggingface.co/tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: grimjim/Llama-3-Instruct-8B-SPPO-Iter3-SimPO-merge
  - layer_range: [0, 32]
    model: tokyotech-llm/Llama-3-Swallow-8B-Instruct-v0.1
    parameters:
      weight: 0.1
```
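With `normalize: false` and a single donor model, the task arithmetic method above reduces to adding a scaled task vector (donor weights minus base weights) onto the base. The sketch below is an illustrative simplification over plain PyTorch state dicts, not mergekit's actual implementation:

```python
import torch

def task_arithmetic_merge(base_sd: dict, donor_sd: dict, weight: float = 0.1) -> dict:
    """Illustrative task arithmetic: merged = base + weight * (donor - base)."""
    merged = {}
    for name, base_param in base_sd.items():
        # The "task vector" is the donor's delta from the shared base model.
        task_vector = donor_sd[name].to(base_param.dtype) - base_param
        merged[name] = base_param + weight * task_vector
    return merged
```

At `weight: 0.1`, the Swallow model contributes only a light touch of its fine-tuning delta on top of the SPPO/SimPO base.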
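To reproduce the merge, the configuration above can be saved as `config.yaml` and passed to mergekit. A minimal sketch, assuming mergekit's documented Python entry points (`MergeConfiguration`, `run_merge`); the `mergekit-yaml` command-line tool is the equivalent one-step alternative:

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML merge recipe shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./llama-3-Nephilim-v3-8B",  # output directory (assumption)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```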
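Finally, a minimal inference sketch using the sampling settings from the testing notes above (temperature 1.0, minP 0.01). It assumes the repo id `grimjim/llama-3-Nephilim-v3-8B` and a `transformers` version recent enough to support the `min_p` sampling parameter:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/llama-3-Nephilim-v3-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Describe a moonlit garden in three sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling settings used during testing: temperature 1.0, minP 0.01.
output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.01,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```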