---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# llama3-8B-DarkIdol-2.3-Uncensored-32K

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with ./llama3-8B-DarkIdol-2.3b as the base model.

### Models Merged

The following models were included in the merge:
* ./Meta-Llama-3-8B-abliterated
* ./Llama-3-8B-LexiFun-Uncensored-V1
* ./Llama-3-8B-Lexi-Uncensored
* ./Llama-3-8B-Lexi-Smaug-Uncensored
* ./Configurable-Hermes-2-Pro-Llama-3-8B
* ./Unsafe-Llama-3-8B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./Meta-Llama-3-8B-abliterated
  - model: ./Llama-3-8B-LexiFun-Uncensored-V1
  - model: ./Llama-3-8B-Lexi-Uncensored
  - model: ./Llama-3-8B-Lexi-Smaug-Uncensored
  - model: ./Unsafe-Llama-3-8B
  - model: ./Configurable-Hermes-2-Pro-Llama-3-8B
  - model: ./llama3-8B-DarkIdol-2.3b
merge_method: model_stock
base_model: ./llama3-8B-DarkIdol-2.3b
dtype: bfloat16
```
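
A configuration like the one above can be executed with mergekit's CLI or from Python. The sketch below is illustrative, not the exact command used to build this model: it assumes mergekit's Python entry points (`MergeConfiguration`, `MergeOptions`, `run_merge`), a local file `merge-config.yaml` containing the YAML above, and placeholder paths chosen for this example.

```python
# Minimal sketch of running a mergekit merge from Python.
# Assumes mergekit's Python API (MergeConfiguration, MergeOptions, run_merge);
# the config path and output directory are illustrative placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_PATH = "merge-config.yaml"  # the YAML configuration shown above
OUTPUT_PATH = "./llama3-8B-DarkIdol-2.3-Uncensored-32K"

# Parse the YAML config into mergekit's configuration object.
with open(CONFIG_PATH, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the model_stock merge and write the merged weights to OUTPUT_PATH.
run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,             # copy tokenizer files into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```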