---
base_model:
- bluuwhale/L3-SthenoMaidBlackroot-8B-V1
- bunnycore/Llama-3.1-8B-OmniMatrix
- bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1
- Casual-Autopsy/L3-Super-Nova-RP-8B
- Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
- d0rj/Llama-3-8B-saiga-suzume-ties
- DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
- DreadPoor/CoolerCoder-8B-Model_Stock
- DreadPoor/L3.1-BaeZel-8B-Della
- DreadPoor/Trinas_Nectar-8B-model_stock
- IlyaGusev/saiga_llama3_8b
- invisietch/EtherealRainbow-v0.3-8B
- invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
- jeiku/Chaos_RP_l3_8B
- mlabonne/Daredevil-8B
- MrRobotoAI/Loki-.Epic_Fiction.-8b
- PJMixers/LLaMa-3-CursedStock-v2.0-8B
- ResplendentAI/Nymph_8B
- rityak/L3.1-DarkStock-8B
- saishf/Neural-SOVLish-Devil-8B-L3
- saishf/SOVL-Mega-Mash-V2-L3-8B
- sethuiyer/Dr.Samantha-8B
- SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
- v000000/L3-8B-BlueSerpentine
- v000000/L3.1-Storniitova-8B
- win10/ArliAI-RPMax-v1.3-merge-8B
- ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k
library_name: transformers
tags:
- mergekit
- merge
- bfloat16
- safetensors
- 8b
- chat
- creative
- roleplay
- conversational
- not-for-all-audiences
language:
- en
- ru
---
# CursedMatrix-8B-v9
> The long journey from despair to acceptable perfection.
![CursedMatrixLogo256.png](https://cdn-uploads.huggingface.co/production/uploads/673125091920e70ac26c8a2e/8TFyICKPCNowo3jf3Q7y2.png)
This is an interesting merge of **27 cool models**, created using [mergekit](https://github.com/arcee-ai/mergekit).
Enjoy exploring :)
## Merge Details
### Method
This model was built through a multi-step process: intermediate merges were created, then remerged, with several model variations tried along the way to get the best result.
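Most of the merge methods used here (`ties`, `dare_ties`, `della`, `task_arithmetic`) build on the same core idea: each donor model contributes a weighted "task vector" — its delta from the base model. A minimal numpy sketch of that idea (an illustration only, not mergekit's actual implementation, which works per tensor and adds sparsification and sign-election on top):

```python
import numpy as np

def merge_task_vectors(base, donors, weights):
    """Add weighted deltas (donor - base) onto the base weights.

    Simplified task-arithmetic merging; mergekit applies this
    per tensor, with per-layer weight/density schedules.
    """
    merged = base.copy()
    for donor, weight in zip(donors, weights):
        merged += weight * (donor - base)
    return merged

# Toy example on a single stand-in "weight tensor"
base = np.zeros(4)
donor_a = np.ones(4)        # e.g. a roleplay-tuned model
donor_b = np.full(4, -2.0)  # e.g. a coder-tuned model
merged = merge_task_vectors(base, [donor_a, donor_b], [0.8, 0.3])
```

The per-layer `weight` and `density` lists in the configs below vary these contributions across the depth of the network, which is why they appear as long triangular-wave schedules rather than single scalars.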
### Models
The following models were included in the merge:
* [bluuwhale/L3-SthenoMaidBlackroot-8B-V1](https://huggingface.co/bluuwhale/L3-SthenoMaidBlackroot-8B-V1)
* [bunnycore/Llama-3.1-8B-OmniMatrix](https://huggingface.co/bunnycore/Llama-3.1-8B-OmniMatrix)
* [bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1)
* [Casual-Autopsy/L3-Super-Nova-RP-8B](https://huggingface.co/Casual-Autopsy/L3-Super-Nova-RP-8B)
* [Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B](https://huggingface.co/Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B)
* [d0rj/Llama-3-8B-saiga-suzume-ties](https://huggingface.co/d0rj/Llama-3-8B-saiga-suzume-ties)
* [DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B](https://huggingface.co/DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B)
* [DreadPoor/CoolerCoder-8B-Model_Stock](https://huggingface.co/DreadPoor/CoolerCoder-8B-Model_Stock)
* [DreadPoor/L3.1-BaeZel-8B-Della](https://huggingface.co/DreadPoor/L3.1-BaeZel-8B-Della)
* [DreadPoor/Trinas_Nectar-8B-model_stock](https://huggingface.co/DreadPoor/Trinas_Nectar-8B-model_stock)
* [IlyaGusev/saiga_llama3_8b](https://huggingface.co/IlyaGusev/saiga_llama3_8b)
* [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)
* [invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B](https://huggingface.co/invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B)
* [jeiku/Chaos_RP_l3_8B](https://huggingface.co/jeiku/Chaos_RP_l3_8B)
* [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B)
* [MrRobotoAI/Loki-.Epic_Fiction.-8b](https://huggingface.co/MrRobotoAI/Loki-.Epic_Fiction.-8b)
* [PJMixers/LLaMa-3-CursedStock-v2.0-8B](https://huggingface.co/PJMixers/LLaMa-3-CursedStock-v2.0-8B)
* [ResplendentAI/Nymph_8B](https://huggingface.co/ResplendentAI/Nymph_8B)
* [rityak/L3.1-DarkStock-8B](https://huggingface.co/rityak/L3.1-DarkStock-8B)
* [saishf/Neural-SOVLish-Devil-8B-L3](https://huggingface.co/saishf/Neural-SOVLish-Devil-8B-L3)
* [saishf/SOVL-Mega-Mash-V2-L3-8B](https://huggingface.co/saishf/SOVL-Mega-Mash-V2-L3-8B)
* [sethuiyer/Dr.Samantha-8B](https://huggingface.co/sethuiyer/Dr.Samantha-8B)
* [SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA)
* [v000000/L3-8B-BlueSerpentine](https://huggingface.co/v000000/L3-8B-BlueSerpentine)
* [v000000/L3.1-Storniitova-8B](https://huggingface.co/v000000/L3.1-Storniitova-8B)
* [win10/ArliAI-RPMax-v1.3-merge-8B](https://huggingface.co/win10/ArliAI-RPMax-v1.3-merge-8B)
* [ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k](https://huggingface.co/ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k)
### Configuration
The following YAML configurations were used to produce this model:
```yaml
### ::: Generation 1 merges :
# CursedMatrix-8B-v1
models:
  - model: bunnycore/Llama-3.1-8B-OmniMatrix
    parameters:
      density: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
      weight: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
  - model: PJMixers/LLaMa-3-CursedStock-v2.0-8B
    parameters:
      density: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
      weight: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
merge_method: ties
base_model: saishf/SOVL-Mega-Mash-V2-L3-8B
dtype: bfloat16
# TitanPlanet-8B-v1
models:
  - model: bunnycore/Llama-3.1-8B-TitanFusion-Mix-2.1
    parameters:
      density: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
      weight: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
  - model: MrRobotoAI/Loki-.Epic_Fiction.-8b
    parameters:
      density: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
      weight: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
merge_method: ties
base_model: DavidAU/L3.1-Dark-Planet-SpinFire-Uncensored-8B
dtype: bfloat16
# NeuralCoder-8B-v1
models:
  - model: sethuiyer/Dr.Samantha-8B
    parameters:
      density: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
      weight: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
  - model: DreadPoor/CoolerCoder-8B-Model_Stock
    parameters:
      density: [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
      weight: [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
merge_method: ties
base_model: saishf/Neural-SOVLish-Devil-8B-L3
dtype: bfloat16
# EtherealNymph-8B-v1
models:
  - model: invisietch/EtherealRainbow-v0.3-8B
  - model: ResplendentAI/Nymph_8B
merge_method: slerp
base_model: invisietch/EtherealRainbow-v0.3-8B
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
# UmbralDevil-8B-v1
models:
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
  - model: mlabonne/Daredevil-8B
merge_method: slerp
base_model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
# EvilMind-8B-v1
models:
  - model: mlabonne/Daredevil-8B
    parameters:
      weight: [1.0, 0.3, 0.1, 0.0]
      density: [0.7, 0.2]
  - model: Casual-Autopsy/L3-Umbral-Mind-RP-v3.0-8B
    parameters:
      weight: [0.1, 0.9, 0.1]
      density: 0.5
  - model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      weight: [0.0, 0.1, 0.3, 1.0]
      density: [0.2, 0.7]
merge_method: della_linear
parameters:
  epsilon: 0.15
  lambda: 1
base_model: ResplendentAI/Nymph_8B
dtype: bfloat16
### ::: Generation 2 merges :
# DevilMind-8B-v1
models:
  - model: F:/EvilMind-8B-v1
  - model: F:/UmbralDevil-8B-v1
merge_method: slerp
base_model: F:/EvilMind-8B-v1
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
# TitanNymph-8B-v1
models:
  - model: F:/TitanPlanet-8B-v1
  - model: F:/EtherealNymph-8B-v1
merge_method: slerp
base_model: F:/TitanPlanet-8B-v1
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
# CursedMatrix-8B-v2
models:
  - model: F:/TitanPlanet-8B-v1
    parameters:
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
  - model: F:/NeuralCoder-8B-v1
    parameters:
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
merge_method: dare_ties
base_model: F:/CursedMatrix-8B-v1
dtype: bfloat16
# CursedMatrix-8B-v3
models:
  - model: F:/TitanPlanet-8B-v1
  - model: F:/CursedMatrix-8B-v1
  - model: F:/EtherealNymph-8B-v1
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v2
dtype: bfloat16
### ::: Generation 3 merges :
# CursedMatrix-8B-v4
models:
  - model: F:/CursedMatrix-8B-v3
    parameters:
      weight: 0.8
  - model: F:/TitanNymph-8B-v1
    parameters:
      weight: 0.4
  - model: DreadPoor/Trinas_Nectar-8B-model_stock
    parameters:
      weight: 0.3
  - model: F:/DevilMind-8B-v1
    parameters:
      weight: 0.2
merge_method: task_arithmetic
base_model: F:/CursedMatrix-8B-v3
dtype: bfloat16
# CursedMatrix-8B-v4-rev2
models:
  - model: F:/CursedMatrix-8B-v3
    parameters:
      weight: 0.8
  - model: F:/TitanNymph-8B-v1
    parameters:
      weight: 0.4
  - model: DreadPoor/Trinas_Nectar-8B-model_stock
    parameters:
      weight: 0.3
  - model: F:/DevilMind-8B-v1
    parameters:
      weight: 0.2
merge_method: task_arithmetic
base_model: F:/CursedMatrix-8B-v1
dtype: bfloat16
# CursedMatrix-8B-v5
models:
  - model: F:/CursedMatrix-8B-v4-rev2
merge_method: slerp
base_model: F:/CursedMatrix-8B-v4
dtype: bfloat16
parameters:
  t: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
### ::: Generation 4 merges :
# CursedMatrix-8B-v6
models:
  - model: jeiku/Chaos_RP_l3_8B
  - model: ZeroXClem/Llama-3-8B-ProLong-SAO-Roleplay-512k
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v5
dtype: bfloat16
# CursedMatrix-8B-v6-rev2
models:
  - model: win10/ArliAI-RPMax-v1.3-merge-8B
  - model: v000000/L3.1-Storniitova-8B
  - model: d0rj/Llama-3-8B-saiga-suzume-ties
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v5
dtype: bfloat16
# CursedMatrix-8B-v6-rev3
models:
  - model: Casual-Autopsy/L3-Super-Nova-RP-8B
  - model: IlyaGusev/saiga_llama3_8b
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v5
dtype: bfloat16
# CursedMatrix-8B-v7
models:
  - model: F:/CursedMatrix-8B-v6
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.05, 0.25]
  - model: F:/CursedMatrix-8B-v6-rev3
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.25, 0.05]
merge_method: ties
base_model: F:/CursedMatrix-8B-v6-rev2
dtype: bfloat16
# CursedMatrix-8B-v8
models:
  - model: F:/CursedMatrix-8B-v6
  - model: F:/CursedMatrix-8B-v6-rev2
  - model: F:/CursedMatrix-8B-v6-rev3
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v7
dtype: bfloat16
### ::: Generation 5 merges :
# Cursed-DarkRainbow-8B-v1
models:
  - model: invisietch/L3.1-EtherealRainbow-v1.0-rc1-8B
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: rityak/L3.1-DarkStock-8B
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16
# Cursed-BlueBaezel-8B-v1
models:
  - model: v000000/L3-8B-BlueSerpentine
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: DreadPoor/L3.1-BaeZel-8B-Della
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16
# Cursed-SuzumeMaid-8B-v1
models:
  - model: d0rj/Llama-3-8B-saiga-suzume-ties
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: bluuwhale/L3-SthenoMaidBlackroot-8B-V1
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16
# Cursed-UnalignedSaiga-8B-v1
models:
  - model: SicariusSicariiStuff/LLAMA-3_8B_Unaligned_BETA
    parameters:
      weight: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
      density: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
  - model: IlyaGusev/saiga_llama3_8b
    parameters:
      weight: [0.5, 0.4, 0.6, 0.3, 0.7, 0.2, 0.8, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5]
      density: [0.5, 0.6, 0.4, 0.7, 0.3, 0.8, 0.2, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4, 0.5]
merge_method: della
parameters:
  epsilon: 0.123456789
  lambda: 0.987654321
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16
# CursedMatrix-8B-v9
# Final model...
models:
  - model: F:/Cursed-UnalignedSaiga-8B-v1
  - model: F:/Cursed-DarkRainbow-8B-v1
  - model: F:/Cursed-BlueBaezel-8B-v1
  - model: F:/Cursed-SuzumeMaid-8B-v1
merge_method: model_stock
base_model: F:/CursedMatrix-8B-v8
dtype: bfloat16
```
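The `t` schedules in the `slerp` steps above vary the interpolation factor per layer. On a single tensor, spherical linear interpolation walks along the arc between the two weight vectors rather than the straight line. A minimal numpy sketch of the idea (an illustration only, not mergekit's implementation):

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t=0 returns a, t=1 returns b; intermediate t follows the arc
    between them. Falls back to linear interpolation when the
    vectors are nearly parallel.
    """
    a_norm = a / (np.linalg.norm(a) + eps)
    b_norm = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    omega = np.arccos(dot)  # angle between the two directions
    if omega < eps:  # nearly parallel: lerp is numerically safer
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return np.sin((1.0 - t) * omega) / so * a + np.sin(t * omega) / so * b

# Toy example on two orthogonal stand-in "weight tensors"
v0 = np.array([1.0, 0.0])
v1 = np.array([0.0, 1.0])
mid = slerp(0.5, v0, v1)  # halfway along the arc: [0.7071..., 0.7071...]
```

In the configs above, mergekit evaluates such an interpolation tensor by tensor, with `t` taken from the per-layer schedule, so different depths of the network lean toward different parents.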
> My thanks to the authors of the original models, your work is incredible. Have a good time 🖤