---
license: llama2
language:
- en
tags:
- RP
- ERP
- chat
- storywriting
---

# Kitchen Sink 103b

This model is a rotating-stack merge of three 70b models, each of which is a merge of multiple merges and finetunes. The result is a large model that contains a little bit of everything - including the kitchen sink.

Component models for this stack are:

- royallab/Aetheria-L2-70B
- lizpreciatior/lzlv_70b_fp16_hf
- Sao10K/WinterGoddess-1.4x-70B-L2

Components of those models include Nous-Hermes-Llama2-70b, Xwin-LM-70B-V0.1, Mythospice-70b, Euryale-1.3-L2-70B, tulu-2-dpo-70b, GOAT-70B-Storytelling, Platypus2-70B-instruct, Lila-70B, SunsetBoulevard, and some private LoRAs.

# WTF is a rotating-stack merge?

Jeb Carter found that the performance of a stacked merge could be significantly improved by reversing the model order in the stack, and then doing a linear merge between the original and reversed stacks. That is what I did here, creating three stacked merges with the three source models, and then doing a 1:1:1 linear merge of all three stacks. The exact merge configs can be found in the recipe.txt file.
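
For the curious, here is a minimal sketch of what one such recipe can look like in mergekit YAML. This is illustrative only: the model names are the real component models, but the layer ranges, stack order, and output paths (`./stack_*`) are placeholders, not the actual recipe - see recipe.txt for that.

```yaml
# One stacked ("passthrough") merge: interleave layer slices from the three
# source models in a fixed order. The other stacks reorder the models.
# Layer ranges here are placeholders, not the values used for this model.
merge_method: passthrough
slices:
  - sources:
      - model: royallab/Aetheria-L2-70B
        layer_range: [0, 40]
  - sources:
      - model: lizpreciatior/lzlv_70b_fp16_hf
        layer_range: [20, 60]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [40, 80]
dtype: float16
```

```yaml
# Final step: a 1:1:1 linear merge of the three stacked models.
# mergekit normalizes equal weights, so each stack contributes one third.
merge_method: linear
models:
  - model: ./stack_a
    parameters:
      weight: 1.0
  - model: ./stack_b
    parameters:
      weight: 1.0
  - model: ./stack_c
    parameters:
      weight: 1.0
dtype: float16
```

The linear merge is only possible because all three stacks share an identical slice structure, and therefore identical tensor shapes - only the order in which the source models fill those slices changes.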