Starcannon-Unleashed-12B-v1.0-GGUF
Quantized GGUF:
VongolaChouko/Starcannon-Unleashed-12B-v1.0-GGUF
mradermacher/Starcannon-Unleashed-12B-v1.0-GGUF
bartowski/Starcannon-Unleashed-12B-v1.0-GGUF
HUGE THANKS TO mradermacher!! ( ´•̥̥̥o•̥̥̥`)♡(˘̩̩̩̩̩̩ ⌂ ˘̩̩̩̩̩̩) Gosh dang, the fella is fast, I was shook! XD And to the GOAT, the awesome bartowski, for their GGUF quantizations!
And, thanks to Statuo for providing EXL2 quants! (✿◕ᗜ◕)♡
I was only able to test the model using Q6_K with at most 24576 context due to PC limitations, so please let me know how it fares for you. Hopefully it still works well at higher context!
Recommended settings are here: Settings
Sample Output
Introduction
WARNING: Ramblings incoming. Please continue scrolling down if you wish to skip the boring part ʱªʱªʱª(ᕑᗢूᓫ∗)
Ohh boi, here we are! I'm very happy to share with you the result of countless hours bashing my head on the wall! *:・゚✧(=ఠ్ఠܫఠ్ఠ =)∫
To start, I want to put up a disclaimer. This is the first time I'm attempting to merge a model, and I'm in no way an expert when it comes to coding. AT ALL. I believe I didn't understand what on earth I was looking at for like 70% of the time... Err, so there's that! I did test this model rigorously after running the merge, and so far I loved the results. I was honestly expecting the merge to absolutely fail and be totally incoherent, but thankfully not! The two days of not getting enough sleep were worth it ◝(˃̣̣̥▽˂̣̣̥)/
My goal was to hopefully create something that will get the best parts from each finetune/merge, where one model can cover for the other's weak points.
I am a VERY huge fan of Starcannon v3 because of how in-character its responses are. It just hits different. It's like the model is the character itself, not ACTING as the character. That's why it always feels sad whenever it starts deteriorating, like I'm observing my beloved character die. No matter what adjustments I made to the context settings, it wouldn't stay coherent all the way to 16K context. On the other hand, I love NemoMix Unleashed for its awesome stability at much longer contexts and its tendency to progress the story forward even without prompting. It feels nice that it can stay coherent and stable even after going past the context size I set. I also find its ability to read between the lines great. So I figured, why not just marry the two to get the best of both worlds?
I would honestly love to do this again if I can, because there have been one too many times where I found something I like in one model, and something else in another, and wished so desperately they would just marry each other and have kids! XD
So please let me know how it fared for my first attempt!
I also want to learn how to finetune models myself in addition to merging, but I don't think my PC is capable enough to endure it. I think it almost croaked on me when I did this merge, and my SSD cried, so maybe I'll just do it some other time when I have free time and more resources to spare.
And thus, I was finally able to merge my favorite models after hours of research, tutorials, asking annoying questions to the community (that no one replied to (´;︵;`)), and coding hell. Here we are!
°˖✧It's all ABSOLUTELY worth it!✧˖°
Instruct
Both ChatML and Mistral should work fine. Personally, I tested this using ChatML. I found that I like the model's responses better when I use this format. Test both and see which one you like best. :D
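For reference, ChatML wraps every turn in `<|im_start|>`/`<|im_end|>` markers. Here's a minimal sketch of what a ChatML prompt looks like under the hood (the `chatml_prompt` helper is just an illustration; SillyTavern and other frontends assemble this for you):

```python
def chatml_prompt(system, user):
    """Assemble a minimal ChatML prompt: system turn, user turn,
    then an open assistant turn for the model to complete."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = chatml_prompt("You are {{char}}.", "Hello!")
print(prompt)
```

If you see raw `<|im_end|>` strings in the model's replies, the frontend's stop tokens or instruct template probably don't match this format.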
Settings
I recommend using these settings: Starcannon-Unleashed-12B-v1.0-ST-Formatting-2024-10-29.json
IMPORTANT: Open SillyTavern and use "Master Import", which can be found under the "A" tab — Advanced Formatting. Replace the "INSERT WORLD HERE" placeholders with the world/universe your character belongs to. If not applicable, just remove that part.
Check your User Settings and set "Example Messages Behavior" to "Never include examples", to prevent the Examples of Dialogue from being sent twice in the context. People reported that, if this is not set, <|im_end|> tokens end up in the output. Refer to this post for more info.
Temperature 1.15 - 1.25 is good, but lower should also work well, as long as you also tweak the Min P and XTC to ensure the model won't choke. Play around with it to see what suits your taste.
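For intuition on why Min P and Temperature interact: Min P keeps only the tokens whose probability is at least `min_p` times the top token's probability, so a flatter (higher-temperature) distribution lets more tokens through. A toy sketch of the filtering step (the values here are illustrative, not my recommended settings):

```python
def min_p_filter(probs, min_p):
    """Return indices of tokens whose probability is at least
    min_p times the most likely token's probability."""
    cutoff = min_p * max(probs)
    return [i for i, p in enumerate(probs) if p >= cutoff]

# Toy distribution over 4 tokens; with min_p=0.2 the cutoff is 0.2*0.5=0.1,
# so the last token (p=0.05) gets filtered out.
kept = min_p_filter([0.5, 0.3, 0.15, 0.05], min_p=0.2)
print(kept)  # [0, 1, 2]
```

XTC works from the other end, removing some of the *most* likely tokens above its threshold, which is why raising the XTC Threshold makes it trigger less often.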
This is a modified version of MarinaraSpaghetti's Mistral-Small-Correct.json, transformed into ChatML.
You can find the original version here: MarinaraSpaghetti/SillyTavern-Settings
Tips
- Examples of Dialogue and First Message are very important. The model will copy the style you use in these sections. For example, if you want short outputs, make Examples of Dialogue and First Message short; if you want longer outputs, make sure your examples have full paragraphs composed of several sentences.
- If your Examples of Dialogue and First Message are short/concise but the model still rambles, lower Temperature in small increments, but keep Min P and XTC as is at first. Test the result and adjust to your liking. If it still rambles, raise the XTC Threshold.
- Utilize Author's Note In-chat @ Depth 2 as System if you want the instruction to have greater impact on the next response. If you want something exciting and spontaneous, you can try out this note I used when I tested out the model: "Scenario: Spontaneous. {{char}} has full autonomy to do anything they wish and progress the interaction in any way they like."
Credits
A very huge thank you to MarinaraSpaghetti and Nothing is Real!! (灬^ω^灬)ノ~ ♡ (´。• ᵕ •。`) ♡ I really fell in love with your models and it inspired me to learn how to make this one, and boi was it worth it! °˖✧◝(TT▿TT)◜✧˖°
Merge Details
This is a merge of pre-trained language models created using mergekit.
Merge Method
This model was merged using the della_linear merge method, with G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B as the base.
Models Merged
The following models were included in the merge:
- G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
dtype: bfloat16
merge_method: della_linear
parameters:
  epsilon: 0.05
  int8_mask: 1.0
  lambda: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model: G:\text-generation-webui\models\MarinaraSpaghetti_NemoMix-Unleashed-12B
    parameters:
      density: 0.65
      weight: 0.4
  - layer_range: [0, 40]
    model: G:\text-generation-webui\models\Nothingiisreal_MN-12B-Starcannon-v3
    parameters:
      density: 0.55
      weight: 0.6
```
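Conceptually, the linear part of della_linear blends each weight tensor as 0.4 × NemoMix + 0.6 × Starcannon, per the `weight` values in the config. A toy sketch of just that linear combination (the tensor values are made up, and DELLA's density-based delta pruning is omitted):

```python
# Hypothetical stand-ins for one weight tensor from each model
nemomix = [1.0, 2.0, 3.0]
starcannon = [3.0, 2.0, 1.0]

# The linear-combination step: 0.4 for the base model, 0.6 for Starcannon.
# Real della_linear also prunes low-magnitude deltas according to `density`
# before combining; that step is left out of this sketch.
merged = [0.4 * a + 0.6 * b for a, b in zip(nemomix, starcannon)]
print(merged)
```

The 0.6 weight on Starcannon means its character voice dominates slightly, while the NemoMix base contributes its long-context stability.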