---
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
base_model:
- Gryphe/Pantheon-RP-1.6-12b-Nemo
- Sao10K/MN-12B-Lyra-v3
- anthracite-org/magnum-v2.5-12b-kto
- nbeerbower/mistral-nemo-bophades-12B
---
Mostly quanting this because I want to see how it performs.

[This is the 8bpw EXL2 quant of this model. For the original model, go here.](https://huggingface.co/Luni/StarDust-12b-v1)
[For the 6bpw version, go here.](https://huggingface.co/Statuo/Stardust-12b-MN-EXL2-6bpw)
[For the 4bpw version, go here.](https://huggingface.co/Statuo/Stardust-12b-MN-EXL2-4bpw)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303fa71fc783bfc7443e7ae/qRsB-uefbKKrAqxknbWtN.png)

# StarDust-12b-v1

## Quants

- GGUF: [mradermacher/StarDust-12b-v1-GGUF](https://huggingface.co/mradermacher/StarDust-12b-v1-GGUF)
- weighted/imatrix GGUF: [mradermacher/StarDust-12b-v1-i1-GGUF](https://huggingface.co/mradermacher/StarDust-12b-v1-i1-GGUF)
- exl2: [lucyknada/Luni_StarDust-12b-v1-exl2](https://huggingface.co/lucyknada/Luni_StarDust-12b-v1-exl2)

## Description | Usecase

In my opinion, the result of this merge is a more vibrant, less generic Sonnet-inspired prose; it can be gentle or harsh where asked. I was personally aiming for more spice while also compensating for Magnum-v2.5, which on my end simply would not stop yapping.

- This model is intended to be used as a roleplaying model.
- It is not built for direct conversational output; any success there is not by design.
- Extending that point: the model is designed for roleplay, and direct instruction or general-purpose use is NOT recommended.

## Initial Feedback

Initial feedback shows that the model has a tendency to promote flirting. If this becomes too much, try steering the model with a system prompt that focuses on SFW, non-flirtatious interactions.

## Prompting

### Edit: ChatML has proven to be the BEST choice.

Both Mistral and ChatML should work, though I had better results with ChatML.

ChatML example:

```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3) as the base.
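For illustration, a DARE TIES merge of these models is typically expressed as a mergekit configuration. The sketch below is hypothetical: the `density` and `weight` values are placeholders, not the author's actual recipe.

```yaml
# Hypothetical mergekit config for a DARE TIES merge onto MN-12B-Lyra-v3.
# density/weight values are illustrative placeholders, not the real recipe.
models:
  - model: Gryphe/Pantheon-RP-1.6-12b-Nemo
    parameters:
      density: 0.5
      weight: 0.3
  - model: anthracite-org/magnum-v2.5-12b-kto
    parameters:
      density: 0.5
      weight: 0.3
  - model: nbeerbower/mistral-nemo-bophades-12B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties
base_model: Sao10K/MN-12B-Lyra-v3
dtype: bfloat16
```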
### Models Merged

The following models were included in the merge:

* [Gryphe/Pantheon-RP-1.6-12b-Nemo](https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo)
* [anthracite-org/magnum-v2.5-12b-kto](https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto)
* [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B)
* [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3)

### Special Thanks

Special thanks to the SillyTilly and myself for helping me find the energy to finish this.
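As a footnote to the Prompting section: the recommended ChatML layout can also be built programmatically. This is a plain-Python sketch of the template (the `format_chatml` helper is hypothetical, not part of any official tokenizer chat template for this model):

```python
def format_chatml(messages, add_generation_prompt=True):
    """Render a list of {"role", "content"} dicts into ChatML.

    Matches the prompt layout shown in the Prompting section: each turn
    is wrapped in <|im_start|>role ... <|im_end|>, and an open assistant
    header is appended so the model continues from it.
    """
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = format_chatml([
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
])
```

Feeding `prompt` to the model reproduces exactly the ChatML example shown above.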