StarDust-12b-v1
Quants
- GGUF: mradermacher/StarDust-12b-v1-GGUF
- weighted/imatrix GGUF: mradermacher/StarDust-12b-v1-i1-GGUF
- exl2: lucyknada/Luni_StarDust-12b-v1-exl2
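If you want to pull one of the GGUF files from Python, here is a minimal sketch using `huggingface_hub`. The filename is a guess at the repo's naming convention; check the repo's file list for the exact quant you want.

```python
# Minimal sketch: download a single GGUF quant file.
# The filename below is an assumed example, NOT a confirmed name;
# pick the real one from the repo's file list.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mradermacher/StarDust-12b-v1-GGUF",
    filename="StarDust-12b-v1.Q4_K_M.gguf",  # hypothetical quant filename
)
print(path)
```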
Description | Use case
The result of this merge is, in my opinion, more vibrant and less generic Sonnet-inspired prose; it can be gentle or harsh where asked. I was personally trying to get more spice out of it while also compensating for Magnum-v2.5, which on my end simply would not stop yapping.
- This model is intended to be used as a role-playing model.
- Its direct conversational output is unreliable; it simply is not made for that.
- To expand on the conversational point: the model is designed for roleplay, and direct instructing or general-purpose use is NOT recommended.
Initial Feedback
Initial feedback shows that the model has a tendency to promote flirting. If this becomes too much, try steering the model with a system prompt that focuses on SFW, non-flirty interactions.
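For example, a steering prompt along these lines may work as a starting point (the wording here is just a suggestion, not a tested prompt):

```
Keep the roleplay strictly SFW. Avoid flirting and romantic or
suggestive undertones. Focus on the setting, the plot, and
in-character dialogue.
```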
Prompting
Edit: ChatML has proven to be the BEST choice.
Both the Mistral and ChatML prompt formats should work, though I had better results with ChatML. ChatML example:
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
Merge Details
Merge Method
This model was merged using the DARE TIES merge method, with Sao10K/MN-12B-Lyra-v3 as the base. A sketch of what the mergekit config could look like follows the model list below.
Models Merged
The following models were included in the merge:
- Gryphe/Pantheon-RP-1.6-12b-Nemo
- anthracite-org/magnum-v2.5-12b-kto
- nbeerbower/mistral-nemo-bophades-12B
- Sao10K/MN-12B-Lyra-v3
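The actual merge configuration was not published here, so the following is only a minimal mergekit sketch of a DARE TIES merge over these models; the `density` and `weight` values are placeholders, not the recipe used for this model.

```yaml
# Hypothetical mergekit config; run with: mergekit-yaml config.yaml ./output
# density/weight values are placeholders, NOT the real recipe.
merge_method: dare_ties
base_model: Sao10K/MN-12B-Lyra-v3
models:
  - model: Gryphe/Pantheon-RP-1.6-12b-Nemo
    parameters:
      density: 0.5  # fraction of fine-tune deltas kept after DARE pruning
      weight: 0.33
  - model: anthracite-org/magnum-v2.5-12b-kto
    parameters:
      density: 0.5
      weight: 0.33
  - model: nbeerbower/mistral-nemo-bophades-12B
    parameters:
      density: 0.5
      weight: 0.33
dtype: bfloat16
```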
Special Thanks
Special thanks to the SillyTilly, and to myself, for finding the energy to finish this.
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 23.17 |
| IFEval (0-Shot) | 54.59 |
| BBH (3-Shot) | 34.45 |
| MATH Lvl 5 (4-Shot) | 5.97 |
| GPQA (0-shot) | 3.47 |
| MuSR (0-shot) | 13.76 |
| MMLU-PRO (5-shot) | 26.80 |