---
base_model: royallab/MN-LooseCannon-12B-v2
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
---

# Uploaded model

- **Developed by:** Trappu
- **License:** apache-2.0
- **Finetuned from model:** royallab/MN-LooseCannon-12B-v2

# Details

This model was trained on my own little dataset, free of synthetic data, which focuses solely on storywriting and scenario prompting (example: `[ Scenario: bla bla bla; Tags: bla bla bla ]`). I don't really recommend this model due to its nature and obvious flaws (rampant impersonation, stupidity, etc.). It's a one-trick pony and will be really rough for the average LLM user to handle.

Instead, I recommend you use [Magnum-Picaro-0.7-v2-12b](https://huggingface.co/Trappu/Magnum-Picaro-0.7-v2-12b). The idea was to have Magnum work as some sort of stabilizer to fix the issues that emerge from the lack of multiturn/smart data in Picaro's dataset. It worked, I think. I enjoy the outputs and it's smart enough to work with.

# Prompting

If for some reason you still want to try this model over Magnum-Picaro: it was trained on ChatML with no system prompts, so below is the recommended prompt formatting.

```
<|im_start|>user
bla bla bla<|im_end|>
<|im_start|>assistant
bla bla bla you!<|im_end|>
```

For SillyTavern users:

[Instruct template](https://firebasestorage.googleapis.com/v0/b/koios-academy.appspot.com/o/trappu%2FChatML%20custom%20Instruct%20template.json?alt=media&token=9142757f-811c-460c-ad0e-d04951b1687f)

[Context template](https://firebasestorage.googleapis.com/v0/b/koios-academy.appspot.com/o/trappu%2FChatML%20custom%20context%20template.json?alt=media&token=0926fc67-fa9f-4c86-ad16-8c7c4c8e0b64)

[Settings preset](https://firebasestorage.googleapis.com/v0/b/koios-academy.appspot.com/o/trappu%2FHigh%20temp%20-%20Min%20P%20(4).json?alt=media&token=ac569562-af11-4da1-83c1-d86b25bb4fe1)

The above settings are the ones I recommend.
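If you're prompting the model directly rather than through a frontend, the ChatML format above can be built with a small helper. This is just a sketch of the string format, not code shipped with the model; the function name and message structure are my own:

```python
# Minimal sketch: build a ChatML prompt with no system turn, matching
# the format shown above. `format_chatml` is a hypothetical helper.

def format_chatml(messages):
    """messages: list of (role, text) pairs, e.g. [("user", "...")]."""
    parts = []
    for role, text in messages:
        parts.append(f"<|im_start|>{role}\n{text}<|im_end|>\n")
    # Leave an open assistant turn so the model continues from here.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = format_chatml([("user", "Write the opening scene of a mystery.")])
print(prompt)
```

Pass the resulting string to your backend of choice, and stop generation on `<|im_end|>`.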
- Temp = 1.2
- Min P = 0.1
- DRY Rep Pen: Multiplier = 0.8, Base = 1.75, Allowed Length = 2, Penalty Range = 1024
- Every other sampler neutralized.

A little guide on useful samplers, how to import settings presets and instruct/context templates, and other stuff people might find useful can be found [here](https://rentry.co/PygmalionFAQ#q-what-are-the-best-settings-for-rpadventurenarrationchatting).
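If you're calling a backend API instead of using the SillyTavern preset, the settings above map to a request payload roughly like the sketch below. The field names follow the text-generation-webui/KoboldCpp convention and are assumptions on my part; check your backend's docs, since DRY in particular isn't supported everywhere and its parameter names vary:

```python
# Sampler settings from above as an API payload sketch.
# Field names (especially the dry_* keys) are assumptions; DRY is not
# available in every backend and may use different key names.
sampler_settings = {
    "temperature": 1.2,
    "min_p": 0.1,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "dry_penalty_range": 1024,
    # Neutralize the other common samplers:
    "top_p": 1.0,
    "top_k": 0,
    "repetition_penalty": 1.0,
}
```

Merge this dict into whatever generation request your frontend or script sends.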