Flipping the training process that created Crimson Dawn on its head, I present to you Azure Dusk! While both models are built on Mistral-Nemo-Base-2407, Azure Dusk's training methodology was instruct first, with the RP dataset applied after. The end goal remains the same: AI should not be a boring, bland, generic assistant, but something that you can connect with on a more personal level. Something that can be interesting in a roleplay, but useful as an assistant too.

Quants!

full / exl2 / gguf

Prompting

Azure Dusk was trained with the Mistral Instruct template; therefore it should be prompted the same way you would prompt any other Mistral-based model.

"<s>[INST] Prompt goes here [/INST]<\s>"

Context and Instruct

Magnum-123B-Context.json
Magnum-123B-Instruct.json
*** NOTE ***
There have been reports of the quantized model misbehaving with the Mistral prompt. If you are seeing issues, it may be worth trying the ChatML Context and Instruct templates. If you are using GGUF, I strongly advise using ChatML; for some reason that quantization performs better with ChatML.
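For reference, the ChatML alternative mentioned above wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch (the role names are standard ChatML; this is illustrative, not the card's own template file):

```python
# Sketch of ChatML formatting, as suggested for the GGUF quants.
def build_chatml_prompt(messages):
    """messages: list of (role, content) tuples, e.g. ('user', 'Hi')."""
    out = []
    for role, content in messages:
        out.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    # Open an assistant turn so the model generates the reply.
    out.append("<|im_start|>assistant\n")
    return "\n".join(out)
```

In SillyTavern this is handled for you by selecting the ChatML Context and Instruct templates rather than formatting strings by hand.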

Current Top Sampler Settings

Violet_Twilight-Nitral-Special - considered the best settings!
Crimson_Dawn-Nitral-Special
Crimson_Dawn-Magnum-Style

Tokenizer

If you are using SillyTavern, please set the tokenizer to API (WebUI/koboldcpp).

Training

Training was done in two phases of 2 epochs each on 2x NVIDIA A6000 GPUs using LoRA. In the first phase, the base model was trained for 2 epochs on instruct data and the resulting LoRA was applied to the base. The modified base was then trained for 2 epochs on RP data, and the new RP LoRA was applied to the modified base, resulting in what you see here.
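"Applying the LoRA to the base" means merging the adapter's low-rank update into the frozen weights before the next phase. The toy sketch below shows the arithmetic involved (a real merge would use a tool such as PEFT's `merge_and_unload`; the shapes and values here are made up for illustration):

```python
# Toy illustration of merging a LoRA adapter into a base weight matrix:
# the low-rank update B @ A, scaled by alpha/r, is added to the base.
def merge_lora(W, A, B, alpha, r):
    """W: d_out x d_in base weight; A: r x d_in; B: d_out x r (nested lists)."""
    scale = alpha / r
    merged = [row[:] for row in W]
    for i in range(len(W)):
        for j in range(len(W[0])):
            update = sum(B[i][k] * A[k][j] for k in range(r))
            merged[i][j] += scale * update
    return merged

# An untrained LoRA (B initialized to zero) merges to the base unchanged.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 2.0]]
B = [[0.0], [0.0]]
assert merge_lora(W, A, B, alpha=16, r=1) == W
```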

Built with Axolotl
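For readers unfamiliar with Axolotl, each training phase above would be driven by a YAML config along these lines. This is an illustrative sketch only: the dataset paths, LoRA ranks, sequence length, and other hyperparameters are my assumptions, not the actual config used for Azure Dusk.

```yaml
# Hypothetical Axolotl LoRA config for one phase (all values assumed)
base_model: mistralai/Mistral-Nemo-Base-2407
adapter: lora
lora_r: 32
lora_alpha: 64
lora_target_linear: true
sequence_len: 8192
num_epochs: 2
datasets:
  - path: ./instruct_data.jsonl   # phase 1: instruct data; phase 2: RP data
    type: sharegpt
```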

Special Thanks

Special thanks to my friends over at Anthracite! Without their help and Kalomaze starting the synthetic data script, none of this would have been possible. Also want to thank my friends in The Chaotic Neutrals for their friendship, support, and guidance.

Safetensors
Model size: 12.2B params
Tensor type: BF16
