---
base_model: Epiculous/Violet_Twilight-v0.2
license: apache-2.0
inference: false
tags:
- mistral
- nemo
- roleplay
- sillytavern
- gguf
---
**Model name:**
Violet_Twilight-v0.2

**Description:**
"Now for something a bit different, Violet_Twilight-v0.2! This model is a SLERP merge of Azure_Dusk-v0.2 and Crimson_Dawn-v0.2!"
– quoted from the original model card by Epiculous.

Use the **ChatML** prompt format.
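For reference, ChatML wraps each turn in `<|im_start|>` / `<|im_end|>` markers. Below is a minimal sketch of assembling such a prompt by hand; the system and user text are placeholders.

```python
# Minimal sketch of a ChatML-formatted prompt (placeholder system/user text).
system_text = "You are a helpful roleplay assistant."
user_text = "Hello there!"

prompt = (
    f"<|im_start|>system\n{system_text}<|im_end|>\n"
    f"<|im_start|>user\n{user_text}<|im_end|>\n"
    "<|im_start|>assistant\n"
)
print(prompt)
```
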
> [!IMPORTANT]
> **Added ARM quants:**
> `Q4_0_4_4`, `Q4_0_4_8`, `Q4_0_8_8`

> [!TIP]
> **Presets:**
> You can use the ChatML presets within SillyTavern and adjust from there.
> Alternatively, check out [Virt-io's ChatML v1.9 presets](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/ChatML/v1.9), and be sure to read the [repository page](https://huggingface.co/Virt-io/SillyTavern-Presets/) for how to use them properly.
> The author also links to custom sampler presets [on the model page](https://huggingface.co/Epiculous/Violet_Twilight-v0.2#current-top-sampler-settings).
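If you would rather drive the GGUF files programmatically instead of through SillyTavern, a minimal llama-cpp-python sketch could look like the following. The file name, context size, and messages are placeholders; point `model_path` at whichever quant you downloaded.

```python
# Hypothetical sketch using llama-cpp-python (pip install llama-cpp-python).
# The model file name below is a placeholder; use whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Violet_Twilight-v0.2-Q4_K_M.gguf",  # placeholder file name
    chat_format="chatml",  # this card recommends the ChatML prompt format
    n_ctx=8192,            # context window; adjust to your hardware
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful roleplay assistant."},
        {"role": "user", "content": "Hello there!"},
    ],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```
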
> [!NOTE]
> Original model page:
> https://huggingface.co/Epiculous/Violet_Twilight-v0.2
>
> Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp) release [b3829](https://github.com/ggerganov/llama.cpp/releases/tag/b3829) (a reproduction sketch follows after this note):
> ```
> 1. Base ⇢ Convert-GGUF(FP16) ⇢ Generate-Imatrix-Data(FP16)
> 2. Base ⇢ Convert-GGUF(BF16) ⇢ Use-Imatrix-Data(FP16) ⇢ Quantize-GGUF(Imatrix-Quants)
> ```
>
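For readers who want to reproduce a similar two-stage flow, the sketch below drives llama.cpp's conversion, imatrix, and quantization tools from Python. The checkpoint path, calibration file, output names, and target quant type are all placeholders, and exact flags may differ between llama.cpp releases.

```python
# Hypothetical reproduction of the two-stage imatrix flow above, calling
# llama.cpp's command-line tools via subprocess. All paths are placeholders.
import subprocess

BASE = "Violet_Twilight-v0.2"  # local clone of the original checkpoint (placeholder)

# 1. Base ⇢ Convert-GGUF(FP16) ⇢ Generate-Imatrix-Data(FP16)
subprocess.run(["python", "convert_hf_to_gguf.py", BASE,
                "--outtype", "f16", "--outfile", "model-f16.gguf"], check=True)
subprocess.run(["./llama-imatrix", "-m", "model-f16.gguf",
                "-f", "calibration.txt", "-o", "imatrix.dat"], check=True)

# 2. Base ⇢ Convert-GGUF(BF16) ⇢ Use-Imatrix-Data(FP16) ⇢ Quantize-GGUF(Imatrix-Quants)
subprocess.run(["python", "convert_hf_to_gguf.py", BASE,
                "--outtype", "bf16", "--outfile", "model-bf16.gguf"], check=True)
subprocess.run(["./llama-quantize", "--imatrix", "imatrix.dat",
                "model-bf16.gguf", "model-Q4_K_M.gguf", "Q4_K_M"], check=True)
```
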
![Violet_Twilight-v0.2 model image](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/P962FQhRG4I8nbU_DJolY.png)