|
--- |
|
base_model: Epiculous/Violet_Twilight-v0.2 |
|
license: apache-2.0 |
|
inference: false |
|
tags: |
|
- mistral |
|
- nemo |
|
- roleplay |
|
- sillytavern |
|
- gguf |
|
--- |
|
|
|
**Model name:** <br> |
|
Violet_Twilight-v0.2 |
|
|
|
**Description:** <br> |
|
"Now for something a bit different, Violet_Twilight-v0.2! This model is a SLERP merge of Azure_Dusk-v0.2 and Crimson_Dawn-v0.2!" <br> |
|
– from the original model card. <br>
|
|
|
Use the **ChatML** prompt format. <br> |
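
For reference, a ChatML prompt is structured like this (the message contents below are purely illustrative): <br>

```
<|im_start|>system
You are {{char}} in a roleplay with {{user}}.<|im_end|>
<|im_start|>user
Hello there!<|im_end|>
<|im_start|>assistant
```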
|
|
|
> [!IMPORTANT] |
|
> **[Added]** <br> |
|
> ARM quants: <br> |
|
> `"Q4_0_4_4", "Q4_0_4_8", "Q4_0_8_8"` |
|
|
|
> [!TIP] |
|
> **Presets:** <br> |
|
> You can use ChatML presets within SillyTavern and adjust from there. <br> |
|
> Alternatively, check out [Virt-io's ChatML v1.9 presets here](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/ChatML/v1.9); be sure to read the [repository page for how to use them properly](https://huggingface.co/Virt-io/SillyTavern-Presets/). <br>
|
> The author also provides links to custom sampler presets [on the model page here](https://huggingface.co/Epiculous/Violet_Twilight-v0.2#current-top-sampler-settings).
|
|
|
> [!NOTE] |
|
> Original model page: <br> |
|
> https://huggingface.co/Epiculous/Violet_Twilight-v0.2 |
|
> |
|
> Quantized using [llama.cpp](https://github.com/ggerganov/llama.cpp)-[b3829](https://github.com/ggerganov/llama.cpp/releases/tag/b3829), roughly as sketched in the commands after this note: <br>
|
> ``` |
|
> 1. Base⇢ Convert-GGUF(FP16)⇢ Generate-Imatrix-Data(FP16) |
|
> 2. Base⇢ Convert-GGUF(BF16)⇢ Use-Imatrix-Data(FP16)⇢ Quantize-GGUF(Imatrix-Quants) |
|
> ``` |
|
> |
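
As a point of reference, the pipeline above corresponds roughly to the following llama.cpp (b3829) invocations; the file names, calibration text, and the `Q4_K_M` example target are placeholders rather than the exact commands used: <br>

```
# 1. Convert the HF model to GGUF at FP16, then generate imatrix data from it
python convert_hf_to_gguf.py ./Violet_Twilight-v0.2 --outtype f16 --outfile vt-f16.gguf
./llama-imatrix -m vt-f16.gguf -f calibration.txt -o imatrix.dat

# 2. Convert to BF16, then quantize using the FP16-derived imatrix data
python convert_hf_to_gguf.py ./Violet_Twilight-v0.2 --outtype bf16 --outfile vt-bf16.gguf
./llama-quantize --imatrix imatrix.dat vt-bf16.gguf vt-Q4_K_M.gguf Q4_K_M
```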
|
![waifu-model-image/png](https://cdn-uploads.huggingface.co/production/uploads/64adfd277b5ff762771e4571/P962FQhRG4I8nbU_DJolY.png) |