---
license: cc-by-nc-4.0
datasets:
- Himitsui/Lewd-Assistant-v1
language:
- en
base_model: mistralai/Mixtral-8x7B-v0.1
---

Quantized using 200 calibration samples of 8,192 tokens each from the RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.

Branches:
- `main` -- `measurement.json`
- `2.25b6h` -- 2.25 bpw, 6-bit `lm_head`
- `3.7b6h` -- 3.7 bpw, 6-bit `lm_head`
- `6b6h` -- 6 bpw, 6-bit `lm_head`
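
If you only need one quant, you can download a single branch by its revision name. Below is a minimal sketch using `huggingface_hub`'s `snapshot_download`; the repo ID and local directory are placeholders, not the actual repository name.

```python
# Minimal download sketch -- repo_id and local_dir are placeholders.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="your-username/Solstice-Mixtral-v1-exl2",  # placeholder: this quant repo
    revision="6b6h",                                   # branch / quant to fetch
    local_dir="./Solstice-Mixtral-v1-6b6h",
)
```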


Requires ExLlamaV2 version 0.0.12 or later.
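
A minimal loading sketch, assuming ExLlamaV2 0.0.12+ is installed and one of the quant branches has been downloaded locally; the model directory and sampler values are placeholders, not recommendations.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Solstice-Mixtral-v1-6b6h"  # placeholder: local copy of a branch
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocate cache as layers are loaded
model.load_autosplit(cache)               # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()    # example sampler values only
settings.temperature = 0.8
settings.top_p = 0.9

print(generator.generate_simple("USER: Hello!\nASSISTANT:", settings, 200))
```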

Original model link: [Sao10K/Solstice-Mixtral-v1](https://huggingface.co/Sao10K/Solstice-Mixtral-v1)

Original model README below.

***

![MIMI](https://huggingface.co/Sao10K/Solstice-Mixtral-v1/resolve/main/mimi.jpg)

GGUF: https://huggingface.co/Sao10K/Solstice-Mixtral-v1-GGUF

[Solstice-11B-v1](https://huggingface.co/Sao10K/Solstice-11B-v1) but on Mixtral. More info there.

Experimental. It may or may not be good; Mixtral training is... difficult to work with.

Trained with the Vicuna / ShareGPT format, but Alpaca Instruct should work fine too.
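
For reference, a small sketch of a common Vicuna-style prompt layout is below; the exact system prompt used during training is not stated on this card, so the default wording here is an assumption.

```python
# Hypothetical helper showing a typical Vicuna / ShareGPT-style prompt layout.
# The system prompt is an assumed default, not the one used in training.
def vicuna_prompt(
    user_message: str,
    system: str = "A chat between a curious user and an artificial intelligence assistant.",
) -> str:
    return f"{system}\n\nUSER: {user_message}\nASSISTANT:"

print(vicuna_prompt("Introduce yourself."))
```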

***

As per usual, it handles itself fine in NSFW scenarios; after all, it is trained on lewd outputs. There is a bit of odd behaviour where it is reluctant in zero-shot settings, but in actual roleplays / usage it's fine.


Pretty nice. Using Vicuna gave slightly better outputs than Alpaca, though the difference may be minor.

I like that it stays in character.

I like using the Universal-Light preset in SillyTavern.

***

I really appreciate your feedback / supportive comments. They keep me going.

***

Support me [here](https://ko-fi.com/sao10k) :)