---
license: cc-by-nc-4.0
datasets:
- Himitsui/Lewd-Assistant-v1
language:
- en
base_model: mistralai/Mixtral-8x7B-v0.1
---

Quantized using 200 samples of 8192 tokens from an RP-oriented [PIPPA](https://huggingface.co/datasets/royallab/PIPPA-cleaned) dataset.

Branches:
- `main` -- `measurement.json`
- `2.25b6h` -- 2.25bpw, 6bit lm_head
- `3.7b6h` -- 3.7bpw, 6bit lm_head
- `6b6h` -- 6bpw, 6bit lm_head
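
Each quantization lives on its own branch, so download the branch that matches the bpw you want. A minimal sketch using `huggingface_hub`; the repo id and local directory below are placeholders, not this repository's actual id:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- substitute this repository's actual id on the Hub.
snapshot_download(
    repo_id="user/Solstice-Mixtral-v1-exl2",
    revision="2.25b6h",  # branch name from the list above, e.g. 3.7b6h or 6b6h
    local_dir="Solstice-Mixtral-v1-exl2-2.25bpw",
)
```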

Requires ExLlamaV2 version 0.0.12 or later.
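
Below is a minimal loading sketch with the ExLlamaV2 Python API, loosely following the library's own inference example; the model directory is the placeholder path from the download sketch above, and the sampler settings are illustrative:

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "Solstice-Mixtral-v1-exl2-2.25bpw"  # placeholder local path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # spread layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8  # illustrative value, tune to taste

print(generator.generate_simple("USER: Hello!\nASSISTANT:", settings, 128))
```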

Original model link: [Sao10K/Solstice-Mixtral-v1](https://huggingface.co/Sao10K/Solstice-Mixtral-v1)

Original model README below.

***

![MIMI](https://huggingface.co/Sao10K/Solstice-Mixtral-v1/resolve/main/mimi.jpg)

GGUF: https://huggingface.co/Sao10K/Solstice-Mixtral-v1-GGUF

[Solstice-11B-v1](https://huggingface.co/Sao10K/Solstice-11B-v1) but on Mixtral. More info there.

Experimental. It may or may not be good; Mixtral training is... difficult to work with.

Trained with the Vicuna / ShareGPT format, but Alpaca Instruct should work fine too.
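
For reference, a rough sketch of a Vicuna-style prompt; the exact system line is an assumption, not taken from the training data:

```python
# Rough Vicuna-style layout; the wording of the system line is illustrative only.
prompt = (
    "A chat between a curious user and an assistant.\n\n"
    "USER: Stay in character and greet me.\n"
    "ASSISTANT:"
)
```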

***

As per usual, it handles itself fine in NSFW scenarios; after all, it is trained on lewd outputs. One slightly odd behaviour: it can be reluctant in zero-shot settings, but in actual roleplays / usage it's fine.

Pretty nice. Using Vicuna gave slightly better outputs than Alpaca, though the difference may be minor.

I like that it stays in character.

I like using the Universal-Light preset in SillyTavern.

***

I really appreciate your feedback / supportive comments. They keep me going.

***

Support me [here](https://ko-fi.com/sao10k) :)