Update README.md
README.md
CHANGED
tags:
  - sillytavern
---
> **Support:** <br>
> My upload speeds have been cooked and unstable lately. <br>
> Realistically I'd need to move to get a better provider. <br>
> If you **want** and you are able to... <br>
> You can [**support my various endeavors here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
> I apologize for disrupting your experience.

"This model received the Orthogonal Activation Steering treatment, **meaning it will rarely refuse any request.**"

> [!IMPORTANT]
> **Relevant:** <br>
> These quants were made after the fixes from [**llama.cpp/pull/6920**](https://github.com/ggerganov/llama.cpp/pull/6920) were merged. <br>
> Use **KoboldCpp** version **1.64** or higher, and make sure you're up-to-date.

> [!WARNING]
> Compatible SillyTavern presets [here (simple)](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B/tree/main/Official%20Poppy%20Porpoise%20ST%20Presets) or [here (Virt's Roleplay Presets - recommended)](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Use the latest version of KoboldCpp. **Use the provided presets for testing.** <br>
> Feedback and support for the Authors is always welcome. <br>
> If there are any issues or questions, let me know.

> [!NOTE]
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** quant for context sizes up to 12288.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/JUxfdTot7v7LTdIGYyzYM.png)

**Original model information:**

## Lumimaid 0.1

<center><div style="width: 100%;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/d3QMaxy3peFTpSlWdWF-k.png" style="display: block; margin: auto;">
</div></center>

This model uses the Llama3 **prompting format**.

This is Llama3 trained on our RP datasets. We tried to strike a balance between ERP and RP: not too horny, but just enough.

We also added some non-RP data to make the model less dumb overall. The mix works out to roughly a 40%/60% ratio of Non-RP to RP+ERP data.

This model includes the new Luminae dataset from Ikari.

This model has received the Orthogonal Activation Steering treatment, meaning it will rarely refuse any request.

If you try this model, please give us some feedback, either in the Community tab on HF or on our [Discord Server](https://discord.gg/MtCVRWTZXY).

## Credits:
- Undi
- IkariDev

## Description

This repo contains FP16 files of Lumimaid-8B-v0.1-OAS.

Switch: [8B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1) - [70B](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1) - [70B-alt](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-alt) - [8B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS) - [70B-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-70B-v0.1-OAS)

## Training data used:
- [Aesir datasets](https://huggingface.co/MinervaAI)
- [NoRobots](https://huggingface.co/datasets/Doctor-Shotgun/no-robots-sharegpt)
- [limarp](https://huggingface.co/datasets/lemonilia/LimaRP) - 8k ctx
- [toxic-dpo-v0.1-sharegpt](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-sharegpt)
- [ToxicQAFinal](https://huggingface.co/datasets/NobodyExistsOnTheInternet/ToxicQAFinal)
- Luminae-i1 (70B/70B-alt; i2 did not exist yet when the 70B started training) | Luminae-i2 (8B; this one gave better results on the 8B) - Ikari's Dataset
- [Squish42/bluemoon-fandom-1-1-rp-cleaned](https://huggingface.co/datasets/Squish42/bluemoon-fandom-1-1-rp-cleaned) - 50% (randomly)
- [NobodyExistsOnTheInternet/PIPPAsharegptv2test](https://huggingface.co/datasets/NobodyExistsOnTheInternet/PIPPAsharegptv2test) - 5% (randomly)
- [cgato/SlimOrcaDedupCleaned](https://huggingface.co/datasets/cgato/SlimOrcaDedupCleaned) - 5% (randomly)
- Airoboros (reduced)
- [Capybara](https://huggingface.co/datasets/Undi95/Capybara-ShareGPT/) (reduced)
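The "50% (randomly)" and "5% (randomly)" annotations above indicate random subsampling of those datasets before training. A minimal sketch of that kind of seeded subsampling (the helper name and toy rows are illustrative, not taken from the actual training code):

```python
import random

def subsample(rows, fraction, seed=0):
    """Randomly keep `fraction` of `rows`, reproducibly via a fixed seed."""
    rng = random.Random(seed)
    k = int(len(rows) * fraction)
    return rng.sample(rows, k)

# Toy stand-ins for dataset rows; real entries would be chat samples.
bluemoon = [f"bluemoon-{i}" for i in range(1000)]
pippa = [f"pippa-{i}" for i in range(1000)]

bluemoon_half = subsample(bluemoon, 0.50)  # "50% (randomly)"
pippa_slice = subsample(pippa, 0.05)       # "5% (randomly)"
print(len(bluemoon_half), len(pippa_slice))  # 500 50
```

Seeding the sampler keeps the subset reproducible across runs, which matters when a training mix is documented as a fixed percentage.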

## Models used (only for 8B)

- Initial LumiMaid 8B Finetune
- Undi95/Llama-3-Unholy-8B-e4
- Undi95/Llama-3-LewdPlay-8B

## Prompt template: Llama3

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
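For programmatic use, the template above can be filled in with plain string formatting. A minimal single-turn sketch (the function name is illustrative; the assistant header is left open so the model generates the reply):

```python
def build_llama3_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble a single-turn Llama 3 prompt string.

    Ends after the assistant header so generation continues from there.
    """
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt("You are a helpful roleplay assistant.", "Hello!")
print(prompt.startswith("<|begin_of_text|>"))  # True
```

Frontends like SillyTavern or KoboldCpp apply this formatting for you when the matching preset is selected; the sketch is only for driving the model directly.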

## Others

Undi: If you want to support us, you can [here](https://ko-fi.com/undiai).

IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek

## Note:

This repo hosts only a Q5_K_S iMatrix quant of [Llama 3 Lumimaid 8B v0.1 OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS).

The GGUF quant is from [Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix](https://huggingface.co/Lewdiculous/Llama-3-Lumimaid-8B-v0.1-OAS-GGUF-IQ-Imatrix).

The additional files in this GGUF repo are for personal usage with Text Generation Web UI and llamacpp_hf.