Neko-Institute-of-Science committed
Commit: ef37c45
Parent(s): df1adf0

add tag
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
 datasets:
 - gozfarb/ShareGPT_Vicuna_unfiltered
+tags:
+- uncensored
 ---
 **TEST FINISHED FOR NOW. I MOVED TO 30B training.**
 
@@ -20,10 +22,8 @@ I have since restarted training with the new format B from the tool and it seems
 
 # How to test?
 1. Download LLaMA-13B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF
-
-
-4. Load ooba: ```python server.py --listen --model vicuna-13b --load-in-8bit --chat --lora checkpoint-xxxx```
-5. Instruct mode: Vicuna-v1 it will load Vicuna-v0 by defualt
+3. Load ooba: ```python server.py --listen --model LLaMA-13B-HF --load-in-8bit --chat --lora checkpoint-xxxx```
+4. Instruct mode: Vicuna-v1
 
 
 # Training LOG
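
For context, the updated "How to test?" steps correspond roughly to the shell session below. This is a minimal sketch, assuming a standard oobabooga text-generation-webui checkout with git-lfs installed; the models/ and loras/ directory names are webui conventions rather than something the README spells out, and checkpoint-xxxx remains a placeholder for an actual checkpoint directory.

```bash
# Minimal sketch of the "How to test?" steps, assuming a standard
# text-generation-webui checkout and git-lfs installed. The models/ and
# loras/ paths are webui conventions (not stated in the README), and
# checkpoint-xxxx stays a placeholder for a real checkpoint directory.
cd text-generation-webui

# 1. Download the LLaMA-13B-HF base model into the webui's models/ folder.
git lfs install
git clone https://huggingface.co/Neko-Institute-of-Science/LLaMA-13B-HF models/LLaMA-13B-HF

# (Place the desired LoRA checkpoint, e.g. checkpoint-xxxx, under loras/.)

# 3. Load ooba with the base model in 8-bit and the LoRA applied.
python server.py --listen --model LLaMA-13B-HF --load-in-8bit --chat --lora checkpoint-xxxx

# 4. In the chat UI, select the Vicuna-v1 instruction template (Instruct mode).
```

Note that `--listen` exposes the web UI beyond localhost, so drop it if you only need local access.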