---
license: mit
---

Base Model: Llama 7B

The LoRA is fully merged with Llama 7B, so you do not need to merge it to load the model.
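
Because the adapter weights are already merged into the checkpoint, the model loads like any standalone Llama model. A minimal sketch with `transformers`, where the repo id is a placeholder for this card's actual model id:

```python
# Minimal loading sketch -- the repo id below is a placeholder, not the real id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/llama-deus-v3"  # placeholder: substitute this card's model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # requires `accelerate`; places layers automatically
    torch_dtype="auto",  # load in the dtype the weights were saved with
)
```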

Llama DEUS v3 was trained on the largest dataset I've used yet, including:

- GPTeacher: General Instruct, Code Instruct, and Roleplay Instruct
- My unreleased Roleplay V2 Instruct
- GPT4-LLM Uncensored + Unnatural Instructions
- WizardLM Uncensored
- CamelAI's 20k Biology, 20k Physics, 20k Chemistry, and 50k Math GPT4 datasets
- CodeAlpaca

This model was trained for 4 epochs over 1 day. It's a rank-128 LoRA that targets the attention heads, the LM head, and the MLP layers.
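
For reference, a rank-128 LoRA over those modules could be expressed with the `peft` library roughly as below; the module names follow LLaMA's conventions, and `lora_alpha`/`lora_dropout` are assumed values the card does not state:

```python
# Sketch of a rank-128 LoRA config with peft. Alpha and dropout are assumptions.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,              # LoRA rank, as stated above
    lora_alpha=256,     # assumption: often set to 2*r
    lora_dropout=0.05,  # assumption
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP layers
        "lm_head",                               # LM head
    ],
)
```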

Prompt format:

```
### Instruction:
<prompt>

### Response:
```

or

```
### Instruction:
<prompt>

### Input:
<input>

### Response:
```
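
As an illustration, the instruction-only format can be assembled and passed to `generate` like this (continuing from the loading sketch above; the sampling settings are assumptions):

```python
# Build a prompt in the instruction-only format and generate a completion.
prompt = (
    "### Instruction:\n"
    "Explain what a LoRA adapter is in one paragraph.\n"
    "\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Strip the prompt tokens and print only the model's response.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```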