---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data-V3
---

# EverythingLM-13b-V3-16k

Introducing EverythingLM, a Llama-2-based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM-V3 dataset; more info can be found on the dataset page.
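
Below is a minimal loading sketch using Hugging Face transformers. The repo id is assumed from this model card's name, and the sketch presumes the checkpoint's config already carries the LlongMa long-context settings, so no manual override is shown; treat it as illustrative rather than the author's official usage code.

```python
# Minimal loading sketch (assumptions: the repo id below matches this card,
# and the checkpoint's config.json already encodes the 16k LlongMa scaling).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "totally-not-an-llm/EverythingLM-13b-V3-16k"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights for a 13b model need ~26 GB of GPU memory
    device_map="auto",          # requires the accelerate package
)
```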

The model is completely uncensored.

Despite being "uncensored", the base model can still be resistant; certain requests may need prompt engineering.

### Notable features:
- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Good prompt understanding.

### Differences from V2:
- General all-around improvements thanks to the new dataset. Check out the dataset for more info.

### Prompt format (Alpaca-chat):

```
USER: <prompt>
ASSISTANT:
```
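
As a hedged end-to-end example, the sketch below applies the Alpaca-chat format with the model and tokenizer loaded above; the prompt text and generation parameters are illustrative, not settings recommended by the author.

```python
# Build a prompt in the Alpaca-chat format shown above and generate a reply.
prompt = "USER: Explain the difference between a stack and a queue.\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,   # illustrative cap; the 16k window allows far longer contexts
    do_sample=True,
    temperature=0.7,      # illustrative sampling settings
)
# Decode only the newly generated tokens, skipping the echoed prompt.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```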

### Future plans:
- Highest priority right now is V3.1 with more optimized training and iterative dataset improvements based on testing.

### Note:
While testing V2, I realized some alignment data had leaked in, causing the model to be less cooperative than intended. This model should do much better due to stricter filtering.