---
license: llama2
datasets:
- totally-not-an-llm/EverythingLM-data-V3
---

# EverythingLM-13b-V3-16k

Introducing EverythingLM, a llama-2-based, general-purpose 13b model with 16k context thanks to LlongMa. The model is trained on the EverythingLM-V3 dataset; more info can be found on the dataset page.

The model is completely uncensored.

Despite being "uncensored", the base model might still resist some requests; certain prompts may require a bit of prompt engineering.

### Notable features:
- Automatically triggered CoT reasoning.
- Verbose and detailed replies.
- Creative stories.
- Good prompt understanding.

### Differences from V2:
- General all-around improvements thanks to the new dataset. Check out the dataset page for more info.

### Prompt format (Alpaca-chat):

```
USER: <prompt>
ASSISTANT:
```
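For reference, here is a minimal sketch of how you might apply this prompt format with the `transformers` library. The repo id below is an assumption based on this card's title and dataset organization, not a confirmed path:

```python
# Hedged sketch: repo id is assumed from the card title; adjust as needed.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "totally-not-an-llm/EverythingLM-13b-V3-16k"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the Alpaca-chat prompt exactly as shown above.
prompt = "USER: Write a short story about a lighthouse keeper.\nASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Strip the prompt tokens so only the model's reply is printed.
reply = outputs[0][inputs["input_ids"].shape[1]:]
print(tokenizer.decode(reply, skip_special_tokens=True))
```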

### Future plans:
- The highest priority right now is V3.1, with more optimized training and iterative dataset improvements based on testing.

### Note:
Through testing V2, I realized some alignment data had leaked in, causing the model to be less cooperative than intended. This model should do much better due to stricter filtering.