---
license: mit
---
Base Model: Llama 7B

The LoRA is fully merged into Llama 7B, so you do not need to merge the adapter yourself before loading the model.
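Because the adapter is already merged, the checkpoint loads like any standard causal LM. A minimal sketch with `transformers`; the repo id below is a placeholder for wherever this checkpoint is hosted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual location of this checkpoint.
model_id = "your-username/llama-deus-v3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# No PEFT/LoRA step needed: the adapter weights are already merged in.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```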
Llama DEUS v3 was trained on the largest dataset I've used yet, including:

- GPTeacher: General Instruct, Code Instruct, and Roleplay Instruct
- My unreleased Roleplay V2 Instruct
- GPT4-LLM Uncensored + Unnatural Instructions
- WizardLM Uncensored
- CamelAI's 20k Biology, 20k Physics, 20k Chemistry, and 50k Math GPT-4 datasets
- CodeAlpaca
This model was trained for 4 epochs over 1 day of training. It is a rank-128 LoRA that targets the attention heads, the LM head, and the MLP layers.
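For reference, a hypothetical `peft` `LoraConfig` approximating that setup. The module names follow the Llama architecture; the alpha and dropout values are assumptions, not the values actually used:

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,               # rank-128 LoRA, as stated above
    lora_alpha=256,      # assumed value
    lora_dropout=0.05,   # assumed value
    target_modules=[
        # attention projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        # MLP layers
        "gate_proj", "up_proj", "down_proj",
        # LM head
        "lm_head",
    ],
    bias="none",
    task_type="CAUSAL_LM",
)
```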
Prompt format:
```
### Instruction:
<prompt>
### Response:
```
or
```
### Instruction:
<prompt>
### Input:
<input>
### Response:
```
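Continuing from the loading sketch above, here is how the first (instruction-only) template might be filled in and passed to generation; the example instruction and sampling parameters are arbitrary:

```python
# Fill the instruction-only prompt template from above.
prompt = (
    "### Instruction:\n"
    "Explain what a LoRA adapter is in one paragraph.\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```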