license: mit
Base Model: Llama 7B. The LoRA is fully merged with Llama 7B, so you do not need to merge it yourself before loading the model.
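Because the adapter is already merged, a plain `transformers` load should work. A minimal sketch, assuming the merged weights are published as a standard Hugging Face repo (the repo id and dtype below are placeholders, not taken from this card):

```python
# Minimal loading sketch -- replace MODEL_ID with the actual repo name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "your-username/llama-deus-v3"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # assumption: fp16 fits a 7B model on a single GPU
    device_map="auto",
)
```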
Llama DEUS v3 was trained on the largest dataset I've used yet, including:
- GPTeacher: General Instruct, Code Instruct, Roleplay Instruct
- My unreleased Roleplay V2 Instruct
- GPT4-LLM Uncensored + Unnatural Instructions
- WizardLM Uncensored
- CamelAI's 20k Biology, 20k Physics, 20k Chemistry, and 50k Math GPT4 datasets
- CodeAlpaca
This model was trained for 4 epochs over 1 day of training. It is a rank-128 LoRA that targets the attention heads, lm_head, and MLP layers.
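For reference, a comparable adapter configuration in PEFT might look like the sketch below. The alpha, dropout, and exact module names are assumptions based on the standard Hugging Face LLaMA implementation, not values stated on this card:

```python
# Hypothetical PEFT config mirroring the described setup: rank-128 LoRA over
# the attention projections, the MLP layers, and lm_head.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,                # rank 128, as described above
    lora_alpha=256,       # assumption: alpha is not stated on the card
    lora_dropout=0.05,    # assumption
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP layers
        "lm_head",                               # LM head
    ],
)
```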
Prompt format:
### Instruction:
<prompt>
### Response:
or
### Instruction:
<prompt>
### Input:
<input>
### Response:
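A sketch of building that prompt and generating with the model loaded in the earlier snippet (the generation settings are illustrative, not recommendations from the card):

```python
# Build the instruction/input prompt exactly as documented above and generate.
def build_prompt(instruction, inp=None):
    if inp:
        return f"### Instruction:\n{instruction}\n\n### Input:\n{inp}\n\n### Response:\n"
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

prompt = build_prompt("Explain what a LoRA adapter is in one paragraph.")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,  # illustrative settings
    temperature=0.7,
    do_sample=True,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```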