---
license: gpl
language:
- en
---

This is the GPTQ 4-bit, groupsize-128 pre-quantized model. For the full model in fp32, visit https://huggingface.co/NousResearch/gpt4-x-vicuna-13b

The base model used was https://huggingface.co/eachadea/vicuna-13b-1.1

Fine-tuned on Teknium's GPTeacher dataset, Teknium's unreleased Roleplay v2 dataset, WizardLM Uncensored, GPT-4-LLM Uncensored, and the Nous Research Instruct Dataset.

Approximately 180k instructions, all generated by GPT-4 and all cleaned of any OpenAI censorship ("As an AI Language Model", etc.).

The base model still has OpenAI censorship. Soon, a new version will be released, trained on cleaned Vicuna data from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltere

Trained on 8 A100-80GB GPUs for 5 epochs following the Alpaca DeepSpeed training code.

Prompt format is Alpaca:

```
### Instruction:

### Response:
```

or

```
### Instruction:

### Input:

### Response:
```

The Nous Research Instruct Dataset will be released soon.

GPTeacher and Roleplay v2 by https://huggingface.co/teknium

WizardLM by https://github.com/nlpxucan

Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin

Compute provided by our project sponsor, https://redmond.ai/