teknium committed
Commit 689c60d
1 Parent(s): aa64f4a

Update README.md

Files changed (1)
  1. README.md +2 -0
README.md CHANGED
@@ -3,6 +3,8 @@ license: mit
 ---
 This is a llama-13B based model that has been converted with GPTQ to a 4-bit quantized model.
 
+Load without the groupsize flag or you may get OOM.
+
 Base Model: GPT4-x-Alpaca full fine tune by Chavinlo -> https://huggingface.co/chavinlo/gpt4-x-alpaca
 LORA fine tune using the Roleplay Instruct from GPT4 generated dataset -> https://github.com/teknium1/GPTeacher/tree/main/Roleplay
 LORA Adapter Only: https://huggingface.co/ZeusLabs/gpt4-x-alpaca-rp-lora - the v2 one -
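For context, the added note refers to omitting the --groupsize flag when loading with a GPTQ-for-LLaMa / text-generation-webui style loader. The sketch below shows the equivalent idea with AutoGPTQ instead, which is not mentioned in the card: group_size=-1 is its "no groupsize" setting. The local path, checkpoint basename, and Alpaca-style prompt are placeholders, not values taken from this repository.

```python
# Minimal sketch only: assumes an AutoGPTQ-style loader rather than the
# GPTQ-for-LLaMa scripts the card's note refers to. Paths and file names
# below are placeholders.
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
from transformers import LlamaTokenizer

model_dir = "path/to/this/checkpoint"  # placeholder local path to the 4-bit files

# group_size=-1 mirrors "load without groupsize": the checkpoint was quantized
# without grouping, so do not pass a group size it was not built with.
quant_config = BaseQuantizeConfig(bits=4, group_size=-1, desc_act=False)

tokenizer = LlamaTokenizer.from_pretrained(model_dir)
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    quantize_config=quant_config,
    model_basename="4bit",        # placeholder checkpoint basename
    use_safetensors=False,        # older GPTQ checkpoints are plain .pt/.bin
    device="cuda:0",
)

# Alpaca-style prompt, assumed from the GPT4-x-Alpaca base model.
prompt = "### Instruction:\nStay in character and greet the user.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```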