Commit f07e7d6 by TheBloke (parent: a0008f7)

Upload new GPTQs with varied parameters

Files changed (1): README.md (+7, -4)
README.md CHANGED
@@ -1,6 +1,11 @@
 ---
+datasets:
+- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
 inference: false
+language:
+- en
 license: other
+model_type: llama
 ---
 
 <!-- header start -->
@@ -38,7 +43,6 @@ A chat between a curious user and an artificial intelligence assistant. The assi
 
 USER: {prompt}
 ASSISTANT:
-
 ```
 
 ## Provided files
@@ -51,8 +55,8 @@ Each separate quant is in a different branch. See below for instructions on fet
 | ------ | ---- | ---------- | -------------------- | --------- | ------------------- | --------- | ----------- |
 | main | 4 | 128 | False | 7.45 GB | True | GPTQ-for-LLaMa | Most compatible option. Good inference speed in AutoGPTQ and GPTQ-for-LLaMa. Lower inference quality than other options. |
 | gptq-4bit-32g-actorder_True | 4 | 32 | True | 8.00 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 32g gives highest possible inference quality, with maximum VRAM usage. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
-| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order androup size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-64g-actorder_True | 4 | 64 | True | 7.51 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 64g uses less VRAM than 32g, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
+| gptq-4bit-128g-actorder_True | 4 | 128 | True | 7.26 GB | True | AutoGPTQ | 4-bit, with Act Order and group size. 128g uses even less VRAM, but with slightly lower accuracy. Poor AutoGPTQ CUDA speed. |
 | gptq-8bit--1g-actorder_True | 8 | None | True | 13.36 GB | False | AutoGPTQ | 8-bit, with Act Order. No group size, to lower VRAM requirements and to improve AutoGPTQ speed. |
 | gptq-8bit-128g-actorder_False | 8 | 128 | False | 13.65 GB | False | AutoGPTQ | 8-bit, with group size 128g for higher inference quality and without Act Order to improve AutoGPTQ speed. |
 
@@ -128,7 +132,6 @@ prompt_template=f'''A chat between a curious user and an artificial intelligence
 
 USER: {prompt}
 ASSISTANT:
-
 '''
 
 print("\n\n*** Generate:")
 