TheBloke committed
Commit e7825ef
1 Parent(s): ca3a92f

Update README.md

Files changed (1)
  1. README.md +22 -7

README.md CHANGED
@@ -42,6 +42,19 @@ This repo contains GPTQ model files for [Mistral AI's Mistral 7B v0.1](https://h

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

<!-- description end -->
<!-- repositories-available start -->
## Repositories available
@@ -70,7 +83,7 @@ Multiple quantisation parameters are provided, to allow you to choose the best o

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

- All recent GPTQ files are made with AutoGPTQ, and all files in non-main branches are made with AutoGPTQ. Files in the `main` branch which were uploaded before August 2023 were made with GPTQ-for-LLaMa.

<details>
<summary>Explanation of GPTQ parameters</summary>
@@ -164,6 +177,10 @@ Note that using Git with HF repos is strongly discouraged. It will be much slowe
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
@@ -187,10 +204,11 @@ It is strongly recommended to use the text-generation-webui one-click-installers

### Install the necessary packages

- Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
- pip3 install transformers optimum
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
 
@@ -251,11 +269,8 @@ print(pipe(prompt_template)[0]['generated_text'])
251
  <!-- README_GPTQ.md-compatibility start -->
252
  ## Compatibility
253
 
254
- The files provided are tested to work with AutoGPTQ, both via Transformers and using AutoGPTQ directly. They should also work with [Occ4m's GPTQ-for-LLaMa fork](https://github.com/0cc4m/KoboldAI).
255
-
256
- [ExLlama](https://github.com/turboderp/exllama) is compatible with Llama models in 4-bit. Please see the Provided Files table above for per-file compatibility.
257
 
258
- [Huggingface Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference) is compatible with all GPTQ models.
259
  <!-- README_GPTQ.md-compatibility end -->
260
 
261
  <!-- footer start -->
 

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

+ ### GPTQs will work in Transformers only - and require Transformers from GitHub
+
+ At the time of writing (September 28th), AutoGPTQ has not yet added support for the new Mistral models.
+
+ These GPTQs were made directly with Transformers, and so can only be loaded via the Transformers interface. They can't be loaded directly from AutoGPTQ.
+
+ In addition, you will need to install Transformers from GitHub, with:
+ ```shell
+ pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
+ ```
+
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
 

Each separate quant is in a different branch. See below for instructions on fetching from different branches.

+ These files were made with Transformers 4.34.0.dev0, from commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79.

<details>
<summary>Explanation of GPTQ parameters</summary>
 
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

+ NOTE: These models haven't been tested in text-generation-webui, but I hope they will work.
+
+ You will need to use **Loader: Transformers**. AutoGPTQ will not work. I don't know about ExLlama - it might work, as this model is so similar to Llama; let me know if it does!
+
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).

It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
 

### Install the necessary packages

+ Requires: Transformers 4.34.0.dev0 from GitHub source, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.

```shell
+ pip3 install optimum
+ pip3 install git+https://github.com/huggingface/transformers.git@72958fcd3c98a7afdc61f953aa58c544ebda2f79
pip3 install auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/ # Use cu117 if on CUDA 11.7
```
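
Since the README now states that these quants can only be loaded through the Transformers interface (not directly via AutoGPTQ), a minimal loading sketch may help. This is a hedged illustration, not part of the commit: the repo id `TheBloke/Mistral-7B-v0.1-GPTQ` and the `device_map="auto"` setting are assumptions for the example.

```python
def load_mistral_gptq(model_id="TheBloke/Mistral-7B-v0.1-GPTQ"):
    """Hypothetical helper: load the GPTQ files via the Transformers
    interface, since AutoGPTQ cannot load them directly at the time of
    writing. The repo id default above is an assumption for illustration."""
    # Import inside the function so the sketch stays importable even
    # before the pinned Transformers commit is installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" spreads layers across available GPUs/CPU via Accelerate.
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model
```

Note that calling `load_mistral_gptq()` downloads several GB of weights and requires the pinned Transformers commit plus Optimum and AutoGPTQ installed as shown above.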

<!-- README_GPTQ.md-compatibility start -->
## Compatibility

+ The files provided are only tested to work with Transformers 4.34.0.dev0, as of commit 72958fcd3c98a7afdc61f953aa58c544ebda2f79.

<!-- README_GPTQ.md-compatibility end -->

<!-- footer start -->