e-valente committed on
Commit 537e09d
1 Parent(s): a8b892f

Update README.md

Files changed (1)
  1. README.md +20 -20
README.md CHANGED
@@ -72,9 +72,9 @@ Here is an incomplete list of clients and libraries that are known to support GGUF
  <!-- repositories-available start -->
  ## Repositories available

- * [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-AWQ)
- * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ)
- * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF)
+ * [AWQ model(s) for GPU inference.](https://huggingface.co/e-valente/Llama-2-7b-Chat-AWQ)
+ * [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/e-valente/Llama-2-7b-Chat-GPTQ)
+ * [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF)
  * [Meta Llama 2's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
  <!-- repositories-available end -->

@@ -119,18 +119,18 @@ Refer to the Provided Files table below to see what files use which methods, and

  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | [llama-2-7b-chat.Q2_K.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q2_K.gguf) | Q2_K | 2 | 2.83 GB | 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
- | [llama-2-7b-chat.Q3_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB | 5.45 GB | very small, high quality loss |
- | [llama-2-7b-chat.Q3_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB | 5.80 GB | very small, high quality loss |
- | [llama-2-7b-chat.Q3_K_L.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB | 6.10 GB | small, substantial quality loss |
- | [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB | 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
- | [llama-2-7b-chat.Q4_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB | 6.36 GB | small, greater quality loss |
- | [llama-2-7b-chat.Q4_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB | 6.58 GB | medium, balanced quality - recommended |
- | [llama-2-7b-chat.Q5_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB | 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
- | [llama-2-7b-chat.Q5_K_S.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB | 7.15 GB | large, low quality loss - recommended |
- | [llama-2-7b-chat.Q5_K_M.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB | 7.28 GB | large, very low quality loss - recommended |
- | [llama-2-7b-chat.Q6_K.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.53 GB | 8.03 GB | very large, extremely low quality loss |
- | [llama-2-7b-chat.Q8_0.gguf](https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB | 9.66 GB | very large, extremely low quality loss - not recommended |
+ | [llama-2-7b-chat.Q2_K.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q2_K.gguf) | Q2_K | 2 | 2.83 GB | 5.33 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [llama-2-7b-chat.Q3_K_S.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB | 5.45 GB | very small, high quality loss |
+ | [llama-2-7b-chat.Q3_K_M.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB | 5.80 GB | very small, high quality loss |
+ | [llama-2-7b-chat.Q3_K_L.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB | 6.10 GB | small, substantial quality loss |
+ | [llama-2-7b-chat.Q4_0.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_0.gguf) | Q4_0 | 4 | 3.83 GB | 6.33 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [llama-2-7b-chat.Q4_K_S.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB | 6.36 GB | small, greater quality loss |
+ | [llama-2-7b-chat.Q4_K_M.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB | 6.58 GB | medium, balanced quality - recommended |
+ | [llama-2-7b-chat.Q5_0.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_0.gguf) | Q5_0 | 5 | 4.65 GB | 7.15 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [llama-2-7b-chat.Q5_K_S.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB | 7.15 GB | large, low quality loss - recommended |
+ | [llama-2-7b-chat.Q5_K_M.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB | 7.28 GB | large, very low quality loss - recommended |
+ | [llama-2-7b-chat.Q6_K.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q6_K.gguf) | Q6_K | 6 | 5.53 GB | 8.03 GB | very large, extremely low quality loss |
+ | [llama-2-7b-chat.Q8_0.gguf](https://huggingface.co/e-valente/Llama-2-7b-Chat-GGUF/blob/main/llama-2-7b-chat.Q8_0.gguf) | Q8_0 | 8 | 7.16 GB | 9.66 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
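(Reading the table: each "Max RAM required" figure is the file size plus a flat 2.50 GB of headroom, e.g. Q4_K_M is 4.08 GB + 2.50 GB = 6.58 GB, so the recommended quants fit in under 8 GB of RAM with no GPU offload.)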
 
@@ -150,7 +150,7 @@ The following clients/libraries will automatically download models for you, prov

  ### In `text-generation-webui`

- Under Download Model, you can enter the model repo: TheBloke/Llama-2-7b-Chat-GGUF and below it, a specific filename to download, such as: llama-2-7b-chat.Q4_K_M.gguf.
+ Under Download Model, you can enter the model repo: e-valente/Llama-2-7b-Chat-GGUF and below it, a specific filename to download, such as: llama-2-7b-chat.Q4_K_M.gguf.

  Then click Download.

@@ -165,7 +165,7 @@ pip3 install huggingface-hub>=0.17.1
  Then you can download any individual model file to the current directory, at high speed, with a command like this:

  ```shell
- huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ huggingface-cli download e-valente/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```

  <details>
@@ -174,7 +174,7 @@ huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gg
  You can also download multiple files at once with a pattern:

  ```shell
- huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ huggingface-cli download e-valente/Llama-2-7b-Chat-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
  ```

  For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
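The two commands above can also be scripted with the `huggingface_hub` Python package the README already installs (`pip3 install huggingface-hub>=0.17.1`). This sketch is not part of the commit; it simply mirrors the single-file and pattern downloads, assuming the e-valente repo and the filenames from the table above.

```python
from huggingface_hub import hf_hub_download, snapshot_download

# Single file, equivalent to the first huggingface-cli command above.
hf_hub_download(
    repo_id="e-valente/Llama-2-7b-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
    local_dir=".",
)

# Every file matching a pattern, equivalent to the --include variant.
snapshot_download(
    repo_id="e-valente/Llama-2-7b-Chat-GGUF",
    allow_patterns=["*Q4_K*gguf"],
    local_dir=".",
)
```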
@@ -188,7 +188,7 @@ pip3 install hf_transfer
  And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

  ```shell
- HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
+ HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download e-valente/Llama-2-7b-Chat-GGUF llama-2-7b-chat.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
  ```

  Windows CLI users: Use `set HF_HUB_ENABLE_HF_TRANSFER=1` before running the download command.
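If you drive downloads from Python rather than the CLI, the same accelerated path can be used by setting the variable before `huggingface_hub` is imported (the flag is read at import time). A minimal sketch, assuming `hf_transfer` is installed as shown above and reusing the same repo and filename:

```python
import os

# Must be set before importing huggingface_hub, since the flag is read at import time.
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import hf_hub_download

hf_hub_download(
    repo_id="e-valente/Llama-2-7b-Chat-GGUF",
    filename="llama-2-7b-chat.Q4_K_M.gguf",
    local_dir=".",
)
```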
@@ -241,7 +241,7 @@ CT_METAL=1 pip install ctransformers>=0.2.24 --no-binary ctransformers
  from ctransformers import AutoModelForCausalLM

  # Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
- llm = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GGUF", model_file="llama-2-7b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
+ llm = AutoModelForCausalLM.from_pretrained("e-valente/Llama-2-7b-Chat-GGUF", model_file="llama-2-7b-chat.Q4_K_M.gguf", model_type="llama", gpu_layers=50)

  print(llm("AI is going to"))
  ```
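ctransformers can also stream tokens instead of returning the whole completion at once. A small sketch extending the snippet above, with the same repo and file assumptions and `stream=True` on the model call:

```python
from ctransformers import AutoModelForCausalLM

# Same model as in the diff; set gpu_layers=0 if no GPU acceleration is available.
llm = AutoModelForCausalLM.from_pretrained(
    "e-valente/Llama-2-7b-Chat-GGUF",
    model_file="llama-2-7b-chat.Q4_K_M.gguf",
    model_type="llama",
    gpu_layers=50,
)

# Print tokens as they are generated rather than waiting for the full string.
for text in llm("AI is going to", stream=True):
    print(text, end="", flush=True)
```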
 