TheBloke committed
Commit 9dea9ab
1 Parent(s): efb0778

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -118,8 +118,8 @@ Refer to the Provided Files table below to see what files use which methods, and
  | [nous-puffin-70b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q4_K_M.gguf) | Q4_K_M | 4 | 41.38 GB| 43.88 GB | medium, balanced quality - recommended |
  | [nous-puffin-70b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q5_K_S.gguf) | Q5_K_S | 5 | 47.46 GB| 49.96 GB | large, low quality loss - recommended |
  | [nous-puffin-70b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Nous-Puffin-70B-GGUF/blob/main/nous-puffin-70b.Q5_K_M.gguf) | Q5_K_M | 5 | 48.75 GB| 51.25 GB | large, very low quality loss - recommended |
- | nous-puffin-70b.Q6_K.bin | q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
- | nous-puffin-70b.Q8_0.bin | q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |
+ | nous-puffin-70b.Q6_K.gguf | q6_K | 6 | 56.82 GB | 59.32 GB | very large, extremely low quality loss |
+ | nous-puffin-70b.Q8_0.gguf | q8_0 | 8 | 73.29 GB | 75.79 GB | very large, extremely low quality loss - not recommended |

  **Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
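
The offloading mentioned in the note works with llama.cpp-style loaders. As a minimal sketch (assuming the `llama-cpp-python` bindings are installed; the layer count and context size below are illustrative values, not recommendations from this README), loading one of the GGUF files from the table with part of the model offloaded to VRAM could look like this:

```python
from llama_cpp import Llama

# Load a quantised GGUF file from the table above.
# n_gpu_layers controls how many transformer layers are offloaded to VRAM;
# the remaining layers stay in system RAM. 40 is an arbitrary example value.
llm = Llama(
    model_path="nous-puffin-70b.Q4_K_M.gguf",
    n_gpu_layers=40,
    n_ctx=4096,
)

# Illustrative prompt only; check the model card for the exact prompt template.
output = llm("Write a short poem about puffins.", max_tokens=64)
print(output["choices"][0]["text"])
```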