This repository contains CPU-optimized GGUF quantizations of the Meta-Llama-3.1-405B-Instruct model.
>> Feel free to paste these all in at once or one at a time.
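If you would rather not paste each aria2c line, the same files can also be fetched in one go with the Hugging Face CLI. This is an optional alternative, not part of the original instructions; it assumes `huggingface_hub` (which provides `huggingface-cli`) is installed, and the `--include` pattern below is just an example targeting the Q4_0_48 splits.

```bash
# Optional alternative to the aria2c commands (assumes: pip install -U "huggingface_hub[cli]").
# The --include pattern matches the Q4_0_48 splits; swap it for another quant's file prefix.
huggingface-cli download nisten/meta-405b-instruct-cpu-optimized-gguf \
  --include "meta-405b-inst-cpu-optimized-q4048-*.gguf" \
  --local-dir .
```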
### Q4_0_48 (CPU Optimized)

Example response to a 20,000-token prompt:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6379683a81c1783a4a2ddba8/DD71wAB7DlQBmTG8wVaWS.png)

```bash
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-optimized-q4048-00001-of-00006.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-optimized-q4048-00001-of-00006.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-optimized-q4048-00002-of-00006.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-optimized-q4048-00002-of-00006.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-optimized-q4048-00003-of-00006.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-optimized-q4048-00003-of-00006.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-optimized-q4048-00004-of-00006.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-optimized-q4048-00004-of-00006.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-optimized-q4048-00005-of-00006.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-optimized-q4048-00005-of-00006.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-optimized-q4048-00006-of-00006.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-optimized-q4048-00006-of-00006.gguf
```
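Once all six splits are in one directory, they can be run directly with llama.cpp. The block below is a minimal illustrative sketch rather than part of the original instructions: it assumes a recent llama.cpp build with the `llama-cli` binary, and the thread count, context size, and prompt are placeholders to adjust for your machine.

```bash
# Minimal run sketch (illustrative): llama.cpp loads the remaining shards
# automatically when -m points at the first split.
# -t = CPU threads (set to your physical core count), -c = context length.
./llama-cli \
  -m meta-405b-inst-cpu-optimized-q4048-00001-of-00006.gguf \
  -t 32 -c 8192 \
  -p "Write a haiku about CPUs."
```
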
### IQ4_XS Version - Fastest for CPU/GPU (Size: ~212 GB)
```bash
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-i1-q4xs-00001-of-00005.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-i1-q4xs-00001-of-00005.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-i1-q4xs-00002-of-00005.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-i1-q4xs-00002-of-00005.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-i1-q4xs-00003-of-00005.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-i1-q4xs-00003-of-00005.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-i1-q4xs-00004-of-00005.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-i1-q4xs-00004-of-00005.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-i1-q4xs-00005-of-00005.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-i1-q4xs-00005-of-00005.gguf
```
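Because this quant also targets GPUs, partial offload is worth a quick sketch. Again, this is a hedged illustration, not from the original README: `-ngl` is llama.cpp's standard flag for the number of layers to offload, and 40 is an arbitrary placeholder to tune to your VRAM.

```bash
# Illustrative partial GPU offload with llama.cpp; layers that do not fit
# on the GPU stay on the CPU.
./llama-cli \
  -m meta-405b-cpu-i1-q4xs-00001-of-00005.gguf \
  -t 32 -ngl 40 \
  -p "Summarize the attention mechanism in two sentences."
```
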
### 1-bit Custom Per-Weight Quantization (Size: ~103 GB)
```bash
aria2c -x 16 -s 16 -k 1M -o meta-405b-1bit-00001-of-00003.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-1bit-00001-of-00003.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-1bit-00002-of-00003.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-1bit-00002-of-00003.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-1bit-00003-of-00003.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-1bit-00003-of-00003.gguf
```
Note: Sizes are approximate and quoted in binary gigabytes (1 GB here means 1024 MiB, i.e. 1 GiB).
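To sanity-check a finished download against these figures, the splits can be totalled locally. This is a generic shell one-liner, not something from the repo; the glob below uses the IQ4_XS file names as an example.

```bash
# Per-file sizes plus a grand total for one set of splits.
du -ch meta-405b-cpu-i1-q4xs-*.gguf
```
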
### Q2K-Q8 Mixed 2-bit/8-bit (custom mix, no imatrix)

I wrote this mix myself; it is the smallest coherent quant I could make WITHOUT an imatrix.

```bash
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-2kmix8k-00001-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-2kmix8k-00001-of-00004.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-2kmix8k-00002-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-2kmix8k-00002-of-00004.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-2kmix8k-00003-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-2kmix8k-00003-of-00004.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-inst-cpu-2kmix8k-00004-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-inst-cpu-2kmix8k-00004-of-00004.gguf
```
### Same as above, but with a higher-quality iMatrix Q2K-Q8 (Size: ~154 GB). USE THIS ONE
```bash
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-imatrix-2k-00001-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-imatrix-2k-00001-of-00004.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-imatrix-2k-00002-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-imatrix-2k-00002-of-00004.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-imatrix-2k-00003-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-imatrix-2k-00003-of-00004.gguf
aria2c -x 16 -s 16 -k 1M -o meta-405b-cpu-imatrix-2k-00004-of-00004.gguf https://huggingface.co/nisten/meta-405b-instruct-cpu-optimized-gguf/resolve/main/meta-405b-cpu-imatrix-2k-00004-of-00004.gguf
```
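For anyone curious how an imatrix quant like this is generally made, the usual llama.cpp recipe is sketched below. This is a generic illustration, not the exact commands or calibration data used for this repo; `calibration.txt` and both GGUF file names are placeholders, and Q2_K stands in for whatever per-tensor mix was actually used.

```bash
# Generic importance-matrix quantization recipe with llama.cpp (illustrative).
# 1) Measure activation statistics on some calibration text.
./llama-imatrix -m meta-405b-instruct-bf16.gguf -f calibration.txt -o imatrix.dat
# 2) Quantize, letting the importance matrix decide where to spend precision.
./llama-quantize --imatrix imatrix.dat meta-405b-instruct-bf16.gguf meta-405b-q2k-imatrix.gguf Q2_K
```
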
### BF16 Version
```bash