qwp4w3hyb committed
Commit 08cc1ea
1 Parent(s): 350d7a1

Create README.md

Files changed (1)
  1. README.md +31 -0
README.md ADDED
@@ -0,0 +1,31 @@
+ ---
+ base_model: alpindale/WizardLM-2-8x22B
+ tags:
+ - wizardlm
+ - microsoft
+ - instruct
+ - finetune
+ - gguf
+ - importance matrix
+ - imatrix
+ model-index:
+ - name: Not-WizardLM-2-8x22B-iMat-GGUF
+   results: []
+ license: apache-2.0
+ ---
+
+ # WizardLM-2-8x22B GGUF quants based on reupload at alpindale/WizardLM-2-8x22B
+
+ ## GGUFs created with an importance matrix (details below)
+
+ This is based on a reupload from an alternate source, as Microsoft deleted the model shortly after release. I will validate checksums once it is re-released to see whether Microsoft made any changes.
+
+ Source Model: [alpindale/WizardLM-2-8x22B](https://huggingface.co/alpindale/WizardLM-2-8x22B)
+
+ Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [5dc9dd7152dedc6046b646855585bd070c91e8c8](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8) (master from 2024-04-09)
+
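+ For reference, the toolchain can be rebuilt at that exact commit; the sketch below shows one way to do so and to produce the f16 GGUF that the imatrix step below starts from (all paths and file names are placeholders, not the ones used for this repo):
+
+ ```sh
+ # Build llama.cpp at the commit used for these quants
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ git checkout 5dc9dd7152dedc6046b646855585bd070c91e8c8
+ make -j
+
+ # Convert the source model to an f16 GGUF (placeholder paths)
+ python convert.py /path/to/WizardLM-2-8x22B --outtype f16 --outfile /out/WizardLM-2-8x22B-f16.gguf
+ ```
+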
+ The imatrix was generated from the f16 GGUF with this command:
+
+ `./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat`
+
+ The calibration data (`groups_merged.txt`) is the dataset from [this llama.cpp discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
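+
+ The resulting imatrix file is then passed to llama.cpp's quantize tool. As a rough illustration (the quant type and output file name below are examples, not necessarily the exact ones uploaded here):
+
+ ```sh
+ # Produce an importance-matrix-weighted quant from the f16 GGUF
+ ./quantize --imatrix $out_path/imat-f16-gmerged.dat \
+     $out_path/WizardLM-2-8x22B-f16.gguf \
+     $out_path/WizardLM-2-8x22B-IQ4_XS.gguf IQ4_XS
+ ```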