qwp4w3hyb committed 48d44e2 (parent: c8a48c1): Create README.md
Files changed (1): README.md (+29 lines)

---
base_model: lucyknada/microsoft_WizardLM-2-7B
tags:
- wizardlm
- microsoft
- instruct
- finetune
- gguf
- importance matrix
- imatrix
model-index:
- name: unofficial-WizardLM-2-7B-iMat-GGUF
  results: []
license: apache-2.0
---

# WizardLM-2-7B based on reupload at lucyknada/microsoft_WizardLM-2-7B - GGUFs created with an importance matrix

This is based on a reupload from an alternate source, as Microsoft deleted the original model shortly after release. I will validate checksums once it is released again.

Source Model: [lucyknada/microsoft_WizardLM-2-7B](https://huggingface.co/lucyknada/microsoft_WizardLM-2-7B)

Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [5dc9dd7152dedc6046b646855585bd070c91e8c8](https://github.com/ggerganov/llama.cpp/commit/5dc9dd7152dedc6046b646855585bd070c91e8c8) (master from 2024-04-09)

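To give a concrete picture of the step above, here is a minimal sketch of how an f16 base GGUF is typically produced from the Hugging Face checkout with the converter shipped in llama.cpp; the directory and file names are placeholders, not the exact paths used for this repo:

```
# Convert the Hugging Face checkout to an f16 GGUF (placeholder paths).
python convert.py ./microsoft_WizardLM-2-7B \
  --outtype f16 \
  --outfile ./WizardLM-2-7B-f16.gguf
```
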
The imatrix was generated from the f16 GGUF via this command:

    ./imatrix -c 512 -m $out_path/$base_quant_name -f $llama_cpp_path/groups_merged.txt -o $out_path/imat-f16-gmerged.dat

Using the calibration dataset (`groups_merged.txt`) from [here](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)
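
For completeness, a rough sketch of how an imatrix like this is typically applied when creating the quants, and how to sanity-check the result; the quant type and file names below are illustrative assumptions, not a listing of the files in this repo:

```
# Quantize the f16 GGUF using the generated importance matrix
# (file names and quant type are placeholders).
./quantize --imatrix imat-f16-gmerged.dat \
  WizardLM-2-7B-f16.gguf WizardLM-2-7B-Q4_K_M.gguf Q4_K_M

# Quick smoke test of the quantized model with llama.cpp's main example.
./main -m WizardLM-2-7B-Q4_K_M.gguf -p "Write a short story about a wizard." -n 128
```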