ZeroWw committed on
Commit
4705c37
1 Parent(s): 35ec7f5

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-1.5B-Instruct.fq8.gguf filter=lfs diff=lfs merge=lfs -text
+ Qwen2.5-1.5B-Instruct.silly.gguf filter=lfs diff=lfs merge=lfs -text
Qwen2.5-1.5B-Instruct.fq8.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d36cbe54bf264f578d1c8b424de40209f3c8cd6ac4e59de070378063ce78153a
+ size 1865360896
Qwen2.5-1.5B-Instruct.silly.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7325c1aed30f0808c7a5e31a977dd00284f8f5a137bcfd38eb19a35823598941
+ size 1865360896
README.md ADDED
@@ -0,0 +1,40 @@
+
+ ---
+ license: mit
+ language:
+ - en
+ pipeline_tag: text-generation
+ ---
+
+ ZeroWw 'SILLY' version.
+ The original model has been quantized (fq8 version),
+ and a percentage of its tensors have
+ been modified by adding some noise.
+
+ Full colab: https://colab.research.google.com/drive/1a7seagBzu5l3k3FL4SFk0YJocl7nsDJw?usp=sharing
+
+ Fast colab: https://colab.research.google.com/drive/1SDD7ox21di_82Y9v68AUoy0PhkxwBVvN?usp=sharing
+
+ Original Reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1ec0s8p/i_made_a_silly_test/
+
+ I created a program to randomize the weights of a model. The program takes two parameters: the percentage of weights to modify, and the maximum deviation, as a percentage of each weight's original value, to apply randomly.
+
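+ (The program itself is not part of this repo; the snippet below is a minimal sketch of the idea, and the function name and use of NumPy are assumptions.)
+
+ ```python
+ import numpy as np
+
+ def add_noise(weights: np.ndarray, modify_pct: float, max_dev_pct: float,
+               rng: np.random.Generator) -> np.ndarray:
+     """Perturb a random subset of weights by a bounded relative amount.
+
+     modify_pct: fraction of weights to modify (0..1).
+     max_dev_pct: maximum deviation as a fraction of each original value (0..1).
+     """
+     w = weights.astype(np.float32)                          # work on an F32 copy
+     mask = rng.random(w.shape) < modify_pct                 # choose which weights to touch
+     dev = rng.uniform(-max_dev_pct, max_dev_pct, w.shape)   # per-weight relative deviation
+     w[mask] += w[mask] * dev[mask]                          # w' = w * (1 + dev)
+     return w
+
+ # e.g. modify 100% of the weights with at most 15% deviation:
+ rng = np.random.default_rng(0)
+ noisy = add_noise(rng.standard_normal((4, 4)), modify_pct=1.0, max_dev_pct=0.15, rng=rng)
+ ```
+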
+ At the end I check the resulting GGUF file for binary differences (a rough sketch of such a check follows the numbers below).
+ In this example I set it to modify 100% of the weights of Mistral 7B Instruct v0.3 with a maximum deviation of 15%.
+
+ Since the deviation is calculated on the F32 weights, it changes once the model is quantized to Q8\_0.
+ So, in the end, I got a file that, compared to the original, has:
+
+ Bytes difference percentage: 73.04%
+
+ Average value divergence: 2.98%
+
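+ (The exact comparison tool is not included here; a rough, hypothetical way to compute the byte-level figure is:)
+
+ ```python
+ def byte_diff_percentage(path_a: str, path_b: str, chunk: int = 1 << 20) -> float:
+     """Percentage of byte positions that differ between two files."""
+     differing = total = 0
+     with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
+         while True:
+             a, b = fa.read(chunk), fb.read(chunk)
+             if not a and not b:
+                 break
+             total += max(len(a), len(b))
+             differing += sum(x != y for x, y in zip(a, b))   # mismatches in the overlap
+             differing += abs(len(a) - len(b))                # trailing bytes count as different
+     return 100.0 * differing / total if total else 0.0
+
+ # e.g.: byte_diff_percentage("model.q8_0.gguf", "model.silly.gguf")
+ ```
+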
+ The cool thing is that, chatting with the model, I see no apparent difference, and it still works as nicely as the original.
+
+ Since I am running everything on CPU, I could not compute perplexity scores or run anything else compute-intensive.
+
+ As a small test, I asked the model a few questions (like the history of the Roman Empire) and then fact-checked its answers using a big model. No errors were detected.
+
+ Update: the whole procedure was created and tested on Colab.
+
+ Created on: Fri Oct 25, 11:11:39