ZeroWw committed on
Commit 487225a
1 Parent(s): 596fffd

Upload folder using huggingface_hub

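The commit message points at the `huggingface_hub` upload flow. A minimal sketch of how such an upload is typically done (the local folder and repo id below are assumptions, not taken from this commit):

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes `huggingface-cli login` (or HF_TOKEN) is already set up

# Upload every file in a local folder as a single commit; files matching the
# LFS rules in .gitattributes are stored via Git LFS automatically.
api.upload_folder(
    folder_path="./gemma-2-2b-it-silly",    # hypothetical local folder
    repo_id="ZeroWw/gemma-2-2b-it-SILLY",   # hypothetical repo id
    commit_message="Upload folder using huggingface_hub",
)
```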
.gitattributes CHANGED
@@ -33,3 +33,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.fq8.gguf filter=lfs diff=lfs merge=lfs -text
+ gemma-2-2b-it.silly.gguf filter=lfs diff=lfs merge=lfs -text
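The two new rules route the GGUF files to Git LFS. A quick way to sanity-check which files the rules catch, treating the patterns as plain globs (an approximation of gitattributes matching):

```python
from fnmatch import fnmatch

# LFS patterns relevant here: the pre-existing generic ones plus the
# two exact filenames added by this commit.
lfs_patterns = [
    "*.zip", "*.zst", "*tfevents*",
    "gemma-2-2b-it.fq8.gguf",
    "gemma-2-2b-it.silly.gguf",
]

for name in ["gemma-2-2b-it.fq8.gguf", "gemma-2-2b-it.silly.gguf", "README.md"]:
    stored = "LFS" if any(fnmatch(name, p) for p in lfs_patterns) else "plain git"
    print(f"{name}: {stored}")
```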
README.md ADDED
@@ -0,0 +1,40 @@
+
+ ---
+ license: mit
+ language:
+ - en
+ pipeline_tag: text-generation
+ ---
+
+ ZeroWw 'SILLY' version.
+ The original model has been quantized (fq8 version)
+ and a percentage of its tensors have
+ been modified by adding some noise.
+
+ Full colab: https://colab.research.google.com/drive/1a7seagBzu5l3k3FL4SFk0YJocl7nsDJw?usp=sharing
+
+ Fast colab: https://colab.research.google.com/drive/1SDD7ox21di_82Y9v68AUoy0PhkxwBVvN?usp=sharing
+
+ Original reddit post: https://www.reddit.com/r/LocalLLaMA/comments/1ec0s8p/i_made_a_silly_test/
+
+ I created a program to randomize the weights of a model. The program has two parameters: the percentage of weights to modify and the maximum percentage of the original value to apply to each weight as random noise.
+
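The script itself isn't included in the card; a minimal numpy sketch of the described randomization for a single tensor (the function name and the uniform noise distribution are assumptions):

```python
import numpy as np

def add_noise(tensor: np.ndarray, pct_weights: float, max_dev: float,
              rng: np.random.Generator) -> np.ndarray:
    """Randomize a fraction of the weights.

    pct_weights: fraction of weights to modify (e.g. 1.0 for 100%).
    max_dev: maximum deviation as a fraction of each original value
             (e.g. 0.15 for +/-15%).
    """
    out = tensor.astype(np.float32).copy()
    mask = rng.random(out.shape) < pct_weights          # which weights to touch
    noise = rng.uniform(-max_dev, max_dev, out.shape)   # per-weight deviation
    out[mask] += out[mask] * noise[mask]                # scaled by original value
    return out

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 4)).astype(np.float32)
w_noisy = add_noise(w, pct_weights=1.0, max_dev=0.15, rng=rng)
print(np.abs((w_noisy - w) / w).max())  # stays <= 0.15
```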
+ At the end I check the resulting GGUF file for binary differences.
+ In this example I set the program to modify 100% of the weights of Mistral 7b Instruct v0.3 by a maximum of 15% deviation.
+
+ Since the deviation is calculated on the F32 weights, it changes again when the model is quantized to Q8_0.
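To see why a 15% F32 perturbation doesn't flip every quantized byte, here is a simplified sketch of Q8_0-style rounding (llama.cpp's real Q8_0 stores 32-value blocks with an f16 scale; this keeps only the rounding step):

```python
import numpy as np

def q8_0_round(block: np.ndarray) -> np.ndarray:
    """Simplified Q8_0: scale a 32-value block so max |x| maps to 127, round to int8."""
    d = np.abs(block).max() / 127.0
    return np.round(block / d).astype(np.int8)

rng = np.random.default_rng(0)
block = rng.standard_normal(32).astype(np.float32)
noisy = block * (1 + rng.uniform(-0.15, 0.15, 32).astype(np.float32))

q, qn = q8_0_round(block), q8_0_round(noisy)
# Small values move by less than half a quantization step and survive rounding,
# so the fraction of changed int8 values is usually below 100%.
print(f"int8 values changed: {(q != qn).mean():.0%}")
```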
+
+ So, in the end, I got a file that, compared to the original, has:
+
+ Bytes Difference percentage: 73.04%
+
+ Average value divergence: 2.98%
+
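The comparison code isn't shown either; a minimal sketch of how such byte-level statistics could be computed (the exact divergence metric used in the post is not specified, so the one below is a guess):

```python
def byte_diff_stats(path_a: str, path_b: str) -> tuple[float, float]:
    """Compare two same-size files byte by byte (loads both fully; fine as a sketch)."""
    with open(path_a, "rb") as fa, open(path_b, "rb") as fb:
        a, b = fa.read(), fb.read()
    assert len(a) == len(b), "files must be the same size"
    diff_count = 0
    divergence = 0.0
    for x, y in zip(a, b):
        if x != y:
            diff_count += 1
            divergence += abs(x - y) / 255  # normalized per-byte distance
    pct_diff = 100 * diff_count / len(a)
    avg_div = 100 * divergence / max(diff_count, 1)
    return pct_diff, avg_div

pct, div = byte_diff_stats("gemma-2-2b-it.fq8.gguf", "gemma-2-2b-it.silly.gguf")
print(f"Bytes Difference percentage: {pct:.2f}%")
print(f"Average value divergence: {div:.2f}%")
```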
+ The cool thing is that, chatting with the model, I see no apparent difference, and the model still works as nicely as the original.
+
+ Since I am running everything on CPU, I could not run perplexity scores or anything compute-intensive.
+
+ As a small test, I asked the model a few questions (like the history of the Roman Empire) and then fact-checked its answers using a big model. No errors were detected.
+
+ Update: the whole procedure was created and tested on Colab.
+
+ Created on: Thu Aug 01, 11:11:31
gemma-2-2b-it.fq8.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ad7ed5d441e9b05aaa973d2f4df841e00ff6574b6086151ca7f64d786650ac72
+ size 3337455200
gemma-2-2b-it.silly.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87fe4d87edb0d61765091330d5f6e002306a87f4ed1ffa2784d52b515da38aa7
+ size 3337455200
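Both GGUF entries are Git LFS pointer files: Git itself stores only the `version`/`oid`/`size` triplet, while the payload lives in LFS storage. A minimal sketch for checking a downloaded file against the pointer above (the local path is an assumption):

```python
import hashlib
import os

# Hypothetical local path; oid and size copied from the pointer file above.
path = "gemma-2-2b-it.silly.gguf"
expected_oid = "87fe4d87edb0d61765091330d5f6e002306a87f4ed1ffa2784d52b515da38aa7"
expected_size = 3337455200

assert os.path.getsize(path) == expected_size, "size mismatch"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
        h.update(chunk)
assert h.hexdigest() == expected_oid, "sha256 mismatch"
print("pointer matches downloaded file")
```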