---
tags:
- merge
license: other
---

# QuartetAnemoi-70B-t0.0001

A sequential merge using a custom algorithm (NearSwap) of:
- [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
- [Sao10K/WinterGoddess-1.4x-70B-L2](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2)
- [Aurora-Nights-70B-v1.0](https://huggingface.co/sophosympatheia/Aurora-Nights-70B-v1.0)
- [Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1)

<br/>

In our testing, this model comes across as much more of a storyteller than a coder, as might be expected. We were impressed that, unlike most models, it rarely ended stories with clichés such as "In the end", "And so", or "beacon of hope".

<br/>
<br/>

# NearSwap Algorithm

NearSwap retains most of the weights of the base model (Miqu), but when a base weight is close to the corresponding weight in the secondary model, it is interpolated toward the secondary value. A parameter *t* sets the similarity threshold: when the absolute difference between the two weights is below *t*, the secondary model's weight is used outright; above *t*, the interpolation weight falls off in proportion to *t* divided by the difference.
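
As a minimal scalar sketch of this rule (illustrative code with a hypothetical function name, not the actual merge script):

```python
def nearswap_weight(t, v0, v1):
    # Interpolation factor toward the secondary value v1:
    # 1.0 (full swap) when |v0 - v1| < t, then decaying as t / |v0 - v1|.
    diff = abs(v0 - v1)
    lweight = min(1.0, t / diff) if diff > 0 else 1.0
    return (1 - lweight) * v0 + lweight * v1

# |0.50000 - 0.50005| = 0.00005 < t, so the secondary value wins outright
print(nearswap_weight(0.0001, 0.50000, 0.50005))  # -> 0.50005
```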

This version of the model uses *t* = 0.0001. At this *t*, about 0.8% of weights are fully switched to the secondary model during each pass. Model quality degrades rapidly above *t* = 0.0025:

- *t* = 0.0001 (~0.8% full swap): this model
- *t* = 0.0003 (~2% full swap)
- *t* = 0.001 (~10% full swap): [BoreanGale-70B](https://huggingface.co/alchemonaut/BoreanGale-70B)
- *t* = 0.0025 (~18% full swap): generates one paragraph acceptably, then reverts to garbage
- *t* = 0.005 (~35% full swap): garbage; semi-related word lists
- *t* = 0.01 (~55% full swap): garbage; pseudorandom token output

For QuartetAnemoi-70B-t0.0001, the three secondary models were each merged sequentially with *t* = 0.0001.
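
That sequential pass structure can be sketched as follows (hypothetical helper names; the real merge iterates over every tensor in the checkpoints, not a single array):

```python
import numpy as np

def nearswap(t, v0, v1):
    # Blend the secondary tensor v1 into the base tensor v0 where weights are close.
    with np.errstate(divide="ignore", invalid="ignore"):
        lweight = t / np.absolute(v0 - v1)
    lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
    lweight = np.clip(lweight, 0.0, 1.0)
    return (1 - lweight) * v0 + lweight * v1

def sequential_merge(base, secondaries, t=0.0001):
    # Fold each secondary model into the running result, one NearSwap pass each.
    merged = base
    for secondary in secondaries:
        merged = nearswap(t, merged, secondary)
    return merged
```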

NearSwap implementation:
```python
t: Union[float, np.ndarray],
v0: Union[np.ndarray, torch.Tensor],
v1: Union[np.ndarray, torch.Tensor],
...
lweight = numpy.absolute(v0 - v1)   # per-weight distance to the secondary model
lweight = t / lweight               # interpolation strength; large when weights are close
lweight = numpy.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)  # identical weights -> full swap
numpy.clip(lweight, a_min=0.0, a_max=1.0, out=lweight)  # cap at a full swap
res = lerp(lweight, v0, v1)         # lweight == 1.0 selects the secondary value v1
```
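
As a rough sanity check of the full-swap percentages above, the clipped weight can be inspected directly. This uses synthetic random tensors, not real model weights; with this particular noise scale the fraction happens to land near the ~0.8% figure for *t* = 0.0001:

```python
import numpy as np

rng = np.random.default_rng(0)
v0 = rng.standard_normal(1_000_000)              # stand-in base weights
v1 = v0 + 0.01 * rng.standard_normal(1_000_000)  # nearby secondary weights

t = 0.0001
with np.errstate(divide="ignore"):
    lweight = t / np.absolute(v0 - v1)
lweight = np.nan_to_num(lweight, nan=1.0, posinf=1.0, neginf=1.0)
lweight = np.clip(lweight, 0.0, 1.0)

full_swap = float((lweight >= 1.0).mean())  # fraction fully switched to v1
print(f"{full_swap:.2%} of weights fully swapped")
```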
<br/>
<br/>

# License and Use

Since the ultimate origin of Miqu is at this time unknown beyond speculation, this model is for noncommercial research use only.

<br/>
<br/>