llmixer committed
Commit 7f9e2cd
1 Parent(s): 6d18a6f

Update README.md

Files changed (1): README.md +50 -0
README.md CHANGED
@@ -29,6 +29,56 @@ Vicuna and Alpaca.
 # Merge process
 The models used in the merge are [Xwin-LM-70b-v0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), [Euryale-1.3-70b](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [Platypus2-70b-instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct) and [WinterGoddess-1.4x-70b](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).
 
+ Merge configuration:
+ ```
+ slices:
+   - sources:
+       - model: Xwin-LM/Xwin-LM-70B-V0.1
+         layer_range: [0,12]
+   - sources:
+       - model: Sao10K/Euryale-1.3-L2-70B
+         layer_range: [9,14]
+   - sources:
+       - model: Xwin-LM/Xwin-LM-70B-V0.1
+         layer_range: [12,62]
+   - sources:
+       - model: Sao10K/Euryale-1.3-L2-70B
+         layer_range: [54,71]
+   - sources:
+       - model: Xwin-LM/Xwin-LM-70B-V0.1
+         layer_range: [62,80]
+ merge_method: passthrough
+ dtype: float16
+ ---
+ slices:
+   - sources:
+       - model: garage-bAInd/Platypus2-70B-instruct
+         layer_range: [0,12]
+   - sources:
+       - model: Sao10K/WinterGoddess-1.4x-70B-L2
+         layer_range: [9,14]
+   - sources:
+       - model: garage-bAInd/Platypus2-70B-instruct
+         layer_range: [12,62]
+   - sources:
+       - model: Sao10K/WinterGoddess-1.4x-70B-L2
+         layer_range: [54,71]
+   - sources:
+       - model: garage-bAInd/Platypus2-70B-instruct
+         layer_range: [62,80]
+ merge_method: passthrough
+ dtype: float16
+ ---
+ models:
+   - model: llmixer/BigWeave-v8-90b
+     parameters:
+       weight: 0.5
+       density: 0.5
+ merge_method: dare_ties
+ base_model: llmixer/BigWeave-v6-90b
+ dtype: float16
+ ```
+
 # Acknowledgements
 [@Xwin-LM](https://huggingface.co/Xwin-LM) For creating Xwin
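
For context on the layer arithmetic in the config above: each passthrough stage stitches overlapping slices of 80-layer Llama-2-70b models into a single taller stack. A minimal sanity-check sketch in Python; the slice list is copied from the first stage, and treating `layer_range` as half-open `[start, end)` is an assumption about mergekit's semantics:

```
# Tally the layers the passthrough slices contribute.
# Assumes layer_range is half-open [start, end) -- an assumption,
# not something the config itself states.
slices = [(0, 12), (9, 14), (12, 62), (54, 71), (62, 80)]

total = sum(end - start for start, end in slices)
print(total)  # 102 layers, up from 80 in the source 70b models

# Transformer layers dominate the parameter count, so 70B * 102/80 is
# roughly 89B parameters, consistent with the "90b" in the model names.
```

The third YAML document then combines the two resulting ~90b stacks: roughly speaking, dare_ties keeps about half of BigWeave-v8-90b's delta parameters (`density: 0.5`) and blends them into the BigWeave-v6-90b base at half strength (`weight: 0.5`).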