grimjim committed
Commit 8c82de2
1 Parent(s): dd984f3

Update README.md

Files changed (1)
  1. README.md +55 -55
README.md CHANGED
@@ -1,55 +1,55 @@
- ---
- base_model:
- - princeton-nlp/gemma-2-9b-it-SimPO
- - HODACHI/EZO-Common-9B-gemma-2-it
- library_name: transformers
- tags:
- - mergekit
- - merge
- license: gemma
- pipeline_tag: text-generation
- ---
- # Kitsunebi-v1-Gemma2-8k-9B
-
- This repo contains a merge of pre-trained Gemma 2 9B Instruct language models created using [mergekit](https://github.com/cg123/mergekit).
-
- None of the components of this merge were trained for roleplay nor intended for it. Despite this, the resulting model can be used effectively for that function. The virtue of this model lies in its coherence, as opposed to textual richness.
-
- This project utilizes HODACHI/EZO-Common-9B-gemma-2-it, a model based on gemma-2 and fine-tuned by Axcxept co., ltd. Its primary goal was to perform well in Japanese language tasks. Model training leveraged context-based synthesized instruction pre-training data for supervised multitask pre-training [(abstract)](https://arxiv.org/abs/2406.14491).
-
- We also used princeton-nlp/gemma-2-9b-it-SimPO, a demonstration of Simple Preference Optimization [(abstract)][https://arxiv.org/abs/2405.14734].
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
- * [HODACHI/EZO-Common-9B-gemma-2-it](https://huggingface.co/HODACHI/EZO-Common-9B-gemma-2-it)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
-   - sources:
-       - model: princeton-nlp/gemma-2-9b-it-SimPO
-         layer_range: [0, 42]
-       - model: HODACHI/EZO-Common-9B-gemma-2-it
-         layer_range: [0, 42]
- merge_method: slerp
- base_model: HODACHI/EZO-Common-9B-gemma-2-it
- parameters:
-   t:
-     - filter: self_attn
-       value: [0, 0.5, 0.3, 0.7, 1]
-     - filter: mlp
-       value: [1, 0.5, 0.7, 0.3, 0]
-     - value: 0.5
- dtype: bfloat16
-
- ```
 
+ ---
+ base_model:
+ - princeton-nlp/gemma-2-9b-it-SimPO
+ - HODACHI/EZO-Common-9B-gemma-2-it
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: gemma
+ pipeline_tag: text-generation
+ ---
+ # Kitsunebi-v1-Gemma2-8k-9B
+
+ This repo contains a merge of pre-trained Gemma 2 9B Instruct language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ None of the components of this merge were trained for roleplay nor intended for it. Despite this, the resulting model can be used effectively for that function. The virtue of this model lies in its coherence, as opposed to textual richness.
+
+ This project utilizes HODACHI/EZO-Common-9B-gemma-2-it, a model based on gemma-2 and fine-tuned by Axcxept co., ltd. Its primary goal was to perform well in Japanese language tasks. Model training leveraged context-based synthesized instruction pre-training data for supervised multitask pre-training [(abstract)](https://arxiv.org/abs/2406.14491).
+
+ We also used princeton-nlp/gemma-2-9b-it-SimPO, a demonstration of Simple Preference Optimization [(abstract)](https://arxiv.org/abs/2405.14734).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [princeton-nlp/gemma-2-9b-it-SimPO](https://huggingface.co/princeton-nlp/gemma-2-9b-it-SimPO)
+ * [HODACHI/EZO-Common-9B-gemma-2-it](https://huggingface.co/HODACHI/EZO-Common-9B-gemma-2-it)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+   - sources:
+       - model: princeton-nlp/gemma-2-9b-it-SimPO
+         layer_range: [0, 42]
+       - model: HODACHI/EZO-Common-9B-gemma-2-it
+         layer_range: [0, 42]
+ merge_method: slerp
+ base_model: HODACHI/EZO-Common-9B-gemma-2-it
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5
+ dtype: bfloat16
+
+ ```
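
A note on the merge method for readers unfamiliar with SLERP: spherical linear interpolation blends two weight tensors along the arc between them rather than along a straight line, which tends to preserve the geometry of the weights better than plain averaging. The sketch below is a minimal conceptual illustration of that idea only, not mergekit's implementation; the tensors and names are hypothetical stand-ins.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two same-shaped weight tensors.

    t=0 returns `a`, t=1 returns `b`; intermediate values follow the arc
    between the two flattened tensors instead of a straight line.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    # Angle between the two weight vectors, measured on their unit directions.
    dot = torch.dot(a_flat / (a_flat.norm() + eps), b_flat / (b_flat.norm() + eps))
    omega = torch.acos(torch.clamp(dot, -1.0, 1.0))
    if omega.abs() < 1e-4:
        # Nearly parallel vectors: fall back to ordinary linear interpolation.
        return (1.0 - t) * a + t * b
    sin_omega = torch.sin(omega)
    coef_a = torch.sin((1.0 - t) * omega) / sin_omega
    coef_b = torch.sin(t * omega) / sin_omega
    return (coef_a * a_flat + coef_b * b_flat).reshape(a.shape).to(a.dtype)

# Hypothetical example: blend one pair of matching tensors halfway (t = 0.5).
w_ezo = torch.randn(16, 16)    # stand-in for an EZO-Common-9B-gemma-2-it weight
w_simpo = torch.randn(16, 16)  # stand-in for a gemma-2-9b-it-SimPO weight
merged = slerp(0.5, w_ezo, w_simpo)
print(merged.shape)  # torch.Size([16, 16])
```

Per the configuration above, the merge does not use a single constant t: self-attention tensors follow the per-layer-group schedule [0, 0.5, 0.3, 0.7, 1], MLP tensors follow the reverse schedule [1, 0.5, 0.7, 0.3, 0], and all remaining tensors use 0.5, with HODACHI/EZO-Common-9B-gemma-2-it serving as the base model.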