Commit bb689f5 by louisbrulenaudet (1 parent: 15db1e9)

Update README.md

Files changed (1): README.md (+81 -2)

README.md (after this commit):

---
tags:
- WizardLM/WizardMath-7B-V1.1
- cognitivecomputations/WestLake-7B-v2-laser
- CultriX/NeuralTrix-7B-dpo
- chemistry
- biology
- math
base_model:
- louisbrulenaudet/Pearl-7B-slerp
- WizardLM/WizardMath-7B-V1.1
- cognitivecomputations/WestLake-7B-v2-laser
- CultriX/NeuralTrix-7B-dpo
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: Pearl-7B-0211-ties
  results:
  - task:
      type: text-generation
    metrics:
    - name: Average
      type: Average
      value: 74.66
    - name: ARC
      type: ARC
      value: 71.08
    - name: GSM8K
      type: GSM8K
      value: 69.98
    - name: Winogrande
      type: Winogrande
      value: 83.98
    - name: TruthfulQA
      type: TruthfulQA
      value: 70.47
    - name: HellaSwag
      type: HellaSwag
      value: 88.63
    source:
      name: Open LLM Leaderboard
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
---
<center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>

# Pearl-7B-0211-ties, an extraordinary 7B model

Pearl-7B-0211-ties is a merge of the following models:
* [louisbrulenaudet/Pearl-7B-slerp](https://huggingface.co/louisbrulenaudet/Pearl-7B-slerp)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)
* [cognitivecomputations/WestLake-7B-v2-laser](https://huggingface.co/cognitivecomputations/WestLake-7B-v2-laser)
* [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo)

## Evaluation

Evaluation was performed using the Hugging Face Open LLM Leaderboard.

| Model | Average | ARC | HellaSwag | MMLU | TruthfulQA | Winogrande | GSM8K | #Params (B) |
|--------------------------------------------------|---------|-------|-----------|-------|------------|------------|-------|--------------|
| **louisbrulenaudet/Pearl-34B-ties** | **75.48** | 70.99 | 84.83 | **76.63** | 70.32 | 82.64 | 67.48 | 34.39 |
| **louisbrulenaudet/Pearl-7B-0211-ties** | **75.11** | **71.42** | **88.86** | 63.91 | **71.46** | **84.37** | 70.66 | 7.24 |
| NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO | 73.35 | 71.08 | 87.29 | 72.17 | 54.83 | 83.11 | 71.65 | 46.7 |
| argilla/notus-8x7b-experiment | 73.18 | 70.99 | 87.73 | 71.33 | 65.79 | 81.61 | 61.64 | 46.7 |
| **louisbrulenaudet/Pearl-7B-slerp** | 72.75 | 68.00 | 87.16 | 64.04 | 62.35 | 81.29 | **73.62** | 7.24 |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.7 | 70.14 | 87.55 | 71.4 | 64.98 | 81.06 | 61.11 | 46.7 |
| microsoft/Orca-2-13b | 61.98 | 60.92 | 79.85 | 60.3 | 56.42 | 76.56 | 37.83 | 13 |
| microsoft/phi-2 | 61.33 | 61.09 | 75.11 | 58.11 | 44.47 | 74.35 | 54.81 | 2.78 |

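The leaderboard runs these benchmarks through the EleutherAI lm-evaluation-harness. As a minimal sketch of reproducing one score locally (the harness version, few-shot count, and dtype below are assumptions; the leaderboard pins its own settings, so local numbers will not match the table exactly):

```python
# Sketch: score the model on ARC with lm-evaluation-harness
# (pip install lm-eval). Settings here are assumptions, not the
# leaderboard's pinned configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=louisbrulenaudet/Pearl-7B-0211-ties,dtype=float16",
    tasks=["arc_challenge"],  # the leaderboard evaluates ARC 25-shot
    num_fewshot=25,
    batch_size="auto",
)
print(results["results"]["arc_challenge"])
```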

### TIES merging

TIES-Merging is a method for efficiently merging multiple task-specific models into a single multitask model. It addresses two primary challenges that arise when merging models.

The first challenge is redundancy in model parameters. TIES-Merging identifies and eliminates redundant parameters within each task-specific model by focusing on the changes made during fine-tuning, retaining only the top-k% most significant changes and discarding the rest.
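
A minimal sketch of this trim step in PyTorch, acting on a single task vector (the fine-tuned weights minus the base weights); this is an illustration, not the mergekit implementation used for this model:

```python
# Trim: keep the top-`density` fraction of a task vector's entries
# by magnitude and zero out the rest.
import torch

def trim(task_vector: torch.Tensor, density: float = 0.2) -> torch.Tensor:
    magnitudes = task_vector.abs().flatten()
    k = max(1, int(density * magnitudes.numel()))
    # The (n - k + 1)-th smallest magnitude is the cutoff for the top k.
    cutoff = magnitudes.kthvalue(magnitudes.numel() - k + 1).values
    return torch.where(task_vector.abs() >= cutoff, task_vector, torch.zeros_like(task_vector))
```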

The second challenge is sign conflict: the signs of a given parameter's updates can disagree across models. TIES-Merging resolves these conflicts by electing a unified sign vector that represents the dominant direction of change across all models.
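
Continuing the same illustrative sketch, sign election reduces the trimmed task vectors to one sign per parameter, the sign of their sum, which is the direction with the greater cumulative magnitude:

```python
# Elect Sign: the elected sign of each parameter is the sign of the
# summed trimmed task vectors across all models.
import torch

def elect_sign(task_vectors: list[torch.Tensor]) -> torch.Tensor:
    return torch.sign(torch.stack(task_vectors).sum(dim=0))
```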

The TIES-Merging process therefore consists of three steps, sketched end to end after this list:

- Trim: reduces redundancy in each task-specific model by retaining only a fraction of the most significant parameters (the density parameter) and resetting the rest to zero.
- Elect Sign: resolves sign conflicts across models by electing a unified sign vector based on the dominant direction (positive or negative) in terms of cumulative magnitude.
- Disjoint Merge: averages the parameter values that align with the unified sign vector, excluding zeroed values.
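
Putting the three steps together for a single weight tensor, reusing `trim` and `elect_sign` from the sketches above (again an illustration of the algorithm, not the code that produced this model):

```python
# TIES-Merging, end to end, for one weight tensor.
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor], density: float = 0.2) -> torch.Tensor:
    # 1. Trim each model's task vector (fine-tuned minus base weights).
    task_vectors = [trim(ft - base, density) for ft in finetuned]
    # 2. Elect a per-parameter sign vector.
    sign = elect_sign(task_vectors)
    # 3. Disjoint Merge: average only the entries that agree with the
    #    elected sign, ignoring zeroed entries.
    stacked = torch.stack(task_vectors)
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```

Applied tensor by tensor across the checkpoints, this yields the merged model's weights.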

## Configuration

```yaml

@@ -76,4 +138,21 @@ pipeline = transformers.pipeline(

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
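
The merge configuration itself is not shown in this view. For illustration only, a mergekit TIES configuration generally has the following shape; every model name, density, and weight here is a placeholder, not the actual Pearl-7B-0211-ties setting:

```yaml
# Illustrative mergekit TIES config; all values are placeholders.
models:
  - model: mistralai/Mistral-7B-v0.1
    # the base model needs no parameters
  - model: louisbrulenaudet/Pearl-7B-slerp
    parameters:
      density: 0.5   # fraction of each task vector kept by the trim step
      weight: 0.4    # contribution of this model's task vector
  - model: WizardLM/WizardMath-7B-V1.1
    parameters:
      density: 0.5
      weight: 0.3
merge_method: ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: true
dtype: bfloat16
```

A configuration like this is typically rendered into a merged checkpoint with mergekit's `mergekit-yaml` command.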

## Citing & Authors

If you use this model in your research, please cite it using the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
  author = {Louis Brulé Naudet},
  title = {Pearl-7B-0211-ties, an extraordinary 7B model},
  year = {2023},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-7B-0211-ties}},
}
```

## Feedback

If you have any feedback, please reach out at [[email protected]](mailto:[email protected]).