---
language:
- en
---

This model is intended to be a strong base suitable for downstream fine-tuning on a variety of tasks. Based on our internal evaluations, we believe it's one of the strongest models for most downstream tasks. You can read more about our development and evaluation process [here](https://openpipe.ai/blog/mistral-7b-fine-tune-optimized).
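As a minimal quick-start sketch, the merged weights can be loaded as a starting point for fine-tuning with Hugging Face Transformers (the repo ID below is a placeholder; substitute this model's actual path):

```python
# Minimal sketch: load the merged model as a base for downstream fine-tuning.
# "your-org/this-model" is a placeholder -- substitute this repo's actual ID.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/this-model"  # placeholder repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the merge below was performed in bfloat16
    device_map="auto",           # requires `accelerate` to be installed
)
```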

---

[Mergekit](https://github.com/cg123/mergekit) config used to create this model:
```yaml
slices:
  - sources:
      - model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
        layer_range: [0, 32]
      - model: Q-bert/MetaMath-Cybertron-Starling
        layer_range: [0, 32]
merge_method: slerp
base_model: mistralai/Mistral-7B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
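To reproduce the merge, a config like this can typically be fed to mergekit's CLI (e.g. `mergekit-yaml config.yml ./merged-model`; check the mergekit README for current usage). For the curious, `merge_method: slerp` spherically interpolates each pair of tensors rather than averaging them linearly, and each `t` list is ramped across the 32 layers, with `t=0` keeping the first model's weights and `t=1` the second's. A rough illustrative sketch of the per-tensor operation (not mergekit's actual implementation) is:

```python
# Illustrative sketch of spherical linear interpolation (slerp) between two
# weight tensors; this is NOT mergekit's actual implementation.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate from `a` (t=0) to `b` (t=1) along the arc between them,
    treating each tensor as one flattened vector."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:  # nearly parallel: fall back to linear interpolation
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```

So with `value: [0, 0.5, 0.3, 0.7, 1]` on `self_attn`, early layers keep attention weights close to OpenHermes-2.5-neural-chat-v3-3-Slerp while later layers lean toward MetaMath-Cybertron-Starling, and the MLP schedule runs in the opposite direction.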

---

*Update*: It appears that https://huggingface.co/Weyaxi/Seraph-7B was merged from the same base models using the same [mergekit](https://github.com/cg123/mergekit) defaults as this model. So major credit goes to @Weyaxi, both for creating one of the base merges this model was merged from and for being the first to perform this exact merge!