lodrick-the-lafted committed
Commit 3701aee
1 Parent(s): f3a3e21

Update README.md

Files changed (1): README.md (+6, -34)
README.md CHANGED

````diff
@@ -1,42 +1,14 @@
 ---
-base_model: []
+license: apache-2.0
+base_model:
+- anthracite-org/magnum-v2-123b
+- mistralai/Mistral-Large-Instruct-2407
 library_name: transformers
 tags:
 - mergekit
 - merge
-
 ---
-# magstral-123b
-
-This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
-## Merge Details
-### Merge Method
-
-This model was merged using the SLERP merge method.
-
-### Models Merged
-
-The following models were included in the merge:
-* mistral-large
-* magnum-v2-123b
-
-### Configuration
-
-The following YAML configuration was used to produce this model:
 
-```yaml
-slices:
-- sources:
-  - model: mistral-large
-    layer_range: [0, 88]
-  - model: magnum-v2-123b
-    layer_range: [0, 88]
-merge_method: slerp
-base_model: mistral-large
-parameters:
-  t:
-    - value: 0.5
-dtype: bfloat16
+How much introduction do you need? You know what it is. If you want something that's closer to regular-flavor Mistral Large, here you go.
 
-```
+A basic af 50/50 slerp merge of [anthracite-org/magnum-v2-123b](https://huggingface.co/anthracite-org/magnum-v2-123b) with [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407)
````
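For context on the recipe being described: the removed card documents a SLERP merge of the two models across all 88 layers at `t: 0.5`. SLERP (spherical linear interpolation) blends two weight vectors along the great-circle arc between them rather than along a straight line, which tends to preserve the magnitude of the result better than plain averaging. The following is a minimal NumPy sketch of the interpolation itself, for illustration only; it is not mergekit's actual implementation:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two flattened weight vectors.

    t=0 returns a, t=1 returns b; t=0.5 is the midpoint on the arc
    (the 50/50 blend used in this merge).
    """
    a_norm = a / (np.linalg.norm(a) + eps)
    b_norm = b / (np.linalg.norm(b) + eps)
    # Angle between the two (normalized) weight vectors.
    dot = np.clip(np.dot(a_norm, b_norm), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Vectors are nearly parallel: fall back to linear interpolation.
        return (1.0 - t) * a + t * b
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * a \
         + (np.sin(t * omega) / sin_omega) * b
```

In practice mergekit applies this kind of interpolation per weight tensor; a YAML config like the one removed above is typically run with mergekit's `mergekit-yaml` command line tool.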