EryriLabs committed on
Commit 6277c2c
1 Parent(s): 55ef9ac

Update README.md

Files changed (1)
README.md: +5 -7
README.md CHANGED
@@ -13,9 +13,7 @@ tags:
  
  </figure>
  
- This is a merge of pre-trained language models abacusai_Llama-3-Smaug-8B and cognitivecomputations_dolphin-2.9-llama3-8b created using [mergekit](https://github.com/cg123/mergekit).
-
-
+ This is a merge of pre-trained language models abacusai/Llama-3-Smaug-8B and cognitivecomputations/dolphin-2.9-llama3-8b created using [mergekit](https://github.com/cg123/mergekit).
  
  
  ## Merge Details
@@ -26,7 +24,7 @@ This model was merged using the SLERP merge method.
  ### Models Merged
  
  The following models were included in the merge:
- * abacusai_Llama-3-Smaug-8B
+ * abacusai/Llama-3-Smaug-8B
  * cognitivecomputations_dolphin-2.9-llama3-8b
  
  ### Configuration
@@ -36,12 +34,12 @@ The following YAML configuration was used to produce this model:
  ```yaml
  slices:
    - sources:
-       - model: cognitivecomputations_dolphin-2.9-llama3-8b
+       - model: cognitivecomputations/dolphin-2.9-llama3-8b
          layer_range: [0, 32]
-       - model: abacusai_Llama-3-Smaug-8B
+       - model: abacusai/Llama-3-Smaug-8B
          layer_range: [0, 32]
  merge_method: slerp
- base_model: cognitivecomputations_dolphin-2.9-llama3-8b
+ base_model: cognitivecomputations/dolphin-2.9-llama3-8b
  parameters:
    t:
      - filter: self_attn
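
For context on the `merge_method: slerp` line in the config above, the sketch below illustrates what spherical linear interpolation (SLERP) does to a pair of weight tensors. This is a minimal NumPy illustration, not mergekit's implementation; the tensor names, sizes, and the `t=0.5` value are assumptions made for the example. In practice a config like this is typically handed to mergekit's `mergekit-yaml` entry point to produce the merged checkpoint.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two flattened weight tensors."""
    # Measure the angle between the two parameter vectors using normalized copies.
    v0_unit = v0 / (np.linalg.norm(v0) + eps)
    v1_unit = v1 / (np.linalg.norm(v1) + eps)
    dot = float(np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0))

    # Nearly colinear vectors: fall back to plain linear interpolation.
    if abs(dot) > 0.9995:
        return (1.0 - t) * v0 + t * v1

    theta = np.arccos(dot)          # angle between the two parameter vectors
    sin_theta = np.sin(theta)
    w0 = np.sin((1.0 - t) * theta) / sin_theta
    w1 = np.sin(t * theta) / sin_theta
    return w0 * v0 + w1 * v1

# Illustrative only: random stand-ins for one matching tensor from each source model.
w_dolphin = np.random.randn(4096)  # stand-in for a dolphin-2.9-llama3-8b tensor
w_smaug = np.random.randn(4096)    # stand-in for the matching Llama-3-Smaug-8B tensor
merged = slerp(0.5, w_dolphin, w_smaug)  # t=0.5 weights both models equally
print(merged.shape)
```

In the YAML, the `t` values under `parameters` set this interpolation factor, and entries such as `filter: self_attn` let it vary by tensor type, which is why the config ends with a per-filter block.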