davzoku committed on
Commit dc7509a
1 Parent(s): 14664f3

Update README.md

Files changed (1)
  1. README.md +36 -4
README.md CHANGED
@@ -1,17 +1,48 @@
  ---
  base_model:
  - davzoku/cria-llama2-7b-v1.3
  library_name: transformers
  tags:
  - mergekit
  - merge

  ---
- # Untitled Model (1)

- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

- ## Merge Details
  ### Merge Method

  This model was merged using the passthrough merge method.
@@ -23,7 +54,8 @@ The following models were included in the merge:

  ### Configuration

- The following YAML configuration was used to produce this model:

  ```yaml
  # https://huggingface.co/Undi95/Mistral-11B-v0.1
 
  ---
+ inference: false
+ language: en
+ license: llama2
+ model_type: llama
+ datasets:
+ - mlabonne/CodeLlama-2-20k
+ pipeline_tag: text-generation
  base_model:
  - davzoku/cria-llama2-7b-v1.3
  library_name: transformers
  tags:
  - mergekit
  - merge
+ - llama-2

  ---
+ # FrankenCRIA v1.3-m.1

+
+ ## What is FrankenCRIA?
+
+ > krē-ə plural crias. : a baby llama, alpaca, vicuña, or guanaco.
+
+ <p align="center">
+ <img src="https://raw.githubusercontent.com/davzoku/cria/main/assets/" width="300" height="300" alt="FrankenCRIA Logo"> <br>
+ <i>This is a frankenmerge of [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3) created using [mergekit](https://github.com/cg123/mergekit).</i>
+ </p>
+
+ This configuration is the same as [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1) and [mlabonne/FrankenBeagle14-11B](https://huggingface.co/mlabonne/FrankenBeagle14-11B), and is similar to the depth up-scaling (DUS) technique used in [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0).
+
+ Please be aware that this model is highly experimental, and no further training has been conducted following the merge.
+ Therefore, the model performance may not meet expectations, as described in the [SOLAR paper](https://arxiv.org/abs/2312.15166).
+
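For context, the sketch below illustrates what a mergekit passthrough configuration in this depth up-scaling style typically looks like. The model name is taken from this card, but the layer ranges are illustrative assumptions rather than the exact values used; the configuration actually applied to this model is the YAML shown in the Configuration section further down.

```yaml
# Illustrative sketch only; see the Configuration section of this README
# for the configuration that was actually used to produce this model.
slices:
  # Two overlapping slices of the same 32-layer base model are stacked,
  # producing a deeper (and therefore larger) model without retraining.
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [0, 24]   # assumed range, for illustration
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [8, 32]   # assumed range, for illustration
merge_method: passthrough      # layers are copied through, not averaged
dtype: float16
```

Because passthrough merging only re-stacks existing layers rather than blending weights, the merged model has more parameters than the 7B base, and, as noted above, no further training is applied after the merge.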
+ ## 📦 FrankenCRIA Model Release
+
+ FrankenCRIA v1.3 comes in the following variants:
+
+ - [davzoku/frankencria-llama2-11b-v1.3-m.1](https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.1): 11B FrankenMerge inspired by [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1)
+ - [davzoku/frankencria-llama2-12.5b-v1.3-m.2](https://huggingface.co/davzoku/frankencria-llama2-12.5b-v1.3-m.2): 12.5B interleaving FrankenMerge inspired by [vilm/vinallama-12.5b-chat-DUS](https://huggingface.co/vilm/vinallama-12.5b-chat-DUS)
+
+ ## 🧩 Merge Details
  ### Merge Method

  This model was merged using the passthrough merge method.
@@ -23,7 +54,8 @@ The following models were included in the merge:

  ### Configuration

+ The following YAML configuration was used to produce this model.
+

  ```yaml
  # https://huggingface.co/Undi95/Mistral-11B-v0.1