Update README.md
README.md
CHANGED
@@ -8,7 +8,25 @@ tags:
 
 # Chimera-Apex-7B
 
-Chimera-Apex-7B is
+Chimera-Apex-7B is an experimental large language model (LLM) created by merging several high-performance models with the goal of achieving exceptional capabilities.
+
+### Tasks:
+
+Due to the inclusion of various models, Chimera-Apex-7B is intended to be a general-purpose model capable of handling a wide range of tasks, including:
+
+- Conversation
+- Question Answering
+- Code Generation
+- (Possibly) NSFW content generation
+
+
+### Limitations:
+
+- As an experimental model, Chimera-Apex-7B may not always produce accurate or reliable outputs.
+- The merged models might introduce biases present in their training data.
+- Keep these limitations in mind when interpreting its outputs.
+
+
 
 ## 🧩 Configuration
 
@@ -21,4 +39,6 @@ merge_method: model_stock
 base_model: cognitivecomputations/dolphin-2.0-mistral-7b
 dtype: bfloat16
 
 ```
+
+Chimera-Apex-7B is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):
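
For context, the hunk above shows only the tail of the merge configuration. Below is a minimal sketch of what a complete mergekit `model_stock` config of this shape typically looks like, assuming the standard mergekit config schema; the `models:` entries are hypothetical placeholders, since the actual model list is not visible in this hunk.

```yaml
# Hypothetical sketch of a model_stock merge config.
# Only merge_method, base_model, and dtype are confirmed by the diff above;
# the models listed here are placeholders, not the actual source models.
models:
  - model: example-org/finetune-a-7b   # placeholder
  - model: example-org/finetune-b-7b   # placeholder
merge_method: model_stock
base_model: cognitivecomputations/dolphin-2.0-mistral-7b
dtype: bfloat16
```

A config like this is normally run with mergekit's CLI, e.g. `mergekit-yaml config.yml ./merged-model`, which writes the merged weights to the output directory.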