- mlabonne/NeuralOmniBeagle-7B
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/zDCZ6uIu68k1JeCOa9bHl.jpeg)

# Monarch-7B

**Update 13/02/24: Monarch-7B is the best-performing model on the YALL leaderboard.**

Monarch-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/OmniTruthyBeagle-7B-v0](https://huggingface.co/mlabonne/OmniTruthyBeagle-7B-v0)
* [mlabonne/NeuBeagle-7B](https://huggingface.co/mlabonne/NeuBeagle-7B)
* [mlabonne/NeuralOmniBeagle-7B](https://huggingface.co/mlabonne/NeuralOmniBeagle-7B)

## 🏆 Evaluation

The evaluation was performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval) on the Nous suite. See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [**Monarch-7B**](https://huggingface.co/mlabonne/Monarch-7B) [📄](https://gist.github.com/mlabonne/0b8d057c5ece41e0290580a108c7a093) | **62.68** | **45.48** | **77.07** | **78.04** | **50.14** |
| [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/88b21dd9698ffed75d6163ebdc2f6cc8) | 52.42 | 42.75 | 72.99 | 52.99 | 40.94 |
| [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B) [📄](https://gist.github.com/mlabonne/14687f1eb3425b166db511f31f8e66f6) | 53.51 | 43.67 | 73.24 | 55.37 | 41.76 |
| [mlabonne/NeuralBeagle14-7B](https://huggingface.co/mlabonne/NeuralBeagle14-7B) [📄](https://gist.github.com/mlabonne/ad0c665bbe581c8420136c3b52b3c15c) | 60.25 | 46.06 | 76.77 | 70.32 | 47.86 |
| [eren23/dpo-binarized-NeuralTrix-7B](https://huggingface.co/eren23/dpo-binarized-NeuralTrix-7B) [📄](https://gist.github.com/CultriX-Github/dbdde67ead233df0c7c56f1b091f728c) | 62.5 | 44.57 | 76.34 | 79.81 | 49.27 |
| [CultriX/NeuralTrix-7B-dpo](https://huggingface.co/CultriX/NeuralTrix-7B-dpo) [📄](https://gist.github.com/CultriX-Github/df0502599867d4043b45d9dafb5976e8) | 62.5 | 44.61 | 76.33 | 79.8 | 49.24 |
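To try the merged model, one option is the 🤗 Transformers `pipeline` API. The sketch below is an assumption, not from this card: only the repo id `mlabonne/Monarch-7B` is the card's own; the function names and the naive prompt format in `build_prompt` are hypothetical (in practice, prefer `tokenizer.apply_chat_template` to use the model's real chat template), and `device_map="auto"` additionally assumes `accelerate` is installed.

```python
def build_prompt(messages):
    """Naively flatten chat messages into one prompt string.

    Hypothetical format for illustration only; the model's own chat
    template (tokenizer.apply_chat_template) should be preferred.
    """
    return "\n".join(f"{m['role']}: {m['content']}" for m in messages) + "\nassistant:"


def generate(messages, model_id="mlabonne/Monarch-7B", max_new_tokens=128):
    """Run the merged model via the transformers text-generation pipeline.

    Note: this downloads the full model weights (~14 GB for a 7B model).
    """
    from transformers import pipeline  # imported lazily: heavy dependency

    pipe = pipeline("text-generation", model=model_id, device_map="auto")
    out = pipe(build_prompt(messages), max_new_tokens=max_new_tokens,
               do_sample=True, temperature=0.7)
    return out[0]["generated_text"]
```

Usage would look like `generate([{"role": "user", "content": "What is a model merge?"}])`.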

## 🧩 Configuration

```yaml