Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)

fairseq-dense-2.7B - bnb 8bits
- Model creator: https://huggingface.co/KoboldAI/
- Original model: https://huggingface.co/KoboldAI/fairseq-dense-2.7B/
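
The sketch below shows one way to load this model in 8-bit with `transformers` and `bitsandbytes`. It is a minimal example, not the uploader's exact procedure, and it makes two assumptions: the repo id shown is the original KoboldAI model (the repo id of this quantized upload is not stated in the card), and `transformers`, `accelerate`, and `bitsandbytes` are installed.

```python
# Minimal sketch: load fairseq-dense-2.7B with bitsandbytes 8-bit quantization.
# Assumes transformers, accelerate, and bitsandbytes are installed; the repo id
# below is the original model, not necessarily this quantized upload.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "KoboldAI/fairseq-dense-2.7B"  # original model repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # bnb 8-bit weights
    device_map="auto",
)

prompt = "The fairseq dense 2.7B model is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```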

Original model description:
---
language: en
---
This is a Hugging Face transformers-compatible conversion of the original dense 2.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-2.7B)

| Metric              | Value |
|---------------------|-------|
| Avg.                | 33.67 |
| ARC (25-shot)       | 33.79 |
| HellaSwag (10-shot) | 65.74 |
| MMLU (5-shot)       | 26.44 |
| TruthfulQA (0-shot) | 34.57 |
| Winogrande (5-shot) | 63.93 |
| GSM8K (5-shot)      | 0.0   |
| DROP (3-shot)       | 11.24 |