---
language: en
---
This is a Hugging Face transformers-compatible conversion of the original dense 6.7B-parameter model from the paper "[Efficient Large Scale Language Modeling with Mixtures of Experts](https://arxiv.org/abs/2112.10684)" by Artetxe et al. Please refer to the original model card, which can be found at https://github.com/facebookresearch/fairseq/blob/main/examples/moe_lm/model_card.md.
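
A minimal usage sketch, assuming the repository id `KoboldAI/fairseq-dense-6.7B` (as referenced in the leaderboard link below) and that the conversion loads through the standard `transformers` auto classes:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumed repository id; adjust if the model is hosted under a different name.
model_id = "KoboldAI/fairseq-dense-6.7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from a prompt.
prompt = "Efficient large-scale language modeling"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```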

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_KoboldAI__fairseq-dense-6.7B)

| Metric              | Value |
|---------------------|-------|
| Avg.                | 36.09 |
| ARC (25-shot)       | 39.42 |
| HellaSwag (10-shot) | 71.26 |
| MMLU (5-shot)       | 26.91 |
| TruthfulQA (0-shot) | 32.73 |
| Winogrande (5-shot) | 65.27 |
| GSM8K (5-shot)      | 0.00  |
| DROP (3-shot)       | 17.05 |