---
license: mit
model-index:
- name: openchat-mistral-7b-reproduce
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 26.62
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=clowman/openchat-mistral-7b-reproduce
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 27.96
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=clowman/openchat-mistral-7b-reproduce
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 24.17
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=clowman/openchat-mistral-7b-reproduce
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 48.36
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=clowman/openchat-mistral-7b-reproduce
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 50.67
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=clowman/openchat-mistral-7b-reproduce
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 0.0
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=clowman/openchat-mistral-7b-reproduce
      name: Open LLM Leaderboard
---
This model is a reproduction of https://github.com/imoneoi/openchat.
Training command:
```bash
# Fine-tune the EOT-token-patched Mistral 7B base on 8 GPUs with DeepSpeed (bf16, ZeRO stage 2).
deepspeed --num_gpus=8 --module ochat.training_deepspeed.train \
    --model_path imone/Mistral_7B_with_EOT_token \
    --data_prefix ./data/ \
    --save_path ./checkpoints/mistral-7b/ \
    --batch_max_len 77824 \
    --epochs 10 \
    --save_every 1 \
    --deepspeed \
    --deepspeed_config deepspeed_config.json
```
`deepspeed_config.json`:
```json
{
  "bf16": {
    "enabled": true
  },
  "zero_optimization": {
    "stage": 2
  },
  "gradient_clipping": 1.0,
  "gradient_accumulation_steps": 1,
  "train_micro_batch_size_per_gpu": 1,
  "steps_per_print": 100,
  "wall_clock_breakdown": false
}
```
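For a rough sense of the throughput these settings imply (no gradient accumulation, one packed batch per GPU per step), and assuming `batch_max_len` is a per-GPU token budget as in the upstream openchat trainer:
```bash
# Back-of-the-envelope only; assumes batch_max_len (77824) is a per-GPU limit.
echo $((77824 * 8))   # 622592 tokens per optimizer step across 8 GPUs
```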
The training data is https://huggingface.co/datasets/openchat/openchat_sharegpt4_dataset.
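As a hedged convenience (not part of the original recipe), the raw dataset can be fetched with the Hugging Face CLI; the openchat repo's own preprocessing step still has to be run on the downloaded files before they match the `--data_prefix` layout used above. The `./data/raw` target directory is illustrative:
```bash
# Illustrative download only (requires `pip install -U "huggingface_hub[cli]"`);
# the openchat preprocessing step must still be applied before training.
huggingface-cli download openchat/openchat_sharegpt4_dataset \
    --repo-type dataset \
    --local-dir ./data/raw
```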
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_clowman__openchat-mistral-7b-reproduce).
| Metric |Value|
|---------------------------------|----:|
|Avg. |29.63|
|AI2 Reasoning Challenge (25-Shot)|26.62|
|HellaSwag (10-Shot) |27.96|
|MMLU (5-Shot) |24.17|
|TruthfulQA (0-shot) |48.36|
|Winogrande (5-shot) |50.67|
|GSM8k (5-shot) | 0.00|
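These numbers come from the leaderboard's own harness run. As a rough local check, a single benchmark can be approximated with EleutherAI's lm-evaluation-harness (`pip install lm-eval`); the leaderboard's harness version and prompt settings may differ from a local install, so expect small deviations:
```bash
# Hedged sketch: approximate the ARC-Challenge (25-shot) score locally.
# The leaderboard reports acc_norm, which the harness prints alongside acc.
lm_eval --model hf \
    --model_args pretrained=clowman/openchat-mistral-7b-reproduce,dtype=bfloat16 \
    --tasks arc_challenge \
    --num_fewshot 25 \
    --batch_size auto
```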