The "microsoft/Orca-2-13b" model fully fine-tuned on HuggingFaceH4/no_robots, totally-not-an-llm/EverythingLM-data-V3, LDJnr/Capybara, LDJnr/Pure-Dove, LDJnr/LessWrong-Amplify-Instruct, LDJnr/Verified-Camel, mlabonne/guanaco-llama2-1k, and OpenAssistant/oasst_top1_2023-08-25. This model achieved a test loss of 0.39 on LDJnr/Verified-Camel.
Make sure to comply with the Microsoft Research License; please read it before using this model.
This model was trained using the ChatML prompt template.
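The card does not include a usage snippet, so here is a minimal loading-and-generation sketch assuming the standard `transformers` API. The prompt markers follow the generic ChatML convention (each turn wrapped in `<|im_start|>` and `<|im_end|>`); the example prompt and generation settings are illustrative assumptions, not taken from the original card.

```python
# Minimal sketch: load the model and prompt it with the ChatML template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Locutusque/Orca-2-13b-SFT-v6"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# ChatML wraps each conversation turn in <|im_start|> ... <|im_end|> markers.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain the Pythagorean theorem in one sentence.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)  # settings are illustrative

# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```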
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Benchmark | Metric | Split | Value |
|---|---|---|---|
| Avg. | | | 56.15 |
| AI2 Reasoning Challenge (25-shot) | normalized accuracy | test | 60.41 |
| HellaSwag (10-shot) | normalized accuracy | validation | 80.46 |
| MMLU (5-shot) | accuracy | test | 59.51 |
| TruthfulQA (0-shot) | mc2 | validation | 54.01 |
| Winogrande (5-shot) | accuracy | validation | 77.43 |
| GSM8K (5-shot) | accuracy | test | 5.08 |