|
--- |
|
tags:
- yi
- moe
license: apache-2.0
|
--- |
|
|
|
This is a DPO fine-tuned version of the MoE model [TomGrc/FusionNet_34Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE_v0.1).
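A minimal inference sketch with 🤗 Transformers is shown below. The repo id is inferred from the leaderboard details link further down; the prompt and generation settings are illustrative, not a recommended configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# A 34Bx2 MoE needs substantial GPU memory; device_map="auto" shards the
# fp16 weights across whatever accelerators are available.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```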
|
|
|
|
|
> **DPO Trainer**
>
> TRL supports the DPO Trainer for training language models from preference data, as described in the paper *Direct Preference Optimization: Your Language Model is Secretly a Reward Model* by Rafailov et al., 2023.
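For reference, a DPO run with TRL typically looks like the sketch below. This is an illustration of the technique, not the actual training script used for this model: the preference data file, output directory, and hyperparameters are placeholders, and the `DPOTrainer` keyword arguments vary across TRL versions.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "TomGrc/FusionNet_34Bx2_MoE_v0.1"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)      # policy being optimized
ref_model = AutoModelForCausalLM.from_pretrained(base_id)  # frozen reference copy

# DPO expects preference pairs with "prompt", "chosen", and "rejected" columns;
# "preference_pairs.json" is a hypothetical file, not this model's training data.
dataset = load_dataset("json", data_files="preference_pairs.json", split="train")

trainer = DPOTrainer(
    model,
    ref_model,
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=1,
        remove_unused_columns=False,  # DPOTrainer's collator needs the raw columns
    ),
    beta=0.1,                 # strength of the KL penalty toward the reference model
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```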
|
|
|
# Metrics

[4-bit quantized vs. fp16 comparison](https://huggingface.co/cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO/blob/main/4bit.vs.16.jpg)
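The comparison above involves a 4-bit quantized variant (a pre-quantized repo, cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO, hosts the image). One common way to load the fp16 checkpoint in 4-bit with bitsandbytes is sketched below; the NF4 settings are a typical recipe, not necessarily the configuration behind that plot.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16"

# NF4 with double quantization and fp16 compute is a common 4-bit recipe.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
```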
|
|
|
|
|
|
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard) |
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16) |
|
|
|
| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |77.91|
|AI2 Reasoning Challenge (25-Shot)|74.06|
|HellaSwag (10-Shot)              |86.74|
|MMLU (5-Shot)                    |76.65|
|TruthfulQA (0-shot)              |72.24|
|Winogrande (5-shot)              |83.35|
|GSM8k (5-shot)                   |74.45|
|
|
|
|