---
tags:
- yi
- moe
license: apache-2.0
---

This is a DPO fine-tuned MoE model based on [TomGrc/FusionNet_34Bx2_MoE_v0.1](https://huggingface.co/TomGrc/FusionNet_34Bx2_MoE_v0.1).
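A minimal usage sketch (the repo id below is inferred from the result links in this card, and the prompt and generation settings are illustrative only):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # the 34Bx2 MoE needs substantial GPU memory at 16-bit
    device_map="auto",
)

prompt = "What is a mixture-of-experts model?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```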


```
DPO Trainer
TRL supports the DPO Trainer for training language models from preference data, as described in the paper "Direct Preference Optimization: Your Language Model is Secretly a Reward Model" by Rafailov et al., 2023.
```
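For reference, a minimal DPO fine-tuning sketch with TRL's `DPOTrainer` is shown below. This is not the exact recipe used for this model: the dataset id is a placeholder, the hyperparameters are illustrative, and the `DPOTrainer` signature shown matches older TRL releases.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "TomGrc/FusionNet_34Bx2_MoE_v0.1"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Placeholder id: any preference dataset with "prompt", "chosen",
# and "rejected" columns works here.
train_dataset = load_dataset("your-org/your-preference-pairs", split="train")

training_args = TrainingArguments(
    output_dir="fusionnet-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model,
    ref_model=None,   # TRL builds a frozen reference copy when None
    args=training_args,
    beta=0.1,         # strength of the KL penalty toward the reference policy
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```

The `beta` term controls how far the fine-tuned policy may drift from the reference model: higher values keep it closer to the base model's distribution.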

# Metrics
4-bit quantized vs. 16-bit comparison: [4bit.vs.16.jpg](https://huggingface.co/cloudyu/4bit_quant_TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO/blob/main/4bit.vs.16.jpg)
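A rough sketch of loading the model in 4-bit with bitsandbytes (the exact quantization settings behind the comparison image are not stated in this card, so the values below are assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

repo_id = "cloudyu/TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16"  # assumed repo id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 is a common default; assumed here
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    quantization_config=quant_config,
    device_map="auto",
)
```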


# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__TomGrc_FusionNet_34Bx2_MoE_v0.1_DPO_f16)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |77.91|
|AI2 Reasoning Challenge (25-Shot)|74.06|
|HellaSwag (10-Shot)              |86.74|
|MMLU (5-Shot)                    |76.65|
|TruthfulQA (0-shot)              |72.24|
|Winogrande (5-shot)              |83.35|
|GSM8k (5-shot)                   |74.45|