
This is the merged model for the LoRA adapter https://huggingface.co/Yhyu13/phi-2-sft-dpo-gpt4_en-ep1-lora
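
If you want to re-create the merge yourself, here is a minimal sketch using peft's `merge_and_unload` (the fp16 dtype and local output path are my assumptions, not the exact commands used for this repo):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1"    # SFT base the adapter was trained on
lora_id = "Yhyu13/phi-2-sft-dpo-gpt4_en-ep1-lora"  # the DPO LoRA adapter

# phi-2 ships custom modeling code, hence trust_remote_code=True.
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, trust_remote_code=True
)
model = PeftModel.from_pretrained(base, lora_id)

# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("phi-2-sft-dpo-gpt4_en-ep1")
AutoTokenizer.from_pretrained(base_id, trust_remote_code=True).save_pretrained(
    "phi-2-sft-dpo-gpt4_en-ep1"
)
```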

This model is a DPO improvement over this base model https://huggingface.co/Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1, which achieves better-than-text-davinci-003 performance on AlpacaEval as judged by ChatGPT.
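
The DPO stage itself is not spelled out in this card, so as a rough illustration only: a preference-pair tune of the SFT base with trl's `DPOTrainer` might look like the sketch below. The dataset name is a placeholder, and the hyperparameters and trl 0.7-era argument names are assumptions, not the exact recipe used.

```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, trust_remote_code=True
)

# Placeholder: any preference dataset with "prompt", "chosen", "rejected" columns.
pairs = load_dataset("your-gpt4-preference-pairs", split="train")

trainer = DPOTrainer(
    model,
    ref_model=None,  # trl clones a frozen reference model when None is passed
    beta=0.1,        # strength of the KL penalty against the reference model
    args=TrainingArguments(
        output_dir="phi-2-dpo",
        num_train_epochs=1,            # "-ep1" in the model name suggests one epoch
        per_device_train_batch_size=2,
        learning_rate=5e-6,
        remove_unused_columns=False,   # keep the raw preference columns for the collator
    ),
    train_dataset=pairs,
    tokenizer=tokenizer,
)
trainer.train()
```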

AlpacaEval

Quoted from this discussion: https://huggingface.co/microsoft/phi-2/discussions/38

Since phi-2 requires remote code, which the HF Open LLM Leaderboard does not accept at the moment, I ran phi-2 and my DPO-tuned model against the AlpacaEval benchmark:

https://tatsu-lab.github.io/alpaca_eval/

Here are the results as evaluated by ChatGPT: https://github.com/tatsu-lab/alpaca_eval/pull/183

                            win_rate  standard_error  n_total  avg_length
gpt4                           73.79            1.54      805        1365
claude                         70.37            1.60      805        1082
chatgpt                        66.09            1.66      805         811
wizardlm-13b                   65.16            1.67      805         985
vicuna-13b                     64.10            1.69      805        1037
guanaco-65b                    62.36            1.71      805        1249
oasst-rlhf-llama-33b           62.05            1.71      805        1079
alpaca-farm-ppo-human          60.25            1.72      805         803
falcon-40b-instruct            56.52            1.74      805         662
phi-2-alpaca-gpt4-dpo (new)    55.60            1.75      804        4532
phi-2-alpaca-gpt4 (new)        54.23            1.75      804        1138
text_davinci_003               50.00            0.00      805         307
alpaca-7b                      45.22            1.74      805         396
phi-2 (new)                    43.79            1.74      805         924
text_davinci_001               28.07            1.56      805         296

phi-2-alpaca-gpt4-dpo is only slightly better than my previous SFT model phi-2-alpaca-gpt4 when evaluated by ChatGPT (55.60 vs. 54.23 win rate), but the DPO-tuned model produces significantly longer outputs (avg_length 4532 vs. 1138)!
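
To reproduce numbers like those in the table, you can generate completions for the AlpacaEval eval set and hand them to the official judge. Below is a minimal sketch assuming greedy decoding and raw instructions as prompts; the actual run may have used an Alpaca-style prompt template and different generation settings, so treat this as illustrative, not the exact procedure.

```python
import json

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yhyu13/phi-2-sft-dpo-gpt4_en-ep1"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True, device_map="auto"
)

# The AlpacaEval instruction set (~805 prompts).
eval_set = load_dataset("tatsu-lab/alpaca_eval", "alpaca_eval")["eval"]

outputs = []
for example in eval_set:
    inputs = tokenizer(example["instruction"], return_tensors="pt").to(model.device)
    generated = model.generate(**inputs, max_new_tokens=512, do_sample=False)
    # Strip the prompt tokens, keep only the completion.
    completion = tokenizer.decode(
        generated[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    outputs.append({
        "instruction": example["instruction"],
        "output": completion,
        "generator": "phi-2-alpaca-gpt4-dpo",
    })

with open("model_outputs.json", "w") as f:
    json.dump(outputs, f, indent=2)

# Then judge the outputs with the official tool, e.g. (see the alpaca_eval repo
# for the authoritative CLI and annotator configs):
#   alpaca_eval --model_outputs model_outputs.json --annotators_config chatgpt
```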
