Evaluation Result

#15 · opened by tanliboy

I evaluated the model, and while the inference speed is impressive, I was unable to reproduce the results reported in the paper. Below are the numbers I got:

| Groups            | Version | Filter | n-shot | Metric | Value  | Stderr   |
|-------------------|---------|--------|--------|--------|--------|----------|
| mmlu              | 1       | none   |        | acc    | 0.5486 | ± 0.0040 |
| - humanities      | 1       | none   |        | acc    | 0.4997 | ± 0.0069 |
| - other           | 1       | none   |        | acc    | 0.6308 | ± 0.0083 |
| - social sciences | 1       | none   |        | acc    | 0.6406 | ± 0.0085 |
| - stem            | 1       | none   |        | acc    | 0.4510 | ± 0.0086 |

| Tasks     | Version | Filter | n-shot | Metric   | Value  | Stderr   |
|-----------|---------|--------|--------|----------|--------|----------|
| hellaswag | 1       | none   | 0      | acc      | 0.6188 | ± 0.0048 |
|           |         | none   | 0      | acc_norm | 0.8026 | ± 0.0040 |
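For anyone who wants to compare settings: the table layout above looks like output from EleutherAI's lm-evaluation-harness, so a run along the following lines should produce comparable numbers. This is only a sketch under that assumption; the model ID is a placeholder, not the actual checkpoint, and the few-shot setting for mmlu is not visible in the table.

```python
# Hypothetical reproduction sketch, assuming the results above came from
# EleutherAI's lm-evaluation-harness (inferred from the table layout).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                # Hugging Face transformers backend
    model_args="pretrained=<org>/<model-id>",  # placeholder; substitute the real checkpoint
    tasks=["mmlu", "hellaswag"],
    num_fewshot=0,        # matches the hellaswag rows; the mmlu setting is unclear above
    batch_size="auto",
)

# Print per-task metrics, mirroring the Value column in the tables above.
for task, metrics in results["results"].items():
    print(task, metrics.get("acc,none"), metrics.get("acc_norm,none"))
```

Differences in few-shot count, prompt format, or harness version could easily account for a gap of a few points, which is why pinning these settings down matters before comparing against the paper.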

The discrepancy might be due to differences in evaluation settings. Overall, though, the model's performance seems dated compared to the latest models. Are there any plans to release an updated version based on the Griffin architecture?
