Added evaluation results

README.md
## Evaluation
Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from [SpanishBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/spanish_bench), [CatalanBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/catalan_bench), [BasqueBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/basque_bench) and [GalicianBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/galician_bench). We also use English tasks already available in the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. The tables below report results on a selection of evaluation datasets that represent the model's performance across a variety of tasks within these benchmarks.
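As a rough illustration, the sketch below shows how a subset of these tasks could be run with the Harness's Python API. The model identifier is a placeholder and exact argument names may vary slightly across Harness versions.

```python
# Minimal sketch (assumes a recent lm-evaluation-harness release); the model
# identifier is a placeholder and the task list is a small subset of those
# reported in the tables below.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face backend
    model_args="pretrained=<model-id-or-path>",      # placeholder model identifier
    tasks=["xstorycloze_es", "xnli_es", "paws_es"],  # Spanish tasks from the tables
    num_fewshot=5,                                   # all results below are 5-shot
)

# Per-task metrics (e.g., acc, bleu) are returned under the "results" key.
print(results["results"])
```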
We only use tasks that are human generated, human translated, or produced with a strong human-in-the-loop (i.e., machine translation followed by professional revision, or machine generation followed by human revision and annotation). This is why the number of tasks reported varies across languages. As more tasks that fulfill these requirements are published, we will update the results presented here. We also intend to expand the evaluation to other languages, as long as the datasets meet our quality standards.

While implementing the evaluation we observed a series of issues worth considering when replicating and interpreting the results. These include variances of ≈1.5% in performance on some tasks depending on the version of the `transformers` library used, and on whether tensor parallelism is used when loading the model. When implementing existing tasks, we carry out a comprehensive quality review of the dataset, of the Harness task itself, and of the input models actually see during evaluation. Our implementation (see the links above) addresses several existing problems, such as errors in datasets and prompts and missing pre-processing. As a result, figures will differ when using other Harness implementations, and may vary slightly depending on the replication setup.
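As a hedged sketch of a replication-friendly setup, the snippet below records the `transformers` version and makes the parallelism choice explicit; `parallelize` is the Hugging Face backend option for sharding a model across GPUs, and option names may differ across Harness releases.

```python
# Replication-friendly sketch (assumptions: recent lm-evaluation-harness and
# transformers releases; the model identifier is a placeholder).
import transformers
import lm_eval

# Record the exact transformers version: we observed ~1.5% metric variance
# across releases on some tasks.
print("transformers version:", transformers.__version__)

# Make the parallelism choice explicit, since loading the model with or
# without tensor parallelism also shifted some results slightly.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=<model-id-or-path>,parallelize=False",  # placeholder id
    tasks=["xstorycloze_es"],
    num_fewshot=5,
)
print(results["results"])
```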
It should be noted that these results are subject to all the drawbacks of every current gold-standard evaluation, and that the figures do not fully represent the model's capabilities and potential. We thus advise caution when reading and interpreting the results.

A full list of results compared to other baselines, a discussion of the model's performance across tasks and its implications, and details on the problems found in task implementations and how they were addressed will soon be available in the technical report.

All results reported below were obtained in a 5-shot setting.
#### Spanish
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td>Commonsense Reasoning</td>
<td>xstorycloze_es</td>
<td>acc</td>
<td>64.92</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_es</td>
<td>acc</td>
<td>54.93</td>
</tr>
<tr>
<td>xnli_es</td>
<td>acc</td>
<td>44.98</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws_es</td>
<td>acc</td>
<td>52.05</td>
</tr>
<tr>
<td>QA</td>
<td>xquad_es</td>
<td>acc</td>
<td>54.32</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_es</td>
<td>bleu</td>
<td>11.46</td>
</tr>
</tbody>
</table>

#### Catalan
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa_ca</td>
<td>acc</td>
<td>68.80</td>
</tr>
<tr>
<td>xstorycloze_ca</td>
<td>acc</td>
<td>65.72</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_ca</td>
<td>acc</td>
<td>56.34</td>
</tr>
<tr>
<td>xnli_ca</td>
<td>acc</td>
<td>48.07</td>
</tr>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafraseja</td>
<td>acc</td>
<td>58.55</td>
</tr>
<tr>
<td>paws_ca</td>
<td>acc</td>
<td>55.15</td>
</tr>
<tr>
<td rowspan="5">QA</td>
<td>arc_ca_easy</td>
<td>acc</td>
<td>54.76</td>
</tr>
<tr>
<td>arc_ca_challenge</td>
<td>acc</td>
<td>30.55</td>
</tr>
<tr>
<td>openbookqa_ca</td>
<td>acc</td>
<td>27.40</td>
</tr>
<tr>
<td>piqa_ca</td>
<td>acc</td>
<td>62.89</td>
</tr>
<tr>
<td>siqa_ca</td>
<td>acc</td>
<td>41.91</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_ca</td>
<td>bleu</td>
<td>14.70</td>
</tr>
</tbody></table>

#### Basque
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>xcopa_eu</td>
<td>acc</td>
<td>55.60</td>
</tr>
<tr>
<td>xstorycloze_eu</td>
<td>acc</td>
<td>57.64</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli_eu</td>
<td>acc</td>
<td>56.34</td>
</tr>
<tr>
<td>xnli_eu</td>
<td>acc</td>
<td>39.78</td>
</tr>
<tr>
<td rowspan="3">QA</td>
<td>eus_exams</td>
<td>acc</td>
<td>23.72</td>
</tr>
<tr>
<td>eus_proficiency</td>
<td>acc</td>
<td>23.37</td>
</tr>
<tr>
<td>eus_trivia</td>
<td>acc</td>
<td>27.58</td>
</tr>
<tr>
<td>Reading Comprehension</td>
<td>eus_reading</td>
<td>acc</td>
<td>27.84</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_eu</td>
<td>bleu</td>
<td>3.58</td>
</tr>
</tbody></table>

#### Galician
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Paraphrasing</td>
<td>parafrases_gl</td>
<td>acc</td>
<td>54.08</td>
</tr>
<tr>
<td>paws_gl</td>
<td>acc</td>
<td>53.30</td>
</tr>
<tr>
<td>QA</td>
<td>openbookqa_gl</td>
<td>acc</td>
<td>30.80</td>
</tr>
<tr>
<td>Translation</td>
<td>flores_gl</td>
<td>bleu</td>
<td>12.86</td>
</tr>
</tbody>
</table>

#### English
<table><thead>
<tr>
<th>Category</th>
<th>Task</th>
<th>Metric</th>
<th>Result</th>
</tr></thead>
<tbody>
<tr>
<td rowspan="2">Commonsense Reasoning</td>
<td>copa</td>
<td>acc</td>
<td>83.00</td>
</tr>
<tr>
<td>xstorycloze_en</td>
<td>acc</td>
<td>73.06</td>
</tr>
<tr>
<td rowspan="2">NLI</td>
<td>wnli</td>
<td>acc</td>
<td>56.34</td>
</tr>
<tr>
<td>xnli_en</td>
<td>acc</td>
<td>47.35</td>
</tr>
<tr>
<td>Paraphrasing</td>
<td>paws *</td>
<td>acc</td>
<td>55.95</td>
</tr>
<tr>
<td rowspan="6">QA</td>
<td>arc_easy</td>
<td>acc</td>
<td>74.07</td>
</tr>
<tr>
<td>arc_challenge</td>
<td>acc</td>
<td>37.63</td>
</tr>
<tr>
<td>openbookqa</td>
<td>acc</td>
<td>28.00</td>
</tr>
<tr>
<td>piqa</td>
<td>acc</td>
<td>74.86</td>
</tr>
<tr>
<td>social_iqa</td>
<td>acc</td>
<td>46.62</td>
</tr>
<tr>
<td>squad_en **</td>
<td>acc</td>
<td>44.38</td>
</tr>
</tbody></table>

\* The current LM Evaluation Harness implementation lacks correct pre-processing for this task; these results were obtained with adequate pre-processing.

\*\* This task is not yet available in the official Harness; we hope to add it soon.

---