sethuiyer committed
Commit 22a3f05
1 Parent(s): ed3f069

Create EVAL.md

Files changed (1):
  1. EVAL.md +82 -0
## Nous Benchmark

| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|---------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[Nandine-7b](https://huggingface.co/sethuiyer/Nandine-7b)| 43.54| 76.41| 61.73| 45.27| 56.74|

### AGIEval
| Task |Version| Metric |Value| |Stderr|
|------------------------------|------:|--------|----:|---|-----:|
|agieval_aqua_rat | 0|acc |23.62|± | 2.67|
| | |acc_norm|22.05|± | 2.61|
|agieval_logiqa_en | 0|acc |37.94|± | 1.90|
| | |acc_norm|38.71|± | 1.91|
|agieval_lsat_ar | 0|acc |26.09|± | 2.90|
| | |acc_norm|22.61|± | 2.76|
|agieval_lsat_lr | 0|acc |47.45|± | 2.21|
| | |acc_norm|50.00|± | 2.22|
|agieval_lsat_rc | 0|acc |60.97|± | 2.98|
| | |acc_norm|59.85|± | 2.99|
|agieval_sat_en | 0|acc |77.18|± | 2.93|
| | |acc_norm|77.67|± | 2.91|
|agieval_sat_en_without_passage| 0|acc |45.63|± | 3.48|
| | |acc_norm|45.15|± | 3.48|
|agieval_sat_math | 0|acc |35.91|± | 3.24|
| | |acc_norm|32.27|± | 3.16|

Average: 43.54%

### GPT4All
| Task |Version| Metric |Value| |Stderr|
|-------------|------:|--------|----:|---|-----:|
|arc_challenge| 0|acc |63.74|± | 1.40|
| | |acc_norm|63.99|± | 1.40|
|arc_easy | 0|acc |85.94|± | 0.71|
| | |acc_norm|83.50|± | 0.76|
|boolq | 1|acc |87.80|± | 0.57|
|hellaswag | 0|acc |67.50|± | 0.47|
| | |acc_norm|85.31|± | 0.35|
|openbookqa | 0|acc |38.20|± | 2.18|
| | |acc_norm|49.40|± | 2.24|
|piqa | 0|acc |82.97|± | 0.88|
| | |acc_norm|84.33|± | 0.85|
|winogrande | 0|acc |80.51|± | 1.11|

Average: 76.41%

### TruthfulQA
| Task |Version|Metric|Value| |Stderr|
|-------------|------:|------|----:|---|-----:|
|truthfulqa_mc| 1|mc1 |45.78|± | 1.74|
| | |mc2 |61.73|± | 1.54|

Average: 61.73%

### Bigbench
| Task |Version| Metric |Value| |Stderr|
|------------------------------------------------|------:|---------------------|----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|57.89|± | 3.59|
|bigbench_date_understanding | 0|multiple_choice_grade|65.58|± | 2.48|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|38.76|± | 3.04|
|bigbench_geometric_shapes | 0|multiple_choice_grade|20.06|± | 2.12|
| | |exact_str_match | 5.85|± | 1.24|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|30.20|± | 2.06|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|20.71|± | 1.53|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|52.67|± | 2.89|
|bigbench_movie_recommendation | 0|multiple_choice_grade|43.60|± | 2.22|
|bigbench_navigate | 0|multiple_choice_grade|50.50|± | 1.58|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|73.15|± | 0.99|
|bigbench_ruin_names | 0|multiple_choice_grade|46.65|± | 2.36|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|25.25|± | 1.38|
|bigbench_snarks | 0|multiple_choice_grade|75.14|± | 3.22|
|bigbench_sports_understanding | 0|multiple_choice_grade|73.12|± | 1.41|
|bigbench_temporal_sequences | 0|multiple_choice_grade|47.20|± | 1.58|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|23.04|± | 1.19|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|18.69|± | 0.93|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|52.67|± | 2.89|

Average: 45.27%

Average score: 56.74%
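The overall score is the unweighted mean of the four category averages from the sections above; a minimal sketch to verify the arithmetic (values copied from the tables):

```python
# Unweighted mean of the four Nous benchmark category averages,
# with values copied from the section tables above.
scores = {
    "AGIEval": 43.54,
    "GPT4All": 76.41,
    "TruthfulQA": 61.73,
    "Bigbench": 45.27,
}
average = sum(scores.values()) / len(scores)
print(f"Average score: {average:.2f}%")
```

The exact mean is 56.7375, which rounds to the 56.74% reported above.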

Elapsed time: 01:47:54