macadeliccc committed on
Commit
ef51c98
1 Parent(s): 9d96129

Update README.md

Files changed (1): README.md (+81 −1)
README.md CHANGED
@@ -55,5 +55,85 @@ SOLAR-10.7B-Capy-v1.0 is also on the way. There could be more depending on perfo
 
 ## Evaluations
 
-TODO
+| Model                                                                         |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
+|-------------------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
+|[CapyLake-7B-v2-laser](https://huggingface.co/macadeliccc/CapyLake-7B-v2-laser)| 44.34| 77.77| 68.47| 47.92| 59.62|
+
+### AGIEval
+| Task                          |Version| Metric |Value| |Stderr|
+|------------------------------|------:|--------|----:|---|-----:|
+|agieval_aqua_rat              |      0|acc     |28.35|± |  2.83|
+|                              |       |acc_norm|25.98|± |  2.76|
+|agieval_logiqa_en             |      0|acc     |38.86|± |  1.91|
+|                              |       |acc_norm|39.02|± |  1.91|
+|agieval_lsat_ar               |      0|acc     |25.22|± |  2.87|
+|                              |       |acc_norm|24.35|± |  2.84|
+|agieval_lsat_lr               |      0|acc     |50.39|± |  2.22|
+|                              |       |acc_norm|51.57|± |  2.22|
+|agieval_lsat_rc               |      0|acc     |65.06|± |  2.91|
+|                              |       |acc_norm|63.94|± |  2.93|
+|agieval_sat_en                |      0|acc     |78.64|± |  2.86|
+|                              |       |acc_norm|78.64|± |  2.86|
+|agieval_sat_en_without_passage|      0|acc     |40.78|± |  3.43|
+|                              |       |acc_norm|40.78|± |  3.43|
+|agieval_sat_math              |      0|acc     |33.64|± |  3.19|
+|                              |       |acc_norm|30.45|± |  3.11|
+
+Average: 44.34%
+
+### GPT4All
+| Task        |Version| Metric |Value| |Stderr|
+|-------------|------:|--------|----:|---|-----:|
+|arc_challenge|      0|acc     |66.89|± |  1.38|
+|             |       |acc_norm|67.49|± |  1.37|
+|arc_easy     |      0|acc     |86.70|± |  0.70|
+|             |       |acc_norm|81.90|± |  0.79|
+|boolq        |      1|acc     |88.10|± |  0.57|
+|hellaswag    |      0|acc     |71.45|± |  0.45|
+|             |       |acc_norm|87.78|± |  0.33|
+|openbookqa   |      0|acc     |39.80|± |  2.19|
+|             |       |acc_norm|49.80|± |  2.24|
+|piqa         |      0|acc     |82.86|± |  0.88|
+|             |       |acc_norm|84.87|± |  0.84|
+|winogrande   |      0|acc     |84.45|± |  1.02|
+
+Average: 77.77%
+
+### TruthfulQA
+| Task        |Version|Metric|Value| |Stderr|
+|-------------|------:|------|----:|---|-----:|
+|truthfulqa_mc|      1|mc1   |53.98|± |  1.74|
+|             |       |mc2   |68.47|± |  1.53|
+
+Average: 68.47%
+
+### Bigbench
+
+| Task                                            |Version|        Metric       |Value| |Stderr|
+|------------------------------------------------|------:|---------------------|----:|---|-----:|
+|bigbench_causal_judgement                        |      0|multiple_choice_grade|59.47|± |  3.57|
+|bigbench_date_understanding                      |      0|multiple_choice_grade|64.50|± |  2.49|
+|bigbench_disambiguation_qa                       |      0|multiple_choice_grade|44.96|± |  3.10|
+|bigbench_geometric_shapes                        |      0|multiple_choice_grade|22.84|± |  2.22|
+|                                                 |       |exact_str_match      | 2.79|± |  0.87|
+|bigbench_logical_deduction_five_objects          |      0|multiple_choice_grade|30.80|± |  2.07|
+|bigbench_logical_deduction_seven_objects         |      0|multiple_choice_grade|21.57|± |  1.56|
+|bigbench_logical_deduction_three_objects         |      0|multiple_choice_grade|56.67|± |  2.87|
+|bigbench_movie_recommendation                    |      0|multiple_choice_grade|51.60|± |  2.24|
+|bigbench_navigate                                |      0|multiple_choice_grade|51.00|± |  1.58|
+|bigbench_reasoning_about_colored_objects         |      0|multiple_choice_grade|70.35|± |  1.02|
+|bigbench_ruin_names                              |      0|multiple_choice_grade|51.79|± |  2.36|
+|bigbench_salient_translation_error_detection     |      0|multiple_choice_grade|35.97|± |  1.52|
+|bigbench_snarks                                  |      0|multiple_choice_grade|79.01|± |  3.04|
+|bigbench_sports_understanding                    |      0|multiple_choice_grade|75.66|± |  1.37|
+|bigbench_temporal_sequences                      |      0|multiple_choice_grade|47.90|± |  1.58|
+|bigbench_tracking_shuffled_objects_five_objects  |      0|multiple_choice_grade|23.84|± |  1.21|
+|bigbench_tracking_shuffled_objects_seven_objects |      0|multiple_choice_grade|18.00|± |  0.92|
+|bigbench_tracking_shuffled_objects_three_objects |      0|multiple_choice_grade|56.67|± |  2.87|
+
+Average: 47.92%
+
+Average score: 59.62%
+
+Elapsed time: 01:57:56
 
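For reference, the overall score added by this diff is consistent with an unweighted mean of the four per-benchmark averages. A minimal sketch of that arithmetic, assuming equal weighting across benchmarks (the `scores` dict below just restates the numbers from the tables):

```python
# Per-benchmark averages from the evaluation tables above.
scores = {
    "AGIEval": 44.34,
    "GPT4All": 77.77,
    "TruthfulQA": 68.47,
    "Bigbench": 47.92,
}

# Unweighted mean across the four benchmarks.
overall = sum(scores.values()) / len(scores)
print(f"Average score: {overall:.2f}%")  # matches the reported 59.62%
```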