leaderboard-pr-bot committed on
Commit 22448a1
1 Parent(s): a480082

Adding Evaluation Results

Browse files

This is an automated PR created with https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr/discussions
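
Once merged, the `model-index` block in the diff below is parsed by the Hub into machine-readable card metadata. A minimal sketch of reading those results back with `huggingface_hub` (assuming this PR has already been merged into `ai4bharat/Airavata`):

```python
# Sketch: read the model-index metadata back from the Hub after this PR
# is merged. huggingface_hub's ModelCard parses the YAML front matter of
# README.md; eval_results is the parsed form of the model-index block.
from huggingface_hub import ModelCard

card = ModelCard.load("ai4bharat/Airavata")

for result in card.data.eval_results or []:
    print(f"{result.dataset_name}: {result.metric_type} = {result.metric_value}")
```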

Files changed (1)
  1. README.md +117 -0
README.md CHANGED
@@ -9,6 +9,109 @@ tags:
 - llama2
 datasets:
 - ai4bharat/indic-instruct-data-v0.1
+model-index:
+- name: Airavata
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 46.5
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 69.26
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 43.9
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 40.62
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 68.82
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 4.02
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ai4bharat/Airavata
+      name: Open LLM Leaderboard
 ---
 
 # Airavata
@@ -123,3 +226,17 @@ print(outputs)
 journal = {arXiv preprint arXiv: 2401.15006}
 }
 ```
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ai4bharat__Airavata)
+
+| Metric |Value|
+|---------------------------------|----:|
+|Avg. |45.52|
+|AI2 Reasoning Challenge (25-Shot)|46.50|
+|HellaSwag (10-Shot) |69.26|
+|MMLU (5-Shot) |43.90|
+|TruthfulQA (0-shot) |40.62|
+|Winogrande (5-shot) |68.82|
+|GSM8k (5-shot) | 4.02|
+
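
The `Avg.` row in the added table is the plain arithmetic mean of the six benchmark scores, which a quick check confirms (values copied from the table above):

```python
# Sanity check: Avg. is the arithmetic mean of the six benchmark scores.
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 46.50,
    "HellaSwag (10-Shot)": 69.26,
    "MMLU (5-Shot)": 43.90,
    "TruthfulQA (0-shot)": 40.62,
    "Winogrande (5-shot)": 68.82,
    "GSM8k (5-shot)": 4.02,
}
average = sum(scores.values()) / len(scores)
print(f"{average:.2f}")  # -> 45.52, matching the Avg. row
```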