|
--- |
|
license: agpl-3.0 |
|
tags: |
|
- merge |
|
- wizardlm |
|
- wizardmath |
|
pipeline_tag: text-generation |
|
model-index: |
|
- name: WizardLM-Math-70B-TIES-v0.1 |
|
results: |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: AI2 Reasoning Challenge (25-Shot) |
|
type: ai2_arc |
|
config: ARC-Challenge |
|
split: test |
|
args: |
|
num_few_shot: 25 |
|
metrics: |
|
- type: acc_norm |
|
value: 68.52 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-TIES-v0.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: HellaSwag (10-Shot) |
|
type: hellaswag |
|
split: validation |
|
args: |
|
num_few_shot: 10 |
|
metrics: |
|
- type: acc_norm |
|
value: 86.87 |
|
name: normalized accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-TIES-v0.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: MMLU (5-Shot) |
|
type: cais/mmlu |
|
config: all |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 69.24 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-TIES-v0.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: TruthfulQA (0-shot) |
|
type: truthful_qa |
|
config: multiple_choice |
|
split: validation |
|
args: |
|
num_few_shot: 0 |
|
metrics: |
|
- type: mc2 |
|
value: 53.61 |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-TIES-v0.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: Winogrande (5-shot) |
|
type: winogrande |
|
config: winogrande_xl |
|
split: validation |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 82.72 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-TIES-v0.1 |
|
name: Open LLM Leaderboard |
|
- task: |
|
type: text-generation |
|
name: Text Generation |
|
dataset: |
|
name: GSM8k (5-shot) |
|
type: gsm8k |
|
config: main |
|
split: test |
|
args: |
|
num_few_shot: 5 |
|
metrics: |
|
- type: acc |
|
value: 27.37 |
|
name: accuracy |
|
source: |
|
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=MaziyarPanahi/WizardLM-Math-70B-TIES-v0.1 |
|
name: Open LLM Leaderboard |
|
--- |
|
|
|
|
|
|
|
## Examples |
|
|
|
|
|
|
|
There are three killers in a room. Someone enters the room and kills one of them. Nobody leaves the room. How many killers are left in the room? Explain your reasoning step by step. |
|
|
|
``` |
|
Initially, there are three killers in the room. Let's call them Killer A, Killer B, and Killer C. |
|
|
|
Step 1: Someone enters the room and kills one of the killers. We don't know who this person is or which killer they killed. However, we know that one killer is dead. |
|
|
|
Step 2: Now, we have two killers left in the room. These two killers are the ones who were not killed by the person who entered the room. |
|
|
|
Step 3: The person who entered the room is not a killer, as they are not included in the initial three killers. However, they have killed one of the killers, so they are not innocent either. |
|
|
|
Step 4: Since the person who entered the room is not a killer, we can conclude that there are only two killers left in the room after the incident. |
|
|
|
So, the final answer is: There are 2 killers left in the room. |
|
|
|
The answer is: there are 2 killers left in the room. |
|
``` |
|
Answered incorrectly: the person who entered the room also becomes a killer, so three killers are left in the room.
|
|
|
Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark? Let's think step by step.
|
|
|
``` |
|
First, we know that Mark is faster than Mary. This means that if they were to race, Mark would finish before Mary. We can represent this relationship as:
|
|
|
Mark > Mary |
|
|
|
Next, we know that Mary is faster than Joe. This means that if Mary and Joe were to race, Mary would finish before Joe. We can represent this relationship as: |
|
|
|
Mary > Joe |
|
|
|
Now, we want to find out if Joe is faster than Mark. To do this, we can use the transitive property, which states that if A > B and B > C, then A > C. In our case, A represents Mark, B represents Mary, and C represents Joe. So, we can apply the transitive property to our relationships: |
|
|
|
Since Mark > Mary and Mary > Joe, we can conclude that Mark > Joe. |
|
|
|
So, the final answer is: No, Joe is not faster than Mark. |
|
|
|
The answer is: No, Joe is not faster than Mark. |
|
``` |
|
|
|
If I lay 2 wet shirts out in the sun flat on the ground to dry and it takes 5 hours until they are dry, how long would it take to dry 4 shirts of the same type that way simultaneously?
|
|
|
``` |
|
First, we need to understand the relationship between the number of shirts and the time it takes to dry them. Since the shirts are laid out flat on the ground, they have the same surface area exposed to the sun. This means that the drying time is directly proportional to the number of shirts. |
|
|
|
Step 1: Identify the given information. |
|
- Time to dry 2 shirts: 5 hours |
|
- Number of shirts: 2 |
|
|
|
Step 2: Set up a proportion to represent the relationship between the number of shirts and the time it takes to dry them. |
|
- Let x be the time it takes to dry 4 shirts. |
|
- We can write the proportion as: |
|
|
|
(2 shirts / 5 hours) = (4 shirts / x) |
|
|
|
Step 3: Solve for x. |
|
- To solve for x, we can cross-multiply: |
|
|
|
2 shirts * x = 5 hours * 4 shirts |
|
|
|
- Simplify the equation: |
|
|
|
2x = 20 shirt-hours |
|
|
|
- Now, divide both sides by 2 to isolate x: |
|
|
|
x = 10 hours |
|
|
|
So, it would take 10 hours to dry 4 shirts of the same type simultaneously. |
|
|
|
The answer is: 10. |
|
``` |
|
Wrong: the shirts dry in parallel, so 4 shirts laid out at the same time would still take 5 hours.
|
|
|
|
|
Explain step by step: 25-4*2+3=?
|
``` |
|
First, we need to follow the order of operations (PEMDAS/BODMAS), which stands for Parentheses/Brackets, Exponents/Orders, Multiplication and Division, and Addition and Subtraction. This means we should perform the operations inside the parentheses first, then the exponents, then the multiplication and division, and finally the addition and subtraction. |
|
|
|
Here's the given expression: 25 - 4 * 2 + 3 |
|
|
|
Step 1: Perform the multiplication operation inside the parentheses. |
|
25 - (4 * 2) + 3 |
|
|
|
Step 2: Perform the multiplication operation. |
|
25 - (8) + 3 |
|
|
|
Step 3: Now, we have a subtraction operation and an addition operation. Since they have the same precedence, we can perform them from left to right. |
|
25 - 8 + 3 |
|
|
|
Step 4: Perform the subtraction operation. |
|
17 + 3 |
|
|
|
Step 5: Perform the addition operation. |
|
20 |
|
|
|
So, the final answer is 20. |
|
|
|
The answer is: the final answer is 20. |
|
``` |
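
This one is answered correctly. A quick check in Python, which follows the same precedence rules (multiplication before subtraction and addition):

```python
# Multiplication binds tighter than subtraction/addition, so this evaluates as 25 - (4 * 2) + 3.
print(25 - 4 * 2 + 3)  # 20
```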
|
|
|
## Prompt template |
|
|
|
``` |
|
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" |
|
``` |
|
|
|
or, for CoT (❗ For simple math questions, we do NOT recommend using the CoT prompt):
|
|
|
``` |
|
"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response: Let's think step by step." |
|
``` |
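
As a minimal usage sketch (not an official snippet from this repository), the template above can be applied with 🤗 Transformers along these lines; the generation settings are assumptions, and the 70B model needs enough GPU memory (or quantization) to load:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MaziyarPanahi/WizardLM-Math-70B-TIES-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires `accelerate`; swap in your own loading/quantization setup as needed.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

template = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:"
)

prompt = template.format(
    instruction="Mark is faster than Mary, Mary is faster than Joe. Is Joe faster than Mark?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```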
|
|
|
## Eval |
|
|
|
```python |
|
{ |
|
"all": { |
|
"acc": 0.6868282613819305, |
|
"acc_stderr": 0.030371866427473967, |
|
"acc_norm": 0.695311288530275, |
|
"acc_norm_stderr": 0.030984285786669577, |
|
"mc1": 0.36964504283965727, |
|
"mc1_stderr": 0.01689818070697388, |
|
"mc2": 0.5360987678643523, |
|
"mc2_stderr": 0.014938153988985473 |
|
}, |
|
"harness|arc:challenge|25": { |
|
"acc": 0.6424914675767918, |
|
"acc_stderr": 0.014005494275916573, |
|
"acc_norm": 0.6851535836177475, |
|
"acc_norm_stderr": 0.01357265770308495 |
|
}, |
|
"harness|hellaswag|10": { |
|
"acc": 0.6836287592113125, |
|
"acc_stderr": 0.004641092001425294, |
|
"acc_norm": 0.8686516630153356, |
|
"acc_norm_stderr": 0.0033709059327855567 |
|
}, |
|
"harness|hendrycksTest-abstract_algebra|5": { |
|
"acc": 0.33, |
|
"acc_stderr": 0.04725815626252605, |
|
"acc_norm": 0.33, |
|
"acc_norm_stderr": 0.04725815626252605 |
|
}, |
|
"harness|hendrycksTest-anatomy|5": { |
|
"acc": 0.6222222222222222, |
|
"acc_stderr": 0.04188307537595852, |
|
"acc_norm": 0.6222222222222222, |
|
"acc_norm_stderr": 0.04188307537595852 |
|
}, |
|
"harness|hendrycksTest-astronomy|5": { |
|
"acc": 0.8026315789473685, |
|
"acc_stderr": 0.03238981601699397, |
|
"acc_norm": 0.8026315789473685, |
|
"acc_norm_stderr": 0.03238981601699397 |
|
}, |
|
"harness|hendrycksTest-business_ethics|5": { |
|
"acc": 0.72, |
|
"acc_stderr": 0.04512608598542127, |
|
"acc_norm": 0.72, |
|
"acc_norm_stderr": 0.04512608598542127 |
|
}, |
|
"harness|hendrycksTest-clinical_knowledge|5": { |
|
"acc": 0.7471698113207547, |
|
"acc_stderr": 0.026749899771241214, |
|
"acc_norm": 0.7471698113207547, |
|
"acc_norm_stderr": 0.026749899771241214 |
|
}, |
|
"harness|hendrycksTest-college_biology|5": { |
|
"acc": 0.8194444444444444, |
|
"acc_stderr": 0.032166008088022675, |
|
"acc_norm": 0.8194444444444444, |
|
"acc_norm_stderr": 0.032166008088022675 |
|
}, |
|
"harness|hendrycksTest-college_chemistry|5": { |
|
"acc": 0.48, |
|
"acc_stderr": 0.050211673156867795, |
|
"acc_norm": 0.48, |
|
"acc_norm_stderr": 0.050211673156867795 |
|
}, |
|
"harness|hendrycksTest-college_computer_science|5": { |
|
"acc": 0.57, |
|
"acc_stderr": 0.04975698519562428, |
|
"acc_norm": 0.57, |
|
"acc_norm_stderr": 0.04975698519562428 |
|
}, |
|
"harness|hendrycksTest-college_mathematics|5": { |
|
"acc": 0.33, |
|
"acc_stderr": 0.04725815626252604, |
|
"acc_norm": 0.33, |
|
"acc_norm_stderr": 0.04725815626252604 |
|
}, |
|
"harness|hendrycksTest-college_medicine|5": { |
|
"acc": 0.6647398843930635, |
|
"acc_stderr": 0.03599586301247077, |
|
"acc_norm": 0.6647398843930635, |
|
"acc_norm_stderr": 0.03599586301247077 |
|
}, |
|
"harness|hendrycksTest-college_physics|5": { |
|
"acc": 0.35294117647058826, |
|
"acc_stderr": 0.047551296160629475, |
|
"acc_norm": 0.35294117647058826, |
|
"acc_norm_stderr": 0.047551296160629475 |
|
}, |
|
"harness|hendrycksTest-computer_security|5": { |
|
"acc": 0.74, |
|
"acc_stderr": 0.04408440022768079, |
|
"acc_norm": 0.74, |
|
"acc_norm_stderr": 0.04408440022768079 |
|
}, |
|
"harness|hendrycksTest-conceptual_physics|5": { |
|
"acc": 0.6893617021276596, |
|
"acc_stderr": 0.03025123757921317, |
|
"acc_norm": 0.6893617021276596, |
|
"acc_norm_stderr": 0.03025123757921317 |
|
}, |
|
"harness|hendrycksTest-econometrics|5": { |
|
"acc": 0.40350877192982454, |
|
"acc_stderr": 0.046151869625837026, |
|
"acc_norm": 0.40350877192982454, |
|
"acc_norm_stderr": 0.046151869625837026 |
|
}, |
|
"harness|hendrycksTest-electrical_engineering|5": { |
|
"acc": 0.6068965517241379, |
|
"acc_stderr": 0.040703290137070705, |
|
"acc_norm": 0.6068965517241379, |
|
"acc_norm_stderr": 0.040703290137070705 |
|
}, |
|
"harness|hendrycksTest-elementary_mathematics|5": { |
|
"acc": 0.4312169312169312, |
|
"acc_stderr": 0.0255064816981382, |
|
"acc_norm": 0.4312169312169312, |
|
"acc_norm_stderr": 0.0255064816981382 |
|
}, |
|
"harness|hendrycksTest-formal_logic|5": { |
|
"acc": 0.5, |
|
"acc_stderr": 0.04472135954999579, |
|
"acc_norm": 0.5, |
|
"acc_norm_stderr": 0.04472135954999579 |
|
}, |
|
"harness|hendrycksTest-global_facts|5": { |
|
"acc": 0.43, |
|
"acc_stderr": 0.049756985195624284, |
|
"acc_norm": 0.43, |
|
"acc_norm_stderr": 0.049756985195624284 |
|
}, |
|
"harness|hendrycksTest-high_school_biology|5": { |
|
"acc": 0.8193548387096774, |
|
"acc_stderr": 0.021886178567172527, |
|
"acc_norm": 0.8193548387096774, |
|
"acc_norm_stderr": 0.021886178567172527 |
|
}, |
|
"harness|hendrycksTest-high_school_chemistry|5": { |
|
"acc": 0.5320197044334976, |
|
"acc_stderr": 0.03510766597959217, |
|
"acc_norm": 0.5320197044334976, |
|
"acc_norm_stderr": 0.03510766597959217 |
|
}, |
|
"harness|hendrycksTest-high_school_computer_science|5": { |
|
"acc": 0.71, |
|
"acc_stderr": 0.045604802157206845, |
|
"acc_norm": 0.71, |
|
"acc_norm_stderr": 0.045604802157206845 |
|
}, |
|
"harness|hendrycksTest-high_school_european_history|5": { |
|
"acc": 0.8121212121212121, |
|
"acc_stderr": 0.03050193405942914, |
|
"acc_norm": 0.8121212121212121, |
|
"acc_norm_stderr": 0.03050193405942914 |
|
}, |
|
"harness|hendrycksTest-high_school_geography|5": { |
|
"acc": 0.8888888888888888, |
|
"acc_stderr": 0.022390787638216773, |
|
"acc_norm": 0.8888888888888888, |
|
"acc_norm_stderr": 0.022390787638216773 |
|
}, |
|
"harness|hendrycksTest-high_school_government_and_politics|5": { |
|
"acc": 0.927461139896373, |
|
"acc_stderr": 0.018718998520678185, |
|
"acc_norm": 0.927461139896373, |
|
"acc_norm_stderr": 0.018718998520678185 |
|
}, |
|
"harness|hendrycksTest-high_school_macroeconomics|5": { |
|
"acc": 0.7230769230769231, |
|
"acc_stderr": 0.022688042352424994, |
|
"acc_norm": 0.7230769230769231, |
|
"acc_norm_stderr": 0.022688042352424994 |
|
}, |
|
"harness|hendrycksTest-high_school_mathematics|5": { |
|
"acc": 0.3333333333333333, |
|
"acc_stderr": 0.028742040903948492, |
|
"acc_norm": 0.3333333333333333, |
|
"acc_norm_stderr": 0.028742040903948492 |
|
}, |
|
"harness|hendrycksTest-high_school_microeconomics|5": { |
|
"acc": 0.8109243697478992, |
|
"acc_stderr": 0.02543511943810537, |
|
"acc_norm": 0.8109243697478992, |
|
"acc_norm_stderr": 0.02543511943810537 |
|
}, |
|
"harness|hendrycksTest-high_school_physics|5": { |
|
"acc": 0.4304635761589404, |
|
"acc_stderr": 0.04042809961395634, |
|
"acc_norm": 0.4304635761589404, |
|
"acc_norm_stderr": 0.04042809961395634 |
|
}, |
|
"harness|hendrycksTest-high_school_psychology|5": { |
|
"acc": 0.8862385321100917, |
|
"acc_stderr": 0.0136136148002328, |
|
"acc_norm": 0.8862385321100917, |
|
"acc_norm_stderr": 0.0136136148002328 |
|
}, |
|
"harness|hendrycksTest-high_school_statistics|5": { |
|
"acc": 0.5879629629629629, |
|
"acc_stderr": 0.03356787758160831, |
|
"acc_norm": 0.5879629629629629, |
|
"acc_norm_stderr": 0.03356787758160831 |
|
}, |
|
"harness|hendrycksTest-high_school_us_history|5": { |
|
"acc": 0.9166666666666666, |
|
"acc_stderr": 0.019398452135813895, |
|
"acc_norm": 0.9166666666666666, |
|
"acc_norm_stderr": 0.019398452135813895 |
|
}, |
|
"harness|hendrycksTest-high_school_world_history|5": { |
|
"acc": 0.8776371308016878, |
|
"acc_stderr": 0.02133174182974679, |
|
"acc_norm": 0.8776371308016878, |
|
"acc_norm_stderr": 0.02133174182974679 |
|
}, |
|
"harness|hendrycksTest-human_aging|5": { |
|
"acc": 0.8161434977578476, |
|
"acc_stderr": 0.025998379092356513, |
|
"acc_norm": 0.8161434977578476, |
|
"acc_norm_stderr": 0.025998379092356513 |
|
}, |
|
"harness|hendrycksTest-human_sexuality|5": { |
|
"acc": 0.8473282442748091, |
|
"acc_stderr": 0.03154521672005472, |
|
"acc_norm": 0.8473282442748091, |
|
"acc_norm_stderr": 0.03154521672005472 |
|
}, |
|
"harness|hendrycksTest-international_law|5": { |
|
"acc": 0.8512396694214877, |
|
"acc_stderr": 0.03248470083807194, |
|
"acc_norm": 0.8512396694214877, |
|
"acc_norm_stderr": 0.03248470083807194 |
|
}, |
|
"harness|hendrycksTest-jurisprudence|5": { |
|
"acc": 0.8148148148148148, |
|
"acc_stderr": 0.03755265865037181, |
|
"acc_norm": 0.8148148148148148, |
|
"acc_norm_stderr": 0.03755265865037181 |
|
}, |
|
"harness|hendrycksTest-logical_fallacies|5": { |
|
"acc": 0.803680981595092, |
|
"acc_stderr": 0.031207970394709225, |
|
"acc_norm": 0.803680981595092, |
|
"acc_norm_stderr": 0.031207970394709225 |
|
}, |
|
"harness|hendrycksTest-machine_learning|5": { |
|
"acc": 0.48214285714285715, |
|
"acc_stderr": 0.047427623612430116, |
|
"acc_norm": 0.48214285714285715, |
|
"acc_norm_stderr": 0.047427623612430116 |
|
}, |
|
"harness|hendrycksTest-management|5": { |
|
"acc": 0.8446601941747572, |
|
"acc_stderr": 0.03586594738573975, |
|
"acc_norm": 0.8446601941747572, |
|
"acc_norm_stderr": 0.03586594738573975 |
|
}, |
|
"harness|hendrycksTest-marketing|5": { |
|
"acc": 0.8846153846153846, |
|
"acc_stderr": 0.020930193185179333, |
|
"acc_norm": 0.8846153846153846, |
|
"acc_norm_stderr": 0.020930193185179333 |
|
}, |
|
"harness|hendrycksTest-medical_genetics|5": { |
|
"acc": 0.72, |
|
"acc_stderr": 0.045126085985421276, |
|
"acc_norm": 0.72, |
|
"acc_norm_stderr": 0.045126085985421276 |
|
}, |
|
"harness|hendrycksTest-miscellaneous|5": { |
|
"acc": 0.8620689655172413, |
|
"acc_stderr": 0.012331009307795663, |
|
"acc_norm": 0.8620689655172413, |
|
"acc_norm_stderr": 0.012331009307795663 |
|
}, |
|
"harness|hendrycksTest-moral_disputes|5": { |
|
"acc": 0.7745664739884393, |
|
"acc_stderr": 0.022497230190967558, |
|
"acc_norm": 0.7745664739884393, |
|
"acc_norm_stderr": 0.022497230190967558 |
|
}, |
|
"harness|hendrycksTest-moral_scenarios|5": { |
|
"acc": 0.5452513966480447, |
|
"acc_stderr": 0.016653875777523995, |
|
"acc_norm": 0.5452513966480447, |
|
"acc_norm_stderr": 0.016653875777523995 |
|
}, |
|
"harness|hendrycksTest-nutrition|5": { |
|
"acc": 0.7581699346405228, |
|
"acc_stderr": 0.024518195641879334, |
|
"acc_norm": 0.7581699346405228, |
|
"acc_norm_stderr": 0.024518195641879334 |
|
}, |
|
"harness|hendrycksTest-philosophy|5": { |
|
"acc": 0.7845659163987139, |
|
"acc_stderr": 0.023350225475471442, |
|
"acc_norm": 0.7845659163987139, |
|
"acc_norm_stderr": 0.023350225475471442 |
|
}, |
|
"harness|hendrycksTest-prehistory|5": { |
|
"acc": 0.8364197530864198, |
|
"acc_stderr": 0.02058146613825712, |
|
"acc_norm": 0.8364197530864198, |
|
"acc_norm_stderr": 0.02058146613825712 |
|
}, |
|
"harness|hendrycksTest-professional_accounting|5": { |
|
"acc": 0.5177304964539007, |
|
"acc_stderr": 0.02980873964223777, |
|
"acc_norm": 0.5177304964539007, |
|
"acc_norm_stderr": 0.02980873964223777 |
|
}, |
|
"harness|hendrycksTest-professional_law|5": { |
|
"acc": 0.5658409387222947, |
|
"acc_stderr": 0.012659033237067253, |
|
"acc_norm": 0.5658409387222947, |
|
"acc_norm_stderr": 0.012659033237067253 |
|
}, |
|
"harness|hendrycksTest-professional_medicine|5": { |
|
"acc": 0.7683823529411765, |
|
"acc_stderr": 0.025626533803777562, |
|
"acc_norm": 0.7683823529411765, |
|
"acc_norm_stderr": 0.025626533803777562 |
|
}, |
|
"harness|hendrycksTest-professional_psychology|5": { |
|
"acc": 0.7532679738562091, |
|
"acc_stderr": 0.0174408203674025, |
|
"acc_norm": 0.7532679738562091, |
|
"acc_norm_stderr": 0.0174408203674025 |
|
}, |
|
"harness|hendrycksTest-public_relations|5": { |
|
"acc": 0.7272727272727273, |
|
"acc_stderr": 0.04265792110940588, |
|
"acc_norm": 0.7272727272727273, |
|
"acc_norm_stderr": 0.04265792110940588 |
|
}, |
|
"harness|hendrycksTest-security_studies|5": { |
|
"acc": 0.8, |
|
"acc_stderr": 0.02560737598657916, |
|
"acc_norm": 0.8, |
|
"acc_norm_stderr": 0.02560737598657916 |
|
}, |
|
"harness|hendrycksTest-sociology|5": { |
|
"acc": 0.8955223880597015, |
|
"acc_stderr": 0.021628920516700643, |
|
"acc_norm": 0.8955223880597015, |
|
"acc_norm_stderr": 0.021628920516700643 |
|
}, |
|
"harness|hendrycksTest-us_foreign_policy|5": { |
|
"acc": 0.93, |
|
"acc_stderr": 0.0256432399976243, |
|
"acc_norm": 0.93, |
|
"acc_norm_stderr": 0.0256432399976243 |
|
}, |
|
"harness|hendrycksTest-virology|5": { |
|
"acc": 0.5240963855421686, |
|
"acc_stderr": 0.03887971849597264, |
|
"acc_norm": 0.5240963855421686, |
|
"acc_norm_stderr": 0.03887971849597264 |
|
}, |
|
"harness|hendrycksTest-world_religions|5": { |
|
"acc": 0.8654970760233918, |
|
"acc_stderr": 0.026168221344662297, |
|
"acc_norm": 0.8654970760233918, |
|
"acc_norm_stderr": 0.026168221344662297 |
|
}, |
|
"harness|truthfulqa:mc|0": { |
|
"mc1": 0.36964504283965727, |
|
"mc1_stderr": 0.01689818070697388, |
|
"mc2": 0.5360987678643523, |
|
"mc2_stderr": 0.014938153988985473 |
|
}, |
|
"harness|winogrande|5": { |
|
"acc": 0.8271507498026835, |
|
"acc_stderr": 0.010626964529971855 |
|
}, |
|
"harness|gsm8k|5": { |
|
"acc": 0.27369219105382864, |
|
"acc_stderr": 0.012281003490963456 |
|
} |
|
} |
|
``` |
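
The MMLU score reported on the leaderboard below is the unweighted mean of the `harness|hendrycksTest-*` subtask accuracies above. A minimal sketch of that aggregation, assuming the dictionary above is bound to a variable named `results` (an assumption for illustration):

```python
# `results` is assumed to hold the evaluation dictionary shown above.
mmlu = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print(round(100 * sum(mmlu) / len(mmlu), 2))  # expected to land near the reported 69.24
```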
|
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
|
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_MaziyarPanahi__WizardLM-Math-70B-TIES-v0.1) |
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |64.72| |
|
|AI2 Reasoning Challenge (25-Shot)|68.52| |
|
|HellaSwag (10-Shot) |86.87| |
|
|MMLU (5-Shot) |69.24| |
|
|TruthfulQA (0-shot) |53.61| |
|
|Winogrande (5-shot) |82.72| |
|
|GSM8k (5-shot) |27.37| |
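
The Avg. row is the arithmetic mean of the six benchmark scores:

```python
scores = [68.52, 86.87, 69.24, 53.61, 82.72, 27.37]
print(round(sum(scores) / len(scores), 2))  # 64.72
```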
|
|
|
|