---
license: cc-by-nc-4.0
model-index:
  - name: Mistral-11B-TestBench9
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 64.08
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench9
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 84.24
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench9
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench9
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 56.19
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench9
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 78.45
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench9
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 16.15
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Undi95/Mistral-11B-TestBench9
          name: Open LLM Leaderboard
---

Don't mind these at the moment; I still need to finetune them for RP. These are just some tests.

WARNING: This model specifically needs an EOS token that I completely forgot to put in the JSON files, and I still need to check which one was the right one through the mix. Please don't use it like this if you really want to review it.
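If you want to try it anyway, you can set the EOS token yourself at load time. This is only a minimal sketch, assuming the standard Mistral EOS token `</s>`; as the warning says, the correct token through the mix hasn't been confirmed:

```python
# Sketch: manually setting the missing EOS token at load time.
# Assumes the standard Mistral EOS token "</s>" -- verify against
# the tokenizers of the source models before relying on it.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Undi95/Mistral-11B-TestBench9"  # a local path works too

tokenizer = AutoTokenizer.from_pretrained(repo)
tokenizer.eos_token = "</s>"  # assumption: Mistral's default EOS

model = AutoModelForCausalLM.from_pretrained(repo)
model.config.eos_token_id = tokenizer.eos_token_id
model.generation_config.eos_token_id = tokenizer.eos_token_id
```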

```yaml
slices:
  - sources:
      - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
        layer_range: [0, 24]
  - sources:
      - model: "/content/drive/MyDrive/Zephyr-7B"
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
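For context, this passthrough config stacks layers 0–23 of the first model on top of layers 8–31 of the second, i.e. 24 + 24 = 48 decoder layers, which is where the ~11B parameter count comes from. A rough sketch of the idea (paths are placeholders, and mergekit itself handles the embeddings, final norm, and config that this glosses over):

```python
# Sketch of what the passthrough merge does conceptually: stack
# decoder layers from two donor models into one deeper model.
import torch
from transformers import AutoModelForCausalLM

a = AutoModelForCausalLM.from_pretrained("CC-v1.1-7B-bf16", torch_dtype=torch.bfloat16)
b = AutoModelForCausalLM.from_pretrained("Zephyr-7B", torch_dtype=torch.bfloat16)

# 24 layers from model A plus 24 layers from model B -> 48 layers total.
stacked = list(a.model.layers[0:24]) + list(b.model.layers[8:32])
print(len(stacked))  # 48
```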

================================================

```yaml
slices:
  - sources:
      - model: "/content/drive/MyDrive/Mistral-11B-CC-Zephyr"
        layer_range: [0, 48]
      - model: Undi95/Mistral-11B-OpenOrcaPlatypus
        layer_range: [0, 48]
merge_method: slerp
base_model: "/content/drive/MyDrive/Mistral-11B-CC-Zephyr"
parameters:
  t:
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
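With `merge_method: slerp` and `t: 0.5`, each merged tensor ends up halfway along the great-circle arc between the two parents instead of on the straight line between them. A minimal sketch of the per-tensor operation, not mergekit's actual implementation:

```python
# Sketch of spherical linear interpolation (slerp) between two
# weight tensors; t=0.5 is the midpoint used in the config above.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    a = v0.flatten().float()
    b = v1.flatten().float()
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    # Angle between the two tensors, treated as flat vectors.
    omega = torch.acos(torch.clamp(a_n @ b_n, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(v0.shape).to(v0.dtype)
```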

```text
hf-causal-experimental (pretrained=/content/drive/MyDrive/Mistral-11B-Test), limit: None, provide_description: False, num_fewshot: 0, batch_size: 4
```

| Task          | Version | Metric   | Value  | Stderr   |
|---------------|---------|----------|--------|----------|
| arc_challenge | 0       | acc      | 0.5623 | ± 0.0145 |
|               |         | acc_norm | 0.5794 | ± 0.0144 |
| arc_easy      | 0       | acc      | 0.8354 | ± 0.0076 |
|               |         | acc_norm | 0.8165 | ± 0.0079 |
| hellaswag     | 0       | acc      | 0.6389 | ± 0.0048 |
|               |         | acc_norm | 0.8236 | ± 0.0038 |
| piqa          | 0       | acc      | 0.8139 | ± 0.0091 |
|               |         | acc_norm | 0.8264 | ± 0.0088 |
| truthfulqa_mc | 1       | mc1      | 0.3978 | ± 0.0171 |
|               |         | mc2      | 0.5607 | ± 0.0155 |
| winogrande    | 0       | acc      | 0.7451 | ± 0.0122 |
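For anyone who wants to re-run these numbers: the header line above is the old EleutherAI lm-evaluation-harness with its `hf-causal-experimental` backend. A sketch using its pre-0.4 Python API; the task list is read off the table, everything else is the settings shown in the header:

```python
# Sketch: reproducing the table above with the (pre-0.4) EleutherAI
# lm-evaluation-harness Python API.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=/content/drive/MyDrive/Mistral-11B-Test",
    tasks=["arc_challenge", "arc_easy", "hellaswag",
           "piqa", "truthfulqa_mc", "winogrande"],
    num_fewshot=0,
    batch_size=4,
)
print(results["results"])
```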


Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric              | Value |
|---------------------|-------|
| Avg.                | 53.06 |
| ARC (25-shot)       | 64.08 |
| HellaSwag (10-shot) | 84.24 |
| MMLU (5-shot)       | 64.0  |
| TruthfulQA (0-shot) | 56.19 |
| Winogrande (5-shot) | 78.45 |
| GSM8K (5-shot)      | 16.15 |
| DROP (3-shot)       | 8.35  |

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 60.52 |
| AI2 Reasoning Challenge (25-Shot) | 64.08 |
| HellaSwag (10-Shot)               | 84.24 |
| MMLU (5-Shot)                     | 64.00 |
| TruthfulQA (0-shot)               | 56.19 |
| Winogrande (5-shot)               | 78.45 |
| GSM8k (5-shot)                    | 16.15 |
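The two averages differ only because the earlier snapshot still included DROP (3-shot), which the leaderboard later removed; both are simple means of the listed scores:

```python
# Quick check of the two leaderboard averages above.
old = [64.08, 84.24, 64.0, 56.19, 78.45, 16.15, 8.35]  # with DROP
new = [64.08, 84.24, 64.00, 56.19, 78.45, 16.15]       # DROP removed

print(round(sum(old) / len(old), 2))  # 53.07 (shown as 53.06: averaged before rounding)
print(round(sum(new) / len(new), 2))  # 60.52
```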