---
license: apache-2.0
library_name: peft
tags:
  - code
  - instruct
  - falcon
datasets:
  - HuggingFaceH4/no_robots
base_model: tiiuae/falcon-7b
model-index:
  - name: falcon_7b_norobots
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 47.87
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=qblocks/falcon_7b_norobots
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 77.92
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=qblocks/falcon_7b_norobots
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 27.94
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=qblocks/falcon_7b_norobots
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 36.81
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=qblocks/falcon_7b_norobots
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 71.74
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=qblocks/falcon_7b_norobots
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 4.47
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=qblocks/falcon_7b_norobots
          name: Open LLM Leaderboard
---

Finetuning Overview:

Model Used: tiiuae/falcon-7b

Dataset: HuggingFaceH4/no_robots

Dataset Insights:

No Robots is a high-quality dataset of 10,000 instructions and demonstrations created by skilled human annotators. This data can be used for supervised fine-tuning (SFT) to make language models follow instructions better.
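For reference, the dataset can be loaded and inspected with the `datasets` library. This is a minimal sketch; the split and field names are assumptions based on the dataset card and should be verified against the current dataset config:

```python
from datasets import load_dataset

# Load the 10k-instruction SFT dataset used for this finetune.
dataset = load_dataset("HuggingFaceH4/no_robots")

print(dataset)  # shows the available splits and their sizes

# Each record is a chat transcript stored under a "messages" field
# (field name assumed from the dataset card).
example = dataset["train"][0]  # split name assumed; older versions used "train_sft"
print(example)
```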

Finetuning Details:

Using MonsterAPI's LLM finetuner, this finetuning:

  • Was achieved cost-effectively.
  • Completed in 27 minutes 26 seconds for 1 epoch on an A6000 48GB GPU.
  • Cost $0.909 for the entire epoch.

Hyperparameters & Additional Details:

  • Epochs: 1
  • Cost Per Epoch: $0.909
  • Total Finetuning Cost: $0.909
  • Model Path: tiiuae/falcon-7b
  • Learning Rate: 0.0002
  • Data Split: 100% train
  • Gradient Accumulation Steps: 4
  • LoRA r: 32
  • LoRA alpha: 64
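As an illustration, these hyperparameters map onto a `peft` LoRA setup roughly as follows. This is a minimal sketch, not MonsterAPI's exact configuration: the target modules, dropout, and output directory are assumptions not reported in this card, with `query_key_value` being the module commonly targeted in Falcon attention blocks.

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# LoRA values from the list above; target_modules and dropout are
# assumptions, not published values.
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["query_key_value"],
    lora_dropout=0.05,  # assumed
    bias="none",
    task_type="CAUSAL_LM",
)

base_model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    trust_remote_code=True,  # Falcon originally shipped custom modeling code
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# Optimizer settings from the list above; remaining arguments are defaults.
training_args = TrainingArguments(
    output_dir="falcon_7b_norobots",
    num_train_epochs=1,
    learning_rate=2e-4,
    gradient_accumulation_steps=4,
)
```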

Prompt Structure

```
<|system|>
<|endoftext|>
<|user|>
[USER PROMPT]<|endoftext|>
<|assistant|>
[ASSISTANT ANSWER]<|endoftext|>
```
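For inference, a prompt can be assembled from this template by hand. Below is a minimal sketch, assuming the adapter weights live at `qblocks/falcon_7b_norobots` (the repo id referenced by the leaderboard URLs above); generation settings are illustrative only:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model and apply the finetuned LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-7b",
    trust_remote_code=True,  # add device_map / dtype options as needed
)
model = PeftModel.from_pretrained(base, "qblocks/falcon_7b_norobots")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

def build_prompt(user_prompt: str, system: str = "") -> str:
    # Mirror the template above; the assistant section is left open
    # so the model completes it.
    return (
        f"<|system|>{system}<|endoftext|>"
        f"<|user|>{user_prompt}<|endoftext|>"
        f"<|assistant|>"
    )

inputs = tokenizer(build_prompt("Write a haiku about robots."), return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```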

Train loss:

[training loss curve plot]

Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=qblocks/falcon_7b_norobots).

| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 44.46 |
| AI2 Reasoning Challenge (25-Shot) | 47.87 |
| HellaSwag (10-Shot)               | 77.92 |
| MMLU (5-Shot)                     | 27.94 |
| TruthfulQA (0-shot)               | 36.81 |
| Winogrande (5-shot)               | 71.74 |
| GSM8k (5-shot)                    | 4.47  |