---
library_name: transformers
tags:
  - mergekit
  - merge
base_model:
  - CultriX/Qwen2.5-14B-Wernicke
  - CultriX/Qwestion-14B
  - CultriX/SeQwence-14B
  - CultriX/Qwen2.5-14B-MegaMerge-pt2
  - CultriX/SeQwence-14Bv1
model-index:
  - name: SeQwence-14Bv2
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 57.86
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 46.53
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 21.6
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 14.77
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 17.55
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv2
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 48.16
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=CultriX/SeQwence-14Bv2
          name: Open LLM Leaderboard
---

# final_model

This is a merge of pre-trained language models created using mergekit.

## Merge Details

### Merge Method

This model was merged with the task arithmetic merge method, using CultriX/SeQwence-14B as the base model.
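Task arithmetic merges models by adding weighted "task vectors" (each model's delta from the base) back onto the base weights. A minimal NumPy sketch for a single tensor (illustrative only; mergekit's exact normalization behavior under `normalize: 1.0` is an assumption here):

```python
import numpy as np

def task_arithmetic_merge(base, models, weights, normalize=True):
    """Sketch of task-arithmetic merging for one weight tensor.

    Each fine-tuned model contributes its task vector (model - base),
    scaled by a per-model weight; the scaled deltas are summed and
    added back onto the base tensor.
    """
    weights = np.asarray(weights, dtype=np.float64)
    if normalize:
        # Assumption: with `normalize: 1.0`, weights are rescaled
        # by their sum so they total 1.
        weights = weights / weights.sum()
    deltas = [w * (m - base) for w, m in zip(weights, models)]
    return base + np.sum(deltas, axis=0)

# Toy example with two-element "tensors":
base = np.array([1.0, 2.0])
merged = task_arithmetic_merge(
    base,
    models=[np.array([2.0, 2.0]), np.array([1.0, 4.0])],
    weights=[0.5, 0.5],
)
# merged = base + 0.5*[1, 0] + 0.5*[0, 2] = [1.5, 3.0]
```

Negative weights (which appear in the configuration below) subtract that model's task vector instead of adding it.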

### Models Merged

The following models were included in the merge:

- CultriX/Qwen2.5-14B-MegaMerge-pt2
- CultriX/Qwen2.5-14B-Wernicke
- CultriX/SeQwence-14Bv1
- CultriX/Qwestion-14B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: CultriX/SeQwence-14B
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 8]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.6973896126881656
  - layer_range: [0, 8]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.25536014932096784
  - layer_range: [0, 8]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.024099354110818955
  - layer_range: [0, 8]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.062255273414504236
  - layer_range: [0, 8]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.19842743525221093
- sources:
  - layer_range: [8, 16]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.16541251205918317
  - layer_range: [8, 16]
    model: CultriX/SeQwence-14B
    parameters:
      weight: -0.11758222851964711
  - layer_range: [8, 16]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.026110542928974606
  - layer_range: [8, 16]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.17351317150552764
  - layer_range: [8, 16]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.2189587409844403
- sources:
  - layer_range: [16, 24]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: -0.18585407879293625
  - layer_range: [16, 24]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.28979432739572986
  - layer_range: [16, 24]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.13321246350564858
  - layer_range: [16, 24]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: -0.07525163437282778
  - layer_range: [16, 24]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.09939146833918691
- sources:
  - layer_range: [24, 32]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.20535780306129478
  - layer_range: [24, 32]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.23689447247624298
  - layer_range: [24, 32]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.08595523000213551
  - layer_range: [24, 32]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.32843658569448686
  - layer_range: [24, 32]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.5660243716148874
- sources:
  - layer_range: [32, 40]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.4782495451944288
  - layer_range: [32, 40]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.04636896831126347
  - layer_range: [32, 40]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: -0.20847472991447114
  - layer_range: [32, 40]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: -0.13710751148654265
  - layer_range: [32, 40]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.04879517930226218
- sources:
  - layer_range: [40, 48]
    model: CultriX/Qwen2.5-14B-MegaMerge-pt2
    parameters:
      weight: 0.24947640644399857
  - layer_range: [40, 48]
    model: CultriX/SeQwence-14B
    parameters:
      weight: 0.27995726695330514
  - layer_range: [40, 48]
    model: CultriX/Qwen2.5-14B-Wernicke
    parameters:
      weight: 0.29376471224311385
  - layer_range: [40, 48]
    model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.11668812856147562
  - layer_range: [40, 48]
    model: CultriX/Qwestion-14B
    parameters:
      weight: 0.117720095241547
```
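The configuration assigns each block of eight layers its own set of per-model weights, so a model's influence varies across the depth of the network. A minimal sketch of the per-layer lookup (`weights_for_layer` is a hypothetical helper; weights abbreviated to three decimals):

```python
def weights_for_layer(slices, layer):
    """Return the per-model merge weights for a given layer index.

    `slices` mirrors the YAML structure: each entry covers a half-open
    [start, end) layer range with one weight per source model.
    """
    for (start, end), weights in slices:
        if start <= layer < end:
            return weights
    raise ValueError(f"layer {layer} not covered by any slice")

# First two slices from the config above (two models shown, abbreviated):
slices = [
    ((0, 8),  {"MegaMerge-pt2": 0.697, "SeQwence-14B": 0.255}),
    ((8, 16), {"MegaMerge-pt2": 0.165, "SeQwence-14B": -0.118}),
]
```

Note that SeQwence-14B's weight flips sign between the first two slices: in layers 8-16 its task vector is subtracted rather than added.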

## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric              | Value |
|---------------------|------:|
| Avg.                | 34.41 |
| IFEval (0-Shot)     | 57.86 |
| BBH (3-Shot)        | 46.53 |
| MATH Lvl 5 (4-Shot) | 21.60 |
| GPQA (0-shot)       | 14.77 |
| MuSR (0-shot)       | 17.55 |
| MMLU-PRO (5-shot)   | 48.16 |
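The reported average is the unweighted mean of the six benchmark scores, which can be checked directly:

```python
# Benchmark scores from the table above.
scores = {
    "IFEval (0-Shot)": 57.86,
    "BBH (3-Shot)": 46.53,
    "MATH Lvl 5 (4-Shot)": 21.60,
    "GPQA (0-shot)": 14.77,
    "MuSR (0-shot)": 17.55,
    "MMLU-PRO (5-shot)": 48.16,
}
avg = sum(scores.values()) / len(scores)  # ≈ 34.41
```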