---
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- altomek/YiSM-34B-0rn
- CombinHorizon/YiSM-blossom5.1-34B-SLERP
model-index:
- name: Yislerp-34B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 36.92
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Yislerp-34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 45.98
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Yislerp-34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 19.56
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Yislerp-34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.43
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Yislerp-34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 15.78
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Yislerp-34B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 41.68
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=allknowingroger/Yislerp-34B
      name: Open LLM Leaderboard
---
# Yislerp-34B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP (spherical linear interpolation) merge method, with [altomek/YiSM-34B-0rn](https://huggingface.co/altomek/YiSM-34B-0rn) as the base model.
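For intuition, here is a minimal sketch of spherical linear interpolation between two weight tensors, assuming flattened parameters; mergekit's actual implementation adds per-tensor filtering and edge-case handling, so this is illustrative rather than the library's code:

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors at fraction t."""
    # Normalized copies are used only to measure the angle between directions.
    v0 = w0 / (np.linalg.norm(w0) + eps)
    v1 = w1 / (np.linalg.norm(w1) + eps)
    dot = np.clip(np.sum(v0 * v1), -1.0, 1.0)
    # Nearly colinear tensors: fall back to plain linear interpolation.
    if abs(dot) > 0.9995:
        return (1.0 - t) * w0 + t * w1
    theta = np.arccos(dot)  # angle between the two weight directions
    s = np.sin(theta)
    return (np.sin((1.0 - t) * theta) / s) * w0 + (np.sin(t * theta) / s) * w1
```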
### Models Merged
The following models were included in the merge:
* [altomek/YiSM-34B-0rn](https://huggingface.co/altomek/YiSM-34B-0rn)
* [CombinHorizon/YiSM-blossom5.1-34B-SLERP](https://huggingface.co/CombinHorizon/YiSM-blossom5.1-34B-SLERP)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
  - model: altomek/YiSM-34B-0rn
    layer_range: [0, 60]
  - model: CombinHorizon/YiSM-blossom5.1-34B-SLERP
    layer_range: [0, 60]
merge_method: slerp
base_model: altomek/YiSM-34B-0rn
parameters:
  t:
  - filter: self_attn
    value: [0, 0.5, 0.3, 0.7, 1]
  - filter: mlp
    value: [1, 0.5, 0.7, 0.3, 0]
  - value: 0.38
dtype: bfloat16
```
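In mergekit's SLERP, `t` is the interpolation weight toward the non-base model (`t = 0` keeps the base model's tensor, `t = 1` takes the other model's). The five-element lists above define a gradient across layer groups: self-attention tensors blend increasingly toward CombinHorizon/YiSM-blossom5.1-34B-SLERP in deeper layers, MLP tensors do the reverse, and the trailing `value: 0.38` applies to all remaining tensors. The merge can be reproduced by saving this YAML and running mergekit's `mergekit-yaml` entry point on it. The result loads like any other causal LM; below is a minimal usage sketch with the standard transformers API (the sampling settings are illustrative, not tuned recommendations):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "allknowingroger/Yislerp-34B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "Explain spherical linear interpolation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```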
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_allknowingroger__Yislerp-34B).
| Metric |Value|
|-------------------|----:|
|Avg. |29.06|
|IFEval (0-Shot) |36.92|
|BBH (3-Shot) |45.98|
|MATH Lvl 5 (4-Shot)|19.56|
|GPQA (0-shot) |14.43|
|MuSR (0-shot) |15.78|
|MMLU-PRO (5-shot) |41.68|
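The `Avg.` row is consistent with an unweighted mean of the six benchmark scores; a quick check (assuming simple averaging, which the numbers confirm):

```python
scores = {
    "IFEval (0-Shot)": 36.92,
    "BBH (3-Shot)": 45.98,
    "MATH Lvl 5 (4-Shot)": 19.56,
    "GPQA (0-shot)": 14.43,
    "MuSR (0-shot)": 15.78,
    "MMLU-PRO (5-shot)": 41.68,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.2f}")  # 29.06, matching the Avg. row above
```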