Quantizations of https://huggingface.co/icefog72/WestIceLemonTeaRP-32k-7b
From the original readme:
This is a merge of pre-trained language models created using mergekit.
Merge Details
Prompt template: Alpaca, possibly ChatML (the standard Alpaca format is shown below).
- measurement.json for exl2 quantization is included.
Thanks to mradermacher and SilverFan for the quants.
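For reference, the widely used Alpaca instruction format looks like this (a generic template, not something specific to this model card):

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```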
Merge Method
This model was merged using the SLERP merge method.
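For intuition, here is a minimal NumPy sketch of spherical linear interpolation between two weight tensors. It is a toy illustration of the standard SLERP definition, not mergekit's actual code; the function name and the near-parallel fallback threshold are assumptions.

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between tensors a and b at fraction t (toy sketch)."""
    a_flat, b_flat = a.ravel(), b.ravel()
    # Angle between the two weight vectors, from their normalized dot product.
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = float(np.clip(a_unit @ b_unit, -1.0, 1.0))
    theta = np.arccos(dot)
    if theta < 1e-4:
        # Nearly parallel vectors: plain linear interpolation is numerically safer.
        merged = (1 - t) * a_flat + t * b_flat
    else:
        # Interpolate along the arc joining the two vectors.
        merged = (np.sin((1 - t) * theta) * a_flat + np.sin(t * theta) * b_flat) / np.sin(theta)
    return merged.reshape(a.shape)
```

Here t = 0 returns the first tensor and t = 1 the second; the configuration below varies t per layer and per filter.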
Models Merged
The following models were included in the merge:
- IceLemonTeaRP-32k-7b
- WestWizardIceLemonTeaRP
- SeverusWestLake-7B-DPO
- WizardIceLemonTeaRP
Configuration
The following YAML configuration was used to produce this model:
    slices:
      - sources:
          - model: IceLemonTeaRP-32k-7b
            layer_range: [0, 32]
          - model: WestWizardIceLemonTeaRP
            layer_range: [0, 32]
    merge_method: slerp
    base_model: IceLemonTeaRP-32k-7b
    parameters:
      t:
        - filter: self_attn
          value: [0, 0.5, 0.3, 0.7, 1]
        - filter: mlp
          value: [1, 0.5, 0.7, 0.3, 0]
        - value: 0.5
    dtype: float16
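The five-element `value` lists are gradient anchors: mergekit spreads them across the 32-layer range and interpolates between them, so `self_attn` tensors lean toward the base model in early layers while `mlp` tensors do the reverse, and all other tensors use the flat t = 0.5. A rough sketch of that mapping, assuming even anchor spacing (the helper below is illustrative, not mergekit API):

```python
import numpy as np

def layer_t(anchors, num_layers=32):
    """Interpolate gradient anchor values across the layer range (sketch)."""
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))  # where each anchor sits
    layer_pos = np.linspace(0.0, 1.0, num=num_layers)     # one position per layer
    return np.interp(layer_pos, anchor_pos, anchors)

self_attn_t = layer_t([0, 0.5, 0.3, 0.7, 1])  # per-layer t for attention tensors
mlp_t = layer_t([1, 0.5, 0.7, 0.3, 0])        # mirrored schedule for MLP tensors
# Roughly: t = 0 keeps the base model (IceLemonTeaRP-32k-7b),
# t = 1 takes the other model (WestWizardIceLemonTeaRP).
```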
Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 71.27 |
| AI2 Reasoning Challenge (25-Shot) | 68.77 |
| HellaSwag (10-Shot)               | 86.89 |
| MMLU (5-Shot)                     | 64.28 |
| TruthfulQA (0-shot)               | 62.47 |
| Winogrande (5-shot)               | 80.98 |
| GSM8k (5-shot)                    | 64.22 |