# qwenselfinstructalt
This is a merge of pre-trained language models created using mergekit.
## Merge Details
Same idea as Lambent/qwen2.5-14B-selfmerge-A, but the base model was first trained on an ~20M-token instruct and continued-pretraining dataset.
The hope is that this lightweight instruction tuning adds some synergy with the original instruct model.
Testing: EQ-Bench showed no syntax errors, and the score was 75.6984, closer to the original instruct's 76.9195 than to selfmerge-A's 73.8068.
Subsets of mrfakename/Capybara-ShareGPT, abacusai/SystemChat-1.1, anthracite-org/nopm_claude_writing_fixed and fineweb-edu were used for the alternate training.
### Merge Method
This model was merged using the SLERP merge method.
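For intuition, below is a minimal sketch of spherical linear interpolation between two weight tensors. It is an illustrative re-implementation under simplifying assumptions (each tensor is treated as one flattened vector, with a linear-interpolation fallback when the weights are nearly parallel), not mergekit's actual code.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate from tensor `a` (t=0) to tensor `b` (t=1) along the arc between them.

    Illustrative sketch only: both tensors are flattened into single vectors.
    """
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    # Angle between the two (normalized) weight vectors.
    omega = torch.arccos(torch.clamp(a_norm @ b_norm, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        return (1 - t) * a + t * b
    so = torch.sin(omega)
    out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape).to(a.dtype)
```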
### Models Merged
The following models were included in the merge:
* Lambent/alternate-instruct-qwen2.5-14B
* Qwen/Qwen2.5-14B-Instruct (base model)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Lambent/alternate-instruct-qwen2.5-14B
merge_method: slerp
base_model: Qwen/Qwen2.5-14B-Instruct
parameters:
  t:
    - value: [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
dtype: bfloat16
```
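The merged checkpoint can be loaded like any other Qwen2.5 model. A minimal usage sketch follows; the repository id `Lambent/qwenselfinstructalt` and the prompt are assumptions for illustration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for this merge; adjust to the actual upload location.
repo_id = "Lambent/qwenselfinstructalt"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Explain spherical linear interpolation in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```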