---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Qwen2.5-32B-EVA-Instruct-Merge-0.1

This is a merge of EVA-Qwen2.5-32B v0.1 with Qwen's Qwen2.5-32B-Instruct model and EVA v0.0, both mixed in at low weights, created using [mergekit](https://github.com/cg123/mergekit).

Also see: https://huggingface.co/ParasiticRogue/EVA-Instruct-32B

## Merge Details

### Merge Method

This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with /home/a/Models/Raw/Qwen_Qwen2.5-32B as the base.

### Models Merged

The following models were included in the merge:
* /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.1
* /home/a/Models/Raw/Qwen_Qwen2.5-32B-Instruct
* /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.0

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /home/a/Models/Raw/Qwen_Qwen2.5-32B
    # No parameters necessary for base model
  - model: /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.1
    parameters:
      weight: 0.7
      density: 0.7
  - model: /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.0
    parameters:
      weight: 0.11
      density: 0.3
  - model: /home/a/Models/Raw/Qwen_Qwen2.5-32B-Instruct
    parameters:
      weight: 0.19
      density: 0.3
merge_method: della
#tokenizer_source: base
base_model: /home/a/Models/Raw/Qwen_Qwen2.5-32B
parameters:
  int8_mask: true
  epsilon: 0.15
  lambda: 1
dtype: bfloat16
```
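A few notes on the `della` parameters above, per mergekit's documentation: `density` is the fraction of each model's delta parameters (differences from the base) that are retained, `epsilon` controls how far the per-parameter, magnitude-based drop probability may deviate from that density, and `lambda` is a scaling factor applied to the merged deltas before they are added back onto the base weights.

The merge can be reproduced by saving the configuration above to a file and running mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./output-model-directory`). As a quick smoke test of the result, the output directory loads like any other Qwen2.5 checkpoint with `transformers`. A minimal sketch, assuming the merged model lives in `./Qwen2.5-32B-EVA-Instruct-Merge-0.1` (a placeholder path; adjust it, the dtype, and the device mapping to your setup):

```python
# Minimal smoke test: load the merged checkpoint and generate one reply.
# NOTE: the model path below is a placeholder for the merge output directory
# or a published Hugging Face repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Qwen2.5-32B-EVA-Instruct-Merge-0.1"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",           # shard the 32B model across available GPUs
)

# All source models share Qwen2.5's tokenizer, so its chat template applies.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```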