---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---
# Qwen2.5-32B-EVA-Instruct-Merge-0.1
This is a merge of EVA-Qwen2.5-32B v0.1 with Qwen2.5-32B-Instruct and EVA v0.0, the latter two at low weights, created using [mergekit](https://github.com/cg123/mergekit).
Also see: https://huggingface.co/ParasiticRogue/EVA-Instruct-32B
## Merge Details
### Merge Method
This model was merged using the della merge method, with /home/a/Models/Raw/Qwen_Qwen2.5-32B as the base model.
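Della operates on each model's parameter deltas from the base: deltas are stochastically pruned with magnitude-dependent probabilities, survivors are rescaled, and the weighted sum is added back onto the base. The sketch below illustrates that idea on flat NumPy vectors only; it is not mergekit's implementation, it omits the sign-election step, and the exact way `density`, `epsilon`, and `lambda` enter the sampling here is an assumption for illustration.

```python
# Conceptual sketch of DELLA-style delta merging on flat parameter vectors.
# NOT mergekit's code; the density/epsilon/lambda semantics below are assumed.
import numpy as np

def della_merge(base, tuned, weights, density=0.7, epsilon=0.15, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base)
    for ft, w in zip(tuned, weights):
        delta = ft - base
        # Rank parameters by magnitude; larger deltas get a higher keep probability,
        # spread over a window of width `epsilon` centred on `density`.
        ranks = np.argsort(np.argsort(np.abs(delta)))
        keep_p = (density - epsilon / 2) + epsilon * ranks / max(delta.size - 1, 1)
        keep_p = np.clip(keep_p, 1e-6, 1.0)
        mask = rng.random(delta.shape) < keep_p
        # Rescale survivors by 1 / keep probability so the expected delta is preserved.
        merged_delta += w * np.where(mask, delta / keep_p, 0.0)
    # `lambda` scales the combined task vector before it is added back to the base.
    return base + lam * merged_delta

# Toy usage with three fine-tuned variants of an 8-parameter "model".
base = np.linspace(-1.0, 1.0, 8)
deltas = np.random.default_rng(1).normal(0, 0.1, (3, 8))
tuned = [base + d for d in deltas]
print(della_merge(base, tuned, weights=[0.7, 0.11, 0.19]))
```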
### Models Merged
The following models were included in the merge:
* /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.1
* /home/a/Models/Raw/Qwen_Qwen2.5-32B-Instruct
* /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.0
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: /home/a/Models/Raw/Qwen_Qwen2.5-32B
    # No parameters necessary for base model
  - model: /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.1
    parameters:
      weight: 0.7
      density: 0.7
  - model: /home/a/Models/Raw/EVA-UNIT-01_EVA-Qwen2.5-32B-v0.0
    parameters:
      weight: 0.11
      density: 0.3
  - model: /home/a/Models/Raw/Qwen_Qwen2.5-32B-Instruct
    parameters:
      weight: 0.19
      density: 0.3
merge_method: della
#tokenizer_source: base
base_model: /home/a/Models/Raw/Qwen_Qwen2.5-32B
parameters:
  int8_mask: true
  epsilon: 0.15
  lambda: 1
dtype: bfloat16
```
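The merged model loads like any other Qwen2.5-based causal LM. A minimal sketch follows, assuming the merge is available locally or on the Hub under a placeholder id and that the merged tokenizer carries Qwen's chat template; substitute the actual path or repository id.

```python
# Minimal loading sketch; the model id is a placeholder for wherever this merge is hosted.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen2.5-32B-EVA-Instruct-Merge-0.1"  # placeholder: local path or Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a short scene set on a night train."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```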