---
base_model:
- v000000/Qwen2.5-14B-Gutenberg-1e-Delta
- Qwen/Qwen2.5-14B-Instruct
library_name: transformers
tags:
- mergekit
- merge
- qwen2
- qwen2.5
- dpo
license: apache-2.0
datasets:
- jondurbin/gutenberg-dpo-v0.1
---
# Qwen2.5-14B-Gutenberg-Instruct-Slerpeno
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/PgoZ5eutiHDfBmuoBuDO9.png)
--------------------------------------------------------------------------
## GGUF from mradermacher!
* [GGUF static](https://huggingface.co/mradermacher/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno-GGUF)
* [GGUF Imatrix](https://huggingface.co/mradermacher/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno-i1-GGUF)
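To try the quantized weights locally, they can be loaded straight from the GGUF repo. A minimal sketch with `llama-cpp-python` (the quant filename pattern is an assumption; check the repo listing for the exact files available):

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The filename glob below is an assumption; pick any quant actually
# listed in the GGUF repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/Qwen2.5-14B-Gutenberg-Instruct-Slerpeno-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write the opening paragraph of a gothic novel."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```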
## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
### Merge Method
This model was merged using the SLERP merge method with a layer-wise interpolation gradient (the *sophosympatheia gradient*).
### Models Merged
The following models were included in the merge:
* [v000000/Qwen2.5-14B-Gutenberg-1e-Delta](https://huggingface.co/v000000/Qwen2.5-14B-Gutenberg-1e-Delta)
* [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Qwen/Qwen2.5-14B-Instruct
merge_method: slerp
base_model: v000000/Qwen2.5-14B-Gutenberg-1e-Delta
parameters:
  t:
    - value: [0, 0, 0.3, 0.4, 0.5, 0.6, 0.5, 0.4, 0.3, 0, 0]
dtype: bfloat16
```
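To reproduce the merge, the YAML above can be passed to mergekit, either via the CLI (`mergekit-yaml config.yaml ./output-dir`) or through its Python API. A minimal sketch of the latter, with placeholder paths:

```python
# Minimal sketch of running the merge via mergekit's Python API
# (pip install mergekit); paths are placeholders.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml") as f:  # the YAML configuration shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./Qwen2.5-14B-Gutenberg-Instruct-Slerpeno",  # output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```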
*The idea here is that the Gutenberg DPO model keeps full control of the input and output layers (t = 0 at both ends of the gradient), while the deeper middle layers blend smoothly with the base instruct model to heal fine-tuning loss and recover general intelligence.*
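For intuition, here is a simplified sketch of the SLERP operation applied per tensor (mergekit's real implementation handles more edge cases, and interpolates the eleven anchor values of `t` across the layer stack):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns v0 unchanged (here: the Gutenberg DPO model);
    t = 1 returns v1 (here: the stock instruct model).
    """
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    # Angle between the two weight vectors on the unit sphere.
    dot = torch.dot(
        v0_flat / (v0_flat.norm() + eps),
        v1_flat / (v1_flat.norm() + eps),
    ).clamp(-1.0, 1.0)
    theta = torch.acos(dot)
    if theta.abs() < eps:
        # Nearly parallel weights: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    scale0 = torch.sin((1 - t) * theta) / torch.sin(theta)
    scale1 = torch.sin(t * theta) / torch.sin(theta)
    return (scale0 * v0_flat + scale1 * v1_flat).reshape(v0.shape).to(v0.dtype)
```

With the gradient above, the first and last anchor points use t = 0, so the outermost layers come verbatim from the Gutenberg DPO model, while the middle layers interpolate up to t = 0.6 toward the instruct model.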